
Vision updated #22

Open
SAKETH11111 wants to merge 7 commits into main from Vision_updated

Conversation

@SAKETH11111
Contributor

  • Added a new Vision subsystem to handle vision processing and integration with the drive system.

  • Updated Constants.java to include vision-related constants

  • Modified the Drive class to configure state estimation with standard deviations.

  • Updated RobotContainer to instantiate and integrate the new Vision subsystem

Comment on lines +109 to +110
var stateStdDevs = VecBuilder.fill(0.05, 0.05, Units.degreesToRadians(5)); // x, y, heading
var visionStdDevs = VecBuilder.fill(0.5, 0.5, Units.degreesToRadians(30)); // Higher uncertainty for vision
Contributor


What do these numbers mean? Where did you find them?

Contributor Author

@SAKETH11111 (Feb 4, 2025)


stateStdDevs is the base odometry uncertainty:

  • 0.05 meters (5 cm) for X and Y position
  • 5 degrees for heading. These are relatively small because wheel odometry is fairly precise over short distances.

visionStdDevs is the default vision uncertainty:

  • 0.5 meters (50 cm) for X and Y position
  • 30 degrees for heading. These are larger because vision measurements tend to be less precise than odometry when we don't see many AprilTags.

These values are used by the SwerveDrivePoseEstimator as default "trust levels": lower numbers mean we trust that measurement more. We trust odometry more than vision by default, but then adjust the vision uncertainty based on:

  • Number of visible tags (more tags = more trust)
  • Distance to tags (farther = less trust)

From: https://docs.wpilib.org/en/stable/docs/software/advanced-controls/state-space/state-space-pose-estimators.html
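To make the scaling concrete, here is a standalone sketch of the idea. The helper method and class names are hypothetical (this is not the actual Vision subsystem code), but the constant values mirror the ones added in Constants.java in this PR:

```java
// Illustrative sketch of scaling vision std devs by tag count and distance.
// VisionStdDevs and scaledXyStdDev are hypothetical names; the constants
// mirror VISION_STD_DEV_BASE_XY, VISION_STD_DEV_MULTI_XY, and
// VISION_DISTANCE_SCALE from this PR's Constants.java.
public class VisionStdDevs {
    static final double BASE_XY = 0.5;        // single-tag base uncertainty, meters
    static final double MULTI_XY = 0.1;       // multi-tag base uncertainty, meters
    static final double DISTANCE_SCALE = 0.2; // added uncertainty per meter of distance

    /** Returns the XY standard deviation to report for a vision measurement. */
    static double scaledXyStdDev(int numTags, double avgTagDistanceMeters) {
        // More tags -> start from a smaller (more trusted) base value
        double base = (numTags >= 2) ? MULTI_XY : BASE_XY;
        // Farther tags -> inflate the uncertainty linearly with distance
        return base + DISTANCE_SCALE * avgTagDistanceMeters;
    }

    public static void main(String[] args) {
        System.out.println(scaledXyStdDev(1, 2.0)); // single tag, 2 m away
        System.out.println(scaledXyStdDev(3, 2.0)); // three tags, 2 m away
    }
}
```

The resulting value would then be what the Vision subsystem passes along with each measurement to the pose estimator.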

Comment on lines 310 to 313
// Scale vision measurement standard deviations based on distance from robot to
// tags
// and number of tags visible - this is handled by the Vision subsystem passing
// appropriate stdDevs
Contributor


If this logic is handled by the vision subsystem, this comment should be located in the vision subsystem and not here

Comment on lines +120 to +124
public static final double VISION_STD_DEV_BASE_XY = 0.5; // meters
public static final double VISION_STD_DEV_BASE_THETA = Units.degreesToRadians(30);
public static final double VISION_STD_DEV_MULTI_XY = 0.1;
public static final double VISION_STD_DEV_MULTI_THETA = Units.degreesToRadians(10);
public static final double VISION_DISTANCE_SCALE = 0.2; // How much to increase uncertainty per meter
Contributor


What do these numbers mean? I'm assuming they'll need to be updated for our robot? How do we find/calculate them?

Contributor Author


From what I know, to tune these, start with these default values and then:

  • If the robot seems too jumpy: increase the numbers (trust vision less)
  • If the robot seems slow to correct: decrease the numbers (trust vision more)
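A toy 1-D example can show why smaller numbers mean more trust. This is not the actual SwerveDrivePoseEstimator math (which is a full Kalman-style filter over x, y, and heading); it is just a variance-weighted blend to build intuition for the tuning advice above:

```java
// Toy 1-D illustration (NOT the real SwerveDrivePoseEstimator internals) of
// why a smaller vision std dev pulls the fused estimate harder toward vision.
public class TrustDemo {
    /** Blends an odometry estimate and a vision estimate by their variances. */
    static double fuse(double odo, double odoStd, double vision, double visionStd) {
        double odoVar = odoStd * odoStd;
        double visionVar = visionStd * visionStd;
        // Weight on the vision residual: near 1 when vision is precise, near 0 when noisy
        double k = odoVar / (odoVar + visionVar);
        return odo + k * (vision - odo);
    }

    public static void main(String[] args) {
        double odo = 0.0, vision = 1.0;
        // Trusted vision (small std dev) moves the estimate most of the way
        System.out.println(fuse(odo, 0.05, vision, 0.02));
        // Distrusted vision (large std dev) barely moves it
        System.out.println(fuse(odo, 0.05, vision, 0.5));
    }
}
```

So "too jumpy" means the weight on vision is too high for how noisy the measurements actually are, and raising the std devs shrinks that weight.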

case SIM:
// Sim robot, instantiate physics sim IO implementations
_drive = new Drive(new GyroIO() {}, new ModuleIOSim(), new ModuleIOSim(), new ModuleIOSim(), new ModuleIOSim());
_vision = new Vision(_drive, new VisionIO() {}); // Empty implementation for sim
Contributor


I've been thinking about this one off and on. PhotonLib works entirely based on NetworkTables, so it should be possible to launch a local version of PhotonVision on a laptop when running the simulator and have it connect to the virtual robot. I think we can use the VisionIOPhotonLib class here:

Suggested change
_vision = new Vision(_drive, new VisionIO() {}); // Empty implementation for sim
_vision = new Vision(_drive, new VisionIOPhotonLib(_drive));

// Subsystems
private final Drive _drive;
@SuppressWarnings("unused")
private final Vision _vision; // Add this field
Contributor


What does your comment on this line mean?

Comment on lines 26 to 35
private final PhotonCamera _camera;
private double _captureTimesStamp = 0.0;
private Pose2d _pose = new Pose2d();
private boolean _hasPose = false;
private double[] _distances = new double[] {};
private int[] _targetIds = new int[] {};
private double[] _targetYaws = new double[] {};
private int _numProcessedTargets;
private AprilTagFieldLayout fieldLayout;
private PhotonPoseEstimator _poseEstimator;
Contributor


The field layout and pose estimator can both be final. Also, fieldLayout doesn't follow the naming convention we use for member variables:

Suggested change
private final PhotonCamera _camera;
private double _captureTimesStamp = 0.0;
private Pose2d _pose = new Pose2d();
private boolean _hasPose = false;
private double[] _distances = new double[] {};
private int[] _targetIds = new int[] {};
private double[] _targetYaws = new double[] {};
private int _numProcessedTargets;
private AprilTagFieldLayout fieldLayout;
private PhotonPoseEstimator _poseEstimator;
private final PhotonCamera _camera;
private final AprilTagFieldLayout _fieldLayout;
private final PhotonPoseEstimator _poseEstimator;
private double _captureTimesStamp = 0.0;
private Pose2d _pose = new Pose2d();
private boolean _hasPose = false;
private double[] _distances = new double[] {};
private int[] _targetIds = new int[] {};
private double[] _targetYaws = new double[] {};
private int _numProcessedTargets;

Comment on lines 43 to 44
// Load the 2024 field layout
fieldLayout = AprilTagFieldLayout.loadFromResource(AprilTagFields.k2024Crescendo.m_resourceFile);
Contributor


We want to load the 2025 layout for REEFSCAPE instead

Comment on lines +62 to +63
// get the latest result (last in the list)
var result = results.get(results.size() - 1);
Contributor


I'm assuming this gets whatever the latest result is in case there are multiple results available? We could probably think about changing this to use the multithreaded "multi-input" strategy the drivebase uses for odometry with its queues. That might work here as well and could possibly provide more vision data to the odometer.
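The "process every result instead of only the newest" idea can be sketched without PhotonLib. The types and names below are illustrative stand-ins (in the real code the queue entries would be pipeline results and the consumer would be the pose estimator); the point is just that draining the queue oldest-first feeds every frame downstream rather than dropping all but the last:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// Illustrative sketch: drain an entire queue of timestamped measurements in
// order, instead of keeping only results.get(results.size() - 1). Names and
// types are hypothetical, not from this PR.
public class ResultQueueDemo {
    record TimestampedPose(double timestampSeconds, double x, double y) {}

    /** Drains the queue oldest-first and returns every pose that was processed. */
    static List<TimestampedPose> drainAll(Deque<TimestampedPose> queue) {
        List<TimestampedPose> processed = new ArrayList<>();
        while (!queue.isEmpty()) {
            // pollFirst() removes the oldest entry, preserving timestamp order
            processed.add(queue.pollFirst());
        }
        return processed;
    }

    public static void main(String[] args) {
        Deque<TimestampedPose> queue = new ArrayDeque<>();
        queue.addLast(new TimestampedPose(1.00, 0.10, 0.20));
        queue.addLast(new TimestampedPose(1.02, 0.12, 0.21));
        queue.addLast(new TimestampedPose(1.04, 0.14, 0.22));
        System.out.println(drainAll(queue).size()); // all three frames, not just the last
    }
}
```

Because each measurement carries its own capture timestamp, the pose estimator can fold in late frames correctly even when several arrive per loop.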

});
}

public Optional<EstimatedRobotPose> getEstimatedGlobalPose(Pose2d prevEstimatedRobotPose, PhotonPipelineResult result)
Contributor


This method is only visible in the VisionIOPhotonLib class, and although we create an instance of this in RobotContainer, we'll refer to it as a VisionIO object at that point meaning we won't have visibility to this method. Because of this, I'd like this method to be private.

Suggested change
public Optional<EstimatedRobotPose> getEstimatedGlobalPose(Pose2d prevEstimatedRobotPose, PhotonPipelineResult result)
private Optional<EstimatedRobotPose> getEstimatedGlobalPose(Pose2d prevEstimatedRobotPose, PhotonPipelineResult result)

return;
}

var result = results.get(results.size() - 1);
Contributor


Same comment on the possible multithreading opportunity here
