Conversation
…pose estimation for the vision subsystem.
var stateStdDevs = VecBuilder.fill(0.05, 0.05, Units.degreesToRadians(5)); // x, y, heading
var visionStdDevs = VecBuilder.fill(0.5, 0.5, Units.degreesToRadians(30)); // Higher uncertainty for vision
What do these numbers mean? Where did you find them?
stateStdDevs is the base odometry uncertainty:
- 0.05 meters (5 cm) for X and Y position
- 5 degrees for heading

These are relatively small because wheel odometry is fairly precise over short distances.

visionStdDevs is the default vision uncertainty:
- 0.5 meters (50 cm) for X and Y position
- 30 degrees for heading

These are larger because vision measurements tend to be less precise than odometry when we don't see many AprilTags.

These values are used by the SwerveDrivePoseEstimator as default "trust levels": lower numbers mean we trust that measurement more. We trust odometry more than vision by default, but then adjust the vision uncertainty based on:
- Number of visible tags (more tags = more trust)
- Distance to tags (farther = less trust)
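To make that scaling concrete, here's a toy sketch in plain Java (WPILib's VecBuilder/Units are left out so it stands alone, and the class and method names are hypothetical; the constants mirror the defaults discussed above, but the exact formula is an assumption, not necessarily what the PR's Vision subsystem does):

```java
public class VisionStdDevScaling {
    // Assumed base values: single-tag vs. multi-tag, in meters and radians.
    static final double BASE_XY_SINGLE = 0.5, BASE_XY_MULTI = 0.1;
    static final double BASE_THETA_SINGLE = Math.toRadians(30), BASE_THETA_MULTI = Math.toRadians(10);
    static final double DISTANCE_SCALE = 0.2; // extra uncertainty per meter (assumed)

    // Hypothetical helper: returns {xyStdDev, xyStdDev, thetaStdDev}.
    static double[] scaledStdDevs(int numTags, double avgDistanceMeters) {
        boolean multi = numTags > 1;
        double scale = 1.0 + DISTANCE_SCALE * avgDistanceMeters;
        double xy = (multi ? BASE_XY_MULTI : BASE_XY_SINGLE) * scale;
        double theta = (multi ? BASE_THETA_MULTI : BASE_THETA_SINGLE) * scale;
        return new double[] { xy, xy, theta };
    }

    public static void main(String[] args) {
        // One tag 3 m away: 0.5 m base grows to roughly 0.8 m.
        System.out.println(scaledStdDevs(1, 3.0)[0]);
        // Two tags 1 m away: much tighter, roughly 0.12 m.
        System.out.println(scaledStdDevs(2, 1.0)[0]);
    }
}
```

In the real code these three numbers would be packed into a `Matrix<N3, N1>` with `VecBuilder.fill(...)` and passed along with each vision measurement.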
// Scale vision measurement standard deviations based on distance from robot to
// tags and number of tags visible - this is handled by the Vision subsystem
// passing appropriate stdDevs
If this logic is handled by the vision subsystem, this comment should be located in the vision subsystem and not here
public static final double VISION_STD_DEV_BASE_XY = 0.5; // meters
public static final double VISION_STD_DEV_BASE_THETA = Units.degreesToRadians(30);
public static final double VISION_STD_DEV_MULTI_XY = 0.1;
public static final double VISION_STD_DEV_MULTI_THETA = Units.degreesToRadians(10);
public static final double VISION_DISTANCE_SCALE = 0.2; // How much to increase uncertainty per meter
What do these numbers mean? I'm assuming they'll need to be updated for our robot? How do we find/calculate them?
From what I know, to tune these, start with these default values and then:
- If the robot seems too jumpy: increase the numbers (trust vision less)
- If the robot seems slow to correct: decrease the numbers (trust vision more)
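One way to see why lower numbers mean more trust: in a simple 1-D Kalman-style blend, the correction gain is the ratio of the variances (stdDev squared). This is only a toy illustration with a made-up class name, not WPILib's actual estimator, which handles full poses and latency compensation:

```java
public class TrustDemo {
    // Toy 1-D fusion: how strongly a vision measurement pulls the estimate
    // toward itself depends on the ratio of the two variances.
    static double fuse(double odomEstimate, double visionMeasurement,
                       double odomStdDev, double visionStdDev) {
        double oVar = odomStdDev * odomStdDev;
        double vVar = visionStdDev * visionStdDev;
        double gain = oVar / (oVar + vVar); // 0 = ignore vision, 1 = trust vision fully
        return odomEstimate + gain * (visionMeasurement - odomEstimate);
    }

    public static void main(String[] args) {
        // Odometry says x = 2.0 m, vision says x = 3.0 m.
        // With the default stdDevs (0.05 vs 0.5) vision barely moves the estimate...
        System.out.println(fuse(2.0, 3.0, 0.05, 0.5));
        // ...but with equal stdDevs the estimate lands halfway between the two.
        System.out.println(fuse(2.0, 3.0, 0.05, 0.05));
    }
}
```

So raising the vision stdDevs shrinks the gain (the pose stops jumping when tags appear), and lowering them grows it (the pose snaps to vision faster).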
case SIM:
  // Sim robot, instantiate physics sim IO implementations
  _drive = new Drive(new GyroIO() {}, new ModuleIOSim(), new ModuleIOSim(), new ModuleIOSim(), new ModuleIOSim());
  _vision = new Vision(_drive, new VisionIO() {}); // Empty implementation for sim
I've been thinking about this one off and on. PhotonLib works entirely based on NetworkTables, so it should be possible to launch a local version of PhotonVision on a laptop when running the simulator and have it connect to the virtual robot. I think we can use the VisionIOPhotonLib class here:
- _vision = new Vision(_drive, new VisionIO() {}); // Empty implementation for sim
+ _vision = new Vision(_drive, new VisionIOPhotonLib(_drive));
// Subsystems
private final Drive _drive;
@SuppressWarnings("unused")
private final Vision _vision; // Add this field
What does your comment on this line mean?
private final PhotonCamera _camera;
private double _captureTimesStamp = 0.0;
private Pose2d _pose = new Pose2d();
private boolean _hasPose = false;
private double[] _distances = new double[] {};
private int[] _targetIds = new int[] {};
private double[] _targetYaws = new double[] {};
private int _numProcessedTargets;
private AprilTagFieldLayout fieldLayout;
private PhotonPoseEstimator _poseEstimator;
The field layout and pose estimator can both be final. Also, fieldLayout doesn't follow the naming convention we use for member variables:
- private final PhotonCamera _camera;
- private double _captureTimesStamp = 0.0;
- private Pose2d _pose = new Pose2d();
- private boolean _hasPose = false;
- private double[] _distances = new double[] {};
- private int[] _targetIds = new int[] {};
- private double[] _targetYaws = new double[] {};
- private int _numProcessedTargets;
- private AprilTagFieldLayout fieldLayout;
- private PhotonPoseEstimator _poseEstimator;
+ private final PhotonCamera _camera;
+ private final AprilTagFieldLayout _fieldLayout;
+ private final PhotonPoseEstimator _poseEstimator;
+ private double _captureTimesStamp = 0.0;
+ private Pose2d _pose = new Pose2d();
+ private boolean _hasPose = false;
+ private double[] _distances = new double[] {};
+ private int[] _targetIds = new int[] {};
+ private double[] _targetYaws = new double[] {};
+ private int _numProcessedTargets;
// Load the 2024 field layout
fieldLayout = AprilTagFieldLayout.loadFromResource(AprilTagFields.k2024Crescendo.m_resourceFile);
We want to load the 2025 layout for REEFSCAPE instead
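For reference, the swap should be a one-line change following the same pattern. Note the enum constant name is an assumption: check the `AprilTagFields` enum in the installed WPILib version, since later 2025 releases split the layout into welded and AndyMark field variants.

```java
// Load the 2025 REEFSCAPE field layout (k2025ReefscapeWelded is assumed;
// some 2025 WPILib releases may name this differently, e.g. k2025Reefscape
// or k2025ReefscapeAndyMark for the AndyMark field variant).
fieldLayout = AprilTagFieldLayout.loadFromResource(AprilTagFields.k2025ReefscapeWelded.m_resourceFile);
```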
// get the latest result (last in the list)
var result = results.get(results.size() - 1);
I'm assuming this gets whatever the latest result is in case there are multiple results available? We could probably think about changing this to use the multithreaded "multi-input" strategy the drivebase uses for odometry with its queues. That might work here as well and could feed more vision data into the pose estimator instead of dropping all but the newest frame.
});
}

public Optional<EstimatedRobotPose> getEstimatedGlobalPose(Pose2d prevEstimatedRobotPose, PhotonPipelineResult result)
This method is only used within the VisionIOPhotonLib class, and although we create an instance of it in RobotContainer, we refer to it as a VisionIO object at that point, so this method won't be visible through that reference anyway. Because of this, I'd like this method to be private.
- public Optional<EstimatedRobotPose> getEstimatedGlobalPose(Pose2d prevEstimatedRobotPose, PhotonPipelineResult result)
+ private Optional<EstimatedRobotPose> getEstimatedGlobalPose(Pose2d prevEstimatedRobotPose, PhotonPipelineResult result)
return;
}

var result = results.get(results.size() - 1);
Same comment on the possible multithreading opportunity here
- Added a new Vision subsystem to handle vision processing and integration with the drive system.
- Updated Constants.java to include vision-related constants.
- Modified the Drive class to configure state estimation with standard deviations.
- Updated RobotContainer to instantiate and integrate the new Vision subsystem.