Autonomous Arm Controller #219

@Terracom12

Description

As per the 2025 rules, the arm is now required to autonomously type arbitrary input on a keyboard.

This will almost certainly require that #97 is implemented.
Beyond this, we will need some form of image recognition to accurately press keys. As of now, some ideas are:

  • Manually align the rover, then reset the arm to a known orientation. A non-moving camera (the Zed) can be used to determine a 3D bounding box for the keyboard, and the arm can move to pre-determined offsets based on the detected boundaries. This has the potential to be simpler to implement, but is also probably much less accurate.
  • Align the end effector camera so the entire keyboard is in view, then use a text recognition algorithm to determine where each key is located. As this camera does not currently have depth perception, the output positions will probably be inaccurate.
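A minimal sketch of the first idea, assuming an axis-aligned bounding box in the arm's base frame and a hypothetical normalized key layout (the coordinates below are placeholders, not measurements of any real keyboard):

```python
from dataclasses import dataclass

@dataclass
class BoundingBox3D:
    """Axis-aligned keyboard bounds in the arm's base frame, metres."""
    x_min: float; y_min: float; z_min: float
    x_max: float; y_max: float; z_max: float

# Hypothetical layout: each key's (u, v) position normalized to [0, 1]
# across the keyboard face. Real values would come from the keyboard spec.
KEY_LAYOUT = {
    "Q": (0.06, 0.30),
    "A": (0.08, 0.50),
    "SPACE": (0.45, 0.90),
}

def key_target(box: BoundingBox3D, key: str, hover_m: float = 0.02):
    """Map a key's normalized layout position to a 3D hover target."""
    u, v = KEY_LAYOUT[key]
    x = box.x_min + u * (box.x_max - box.x_min)
    y = box.y_min + v * (box.y_max - box.y_min)
    z = box.z_max + hover_m  # approach from just above the keyboard surface
    return (x, y, z)
```

Note that any error in the detected bounding box propagates directly to every key target, which is the main reason this approach alone is probably not accurate enough.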

Ideally, I think we'd use a combination of the two approaches above. The second would provide continuous feedback as the arm approaches the position given by the first, and we may be able to use the Zed's depth data to determine the relative position of each key from the end effector camera's view.
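One way to sketch that combination: back-project the key centre found by text recognition in the end effector image, using a depth estimate borrowed from the Zed. This assumes a simple pinhole camera model; the intrinsics (fx, fy, cx, cy) are placeholders for whatever calibration the end effector camera actually has:

```python
def pixel_to_camera_point(u_px: float, v_px: float, depth_m: float,
                          fx: float, fy: float, cx: float, cy: float):
    """Back-project a pixel at a known depth into the camera frame.

    (u_px, v_px): key centre in image coordinates from text recognition.
    depth_m: distance to the keyboard plane, estimated from the Zed.
    fx, fy, cx, cy: assumed pinhole intrinsics of the end effector camera.
    """
    x = (u_px - cx) * depth_m / fx
    y = (v_px - cy) * depth_m / fy
    return (x, y, depth_m)
```

The resulting camera-frame point would still need transforming into the arm's base frame, and the single-depth assumption degrades if the keyboard is angled relative to the camera.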
