Description
When composing LENS_POSE_TRANSLATION with the OpenXR head/view poses, there's a consistent ~1 inch (~2.5 cm) translation error in the resulting camera world position. Intrinsics and orientation appear correct; only the translation is off, and the offset is the same every time.
I've verified LENS_INTRINSIC_CALIBRATION (projected grid geometry matches, no distortion or scaling artifacts) and LENS_POSE_ROTATION. Reproduced across Unity (OVR head node), Unreal (MRUK + OpenXR), and StereoKit (OpenXR). OpenCV's solvePnP with sub-pixel checkerboard corners confirms that at the solved depth, projected points align perfectly with detected corners, so I don't believe intrinsics or orientation are the problem.
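For reference, the verification I ran amounts to the following (a minimal sketch with synthetic data standing in for the detected checkerboard corners; the real check uses `cv2.solvePnP` on sub-pixel corners, but the projection math is the same, and all numeric values below are made up for illustration):

```python
import numpy as np

# Pinhole intrinsics in the LENS_INTRINSIC_CALIBRATION layout
# [fx, fy, cx, cy] -- placeholder values, not real Quest 3 calibration.
fx, fy, cx, cy = 640.0, 640.0, 320.0, 240.0
K = np.array([[fx, 0.0, cx],
              [0.0, fy, cy],
              [0.0, 0.0, 1.0]])

# A 3x3 checkerboard grid, 30 mm squares, 1 m in front of the camera.
xs, ys = np.meshgrid(np.arange(3) * 0.03, np.arange(3) * 0.03)
board = np.stack([xs.ravel(), ys.ravel(), np.full(9, 1.0)], axis=1)  # (9, 3)

def project(points, K, R, t):
    """Project 3D points through extrinsics [R|t] and intrinsics K."""
    cam = points @ R.T + t          # world -> camera frame
    uv = cam @ K.T                  # apply intrinsics
    return uv[:, :2] / uv[:, 2:3]   # perspective divide

R = np.eye(3)               # solvePnP's recovered rotation (identity here)
t = np.zeros(3)             # and translation
detected = project(board, K, R, t)  # stand-in for sub-pixel corner detections

# At the solved pose, reprojection error is ~0 px, which rules out the
# intrinsics and the rotation as the source of the world-space offset.
err = np.linalg.norm(project(board, K, R, t) - detected, axis=1).max()

# Sanity check: a 1-inch translation error in the CAMERA frame would be
# clearly visible (~16 px at 1 m with fx = 640), so solvePnP would have
# caught it. The offset must therefore enter when composing with the
# head pose, i.e. a reference-frame mismatch.
offset = np.array([0.0254, 0.0, 0.0])  # 1 inch along camera x
err_offset = np.linalg.norm(project(board, K, R, t + offset) - detected,
                            axis=1).max()
```

The point of the second check is that the camera-frame pose itself is fine; the error can only come from the frame the pose is composed into.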
I think the root cause is that LENS_POSE_REFERENCE on Quest 3 returns GYROSCOPE, making LENS_POSE_TRANSLATION relative to the IMU. But the OpenXR head/view poses are relative to the head-tracking origin (between-eyes midpoint or eye-center), so the two poses are expressed in different frames, and the IMU-to-head-origin distance could plausibly account for a constant ~1 inch offset.
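If that's right, the fix would be a single extra rigid transform in the composition chain. A sketch of what I believe the correct composition looks like (all numeric values are placeholders, and `T_head_from_imu` is exactly the unknown this issue asks about):

```python
import numpy as np

def rigid(R, t):
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# From the XR runtime (placeholder values): OpenXR head pose in world space.
T_world_from_head = rigid(np.eye(3), np.array([0.0, 1.6, 0.0]))

# From Camera2 (placeholder values): lens pose relative to LENS_POSE_REFERENCE,
# which on Quest 3 reports GYROSCOPE, i.e. the IMU frame.
T_imu_from_cam = rigid(np.eye(3), np.array([0.03, 0.0, -0.01]))

# What I currently do -- implicitly treats the lens pose as head-relative:
T_world_from_cam_naive = T_world_from_head @ T_imu_from_cam

# What I suspect is needed: a head-origin -> IMU transform in between.
# This value is HYPOTHETICAL (~1 inch); its real value is what I'm asking for.
T_head_from_imu = rigid(np.eye(3), np.array([0.0, 0.0254, 0.0]))

T_world_from_cam = T_world_from_head @ T_head_from_imu @ T_imu_from_cam

# The two compositions differ by exactly that offset, rotated into world
# space by the head orientation -- matching the constant-offset symptom.
delta = T_world_from_cam[:3, 3] - T_world_from_cam_naive[:3, 3]
```

With an identity head orientation, `delta` is just the hypothesized IMU offset; under head rotation it stays constant in head space, which is consistent with the error I observe.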
I also noticed the native Camera2 docs mention that "examples on calculating position and rotation will follow soon." Is there a timeline for this? Or could you confirm whether an additional step is required, e.g. applying an IMU-to-head-origin offset? Thanks!