Capture rig for real-world variation

Our rigs combine synchronized RGB, depth, LiDAR, and IMU streams to keep alignment tight even in reflective or low-light spaces. Each scene includes calibration data so downstream users can reproduce camera intrinsics and extrinsics.
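
As a concrete illustration, here is a minimal sketch of how that calibration might be consumed downstream to project a LiDAR point into an RGB frame. The JSON field names (fx, fy, cx, cy, extrinsics) are assumptions about the export layout, not the actual schema.

```python
# Sketch: load per-scene calibration and project a 3D world point into pixels.
# Field names are illustrative assumptions, not the shipped export format.
import json
import numpy as np

def load_calibration(path):
    """Load intrinsics (3x3 K) and extrinsics (4x4 world-to-camera) from JSON."""
    with open(path) as f:
        calib = json.load(f)
    K = np.array([[calib["fx"], 0.0, calib["cx"]],
                  [0.0, calib["fy"], calib["cy"]],
                  [0.0, 0.0, 1.0]])
    T_world_cam = np.array(calib["extrinsics"]).reshape(4, 4)
    return K, T_world_cam

def project_point(point_world, K, T_world_cam):
    """Project a 3D point (meters, world frame) to pixel coordinates."""
    p = T_world_cam @ np.append(point_world, 1.0)   # world -> camera frame
    uv = K @ (p[:3] / p[2])                          # perspective divide, then intrinsics
    return uv[:2]
```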

Rig snapshot
  • Timecode-synced sensors with per-frame calibration.
  • Rolling capture logs with environment notes.
  • Privacy-safe workflows with on-device redaction.

Reconstruction with physics-aware materials

We fuse multi-view geometry with material priors to keep roughness, albedo, and normal maps consistent. This helps embodied agents reason about friction, contact patches, and affordances.
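
To show how that helps an agent in practice, here is a small sketch of material tags feeding a contact reasoning step. The MaterialTag schema and the roughness-to-friction heuristic are illustrative assumptions, not the delivered format.

```python
# Sketch: consume per-surface material tags for a rough friction estimate.
# Schema and heuristic are assumptions for illustration only.
from dataclasses import dataclass

@dataclass
class MaterialTag:
    name: str                      # e.g. "polished_concrete"
    albedo: tuple                  # linear RGB, values in [0, 1]
    roughness: float               # 0 (mirror-like) to 1 (fully diffuse)

def estimated_friction(tag: MaterialTag, base: float = 0.3, scale: float = 0.6) -> float:
    """Crude heuristic: rougher surfaces get a higher friction coefficient."""
    return base + scale * tag.roughness

floor = MaterialTag("polished_concrete", (0.55, 0.55, 0.55), roughness=0.25)
print(f"estimated mu ~= {estimated_friction(floor):.2f}")
```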

“For manipulation tasks, material accuracy often matters more than sheer texture resolution.”

Quality checks that scale

Automated tests catch depth dropouts, exposure mismatches, and drift before reconstruction. Human reviewers only handle edge cases, keeping throughput high.

Automated

Histogram checks, drift detection, semantic completeness, and motion blur scoring.
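
As an illustration of two of these checks, the sketch below scores motion blur with variance of the Laplacian and measures depth dropout as the fraction of invalid returns. It assumes grayscale frames and float depth in meters; the thresholds are placeholders, not production values.

```python
# Sketch: two automated frame checks with placeholder thresholds.
import numpy as np
import cv2

def blur_score(gray_frame: np.ndarray) -> float:
    """Variance of the Laplacian: higher means sharper, low values flag blur."""
    return float(cv2.Laplacian(gray_frame, cv2.CV_64F).var())

def depth_dropout_ratio(depth_frame: np.ndarray) -> float:
    """Fraction of pixels with no valid depth return (zeros or NaNs, float meters)."""
    invalid = np.isnan(depth_frame) | (depth_frame <= 0)
    return float(invalid.mean())

def frame_passes(gray, depth, blur_min=100.0, dropout_max=0.05) -> bool:
    """Gate a frame before it enters reconstruction."""
    return blur_score(gray) >= blur_min and depth_dropout_ratio(depth) <= dropout_max
```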

Human-in-the-loop

Rapid visual sweeps for occlusions, reflective failures, and unusual layouts.

Delivery formats

Every scene is available as watertight meshes, NeRF exports, and trajectory bundles for imitation learning. Metadata travels alongside: unit scales, coordinate frames, material tags, and scene graphs.
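
A sketch of how such a per-scene manifest could look once loaded is shown below; the field names and paths are illustrative assumptions, not the guaranteed delivery schema.

```python
# Sketch: per-scene manifest with illustrative field names and paths.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class SceneManifest:
    scene_id: str
    unit_scale: float                 # meters per mesh unit
    coordinate_frame: str             # e.g. "right_handed_z_up"
    mesh_path: str                    # watertight mesh export
    nerf_checkpoint: str              # NeRF export
    trajectories: List[str]           # trajectory bundles for imitation learning
    material_tags: Dict[str, str] = field(default_factory=dict)  # scene node -> material

manifest = SceneManifest(
    scene_id="kitchen_042",
    unit_scale=1.0,
    coordinate_frame="right_handed_z_up",
    mesh_path="kitchen_042/mesh.ply",
    nerf_checkpoint="kitchen_042/nerf.ckpt",
    trajectories=["kitchen_042/traj_000.json"],
)
```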

Need a specific environment?

Tell us what you need, whether that is specific materials, lighting, actors, or edge cases, and we can schedule a targeted capture run.

Contact the team
Demo: capture-to-reconstruction pipeline in motion.
Captured environment preview: sample environment with semantic labels and material properties.