Synchronized global shutter cameras

  • No rolling-shutter modeling is required.
  • Images from different cameras can be used directly to estimate a single pose.

Accurate calibration

  • Generic camera model.
  • Undistorted images provided for convenience.

Rich visualizations

  • Interactive cumulative result graphs for comparing methods.
  • 3D trajectory display of all individual results.

The SLAM benchmark consists of videos recorded with a custom camera rig; it can be used to evaluate monocular visual-inertial, stereo, and RGB-D SLAM.
56 training and 35 test datasets were recorded in a motion capture system, which provides ground-truth poses.
5 additional training sequences were recorded outside this system; their ground truth was determined using Structure-from-Motion.

For a detailed description of the data and the format in which it is provided, see the Documentation page.
To download the datasets, go to the Datasets page.
The benchmark results are displayed on the Benchmark page.
For open-source code associated with the BAD SLAM paper, see the ETH3D project on GitHub.