2019-06-16: Added the SLAM Benchmark

The SLAM benchmark presented in our new paper, "BAD SLAM: Bundle Adjusted Direct RGB-D SLAM" (T. Schöps, T. Sattler, M. Pollefeys), has been added to the website.

2018-04-16: Added pre-rendered depth maps for training datasets for convenience

We now provide archives ending in _depth.7z for all multi-view training datasets; they contain depth maps rendered from the ground truth point clouds for all dataset images. The depth maps correspond to the original (distorted) versions of the images and account for the occlusion mesh and the image masks. The format is described at the end of the training data section of the documentation for multi-view data.
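As a rough illustration only, the sketch below reads one of these rendered depth maps, assuming the file is a raw stream of little-endian 32-bit floats in row-major order with non-finite or zero values marking pixels without ground truth; the function name, file path, and image resolution are made up for this example, and the documentation linked above remains the authoritative description of the format.

```python
import numpy as np

def load_rendered_depth_map(path, width, height):
    """Load a rendered ground truth depth map, assumed to be stored as raw
    little-endian 32-bit floats in row-major order (one value per pixel)."""
    depth = np.fromfile(path, dtype="<f4")
    if depth.size != width * height:
        raise ValueError("file size does not match the given image resolution")
    depth = depth.reshape(height, width)
    # Pixels without ground truth are assumed to carry non-finite or zero values.
    valid = np.isfinite(depth) & (depth > 0)
    return depth, valid

# Hypothetical usage; the actual path and resolution depend on the chosen dataset.
# depth, valid = load_rendered_depth_map("some_dataset/depth/some_image", 6048, 4032)
```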

2018-02-05: Open source release of the dataset pipeline

The dataset processing pipeline, which we used to prepare the benchmark datasets, is now available as open source under the BSD license on GitHub. The evaluation tools and sample code, which were previously available separately, can also be found there in the ETH3D project.

2017-10-04: Extension with more data

We added 12 high-res multi-view datasets, 5 low-res many-view datasets, and 20 two-view stereo datasets. These datasets form the new test set, while all previous test set datasets have been converted to training datasets. This way, all datasets mentioned in our paper are now available with complete ground truth, so that all figures from the paper can be reproduced.

In addition, the occlusion data which we used to render depth maps from the laser scans is now also available for the multi-view training datasets. It is described in the corresponding section of the documentation.

Furthermore, for the newly added multi-camera rig videos, we also provide the measurements of the two IMUs on the cameras, which may be useful for visual-inertial odometry experiments. This data is described in the documentation.

For the multi-camera rig training videos, we now also provide rectified two-view stereo frames with ground truth disparity for each image pair in the videos, in the same format as the individual two-view stereo frames. The ground truth quality may be slightly lower for the videos since no additional image masking was done; nevertheless, the large amount of data may be helpful for training two-view stereo algorithms. This data is described in the documentation.
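As an informal sketch of how such a ground truth disparity map might be loaded, the snippet below reads a PFM file, assuming the rig-video pairs use the same PFM disparity format as the individual two-view stereo frames; the file name in the usage line is hypothetical, and the documentation remains the authoritative reference for the actual file layout.

```python
import re
import numpy as np

def read_pfm(path):
    """Read a PFM file (grayscale 'Pf' or color 'PF') into a numpy array."""
    with open(path, "rb") as f:
        header = f.readline().decode("ascii").strip()
        if header not in ("Pf", "PF"):
            raise ValueError("not a PFM file: " + path)
        channels = 3 if header == "PF" else 1
        width, height = map(int, re.split(r"\s+", f.readline().decode("ascii").strip()))
        scale = float(f.readline().decode("ascii").strip())
        # A negative scale factor indicates little-endian data, positive means big-endian.
        dtype = "<f4" if scale < 0 else ">f4"
        data = np.fromfile(f, dtype=dtype, count=width * height * channels)
    if channels == 3:
        data = data.reshape(height, width, channels)
    else:
        data = data.reshape(height, width)
    # PFM stores rows bottom-up, so flip vertically to obtain the usual top-down order.
    return np.flipud(data)

# Hypothetical usage; the actual file name and directory structure are given in the documentation.
# disparity = read_pfm("some_rig_video_pair/disp0GT.pfm")
```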

We also removed a few stereo pairs with questionable calibration from the benchmark. If you notice any remaining issues, please let us know.

2017-07-19: Initial release of the dataset