This table lists the benchmark results for the low-res many-view scenario. The evaluated metrics are accuracy, completeness, and the F1 score that combines them (*).

(*) For exact definitions, including how potentially incomplete ground truth is taken into account, see our paper.
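As a rough illustration only (not the benchmark's reference implementation): tolerance-based accuracy, completeness, and F1 for point clouds are commonly computed along the lines of the sketch below. The function name is hypothetical, and the sketch deliberately omits the paper's handling of incomplete ground truth and the choice of evaluation tolerance.

```python
import numpy as np
from scipy.spatial import cKDTree

def accuracy_completeness_f1(rec_pts, gt_pts, tol):
    """Tolerance-based accuracy, completeness, and F1 for two point clouds.

    rec_pts: (N, 3) array of reconstructed 3D points.
    gt_pts:  (M, 3) array of ground-truth 3D points.
    tol:     distance tolerance in the same unit as the points.
    """
    # Accuracy: fraction of reconstructed points lying within `tol`
    # of some ground-truth point.
    dist_to_gt, _ = cKDTree(gt_pts).query(rec_pts)
    accuracy = float(np.mean(dist_to_gt <= tol))

    # Completeness: fraction of ground-truth points lying within `tol`
    # of some reconstructed point.
    dist_to_rec, _ = cKDTree(rec_pts).query(gt_pts)
    completeness = float(np.mean(dist_to_rec <= tol))

    # F1: harmonic mean of accuracy and completeness.
    if accuracy + completeness == 0.0:
        return accuracy, completeness, 0.0
    f1 = 2.0 * accuracy * completeness / (accuracy + completeness)
    return accuracy, completeness, f1
```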

The datasets are grouped into categories. For a given method, a category average is computed only if the method has results for all datasets within that category. Note that the category "all" includes both the high-res multi-view and the low-res many-view scenarios.
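To make the averaging rule concrete, here is a minimal sketch (the helper name and data layout are assumptions for illustration, not the benchmark's code):

```python
def category_average(method_scores, category_datasets):
    """Average one method's per-dataset scores over a category.

    Per the rule above, the average is only defined when the method
    has a result for every dataset in the category.

    method_scores:     dict mapping dataset name -> score.
    category_datasets: names of the datasets forming the category.
    """
    if not all(ds in method_scores for ds in category_datasets):
        return None  # incomplete coverage: no category average
    return sum(method_scores[ds] for ds in category_datasets) / len(category_datasets)

# Example with LTVRE's indoor datasets from the table below:
# (43.91 + 47.01) / 2 = 45.46, matching its "indoor" entry.
indoor = ["storage room", "storage room 2"]
print(category_average({"storage room": 43.91, "storage room 2": 47.01}, indoor))
# -> 45.46 (up to floating-point rounding)
```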

Click a dataset result cell to show a visualization of the reconstruction. For training datasets, ground truth and accuracy / completeness visualizations are also available. The visualizations may not work with mobile browsers.


Each cell shows the method's score for the given column, with the method's rank in parentheses (rank 1 is the best score in a column); rows are sorted by the "low-res many-view" column. Method references are listed below the table.

| Method | all | low-res many-view | indoor | outdoor | lakeside | sand box | storage room | storage room 2 | tunnel |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| LTVRE [1] | 69.57 (1) | 53.52 (1) | 45.46 (1) | 58.89 (1) | 58.76 (1) | 60.60 (2) | 43.91 (1) | 47.01 (1) | 57.32 (2) |
| COLMAP (copyleft) [2] | 66.92 (2) | 52.32 (2) | 42.45 (2) | 58.89 (1) | 56.18 (2) | 61.09 (1) | 38.61 (2) | 46.28 (2) | 59.41 (1) |
| PMVS (copyleft) [3] | 37.38 (4) | 21.09 (3) | 11.49 (4) | 27.48 (3) | 24.09 (3) | 44.44 (3) | 15.98 (3) | 7.01 (4) | 13.92 (4) |
| MVE (permissive) [4] | 26.22 (5) | 16.26 (4) | 16.97 (3) | 15.79 (4) | 11.75 (4) | 19.40 (5) | 14.45 (4) | 19.48 (3) | 16.21 (3) |
| CMPMVS (binary) [5] | 51.72 (3) | 7.38 (5) | 0.03 (5) | 12.27 (5) | 2.37 (5) | 34.46 (4) | 0.06 (5) | 0.00 (5) | 0.00 (5) |

[1] Andreas Kuhn, Heiko Hirschmüller, Daniel Scharstein, Helmut Mayer: A TV Prior for High-Quality Scalable Multi-View Stereo Reconstruction. International Journal of Computer Vision, 2016.
[2] Johannes L. Schönberger, Enliang Zheng, Marc Pollefeys, Jan-Michael Frahm: Pixelwise View Selection for Unstructured Multi-View Stereo. ECCV, 2016.
[3] Y. Furukawa, J. Ponce: Accurate, Dense, and Robust Multiview Stereopsis. PAMI, 2010.
[4] Simon Fuhrmann, Fabian Langguth, Michael Goesele: MVE - A Multi-View Reconstruction Environment. EUROGRAPHICS Workshops on Graphics and Cultural Heritage, 2014.
[5] M. Jancosek, T. Pajdla: Multi-View Reconstruction Preserving Weakly-Supported Surfaces. CVPR, 2011.