This table lists the benchmark results for the high-res multi-view scenario. The following metrics are evaluated:

(*) For exact definitions, which detail how potentially incomplete ground truth is taken into account, see our paper.

The datasets are grouped into categories. A result average is computed for a given category and method only if results of that method are available for all datasets within that category. Note that the category "all" includes both the high-res multi-view and the low-res many-view scenarios.
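For illustration only, here is a minimal Python sketch of this averaging rule; the function and variable names (category_average, results, training, colmap) are hypothetical and not part of the benchmark's actual evaluation code:

    def category_average(results, category_datasets):
        """Average a method's scores over a category's datasets, or return
        None if the method is missing a result for any dataset in it."""
        if any(dataset not in results for dataset in category_datasets):
            return None  # incomplete coverage: no average is shown
        return sum(results[d] for d in category_datasets) / len(category_datasets)

    # Hypothetical usage with the training datasets listed in the table below:
    training = ["elect.", "facade", "kicker", "meadow", "office", "relie.", "terra."]
    colmap = {"elect.": 75.29, "facade": 62.95, "kicker": 63.62, "meadow": 49.96,
              "office": 47.32, "relie.": 75.50, "terra.": 75.33}
    print(category_average(colmap, training))             # average over all seven datasets
    print(category_average({"facade": 62.95}, training))  # None: results incomplete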

Click a dataset result cell to show a visualization of the reconstruction. For training datasets, ground truth and accuracy / completeness visualizations are also available. The visualizations may not work with mobile browsers.

Since we plan to add additional datasets soon (~ end of September), which will likely change the ranking, the average scores are currently still hidden.




Method   License      elect.      facade      kicker      meadow      office      relie.      terra.
COLMAP   copyleft     75.29 (1)   62.95 (1)   63.62 (1)   49.96 (2)   47.32 (1)   75.50 (1)   75.33 (2)
CMPMVS   binary       62.97 (2)   51.84 (3)   51.98 (2)   56.71 (1)   42.87 (2)   70.37 (2)   75.91 (1)
PMVS     copyleft     41.88 (3)   53.50 (2)   33.33 (3)   35.46 (3)   28.90 (3)   54.63 (3)   57.88 (3)
MVE      permissive   10.52 (4)   34.14 (4)   14.79 (4)    6.53 (4)    8.79 (4)   41.63 (4)   38.11 (4)

The number in parentheses after each score is the method's rank on that dataset.

COLMAP: Johannes L. Schönberger, Enliang Zheng, Marc Pollefeys, Jan-Michael Frahm: Pixelwise View Selection for Unstructured Multi-View Stereo. ECCV 2016
CMPMVS: M. Jancosek, T. Pajdla: Multi-View Reconstruction Preserving Weakly-Supported Surfaces. CVPR 2011
PMVS: Y. Furukawa, J. Ponce: Accurate, dense, and robust multiview stereopsis. PAMI 2010
MVE: Simon Fuhrmann, Fabian Langguth, Michael Goesele: MVE - A Multi-View Reconstruction Environment. EUROGRAPHICS Workshops on Graphics and Cultural Heritage 2014