This table lists the benchmark results for the low-res many-view scenario. For the exact definitions of the evaluated metrics, detailing how potentially incomplete ground truth is taken into account, see our paper.

The datasets are grouped into different categories, and result averages are computed for a category and method if results of the method are available for all datasets within the category. Note that the category "all" includes both the high-res multi-view and the low-res many-view scenarios.
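The averaging rule above can be sketched as follows. This is a hypothetical illustration, not the benchmark's actual code; the function name, data layout, and the example scores for the missing-result case are assumptions made for the sketch.

```python
def category_average(results, category_datasets, method):
    """Return the mean score of `method` over `category_datasets`,
    or None if any dataset result is missing (no average is shown)."""
    scores = [results.get((method, dataset)) for dataset in category_datasets]
    if any(score is None for score in scores):
        return None  # incomplete coverage: average is withheld
    return sum(scores) / len(scores)

# Example: COLMAP has results for all three datasets, so it gets an
# average; the second method is missing one dataset, so it gets None.
results = {
    ("COLMAP", "elect."): 52.31,
    ("COLMAP", "forest"): 61.83,
    ("COLMAP", "terra."): 66.23,
    ("PMVS", "elect."): 11.38,  # "terra." result deliberately omitted
    ("PMVS", "forest"): 26.05,
}
datasets = ["elect.", "forest", "terra."]
print(category_average(results, datasets, "COLMAP"))  # mean of the three scores
print(category_average(results, datasets, "PMVS"))    # None
```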

Click a dataset result cell to show a visualization of the reconstruction. For training datasets, ground truth and accuracy / completeness visualizations are also available. The visualizations may not work with mobile browsers.

Because we plan to add further datasets soon (~ end of September), which will likely change the ranking, the average scores are currently hidden.

Scores are listed per dataset, with each method's rank in parentheses.

Method   Info       elect.      forest      terra.
--------------------------------------------------
COLMAP   copyleft   52.31 (1)   61.83 (1)   66.23 (1)
         Johannes L. Schönberger, Enliang Zheng, Marc Pollefeys, Jan-Michael Frahm: Pixelwise View Selection for Unstructured Multi-View Stereo. ECCV 2016
PMVS     copyleft   11.38 (2)   26.05 (2)   59.80 (2)
         Y. Furukawa, J. Ponce: Accurate, Dense, and Robust Multiview Stereopsis. PAMI 2010
CMPMVS   binary      0.00 (3)    0.20 (3)   47.47 (3)
         M. Jancosek, T. Pajdla: Multi-View Reconstruction Preserving Weakly-Supported Surfaces. CVPR 2011