This table lists the benchmark results for the low-res many-view scenario. For exact definitions of the evaluated metrics, detailing how potentially incomplete ground truth is taken into account, see our paper.

The datasets are grouped into different categories, and a result average is computed for a category and method only if results of the method are available for all datasets within that category. Note that the category "all" includes both the high-res multi-view and the low-res many-view scenarios.
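As a rough illustration of this aggregation rule, the sketch below (Python, not the benchmark's actual evaluation code) computes a category average only when a score is available for every dataset of the category. The CATEGORIES mapping and the scores dictionary are illustrative; the indoor/outdoor grouping shown is the one implied by the low-res table below.

```python
# Minimal sketch of the category-averaging rule described above.
# The category/dataset grouping and the example scores are illustrative only.
CATEGORIES = {
    "low-res many-view": ["lakeside", "sand box", "storage room", "storage room 2", "tunnel"],
    "indoor": ["storage room", "storage room 2"],
    "outdoor": ["lakeside", "sand box", "tunnel"],
}

def category_average(results, category):
    """Return the mean score over a category, or None if any dataset result is missing."""
    datasets = CATEGORIES[category]
    if any(d not in results for d in datasets):
        return None  # incomplete categories get no average in the table
    return sum(results[d] for d in datasets) / len(datasets)

# Example with the per-dataset scores of the top entry in the table below:
scores = {"lakeside": 98.24, "sand box": 97.85, "storage room": 92.74,
          "storage room 2": 97.86, "tunnel": 97.13}
print(category_average(scores, "low-res many-view"))  # ~96.76 (the table lists 96.77,
                                                      # presumably averaged before rounding)
print(category_average({"tunnel": 97.13}, "indoor"))  # None: not all indoor datasets present
```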

Methods with suffix _ROB may participate in the Robust Vision Challenge.

Click a dataset result cell to show a visualization of the reconstruction. For training datasets, ground truth and accuracy / completeness visualizations are also available. The visualizations may not work with mobile browsers.




Each cell shows the result followed by the method's rank for that column in parentheses; an empty cell means that no result is available for that method in that column. Rows are sorted by the low-res many-view column.

Method Info | all | low-res many-view | indoor | outdoor | lakeside | sand box | storage room | storage room 2 | tunnel
DeepC-MVS_fast | 98.69 (1) | 96.77 (1) | 95.30 (1) | 97.74 (1) | 98.24 (2) | 97.85 (3) | 92.74 (2) | 97.86 (1) | 97.13 (1)
Andreas Kuhn, Christian Sormann, Mattia Rossi, Oliver Erdler, Friedrich Fraundorfer: DeepC-MVS: Deep Confidence Prediction for Multi-View Stereo Reconstruction. 3DV 2020
DeepPCF-MVS | 98.18 (3) | 96.68 (2) | 95.10 (2) | 97.74 (1) | 98.30 (1) | 97.78 (5) | 92.41 (3) | 97.79 (2) | 97.12 (2)
DeepC-MVS | 98.35 (2) | 96.09 (3) | 94.20 (11) | 97.34 (3) | 97.94 (4) | 97.39 (11) | 90.85 (14) | 97.56 (3) | 96.71 (4)
Andreas Kuhn, Christian Sormann, Mattia Rossi, Oliver Erdler, Friedrich Fraundorfer: DeepC-MVS: Deep Confidence Prediction for Multi-View Stereo Reconstruction. 3DV 2020
COLMAP(SR) | | 96.07 (4) | 94.39 (8) | 97.19 (5) | 96.86 (8) | 97.92 (2) | 91.54 (10) | 97.24 (8) | 96.79 (3)
TAPA-MVS(SR) | | 95.82 (5) | 94.80 (3) | 96.49 (9) | 96.19 (12) | 97.58 (10) | 92.04 (4) | 97.56 (3) | 95.71 (12)
PCF-MVS | 97.08 (14) | 95.68 (6) | 93.36 (15) | 97.23 (4) | 98.16 (3) | 96.93 (15) | 91.43 (11) | 95.29 (21) | 96.61 (5)
Andreas Kuhn, Shan Lin, Oliver Erdler: Plane Completion and Filtering for Multi-View Stereo Reconstruction. GCPR 2019
ACMM | 97.58 (5) | 95.68 (6) | 94.49 (6) | 96.48 (10) | 96.12 (13) | 97.20 (14) | 92.04 (4) | 96.94 (11) | 96.10 (9)
Qingshan Xu and Wenbing Tao: Multi-Scale Geometric Consistency Guided Multi-View Stereo. CVPR 2019
ACMH+ | 97.24 (10) | 95.61 (8) | 93.77 (14) | 96.83 (6) | 96.69 (10) | 97.71 (7) | 90.70 (15) | 96.85 (12) | 96.09 (10)
TAPA-MVS | 97.07 (15) | 95.48 (9) | 94.55 (4) | 96.10 (12) | 95.68 (16) | 97.30 (12) | 91.73 (8) | 97.37 (6) | 95.32 (15)
Andrea Romanoni, Matteo Matteucci: TAPA-MVS: Textureless-Aware PAtchMatch Multi-View Stereo. ICCV 2019
COLMAP(base) | | 95.42 (10) | 94.03 (13) | 96.35 (11) | 95.96 (14) | 96.92 (16) | 91.10 (13) | 96.97 (9) | 96.16 (8)
PLC (copyleft) | 97.94 (4) | 95.35 (11) | 94.28 (10) | 96.06 (13) | 95.26 (20) | 96.59 (18) | 91.61 (9) | 96.96 (10) | 96.34 (7)
Jie Liao, Yanping Fu, Qingan Yan, Chunxia Xiao: Pyramid Multi-View Stereo with Local Consistency. Pacific Graphics 2019
ACMH | 97.28 (8) | 95.29 (12) | 93.25 (16) | 96.65 (7) | 96.32 (11) | 97.93 (1) | 89.82 (18) | 96.68 (15) | 95.69 (13)
Qingshan Xu and Wenbing Tao: Multi-Scale Geometric Consistency Guided Multi-View Stereo. CVPR 2019
ACMP | 97.20 (11) | 95.11 (13) | 93.01 (20) | 96.52 (8) | 95.68 (16) | 97.26 (13) | 89.74 (19) | 96.27 (17) | 96.61 (5)
Qingshan Xu and Wenbing Tao: Planar Prior Assisted PatchMatch Multi-View Stereo. AAAI 2020
LTVRE_ROB | 97.16 (12) | 95.10 (14) | 94.13 (12) | 95.75 (14) | 95.69 (15) | 95.73 (22) | 92.87 (1) | 95.39 (20) | 95.83 (11)
Andreas Kuhn, Heiko Hirschmüller, Daniel Scharstein, Helmut Mayer: A TV Prior for High-Quality Scalable Multi-View Stereo Reconstruction. International Journal of Computer Vision 2016
GSE | | 95.00 (15) | 94.33 (9) | 95.45 (17) | 94.81 (21) | 96.53 (20) | 91.98 (6) | 96.68 (15) | 95.02 (17)
A-TVSNet + Gipuma (copyleft) | | 94.96 (16) | 94.54 (5) | 95.24 (19) | 95.31 (19) | 96.58 (19) | 91.82 (7) | 97.27 (7) | 93.83 (18)
BP-MVSNet | | 94.57 (17) | 93.20 (17) | 95.49 (15) | 96.77 (9) | 94.34 (30) | 89.67 (20) | 96.74 (13) | 95.35 (14)
Christian Sormann, Patrick Knöbelreiter, Andreas Kuhn, Mattia Rossi, Thomas Pock, Friedrich Fraundorfer: BP-MVSNet: Belief-Propagation-Layers for Multi-View-Stereo. 3DV 2020
COLMAP_ROB (copyleft) | 97.56 (6) | 94.43 (18) | 93.09 (18) | 95.33 (18) | 94.55 (22) | 96.37 (21) | 89.45 (21) | 96.72 (14) | 95.07 (16)
Johannes L. Schönberger, Enliang Zheng, Marc Pollefeys, Jan-Michael Frahm: Pixelwise View Selection for Unstructured Multi-View Stereo. ECCV 2016
IB-MVS | 95.67 (18) | 94.34 (19) | 94.41 (7) | 94.29 (22) | 95.45 (18) | 97.77 (6) | 91.37 (12) | 97.44 (5) | 89.67 (27)
Christian Sormann, Mattia Rossi, Andreas Kuhn, Friedrich Fraundorfer: IB-MVS: An Iterative Algorithm for Deep Multi-View Stereo based on Binary Decisions. BMVC 2021
HY-MVS | 97.30 (7) | 93.59 (20) | 93.02 (19) | 93.97 (23) | 92.85 (27) | 96.87 (17) | 90.03 (17) | 96.00 (18) | 92.19 (20)
3Dnovator | 97.25 (9) | 92.47 (21) | 87.97 (29) | 95.47 (16) | 97.93 (5) | 97.85 (3) | 86.02 (25) | 89.92 (29) | 90.64 (23)
3Dnovator+ | 97.12 (13) | 92.17 (22) | 87.57 (30) | 95.24 (19) | 97.49 (6) | 97.68 (8) | 85.24 (28) | 89.91 (30) | 90.54 (24)
LPCS | | 92.07 (23) | 92.93 (21) | 91.49 (28) | 90.32 (32) | 93.29 (32) | 90.66 (16) | 95.20 (22) | 90.87 (22)
OpenMVS (copyleft) | 96.50 (16) | 91.77 (24) | 86.65 (32) | 95.19 (21) | 97.38 (7) | 97.66 (9) | 84.60 (31) | 88.69 (34) | 90.53 (25)
R-MVSNet | | 90.93 (25) | 89.80 (23) | 91.68 (27) | 90.81 (31) | 94.15 (31) | 87.67 (22) | 91.93 (27) | 90.07 (26)
PVSNet_0 | 94.43 (19) | 90.46 (26) | 85.77 (33) | 93.59 (24) | 93.82 (24) | 95.33 (25) | 84.62 (30) | 86.92 (36) | 91.61 (21)
test_1126 | | 89.66 (27) | 88.06 (28) | 90.73 (30) | 93.40 (25) | 95.29 (26) | 84.99 (29) | 91.13 (28) | 83.50 (35)
test_1205 | | 89.53 (28) | 89.08 (26) | 89.83 (33) | 92.63 (28) | 95.71 (23) | 85.34 (27) | 92.82 (26) | 81.15 (43)
CPR_FA | | 89.35 (29) | 88.37 (27) | 90.00 (32) | 89.38 (34) | 87.69 (42) | 87.29 (24) | 89.46 (32) | 92.93 (19)
PVSNet | 96.02 (17) | 89.22 (30) | 84.53 (37) | 92.35 (25) | 93.29 (26) | 94.60 (29) | 85.85 (26) | 83.20 (40) | 89.17 (28)
CIDER | | 88.55 (31) | 83.13 (39) | 92.16 (26) | 93.95 (23) | 95.61 (24) | 83.79 (34) | 82.47 (43) | 86.92 (30)
Qingshan Xu and Wenbing Tao: Learning Inverse Depth Regression for Multi-View Stereo with Correlation Cost Volume. AAAI 2020
ANet-0.75 | | 87.89 (32) | 89.58 (24) | 86.76 (34) | 84.37 (47) | 90.29 (36) | 84.51 (32) | 94.65 (23) | 85.61 (33)
OpenMVS_ROB (copyleft) | 92.34 (20) | 87.46 (33) | 81.75 (40) | 91.27 (29) | 92.55 (29) | 95.21 (27) | 81.21 (38) | 82.29 (45) | 86.05 (31)
P-MVSNet | | 85.35 (34) | 91.57 (22) | 81.21 (45) | 78.76 (55) | 82.87 (50) | 87.56 (23) | 95.58 (19) | 82.00 (40)
ANet | | 85.16 (35) | 85.20 (35) | 85.14 (38) | 84.37 (47) | 90.29 (36) | 82.00 (36) | 88.39 (35) | 80.78 (45)
unMVSv1 | | 83.89 (36) | 84.37 (38) | 83.58 (41) | 84.00 (51) | 85.63 (44) | 83.17 (35) | 85.56 (37) | 81.11 (44)
metmvs_fine | | 83.50 (37) | 84.90 (36) | 82.56 (43) | 84.72 (42) | 81.80 (53) | 81.09 (39) | 88.72 (33) | 81.17 (42)
unsupervisedMVS_cas | | 83.12 (38) | 78.75 (41) | 86.04 (36) | 85.85 (37) | 89.90 (38) | 75.64 (43) | 81.86 (47) | 82.36 (38)
A1Net | | 82.90 (39) | 89.39 (25) | 78.57 (54) | 85.19 (39) | 63.18 (68) | 84.36 (33) | 94.42 (24) | 87.34 (29)
mvs_zhu_1030 | | 82.18 (40) | 78.09 (46) | 84.91 (39) | 82.35 (52) | 88.99 (39) | 71.71 (51) | 84.46 (39) | 83.39 (36)
RMVSNet | | 80.62 (41) | 86.71 (31) | 76.57 (56) | 84.14 (49) | 78.09 (55) | 80.22 (40) | 93.20 (25) | 67.48 (66)
Pnet_fast | | 80.38 (42) | 65.52 (67) | 90.29 (31) | 90.03 (33) | 94.91 (28) | 60.54 (66) | 70.51 (69) | 85.93 (32)
AttMVS | | 80.32 (43) | 78.27 (44) | 81.68 (44) | 78.98 (54) | 86.16 (43) | 79.76 (41) | 76.79 (60) | 79.89 (46)
Pnet-new- | | 79.46 (44) | 69.54 (60) | 86.08 (35) | 84.76 (41) | 90.31 (35) | 65.13 (57) | 73.95 (65) | 83.16 (37)
CasMVSNet(SR_A) | | 79.00 (45) | 69.56 (59) | 85.29 (37) | 85.95 (36) | 92.82 (33) | 62.08 (63) | 77.05 (56) | 77.11 (47)
DPSNet | | 78.82 (46) | 76.20 (49) | 80.57 (49) | 84.50 (43) | 82.32 (51) | 75.56 (44) | 76.84 (57) | 74.88 (54)
hgnet | | 78.82 (46) | 76.20 (49) | 80.57 (49) | 84.50 (43) | 82.32 (51) | 75.56 (44) | 76.84 (57) | 74.88 (54)
Pnet-blend | | 78.13 (48) | 73.62 (54) | 81.13 (46) | 84.46 (45) | 88.20 (40) | 66.50 (54) | 80.74 (49) | 70.74 (64)
Pnet-blend++ | | 78.13 (48) | 73.62 (54) | 81.13 (46) | 84.46 (45) | 88.20 (40) | 66.50 (54) | 80.74 (49) | 70.74 (64)
CasMVSNet(base) | | 78.10 (50) | 68.51 (63) | 84.50 (40) | 85.47 (38) | 91.56 (34) | 59.79 (68) | 77.22 (55) | 76.47 (48)
Pnet-eth | | 77.75 (51) | 85.52 (34) | 72.57 (61) | 75.57 (57) | 65.89 (66) | 81.44 (37) | 89.61 (31) | 76.26 (49)
MVSCRF | | 77.21 (52) | 74.57 (52) | 78.98 (53) | 79.29 (53) | 83.87 (48) | 75.01 (46) | 74.12 (64) | 73.78 (59)
MVE (permissive) | 76.82 (21) | 76.63 (53) | 77.52 (47) | 76.03 (58) | 70.40 (68) | 85.49 (46) | 75.68 (42) | 79.36 (51) | 72.20 (61)
Simon Fuhrmann, Fabian Langguth, Michael Goesele: MVE - A Multi-View Reconstruction Environment. EUROGRAPHICS Workshops on Graphics and Cultural Heritage 2014
MVSNet_plusplus | | 76.50 (54) | 66.38 (65) | 83.25 (42) | 91.25 (30) | 73.79 (60) | 53.62 (74) | 79.14 (52) | 84.70 (34)
example | | 74.48 (55) | 65.59 (66) | 80.41 (51) | 84.10 (50) | 83.16 (49) | 62.82 (62) | 68.36 (71) | 73.98 (58)
Snet | | 74.39 (56) | 64.28 (69) | 81.13 (46) | 88.04 (35) | 73.18 (63) | 60.15 (67) | 68.41 (70) | 82.18 (39)
MVSNet | | 73.65 (57) | 70.01 (58) | 76.07 (57) | 76.39 (56) | 77.59 (56) | 61.93 (64) | 78.09 (54) | 74.23 (57)
PSD-MVSNet | | 73.38 (58) | 78.51 (42) | 69.96 (64) | 72.98 (62) | 62.49 (70) | 74.33 (47) | 82.69 (42) | 74.41 (56)
SGNet | | 73.32 (59) | 78.38 (43) | 69.94 (65) | 72.09 (63) | 62.62 (69) | 73.95 (48) | 82.81 (41) | 75.11 (53)
test3 | | 73.10 (60) | 78.15 (45) | 69.73 (66) | 71.16 (65) | 62.14 (72) | 73.89 (49) | 82.42 (44) | 75.89 (51)
MVSNet + Gipuma | | 72.61 (61) | 71.20 (56) | 73.54 (59) | 75.39 (58) | 73.46 (62) | 65.60 (56) | 76.81 (59) | 71.78 (62)
firsttry | | 72.49 (62) | 75.56 (51) | 70.44 (63) | 68.80 (71) | 70.25 (64) | 69.31 (53) | 81.82 (48) | 72.28 (60)
TVSNet | | 72.34 (63) | 76.78 (48) | 69.38 (67) | 70.64 (67) | 61.58 (74) | 71.28 (52) | 82.27 (46) | 75.92 (50)
MVSNet_++ | | 71.81 (64) | 59.04 (74) | 80.33 (52) | 84.86 (40) | 74.23 (59) | 33.06 (76) | 85.02 (38) | 81.88 (41)
test_1124 | | 71.63 (65) | 63.97 (70) | 76.73 (55) | 69.39 (70) | 85.55 (45) | 56.73 (69) | 71.21 (67) | 75.26 (52)
CasMVSNet(SR_B) | | 70.07 (66) | 71.18 (57) | 69.33 (68) | 69.73 (69) | 77.06 (57) | 64.06 (58) | 78.31 (53) | 61.20 (69)
F/T MVSNet+Gipuma | | 69.80 (67) | 64.46 (68) | 73.36 (60) | 75.05 (59) | 73.57 (61) | 60.74 (65) | 68.18 (72) | 71.46 (63)
QQQNet | | 67.18 (68) | 74.04 (53) | 62.60 (71) | 71.02 (66) | 62.30 (71) | 73.86 (50) | 74.22 (61) | 54.48 (73)
unMVSmet | | 66.94 (69) | 60.67 (73) | 71.11 (62) | 71.91 (64) | 79.70 (54) | 54.62 (71) | 66.73 (73) | 61.74 (68)
CCVNet | | 64.24 (70) | 67.70 (64) | 61.94 (72) | 65.09 (72) | 66.24 (65) | 63.67 (59) | 71.73 (66) | 54.48 (73)
confMetMVS | | 63.51 (71) | 60.95 (72) | 65.21 (70) | 60.95 (75) | 76.48 (58) | 55.53 (70) | 66.38 (74) | 58.21 (70)
ternet | | 62.40 (72) | 68.72 (61) | 58.19 (74) | 63.36 (73) | 54.86 (75) | 63.22 (60) | 74.22 (61) | 56.34 (71)
SVVNet | | 62.40 (72) | 68.72 (61) | 58.19 (74) | 63.36 (73) | 54.86 (75) | 63.22 (60) | 74.22 (61) | 56.34 (71)
test_1120 (copyleft) | | 58.74 (74) | 57.03 (75) | 59.88 (73) | 49.42 (77) | 64.79 (67) | 49.65 (75) | 64.41 (75) | 65.45 (67)
PMVS (copyleft) | 70.75 (22) | 58.19 (75) | 43.22 (76) | 68.18 (69) | 73.06 (61) | 84.09 (47) | 54.23 (72) | 32.21 (76) | 47.37 (75)
Y. Furukawa, J. Ponce: Accurate, dense, and robust multiview stereopsis. PAMI 2010
Cas-MVS_preliminary | | 56.71 (76) | 62.57 (71) | 52.80 (76) | 50.07 (76) | 61.96 (73) | 54.15 (73) | 71.00 (68) | 46.38 (76)
FADENet | | 18.58 (77) | 15.51 (77) | 20.63 (77) | 21.45 (78) | 32.74 (78) | 15.09 (77) | 15.93 (77) | 7.68 (77)
CMPMVS (binary) | 69.68 (23) | 11.01 (78) | 0.97 (78) | 17.70 (78) | 10.09 (79) | 43.01 (77) | 1.93 (78) | 0.00 (78) | 0.00 (78)
M. Jancosek, T. Pajdla: Multi-View Reconstruction Preserving Weakly-Supported Surfaces. CVPR 2011
dnet | | 0.00 (79) | 0.00 (79) | 0.00 (79) | 0.00 (80) | 0.00 (79) | 0.00 (79) | 0.00 (78) | 0.00 (78)
UnsupFinetunedMVSNet | | | | | 75.05 (59) | | | |