This table lists the benchmark results for the low-res many-view scenario. The evaluated metrics are accuracy, completeness, and the F1 score (*). Each result cell shows the score together with the method's rank for that column in parentheses; a dash marks a missing result, and license tags (copyleft, permissive, binary) follow the method name in brackets. The table is sorted by the low-res many-view result.

(*) For exact definitions, detailing how potentially incomplete ground truth is taken into account, see our paper.
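
As a rough illustration of how tolerance-based point-cloud metrics of this kind are commonly computed, here is a minimal Python sketch. The function name, the use of scipy's KD-tree, and the tolerance value are illustrative assumptions; the benchmark's actual evaluation, including its handling of incomplete ground truth, is defined in the paper.

```python
# Illustrative sketch only, not the benchmark's evaluation code.
import numpy as np
from scipy.spatial import cKDTree

def evaluate_point_cloud(reconstruction, ground_truth, tolerance=0.02):
    """reconstruction: (N, 3) points, ground_truth: (M, 3) points, tolerance in scene units."""
    # Distance from each reconstructed point to its nearest ground-truth point, and vice versa.
    dist_to_gt, _ = cKDTree(ground_truth).query(reconstruction)
    dist_to_rec, _ = cKDTree(reconstruction).query(ground_truth)

    accuracy = float(np.mean(dist_to_gt <= tolerance))       # fraction of reconstruction close to GT
    completeness = float(np.mean(dist_to_rec <= tolerance))  # fraction of GT covered by the reconstruction
    f1 = (2 * accuracy * completeness / (accuracy + completeness)
          if accuracy + completeness > 0 else 0.0)            # harmonic mean of the two
    return accuracy, completeness, f1
```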

The datasets are grouped into different categories, and a result average is computed for a category and method only if results of the method are available for all datasets within the category. Note that the category "all" includes both the high-res multi-view and the low-res many-view scenarios.
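
As a sketch of this averaging rule (the helper name and the placeholder scores below are made up for illustration; only the dataset names are taken from the table):

```python
# Sketch of the category-averaging rule: a method receives a category average
# only if it has a result for every dataset in that category.
LOW_RES_MANY_VIEW = ["lakeside", "sand box", "storage room", "storage room 2", "tunnel"]

def category_average(results, category):
    """results maps dataset name -> score for one method (placeholder data)."""
    if all(dataset in results for dataset in category):
        return sum(results[dataset] for dataset in category) / len(category)
    return None  # rendered as an empty cell in the table

# A method missing one dataset gets no average for the category (placeholder scores):
partial = {"lakeside": 70.0, "sand box": 65.0, "storage room": 50.0, "storage room 2": 55.0}
print(category_average(partial, LOW_RES_MANY_VIEW))  # None: no "tunnel" result
```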

Methods with the suffix _ROB may participate in the Robust Vision Challenge.

Click a dataset result cell to show a visualization of the reconstruction. For training datasets, ground truth and accuracy / completeness visualizations are also available. The visualizations may not work with mobile browsers.




Method | all | low-res many-view | indoor | outdoor | lakeside | sand box | storage room | storage room 2 | tunnel
3Dnovator | 80.37 (7) | 69.73 (1) | 65.51 (3) | 72.55 (10) | 82.32 (4) | 65.67 (15) | 57.53 (3) | 73.48 (1) | 69.67 (13)
LTVRE_ROB | 86.10 (1) | 69.45 (2) | 58.19 (7) | 76.95 (2) | 81.50 (6) | 78.07 (3) | 55.83 (5) | 60.55 (14) | 71.29 (9)
Andreas Kuhn, Heiko Hirschmüller, Daniel Scharstein, Helmut Mayer: A TV Prior for High-Quality Scalable Multi-View Stereo Reconstruction. International Journal of Computer Vision 2016
3Dnovator+ | 83.92 (2) | 69.39 (3) | 62.05 (5) | 74.29 (7) | 81.76 (5) | 64.89 (17) | 52.69 (7) | 71.42 (3) | 76.22 (1)
tmmvs | - | 66.88 (4) | 56.70 (10) | 73.68 (9) | 81.20 (7) | 66.01 (13) | 50.58 (8) | 62.81 (10) | 73.82 (6)
DeepPCF-MVS | 81.24 (5) | 66.76 (5) | 54.74 (13) | 74.77 (5) | 76.18 (11) | 73.22 (5) | 46.42 (14) | 63.06 (9) | 74.91 (3)
DeepC-MVS_fast | 80.27 (8) | 65.96 (6) | 54.23 (14) | 73.79 (8) | 75.88 (12) | 71.59 (6) | 45.37 (17) | 63.08 (7) | 73.89 (5)
Andreas Kuhn, Christian Sormann, Mattia Rossi, Oliver Erdler, Friedrich Fraundorfer: DeepC-MVS: Deep Confidence Prediction for Multi-View Stereo Reconstruction. 3DV 2020
DeepC-MVS | 82.31 (4) | 65.89 (7) | 57.22 (9) | 71.66 (11) | 74.01 (15) | 70.18 (7) | 47.73 (11) | 66.72 (6) | 70.79 (10)
Andreas Kuhn, Christian Sormann, Mattia Rossi, Oliver Erdler, Friedrich Fraundorfer: DeepC-MVS: Deep Confidence Prediction for Multi-View Stereo Reconstruction. 3DV 2020
MVSNet | - | 65.82 (8) | 52.08 (16) | 74.99 (4) | 88.07 (3) | 85.21 (1) | 41.08 (20) | 63.08 (7) | 51.67 (38)
CasMVSNet(SR_B) | - | 65.31 (9) | 44.19 (27) | 79.39 (1) | 91.10 (1) | 74.56 (4) | 35.00 (33) | 53.38 (27) | 72.51 (7)
AttMVS | - | 64.84 (10) | 57.49 (8) | 69.74 (13) | 88.51 (2) | 50.94 (34) | 45.77 (15) | 69.21 (5) | 69.78 (12)
OpenMVS [copyleft] | 76.72 (13) | 64.07 (11) | 59.05 (6) | 67.42 (15) | 75.57 (14) | 62.58 (19) | 47.98 (10) | 70.11 (4) | 64.12 (19)
tm-dncc | - | 63.90 (12) | 48.20 (23) | 74.37 (6) | 81.20 (7) | 66.01 (13) | 38.20 (24) | 58.21 (18) | 75.91 (2)
test_1120 [copyleft] | - | 63.22 (13) | 68.85 (1) | 59.47 (31) | 78.81 (10) | 51.99 (33) | 79.45 (1) | 58.25 (17) | 47.61 (46)
mvs_zhu_1030 | - | 62.39 (14) | 50.41 (20) | 70.37 (12) | 75.70 (13) | 66.39 (12) | 40.10 (22) | 60.73 (13) | 69.02 (14)
Pnet-new- | - | 62.28 (15) | 64.63 (4) | 60.72 (23) | 68.45 (20) | 45.71 (43) | 56.04 (4) | 73.22 (2) | 67.99 (15)
test_1124 | - | 61.69 (16) | 67.72 (2) | 57.67 (34) | 71.65 (17) | 46.06 (42) | 75.31 (2) | 60.13 (15) | 55.30 (33)
COLMAP_ROB [copyleft] | 83.01 (3) | 61.51 (17) | 50.83 (19) | 68.63 (14) | 65.07 (24) | 66.61 (9) | 40.64 (21) | 61.02 (12) | 74.20 (4)
Johannes L. Schönberger, Enliang Zheng, Marc Pollefeys, Jan-Michael Frahm: Pixelwise View Selection for Unstructured Multi-View Stereo. ECCV 2016
test_1126 | - | 58.81 (18) | 54.80 (11) | 61.48 (21) | 70.94 (18) | 56.27 (27) | 53.41 (6) | 56.19 (21) | 57.23 (31)
TAPA-MVS | 77.73 (12) | 58.58 (19) | 53.61 (15) | 61.89 (19) | 64.81 (26) | 60.73 (21) | 45.40 (16) | 61.82 (11) | 60.14 (28)
Andrea Romanoni, Matteo Matteucci: TAPA-MVS: Textureless-Aware PAtchMatch Multi-View Stereo. ICCV 2019
COLMAP(base) | - | 57.56 (20) | 45.74 (24) | 65.45 (16) | 63.08 (28) | 61.37 (20) | 39.37 (23) | 52.10 (30) | 71.91 (8)
PMVS [copyleft] | 80.48 (6) | 57.42 (21) | 29.11 (48) | 76.30 (3) | 79.61 (9) | 79.41 (2) | 34.22 (35) | 24.00 (54) | 69.88 (11)
Y. Furukawa, J. Ponce: Accurate, dense, and robust multiview stereopsis. PAMI (2010)
TAPA-MVS(SR) | - | 57.16 (22) | 54.76 (12) | 58.76 (32) | 67.29 (22) | 48.19 (38) | 50.15 (9) | 59.37 (16) | 60.81 (24)
PCF-MVS | 74.62 (15) | 56.56 (23) | 43.50 (30) | 65.27 (17) | 73.58 (16) | 59.98 (24) | 37.23 (27) | 49.76 (33) | 62.24 (22)
Andreas Kuhn, Shan Lin, Oliver Erdler: Plane Completion and Filtering for Multi-View Stereo Reconstruction. GCPR 2019
Pnet-blend++ | - | 56.52 (24) | 51.51 (17) | 59.87 (26) | 62.39 (29) | 66.50 (10) | 47.44 (12) | 55.57 (22) | 50.71 (40)
Pnet-blend | - | 56.52 (24) | 51.51 (17) | 59.87 (26) | 62.39 (29) | 66.50 (10) | 47.44 (12) | 55.57 (22) | 50.71 (40)
CasMVSNet(base) | - | 55.44 (26) | 43.59 (29) | 63.33 (18) | 63.97 (27) | 65.66 (16) | 32.47 (37) | 54.72 (24) | 60.37 (27)
P-MVSNet | - | 54.95 (27) | 50.02 (22) | 58.24 (33) | 66.31 (23) | 47.74 (40) | 41.88 (19) | 58.16 (19) | 60.67 (25)
PLC [copyleft] | 73.85 (16) | 54.06 (28) | 45.01 (25) | 60.08 (24) | 55.47 (37) | 60.30 (23) | 37.58 (25) | 52.44 (29) | 64.49 (17)
Jie Liao, Yanping Fu, Qingan Yan, Chunxia Xiao: Pyramid Multi-View Stereo with Local Consistency. Pacific Graphics 2019
LPCS | - | 53.86 (29) | 42.82 (31) | 61.22 (22) | 67.38 (21) | 54.15 (28) | 37.38 (26) | 48.27 (36) | 62.14 (23)
CasMVSNet(SR_A) | - | 53.46 (30) | 44.06 (28) | 59.72 (29) | 57.37 (33) | 62.68 (18) | 34.23 (34) | 53.90 (26) | 59.09 (30)
GSE | - | 53.18 (31) | 40.31 (36) | 61.77 (20) | 69.15 (19) | 52.25 (32) | 35.20 (32) | 45.43 (39) | 63.90 (20)
COLMAP(SR) | - | 52.61 (32) | 41.79 (33) | 59.83 (28) | 58.75 (31) | 54.00 (29) | 37.15 (28) | 46.43 (38) | 66.74 (16)
ACMM | 79.39 (9) | 52.37 (33) | 40.99 (35) | 59.96 (25) | 58.56 (32) | 60.72 (22) | 31.00 (38) | 50.98 (31) | 60.62 (26)
Qingshan Xu and Wenbing Tao: Multi-Scale Geometric Consistency Guided Multi-View Stereo. CVPR 2019
OpenMVS_ROB [copyleft] | 70.19 (17) | 51.98 (34) | 50.29 (21) | 53.10 (35) | 65.07 (24) | 39.38 (48) | 42.82 (18) | 57.77 (20) | 54.85 (34)
ACMP | 79.16 (10) | 51.84 (35) | 40.13 (38) | 59.65 (30) | 57.20 (34) | 58.68 (25) | 30.46 (39) | 49.80 (32) | 63.09 (21)
Qingshan Xu and Wenbing Tao: Planar Prior Assisted PatchMatch Multi-View Stereo. AAAI 2020
ACMH+ | 77.89 (11) | 47.08 (36) | 42.29 (32) | 50.27 (40) | 54.95 (38) | 39.16 (49) | 35.24 (30) | 49.34 (34) | 56.70 (32)
CIDER | - | 46.95 (37) | 44.89 (26) | 48.33 (42) | 50.37 (43) | 46.20 (41) | 35.21 (31) | 54.58 (25) | 48.41 (44)
Qingshan Xu and Wenbing Tao: Learning Inverse Depth Regression for Multi-View Stereo with Correlation Cost Volume. AAAI 2020
ACMH | 76.49 (14) | 45.64 (38) | 41.63 (34) | 48.31 (43) | 53.97 (40) | 38.96 (50) | 34.00 (36) | 49.25 (35) | 52.00 (36)
Qingshan Xu and Wenbing Tao: Multi-Scale Geometric Consistency Guided Multi-View Stereo. CVPR 2019
HY-MVS | 64.64 (18) | 44.49 (39) | 40.14 (37) | 47.40 (44) | 52.03 (42) | 42.24 (46) | 36.35 (29) | 43.93 (41) | 47.92 (45)
Pnet_fast | - | 44.24 (40) | 35.70 (39) | 49.94 (41) | 46.98 (45) | 58.15 (26) | 29.16 (41) | 42.24 (44) | 44.70 (48)
PVSNet_LR | - | 43.87 (41) | 33.54 (42) | 50.75 (39) | 49.15 (44) | 50.81 (35) | 23.32 (47) | 43.77 (42) | 52.30 (35)
ANet | - | 43.24 (42) | 29.49 (46) | 52.40 (36) | 55.77 (35) | 52.26 (30) | 27.30 (43) | 31.68 (48) | 49.19 (43)
vp_mvsnet | - | 42.75 (43) | 30.63 (44) | 50.83 (38) | 53.01 (41) | 47.99 (39) | 8.57 (73) | 52.69 (28) | 51.49 (39)
IB-MVS | 62.13 (19) | 39.31 (44) | 35.63 (40) | 41.76 (50) | 46.97 (46) | 38.95 (51) | 26.86 (44) | 44.41 (40) | 39.36 (50)
Christian Sormann, Mattia Rossi, Andreas Kuhn and Friedrich Fraundorfer: IB-MVS: An Iterative Algorithm for Deep Multi-View Stereo based on Binary Decisions. BMVC 2021
PVSNet_0 | 51.08 (22) | 39.05 (45) | 29.60 (45) | 45.34 (45) | 44.29 (47) | 39.84 (47) | 26.44 (45) | 32.77 (47) | 51.89 (37)
PVSNet | 58.17 (21) | 38.39 (46) | 31.41 (43) | 43.04 (46) | 41.89 (48) | 37.34 (52) | 29.69 (40) | 33.13 (46) | 49.88 (42)
R-MVSNet | - | 37.45 (47) | 34.59 (41) | 39.35 (51) | 41.64 (49) | 42.85 (45) | 25.44 (46) | 43.75 (43) | 33.57 (54)
test_1205 | - | 36.84 (48) | 29.16 (47) | 41.97 (49) | 41.52 (50) | 49.04 (36) | 27.34 (42) | 30.98 (50) | 35.34 (53)
ANet-0.75 | - | 36.48 (49) | 14.29 (65) | 51.28 (37) | 55.77 (35) | 52.26 (30) | 10.99 (63) | 17.58 (64) | 45.82 (47)
MVSNet_plusplus | - | 36.25 (50) | 26.59 (50) | 42.70 (47) | 54.00 (39) | 14.25 (70) | 5.76 (80) | 47.42 (37) | 59.84 (29)
MVSCRF | - | 34.84 (51) | 29.04 (49) | 38.71 (52) | 41.35 (51) | 44.92 (44) | 18.79 (51) | 39.30 (45) | 29.86 (55)
A-TVSNet + Gipuma [copyleft] | - | 33.17 (52) | 19.66 (54) | 42.17 (48) | 38.64 (54) | 48.93 (37) | 20.23 (48) | 19.09 (61) | 38.94 (51)
BP-MVSNet | - | 32.65 (53) | 23.64 (51) | 38.65 (53) | 40.81 (52) | 33.99 (54) | 19.28 (49) | 28.00 (51) | 41.15 (49)
Christian Sormann, Patrick Knöbelreiter, Andreas Kuhn, Mattia Rossi, Thomas Pock, Friedrich Fraundorfer: BP-MVSNet: Belief-Propagation-Layers for Multi-View-Stereo. 3DV 2020
unsupervisedMVS_cas | - | 26.15 (54) | 18.92 (55) | 30.97 (55) | 30.39 (56) | 34.23 (53) | 15.24 (54) | 22.60 (56) | 28.30 (56)
CMPMVS [binary] | 59.41 (20) | 21.70 (55) | 0.34 (83) | 35.93 (54) | 38.78 (53) | 69.01 (8) | 0.69 (85) | 0.00 (84) | 0.00 (85)
M. Jancosek, T. Pajdla: Multi-View Reconstruction Preserving Weakly-Supported Surfaces. CVPR 2011
MVS_test_1 | - | 21.23 (56) | 16.85 (58) | 24.15 (57) | 31.31 (55) | 18.73 (61) | 6.46 (79) | 27.23 (52) | 22.39 (61)
MVSNet_++ | - | 21.10 (57) | 16.58 (61) | 24.11 (58) | 22.79 (58) | 10.88 (76) | 1.86 (84) | 31.30 (49) | 38.65 (52)
Cas-MVS_preliminary | - | 20.88 (58) | 11.16 (74) | 27.36 (56) | 12.72 (68) | 5.17 (81) | 11.58 (62) | 10.75 (78) | 64.20 (18)
Pnet-eth | - | 20.12 (59) | 18.76 (57) | 21.02 (60) | 24.61 (57) | 21.44 (57) | 13.26 (56) | 24.26 (53) | 17.02 (67)
Snet | - | 19.16 (60) | 19.74 (52) | 18.78 (61) | 22.45 (59) | 15.11 (67) | 16.74 (53) | 22.73 (55) | 18.76 (65)
F/T MVSNet+Gipuma | - | 18.31 (61) | 19.68 (53) | 17.39 (63) | 17.36 (63) | 13.22 (72) | 18.92 (50) | 20.43 (58) | 21.60 (62)
MVSNet + Gipuma | - | 17.51 (62) | 18.78 (56) | 16.66 (65) | 16.50 (66) | 12.75 (73) | 18.63 (52) | 18.92 (62) | 20.74 (63)
CCVNet | - | 17.35 (63) | 11.44 (71) | 21.29 (59) | 16.96 (65) | 21.70 (56) | 12.75 (59) | 10.13 (79) | 25.21 (57)
CPR_FA | - | 15.63 (64) | 16.76 (59) | 14.87 (71) | 12.15 (71) | 14.63 (69) | 13.13 (57) | 20.39 (59) | 17.84 (66)
A1Net | - | 14.88 (65) | 15.39 (62) | 14.54 (73) | 12.69 (69) | 11.35 (75) | 12.32 (60) | 18.45 (63) | 19.59 (64)
QQQNet | - | 14.60 (66) | 11.15 (75) | 16.90 (64) | 10.35 (74) | 15.13 (66) | 9.88 (70) | 12.42 (73) | 25.21 (57)
SVVNet | - | 14.42 (67) | 11.43 (72) | 16.41 (66) | 10.16 (75) | 15.66 (63) | 10.44 (66) | 12.42 (73) | 23.41 (59)
ternet | - | 14.42 (67) | 11.43 (72) | 16.41 (66) | 10.16 (75) | 15.66 (63) | 10.44 (66) | 12.42 (73) | 23.41 (59)
test_mvsss | - | 14.34 (69) | 8.77 (79) | 18.06 (62) | 16.23 (67) | 23.38 (55) | 4.71 (81) | 12.84 (71) | 14.55 (70)
hgnet | - | 14.22 (70) | 13.23 (66) | 14.89 (69) | 18.80 (61) | 20.68 (59) | 10.37 (68) | 16.09 (65) | 5.18 (80)
DPSNet | - | 14.22 (70) | 13.23 (66) | 14.89 (69) | 18.80 (61) | 20.68 (59) | 10.37 (68) | 16.09 (65) | 5.18 (80)
TVSNet | - | 13.85 (72) | 12.75 (69) | 14.58 (72) | 11.20 (72) | 15.62 (65) | 10.72 (65) | 14.79 (68) | 16.93 (68)
test3 | - | 13.52 (73) | 12.47 (70) | 14.23 (74) | 10.80 (73) | 15.75 (62) | 10.84 (64) | 14.10 (69) | 16.14 (69)
MVE [permissive] | 40.22 (23) | 12.38 (74) | 14.61 (63) | 10.89 (78) | 7.98 (81) | 12.62 (74) | 13.54 (55) | 15.69 (67) | 12.06 (74)
Simon Fuhrmann, Fabian Langguth, Michael Goesele: MVE - A Multi-View Reconstruction Environment. EUROGRAPHICS Workshops on Graphics and Cultural Heritage (2014)
example | - | 11.91 (75) | 7.31 (81) | 14.98 (68) | 20.88 (60) | 20.72 (58) | 8.09 (75) | 6.53 (82) | 3.34 (82)
SGNet | - | 11.80 (76) | 10.60 (76) | 12.60 (75) | 9.98 (77) | 14.69 (68) | 8.98 (71) | 12.23 (76) | 13.13 (72)
unMVSmet | - | 11.69 (77) | 16.62 (60) | 8.41 (79) | 8.35 (80) | 8.22 (79) | 12.26 (61) | 20.99 (57) | 8.66 (76)
PSD-MVSNet | - | 11.08 (78) | 9.72 (78) | 12.00 (77) | 9.76 (78) | 13.96 (71) | 8.14 (74) | 11.29 (77) | 12.27 (73)
firsttry | - | 10.67 (79) | 8.36 (80) | 12.20 (76) | 12.25 (70) | 10.50 (77) | 7.87 (76) | 8.86 (80) | 13.86 (71)
RMVSNet | - | 9.14 (80) | 12.82 (68) | 6.69 (82) | 7.06 (83) | 7.73 (80) | 12.89 (58) | 12.74 (72) | 5.29 (79)
metmvs_fine | - | 9.11 (81) | 14.53 (64) | 5.49 (83) | 5.49 (84) | 4.77 (82) | 8.81 (72) | 20.25 (60) | 6.22 (78)
confMetMVS | - | 8.11 (82) | 9.85 (77) | 6.95 (81) | 7.36 (82) | 4.72 (83) | 6.48 (78) | 13.22 (70) | 8.77 (75)
unMVSv1 | - | 7.70 (83) | 6.95 (82) | 8.20 (80) | 8.59 (79) | 9.59 (78) | 6.96 (77) | 6.95 (81) | 6.43 (77)
FADENet | - | 0.15 (84) | 0.18 (84) | 0.13 (85) | 0.22 (86) | 0.09 (85) | 0.23 (86) | 0.12 (83) | 0.07 (84)
dnet | - | 0.00 (85) | 0.00 (85) | 0.00 (86) | 0.00 (87) | 0.00 (86) | 0.00 (87) | 0.00 (84) | 0.00 (85)
test_MVS | - | - | - | - | - | - | 4.45 (82) | - | -
test_robustmvs | - | - | - | 1.67 (84) | 2.86 (85) | 1.42 (84) | 2.72 (83) | - | 0.73 (83)
UnsupFinetunedMVSNet | - | - | - | - | 17.36 (63) | - | - | - | -