This table lists the benchmark results for the low-res many-view scenario. Several evaluation metrics (*) are computed; each result cell shows the metric value followed by the method's rank within that column.

(*) For exact definitions, detailing how potentially incomplete ground truth is taken into account, see our paper.
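
Accuracy and completeness (referenced again in the visualization note further down) are the typical form of such metrics, usually combined with an F1-style score. The Python sketch below illustrates a common tolerance-based computation over point clouds; it is an illustration under simplifying assumptions, not the benchmark's reference implementation: the function name, tolerance value, and nearest-neighbour matching are my own, and the paper's exact handling of incomplete ground truth is not reproduced.

# Illustrative sketch only: tolerance-based accuracy, completeness, and their
# harmonic mean (F1) for point clouds. Tolerance, matching strategy, and the
# (missing) treatment of ground-truth gaps are simplifying assumptions; the
# benchmark's exact definitions are given in the paper.
import numpy as np
from scipy.spatial import cKDTree

def accuracy_completeness_f1(reconstruction, ground_truth, tolerance=0.05):
    """reconstruction, ground_truth: (N, 3) arrays of 3D points."""
    d_rec_to_gt, _ = cKDTree(ground_truth).query(reconstruction)
    d_gt_to_rec, _ = cKDTree(reconstruction).query(ground_truth)
    accuracy = float(np.mean(d_rec_to_gt <= tolerance))      # reconstruction points close to GT
    completeness = float(np.mean(d_gt_to_rec <= tolerance))  # GT points covered by reconstruction
    denom = accuracy + completeness
    f1 = 2.0 * accuracy * completeness / denom if denom > 0 else 0.0
    return accuracy, completeness, f1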

The datasets are grouped into different categories, and a result average is computed for a category and a method only if the method has results for all datasets within that category. Note that the category "all" includes both the high-res multi-view and the low-res many-view scenarios.
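
A minimal sketch of this averaging rule, assuming each result cell in the table below is read as the metric value followed by the method's rank. The indoor/outdoor grouping and the per-dataset values (taken from the top-ranked method, LTVRE_ROB) are inferred from the table; the helper names are illustrative.

# A category average is only reported when a method has results for every
# dataset in the category; otherwise the category cell stays empty.
CATEGORIES = {
    "indoor": ["storage room", "storage room 2"],
    "outdoor": ["lakeside", "sand box", "tunnel"],
    "low-res many-view": ["lakeside", "sand box", "storage room",
                          "storage room 2", "tunnel"],
}

def category_average(results, category):
    """results: dataset name -> metric value for one method.
    Returns the category average, or None if any dataset result is missing."""
    datasets = CATEGORIES[category]
    if any(d not in results for d in datasets):
        return None
    return sum(results[d] for d in datasets) / len(datasets)

# Per-dataset values of LTVRE_ROB, the top row of the table below:
ltvre_rob = {"lakeside": 67.45, "sand box": 65.97, "storage room": 39.56,
             "storage room 2": 44.63, "tunnel": 54.68}
for category in ("indoor", "outdoor", "low-res many-view"):
    print(category, category_average(ltvre_rob, category))
# compare with the averages shown in the table: 42.09, 62.70, 54.46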

Methods with suffix _ROB may participate in the Robust Vision Challenge.

Click a dataset result cell to show a visualization of the reconstruction. For training datasets, ground truth and accuracy / completeness visualizations are also available. The visualizations may not work with mobile browsers.




Method | Info | all | low-res many-view | indoor | outdoor | lakeside | sand box | storage room | storage room 2 | tunnel
(sorted by the low-res many-view column)
LTVRE_ROB75.46 154.46 142.09 762.70 167.45 865.97 339.56 544.63 1054.68 8
Andreas Kuhn, Heiko Hirschmüller, Daniel Scharstein, Helmut Mayer: A TV Prior for High-Quality Scalable Multi-View Stereo Reconstruction. International Journal of Computer Vision 2016
3Dnovator+73.19 254.26 246.24 459.61 871.65 348.87 1337.19 755.30 158.31 5
DeepPCF-MVS71.07 553.29 339.34 1162.59 265.59 1061.81 432.09 1246.60 860.36 1
3Dnovator65.95 1152.63 447.99 155.72 1169.72 748.62 1641.11 354.88 248.82 14
DeepC-MVS_fast69.89 752.44 539.25 1261.23 465.60 959.17 531.57 1546.93 758.91 2
Andreas Kuhn, Christian Sormann, Mattia Rossi, Oliver Erdler, Friedrich Fraundorfer: DeepC-MVS: Deep Confidence Prediction for Multi-View Stereo Reconstruction. 3DV 2020
tmmvs52.24 639.98 1060.41 670.28 452.42 1034.74 945.23 958.53 4
DeepC-MVS72.44 451.91 741.70 958.72 1062.91 1257.73 733.27 1150.12 655.51 6
Andreas Kuhn, Christian Sormann, Mattia Rossi, Oliver Erdler, Friedrich Fraundorfer: DeepC-MVS: Deep Confidence Prediction for Multi-View Stereo Reconstruction. 3DV 2020
CasMVSNet(SR_B)49.60 830.23 2662.52 379.42 156.08 824.08 2936.37 2352.07 10
tm-dncc49.39 932.78 2360.45 570.28 452.42 1025.02 2540.54 1658.66 3
MVSNet49.19 1033.57 2259.60 974.12 272.06 125.24 2441.90 1332.63 40
AttMVS48.45 1142.09 752.69 1470.22 637.46 2931.38 1652.80 450.38 12
OpenMVScopyleft62.51 1547.53 1242.80 650.68 1662.38 1345.79 2233.37 1052.22 543.87 21
COLMAP_ROBcopyleft72.78 346.46 1336.00 1653.43 1251.48 2053.72 927.60 2144.40 1155.08 7
Johannes L. Schönberger, Enliang Zheng, Marc Pollefeys, Jan-Michael Frahm: Pixelwise View Selection for Unstructured Multi-View Stereo. ECCV 2016
mvs_zhu_103045.91 1435.22 1753.03 1358.12 1649.72 1228.56 1841.87 1451.26 11
PMVScopyleft70.70 644.31 1521.09 4259.79 764.24 1167.85 225.32 2316.87 5147.28 16
Y. Furukawa, J. Ponce: Accurate, Dense, and Robust Multiview Stereopsis. PAMI 2010
test_1120copyleft44.17 1646.51 342.61 2761.86 1436.78 3061.21 131.80 3329.19 47
Pnet-new-44.14 1746.92 242.29 2848.54 2330.28 4440.25 453.58 348.06 15
PCF-MVS63.80 1342.44 1830.04 2850.71 1561.29 1545.52 2324.81 2635.27 2845.31 18
Andreas Kuhn, Shan Lin, Oliver Erdler: Plane Completion and Filtering for Multi-View Stereo Reconstruction. GCPR 2019
test_112442.03 1945.75 539.55 3353.61 1731.40 4254.10 237.41 2033.63 37
COLMAP(base)41.98 2030.58 2449.58 1747.71 2547.83 1725.91 2235.25 2953.20 9
TAPA-MVS65.27 1241.55 2136.46 1544.94 2147.53 2646.42 2130.41 1742.51 1240.86 28
Andrea Romanoni, Matteo Matteucci: TAPA-MVS: Textureless-Aware PAtchMatch Multi-View Stereo. ICCV 2019
test_112640.73 2237.06 1443.17 2653.43 1839.06 2837.45 636.68 2237.03 32
TAPA-MVS(SR)40.23 2337.32 1342.17 2949.69 2234.04 3535.33 839.31 1842.79 24
CasMVSNet(base)39.99 2430.03 2946.63 1849.88 2147.19 2021.89 3738.17 1942.82 23
PLCcopyleft62.01 1638.52 2530.23 2644.04 2339.89 3747.28 1824.50 2735.97 2444.94 20
Jie Liao, Yanping Fu, Qingan Yan, Chunxia Xiao: Pyramid Multi-View Stereo with Local Consistency. Pacific Graphics 2019
ACMM69.25 938.38 2627.21 3245.83 1946.34 2747.28 1819.82 3934.60 3043.86 22
Qingshan Xu and Wenbing Tao: Multi-Scale Geometric Consistency Guided Multi-View Stereo. CVPR 2019
CasMVSNet(SR_A)38.17 2730.38 2543.36 2543.84 3144.10 2623.45 3037.31 2142.14 25
ACMP69.50 837.98 2827.15 3445.19 2045.25 2844.25 2419.88 3834.43 3146.08 17
Qingshan Xu and Wenbing Tao: Planar Prior Assisted PatchMatch Multi-View Stereo. AAAI 2020
Pnet-blend++37.88 2933.76 2040.62 3140.91 3548.74 1431.73 1335.78 2532.22 42
Pnet-blend37.88 2933.76 2040.62 3140.91 3548.74 1431.73 1335.78 2532.22 42
COLMAP(SR)37.49 3127.16 3344.38 2244.04 2939.98 2724.12 2830.20 3649.12 13
P-MVSNet36.49 3234.17 1938.03 3442.00 3233.36 3728.21 2040.14 1738.72 30
GSE36.13 3324.74 3843.73 2452.95 1933.24 3922.50 3426.98 4345.01 19
LPCS35.46 3426.73 3641.28 3047.80 2435.29 3423.36 3130.10 3740.76 29
OpenMVS_ROBcopyleft54.93 1735.00 3534.45 1835.37 3844.02 3026.87 4928.29 1940.62 1535.22 34
CIDER32.10 3629.05 3034.13 4137.98 4033.17 4022.44 3635.66 2731.24 44
Qingshan Xu and Wenbing Tao: Learning Inverse Depth Regression for Multi-View Stereo with Correlation Cost Volume. AAAI 2020
ACMH+66.64 1031.68 3727.48 3134.49 4038.73 3826.92 4722.87 3332.08 3237.82 31
PVSNet_LR30.39 3821.48 4136.33 3635.81 4236.68 3114.80 4728.17 3936.51 33
ACMH63.62 1430.30 3926.87 3532.59 4337.65 4126.45 5122.46 3531.28 3433.67 36
Qingshan Xu and Wenbing Tao: Multi-Scale Geometric Consistency Guided Multi-View Stereo. CVPR 2019
ANet29.81 4018.89 4537.10 3541.36 3336.66 3218.37 4019.40 4833.27 38
Pnet_fast29.80 4121.76 4035.17 3932.69 4544.17 2518.13 4225.38 4428.65 48
HY-MVS49.31 1928.53 4225.35 3730.64 4534.50 4426.89 4823.13 3227.56 4130.55 45
vp_mvsnet27.00 4317.92 4733.05 4235.04 4331.50 414.95 7230.89 3532.60 41
PVSNet_036.71 2226.11 4418.07 4631.47 4431.85 4728.31 4616.41 4519.73 4734.24 35
IB-MVS49.67 1825.63 4522.23 3927.89 4932.43 4626.12 5216.47 4428.00 4025.13 50
Christian Sormann, Mattia Rossi, Andreas Kuhn and Friedrich Fraundorfer: IB-MVS: An Iterative Algorithm for Deep Multi-View Stereo based on Binary Decisions. BMVC 2021
PVSNet43.83 2125.27 4618.99 4429.47 4729.22 4926.53 5018.23 4119.75 4632.66 39
ANet-0.7524.93 478.13 6336.13 3741.36 3336.65 336.38 659.87 6430.38 46
MVSNet_plusplus24.27 4816.59 5029.38 4838.24 398.88 713.21 7929.98 3841.02 27
test_120523.54 4917.62 4827.49 5027.69 5033.27 3816.80 4318.44 4921.51 53
R-MVSNet23.44 5020.97 4325.09 5327.25 5128.79 4514.95 4626.99 4219.25 54
MVSCRF21.37 5117.18 4924.16 5425.89 5330.42 4311.65 4922.71 4516.19 56
BP-MVSNet20.82 5214.19 5125.24 5227.04 5222.63 5311.73 4816.64 5226.07 49
Christian Sormann, Patrick Knöbelreiter, Andreas Kuhn, Mattia Rossi, Thomas Pock, Friedrich Fraundorfer: BP-MVSNet: Belief-Propagation-Layers for Multi-View-Stereo. 3DV 2020
A-TVSNet + Gipumacopyleft20.66 5310.98 5527.12 5124.81 5433.59 3611.57 5010.38 6222.97 52
CMPMVSbinary48.73 2017.96 540.21 8329.80 4631.01 4858.39 60.41 850.00 840.00 85
M. Jancosek, T. Pajdla: Multi-View Reconstruction Preserving Weakly-Supported Surfaces. CVPR 2011
unsupervisedMVS_cas16.42 5511.26 5419.85 5519.70 5522.45 549.42 5413.10 5517.41 55
Cas-MVS_preliminary13.10 566.45 7417.53 567.84 682.89 807.20 595.71 7841.86 26
MVSNet_++13.01 579.72 5815.20 5714.41 576.62 761.03 8418.42 5024.57 51
MVS_test_112.27 589.64 6014.02 5818.81 5610.74 613.59 7815.68 5312.52 61
Snet11.53 5911.50 5211.55 6013.79 589.94 669.91 5313.09 5610.91 65
Pnet-eth11.01 6010.58 5711.30 6111.57 6012.95 607.59 5713.56 549.39 68
CCVNet10.60 616.72 7113.19 5910.10 6314.00 558.08 555.36 7915.47 57
F/T MVSNet+Gipuma10.33 6211.29 539.69 669.48 647.49 7211.57 5011.01 5912.11 62
MVSNet + Gipuma9.86 6310.74 569.27 679.03 667.15 7411.33 5210.16 6311.62 63
CPR_FA9.09 649.72 588.67 736.93 718.97 707.57 5811.87 5710.10 66
QQQNet8.84 656.32 7510.52 626.22 769.88 675.75 686.90 7115.47 57
SVVNet8.80 666.70 7210.20 636.29 7410.11 646.49 626.90 7114.21 59
ternet8.80 666.70 7210.20 636.29 7410.11 646.49 626.90 7114.21 59
A1Net8.71 688.78 618.67 737.67 697.00 757.17 6010.39 6111.32 64
TVSNet8.28 697.32 668.92 696.77 7210.23 636.28 678.36 689.75 67
test38.15 707.23 678.75 726.57 7310.32 626.42 648.05 699.38 69
hgnet8.15 707.20 688.77 7010.59 6113.03 585.73 698.68 652.70 79
DPSNet8.15 707.20 688.77 7010.59 6113.03 585.73 698.68 652.70 79
test_mvsss8.00 734.74 7910.17 659.03 6613.60 562.57 816.90 717.88 70
example7.10 744.06 819.13 6812.19 5913.49 574.65 743.46 821.70 82
SGNet7.01 756.02 767.67 755.97 779.57 685.19 716.86 757.46 72
MVEpermissive27.91 236.82 768.02 646.02 784.47 817.18 737.60 568.44 676.42 74
Simon Fuhrmann, Fabian Langguth, Michael Goesele: MVE - A Multi-View Reconstruction Environment. EUROGRAPHICS Workshops on Graphics and Cultural Heritage 2014
PSD-MVSNet6.56 775.49 777.28 765.82 789.06 694.66 736.31 776.96 73
unMVSmet6.24 788.77 624.56 794.65 804.56 796.35 6611.19 584.46 76
firsttry6.03 794.56 807.01 777.22 706.16 774.38 764.75 807.67 71
metmvs_fine4.88 807.73 652.98 833.10 842.62 824.65 7410.82 603.22 78
RMVSNet4.54 816.78 703.04 823.81 832.71 816.89 616.66 762.61 81
confMetMVS4.28 825.07 783.76 814.09 822.57 833.21 796.93 704.62 75
unMVSv14.17 833.68 824.49 804.76 795.38 783.75 773.62 813.33 77
FADENet0.08 840.09 840.07 850.12 860.05 850.12 860.06 830.04 84
dnet0.00 850.00 850.00 860.00 870.00 860.00 870.00 840.00 85
test_MVS2.44 82
test_robustmvs0.87 841.53 850.72 841.46 830.38 83
UnsupFinetunedMVSNet9.48 64