This table lists the benchmark results for the low-res many-view scenario. The following metrics are evaluated: accuracy, completeness, and the F1 score (*).

(*) For exact definitions, detailing how potentially incomplete ground truth is taken into account, see our paper.
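For orientation only, the sketch below spells out the commonly used reading of these three values: accuracy as the fraction of reconstructed points within the evaluation tolerance of the ground truth, completeness as the fraction of ground-truth points within the tolerance of the reconstruction, and the F1 score as their harmonic mean. This is a simplification under those assumptions; the exact definitions, including the handling of potentially incomplete ground truth, are the ones given in the paper.

    # Illustrative only: simplified versions of the evaluation metrics.
    # Assumed reading (not the paper's exact definition):
    #   accuracy     = fraction of reconstructed points within the tolerance of the ground truth
    #   completeness = fraction of ground-truth points within the tolerance of the reconstruction
    #   F1 score     = harmonic mean of accuracy and completeness
    # The handling of incomplete ground truth is not modeled here.

    def f1_score(accuracy: float, completeness: float) -> float:
        """Harmonic mean of accuracy and completeness (both in percent)."""
        if accuracy + completeness == 0.0:
            return 0.0
        return 2.0 * accuracy * completeness / (accuracy + completeness)

    # Hypothetical inputs, not taken from the table:
    print(round(f1_score(90.0, 85.0), 2))  # 87.43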

The datasets are grouped into categories, and the average for a category is shown for a method only if results of that method are available for all datasets within the category. Note that the category "all" includes both the high-res multi-view and the low-res many-view scenarios.
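The sketch below illustrates this averaging rule with the per-dataset scores of DeepPCF-MVS taken from the table. The grouping of the datasets into indoor and outdoor (in particular, counting tunnel as outdoor) is inferred from the numbers in the table rather than stated on this page.

    # Sketch of the category-averaging rule described above.
    # Dataset grouping inferred from the table (not an official definition):
    CATEGORIES = {
        "indoor": ["storage room", "storage room 2"],
        "outdoor": ["lakeside", "sand box", "tunnel"],
    }
    CATEGORIES["low-res many-view"] = CATEGORIES["indoor"] + CATEGORIES["outdoor"]

    def category_average(scores, category):
        """Average over a category, or None if any dataset result is missing."""
        datasets = CATEGORIES[category]
        if any(d not in scores for d in datasets):
            return None  # the leaderboard leaves such cells empty
        return sum(scores[d] for d in datasets) / len(datasets)

    # Per-dataset scores of DeepPCF-MVS, taken from the table below.
    deeppcf_mvs = {
        "lakeside": 91.80, "sand box": 91.34, "storage room": 77.64,
        "storage room 2": 86.56, "tunnel": 90.79,
    }
    for cat in ("indoor", "outdoor", "low-res many-view"):
        print(cat, round(category_average(deeppcf_mvs, cat), 2))
    # indoor 82.1, outdoor 91.31, low-res many-view 87.63 -- matching the table.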

Methods with the suffix _ROB may participate in the Robust Vision Challenge.

Click a dataset result cell to show a visualization of the reconstruction. For training datasets, ground truth and accuracy / completeness visualizations are also available. The visualizations may not work with mobile browsers.




In the table below, each cell shows the score and, in parentheses, the method's rank for that column; a dash (-) marks columns for which no result is available. References, where provided, are listed on the line below the corresponding method.

Method | all | low-res many-view | indoor | outdoor | lakeside | sand box | storage room | storage room 2 | tunnel
DeepPCF-MVS | 93.97 (1) | 87.63 (1) | 82.10 (1) | 91.31 (1) | 91.80 (1) | 91.34 (1) | 77.64 (1) | 86.56 (1) | 90.79 (2)
DeepC-MVS_fast | 93.89 (2) | 86.59 (2) | 80.39 (2) | 90.73 (2) | 91.00 (3) | 90.77 (2) | 75.90 (3) | 84.89 (4) | 90.41 (3)
Andreas Kuhn, Christian Sormann, Mattia Rossi, Oliver Erdler, Friedrich Fraundorfer: DeepC-MVS: Deep Confidence Prediction for Multi-View Stereo Reconstruction. 3DV 2020
DeepC-MVS | 93.07 (3) | 85.88 (3) | 79.73 (5) | 89.98 (3) | 89.82 (6) | 90.09 (5) | 74.93 (5) | 84.53 (5) | 90.03 (4)
Andreas Kuhn, Christian Sormann, Mattia Rossi, Oliver Erdler, Friedrich Fraundorfer: DeepC-MVS: Deep Confidence Prediction for Multi-View Stereo Reconstruction. 3DV 2020
tm-dncc | - | 85.31 (4) | 80.20 (3) | 88.72 (5) | 87.91 (9) | 87.16 (15) | 74.75 (6) | 85.64 (2) | 91.10 (1)
TAPA-MVS | 90.10 (7) | 84.80 (5) | 80.13 (4) | 87.91 (7) | 88.22 (8) | 88.54 (9) | 75.35 (4) | 84.91 (3) | 86.96 (15)
Andrea Romanoni, Matteo Matteucci: TAPA-MVS: Textureless-Aware PAtchMatch Multi-View Stereo. ICCV 2019
PCF-MVS | 89.48 (11) | 84.51 (6) | 76.96 (11) | 89.54 (4) | 91.30 (2) | 88.76 (7) | 73.35 (9) | 80.56 (14) | 88.57 (5)
Andreas Kuhn, Shan Lin, Oliver Erdler: Plane Completion and Filtering for Multi-View Stereo Reconstruction. GCPR 2019
TAPA-MVS(SR) | - | 83.88 (7) | 78.08 (8) | 87.74 (9) | 88.99 (7) | 86.49 (18) | 73.17 (11) | 82.99 (7) | 87.76 (11)
COLMAP(base) | - | 83.59 (8) | 77.50 (10) | 87.65 (10) | 86.74 (15) | 87.68 (11) | 73.30 (10) | 81.70 (11) | 88.52 (6)
PLC (copyleft) | 91.00 (6) | 83.54 (9) | 78.38 (6) | 86.98 (14) | 85.38 (19) | 87.33 (13) | 73.45 (7) | 83.31 (6) | 88.23 (10)
Jie Liao, Yanping Fu, Qingan Yan, Chunxia Xiao: Pyramid Multi-View Stereo with Local Consistency. Pacific Graphics 2019
GSE | - | 83.39 (10) | 77.82 (9) | 87.10 (13) | 85.70 (18) | 87.27 (14) | 73.41 (8) | 82.23 (9) | 88.34 (9)
COLMAP(SR) | - | 83.28 (11) | 76.24 (14) | 87.97 (6) | 87.61 (11) | 87.90 (10) | 72.76 (12) | 79.71 (16) | 88.39 (8)
ACMH+ | 87.92 (14) | 82.46 (12) | 76.63 (13) | 86.35 (17) | 86.95 (13) | 84.52 (23) | 71.68 (13) | 81.58 (12) | 87.56 (13)
ACMP | 89.59 (10) | 82.31 (13) | 75.00 (16) | 87.18 (12) | 84.49 (24) | 88.57 (8) | 68.43 (20) | 81.58 (12) | 88.49 (7)
Qingshan Xu and Wenbing Tao: Planar Prior Assisted PatchMatch Multi-View Stereo. AAAI 2020
LTVRE_ROB | 88.41 (13) | 82.23 (14) | 78.11 (7) | 84.97 (20) | 86.24 (16) | 82.83 (29) | 76.93 (2) | 79.30 (17) | 85.84 (17)
Andreas Kuhn, Heiko Hirschmüller, Daniel Scharstein, Helmut Mayer: A TV Prior for High-Quality Scalable Multi-View Stereo Reconstruction. International Journal of Computer Vision 2016
ACMM | 89.79 (8) | 82.20 (15) | 75.34 (15) | 86.76 (15) | 84.94 (21) | 87.68 (11) | 68.85 (19) | 81.84 (10) | 87.67 (12)
Qingshan Xu and Wenbing Tao: Multi-Scale Geometric Consistency Guided Multi-View Stereo. CVPR 2019
3Dnovator | 91.36 (5) | 82.19 (16) | 74.09 (18) | 87.59 (11) | 89.83 (5) | 90.10 (4) | 71.54 (14) | 76.63 (21) | 82.84 (22)
3Dnovator+ | 91.43 (4) | 81.91 (17) | 73.11 (21) | 87.78 (8) | 90.06 (4) | 90.17 (3) | 70.40 (17) | 75.82 (24) | 83.12 (19)
COLMAP_ROB (copyleft) | 87.81 (15) | 81.57 (18) | 74.74 (17) | 86.13 (18) | 83.81 (25) | 87.08 (17) | 69.46 (18) | 80.01 (15) | 87.49 (14)
Johannes L. Schönberger, Enliang Zheng, Marc Pollefeys, Jan-Michael Frahm: Pixelwise View Selection for Unstructured Multi-View Stereo. ECCV 2016
IB-MVS | 87.33 (17) | 81.13 (19) | 76.89 (12) | 83.96 (23) | 86.75 (14) | 84.70 (22) | 71.43 (15) | 82.36 (8) | 80.43 (25)
Christian Sormann, Mattia Rossi, Andreas Kuhn and Friedrich Fraundorfer: IB-MVS: An Iterative Algorithm for Deep Multi-View Stereo based on Binary Decisions. BMVC 2021
ACMH | 87.59 (16) | 80.55 (20) | 73.69 (19) | 85.11 (19) | 86.17 (17) | 82.81 (30) | 68.19 (21) | 79.20 (18) | 86.36 (16)
Qingshan Xu and Wenbing Tao: Multi-Scale Geometric Consistency Guided Multi-View Stereo. CVPR 2019
OpenMVS (copyleft) | 89.19 (12) | 80.38 (21) | 71.40 (23) | 86.37 (16) | 87.42 (12) | 89.29 (6) | 67.70 (22) | 75.10 (25) | 82.40 (24)
HY-MVS | 89.66 (9) | 79.55 (22) | 71.86 (22) | 84.69 (22) | 84.86 (22) | 86.32 (19) | 66.45 (23) | 77.26 (20) | 82.88 (21)
LPCS | - | 78.54 (23) | 73.62 (20) | 81.82 (26) | 79.59 (31) | 83.19 (27) | 71.25 (16) | 75.99 (23) | 82.69 (23)
tmmvs | - | 77.36 (24) | 66.00 (27) | 84.93 (21) | 87.91 (9) | 87.16 (15) | 64.41 (26) | 67.60 (31) | 79.73 (27)
A-TVSNet + Gipuma (copyleft) | - | 77.14 (25) | 67.96 (26) | 83.26 (24) | 80.53 (28) | 86.26 (20) | 65.25 (24) | 70.67 (26) | 82.99 (20)
BP-MVSNet | - | 76.76 (26) | 68.58 (25) | 82.22 (25) | 84.51 (23) | 77.66 (41) | 60.98 (30) | 76.18 (22) | 84.48 (18)
Christian Sormann, Patrick Knöbelreiter, Andreas Kuhn, Mattia Rossi, Thomas Pock, Friedrich Fraundorfer: BP-MVSNet: Belief-Propagation-Layers for Multi-View-Stereo. 3DV 2020
PVSNet_0 | 82.17 (19) | 74.27 (27) | 64.50 (29) | 80.78 (27) | 81.77 (27) | 80.64 (34) | 61.11 (28) | 67.89 (30) | 79.92 (26)
PVSNet | 86.66 (18) | 73.29 (28) | 65.21 (28) | 78.67 (31) | 80.40 (29) | 77.98 (39) | 64.04 (27) | 66.38 (34) | 77.62 (28)
OpenMVS_ROB (copyleft) | 81.14 (20) | 73.28 (29) | 63.11 (31) | 80.06 (29) | 82.70 (26) | 81.62 (33) | 61.11 (28) | 65.12 (36) | 75.87 (30)
CIDER | - | 72.92 (30) | 62.80 (32) | 79.66 (30) | 80.16 (30) | 83.22 (26) | 60.69 (31) | 64.90 (37) | 75.61 (32)
Qingshan Xu and Wenbing Tao: Learning Inverse Depth Regression for Multi-View Stereo with Correlation Cost Volume. AAAI 2020
test_1126 | - | 71.43 (31) | 57.57 (38) | 80.67 (28) | 85.16 (20) | 84.45 (24) | 55.16 (35) | 59.98 (42) | 72.40 (34)
test_1205 | - | 70.75 (32) | 62.35 (33) | 76.35 (35) | 78.70 (32) | 85.03 (21) | 55.63 (34) | 69.08 (27) | 65.34 (45)
PVSNet_LR | - | 70.64 (33) | 59.04 (36) | 78.38 (32) | 77.15 (35) | 81.75 (32) | 50.11 (39) | 67.97 (29) | 76.22 (29)
R-MVSNet | - | 70.54 (34) | 63.98 (30) | 74.91 (37) | 73.85 (39) | 77.87 (40) | 59.85 (32) | 68.11 (28) | 73.01 (33)
P-MVSNet | - | 70.42 (35) | 71.20 (24) | 69.90 (42) | 66.73 (46) | 72.02 (44) | 64.84 (25) | 77.55 (19) | 70.95 (36)
ANet-0.75 | - | 67.51 (36) | 58.66 (37) | 73.41 (39) | 70.12 (42) | 79.37 (35) | 50.72 (37) | 66.61 (33) | 70.74 (37)
AttMVS | - | 65.89 (37) | 60.14 (34) | 69.73 (43) | 64.68 (49) | 75.04 (42) | 58.80 (33) | 61.47 (40) | 69.47 (39)
mvs_zhu_1030 | - | 64.43 (38) | 51.10 (41) | 73.32 (40) | 71.88 (41) | 78.97 (38) | 42.87 (41) | 59.32 (43) | 69.10 (40)
ANet | - | 63.85 (39) | 52.65 (40) | 71.32 (41) | 70.12 (42) | 79.37 (35) | 48.53 (40) | 56.77 (46) | 64.48 (46)
Pnet-new- | - | 63.42 (40) | 42.13 (57) | 77.61 (33) | 77.71 (34) | 79.30 (37) | 41.06 (44) | 43.20 (67) | 75.82 (31)
CasMVSNet(SR_A) | - | 63.08 (41) | 45.33 (49) | 74.92 (36) | 74.36 (36) | 83.73 (25) | 37.04 (55) | 53.62 (50) | 66.66 (43)
CasMVSNet(base) | - | 61.88 (42) | 43.72 (52) | 73.99 (38) | 74.03 (38) | 81.95 (31) | 34.99 (60) | 52.45 (52) | 65.99 (44)
Pnet_fast | - | 59.94 (43) | 35.26 (73) | 76.39 (34) | 74.15 (37) | 82.99 (28) | 25.02 (74) | 45.51 (61) | 72.03 (35)
CPR_FA | - | 58.26 (44) | 56.35 (39) | 59.54 (54) | 56.43 (58) | 55.34 (58) | 50.61 (38) | 62.09 (39) | 66.83 (42)
unsupervisedMVS_cas | - | 57.32 (45) | 44.73 (51) | 65.71 (45) | 63.76 (50) | 71.21 (45) | 36.03 (57) | 53.44 (51) | 62.15 (49)
MVS_test_1 | - | 56.68 (46) | 46.21 (46) | 63.66 (48) | 72.26 (40) | 62.56 (54) | 31.06 (65) | 61.35 (41) | 56.15 (54)
MVSNet | - | 56.22 (47) | 46.49 (45) | 62.70 (50) | 57.79 (55) | 67.16 (52) | 38.93 (48) | 54.05 (48) | 63.16 (48)
A1Net | - | 54.85 (48) | 59.98 (35) | 51.43 (58) | 47.46 (68) | 42.97 (73) | 52.72 (36) | 67.24 (32) | 63.86 (47)
MVSNet_plusplus | - | 54.10 (49) | 36.12 (71) | 66.09 (44) | 78.36 (33) | 49.55 (62) | 20.40 (79) | 51.84 (53) | 70.36 (38)
test_mvsss | - | 54.02 (50) | 40.84 (58) | 62.81 (49) | 62.37 (51) | 69.60 (49) | 27.56 (70) | 54.13 (47) | 56.46 (53)
Pnet-blend++ | - | 53.68 (51) | 37.04 (66) | 64.76 (46) | 68.91 (44) | 70.02 (47) | 24.13 (75) | 49.96 (57) | 55.36 (55)
Pnet-blend | - | 53.68 (51) | 37.04 (66) | 64.76 (46) | 68.91 (44) | 70.02 (47) | 24.13 (75) | 49.96 (57) | 55.36 (55)
MVSCRF | - | 51.99 (53) | 37.64 (63) | 61.55 (52) | 58.88 (52) | 68.35 (50) | 34.99 (60) | 40.28 (70) | 57.43 (52)
MVSNet_++ | - | 50.85 (54) | 37.40 (64) | 59.81 (53) | 65.34 (48) | 46.28 (64) | 9.19 (84) | 65.60 (35) | 67.81 (41)
test_1124 | - | 50.28 (55) | 31.72 (75) | 62.65 (51) | 58.29 (54) | 70.82 (46) | 27.23 (72) | 36.21 (75) | 58.84 (50)
Snet | - | 49.32 (56) | 36.02 (72) | 58.18 (56) | 66.02 (47) | 50.24 (60) | 27.93 (69) | 44.12 (62) | 58.27 (51)
CasMVSNet(SR_B) | - | 48.99 (57) | 45.75 (47) | 51.15 (59) | 49.07 (66) | 62.78 (53) | 37.57 (49) | 53.93 (49) | 41.61 (67)
test3 | - | 45.71 (58) | 44.95 (50) | 46.21 (66) | 42.67 (72) | 44.26 (67) | 39.04 (47) | 50.87 (54) | 51.69 (57)
Pnet-eth | - | 45.62 (59) | 49.15 (43) | 43.27 (73) | 53.31 (61) | 29.96 (83) | 35.03 (59) | 63.26 (38) | 46.53 (64)
vp_mvsnet | - | 45.13 (60) | 25.47 (79) | 58.23 (55) | 53.74 (60) | 73.85 (43) | 19.74 (80) | 31.20 (76) | 47.10 (63)
unMVSv1 | - | 45.02 (61) | 43.12 (54) | 46.29 (65) | 47.83 (67) | 50.14 (61) | 42.67 (42) | 43.57 (63) | 40.92 (68)
RMVSNet | - | 44.67 (62) | 50.67 (42) | 40.67 (75) | 50.79 (62) | 41.21 (75) | 42.32 (43) | 59.02 (44) | 30.02 (76)
MVE (permissive) | 50.73 (23) | 44.67 (62) | 45.57 (48) | 44.07 (71) | 35.90 (79) | 52.30 (59) | 40.89 (45) | 50.26 (55) | 44.00 (66)
Simon Fuhrmann, Fabian Langguth, Michael Goesele: MVE - A Multi-View Reconstruction Environment. EUROGRAPHICS Workshops on Graphics and Cultural Heritage (2014)
TVSNet | - | 44.65 (64) | 42.86 (56) | 45.84 (67) | 43.41 (71) | 43.54 (71) | 35.52 (58) | 50.20 (56) | 50.56 (58)
hgnet | - | 44.61 (65) | 38.48 (61) | 48.69 (60) | 57.33 (56) | 60.08 (55) | 37.17 (53) | 39.80 (72) | 28.65 (78)
DPSNet | - | 44.61 (65) | 38.48 (61) | 48.69 (60) | 57.33 (56) | 60.08 (55) | 37.17 (53) | 39.80 (72) | 28.65 (78)
SGNet | - | 44.49 (67) | 43.48 (53) | 45.16 (69) | 42.20 (75) | 43.60 (70) | 37.53 (50) | 49.43 (59) | 49.69 (59)
PSD-MVSNet | - | 44.35 (68) | 43.10 (55) | 45.19 (68) | 42.64 (73) | 43.61 (69) | 37.46 (51) | 48.74 (60) | 49.32 (60)
MVSNet + Gipuma | - | 44.11 (69) | 38.76 (60) | 47.67 (63) | 50.05 (65) | 45.26 (66) | 34.93 (62) | 42.60 (68) | 47.71 (61)
F/T MVSNet+Gipuma | - | 43.59 (70) | 37.31 (65) | 47.78 (62) | 50.12 (63) | 45.64 (65) | 34.25 (63) | 40.37 (69) | 47.60 (62)
PMVS (copyleft) | 53.92 (22) | 42.74 (71) | 26.50 (77) | 53.57 (57) | 54.03 (59) | 68.05 (51) | 36.35 (56) | 16.65 (82) | 38.64 (69)
Y. Furukawa, J. Ponce: Accurate, dense, and robust multiview stereopsis. PAMI (2010)
firsttry | - | 41.11 (72) | 36.23 (68) | 44.36 (70) | 43.82 (70) | 43.77 (68) | 32.28 (64) | 40.18 (71) | 45.51 (65)
QQQNet | - | 40.79 (73) | 40.37 (59) | 41.07 (74) | 42.37 (74) | 43.46 (72) | 37.29 (52) | 43.44 (64) | 37.38 (72)
example | - | 40.49 (74) | 30.54 (76) | 47.11 (64) | 58.75 (53) | 58.88 (57) | 29.97 (66) | 31.12 (77) | 23.71 (82)
metmvs_fine | - | 39.41 (75) | 48.46 (44) | 33.38 (79) | 35.53 (80) | 31.67 (82) | 39.28 (46) | 57.64 (45) | 32.95 (75)
CCVNet | - | 38.86 (76) | 31.75 (74) | 43.59 (72) | 44.13 (69) | 49.27 (63) | 25.48 (73) | 38.01 (74) | 37.38 (72)
SVVNet | - | 37.41 (77) | 36.23 (68) | 38.19 (76) | 37.28 (76) | 38.77 (76) | 29.02 (67) | 43.44 (64) | 38.53 (70)
ternet | - | 37.41 (77) | 36.23 (68) | 38.19 (76) | 37.28 (76) | 38.77 (76) | 29.02 (67) | 43.44 (64) | 38.53 (70)
unMVSmet | - | 32.09 (79) | 25.89 (78) | 36.22 (78) | 36.96 (78) | 42.02 (74) | 23.79 (77) | 27.99 (78) | 29.68 (77)
test_1120 (copyleft) | - | 27.94 (80) | 22.22 (81) | 31.76 (80) | 20.92 (84) | 37.22 (79) | 18.54 (83) | 25.90 (80) | 37.14 (74)
confMetMVS | - | 26.62 (81) | 23.72 (80) | 28.55 (82) | 28.79 (81) | 33.00 (81) | 21.33 (78) | 26.10 (79) | 23.86 (81)
Cas-MVS_preliminary | - | 25.33 (82) | 19.98 (82) | 28.90 (81) | 24.65 (82) | 34.26 (80) | 18.62 (82) | 21.35 (81) | 27.78 (80)
CMPMVS (binary) | 62.92 (21) | 8.44 (83) | 0.22 (84) | 13.92 (83) | 4.26 (85) | 37.50 (78) | 0.45 (86) | 0.00 (84) | 0.00 (85)
M. Jancosek, T. Pajdla: Multi-View Reconstruction Preserving Weakly-Supported Surfaces. CVPR 2011
FADENet | - | 1.48 (84) | 1.79 (83) | 1.27 (85) | 2.11 (86) | 1.06 (85) | 2.23 (85) | 1.34 (83) | 0.65 (84)
dnet | - | 0.00 (85) | 0.00 (85) | 0.00 (86) | 0.00 (87) | 0.00 (86) | 0.00 (87) | 0.00 (84) | 0.00 (85)
test_MVS | - | - | - | - | - | - | 27.53 (71) | - | -
test_robustmvs | - | - | - | 13.65 (84) | 22.17 (83) | 12.51 (84) | 19.49 (81) | - | 6.28 (83)
UnsupFinetunedMVSNet | - | - | - | - | 50.12 (63) | - | - | - | -