This table lists the benchmark results for the low-res many-view scenario. The evaluated metrics are accuracy, completeness, and the F1 score (*), which combines the two.

(*) For exact definitions, detailing how potentially incomplete ground truth is taken into account, see our paper.
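As a rough sketch of the ranking metric (the paper gives the exact definitions, including the handling of incomplete ground truth), the F1 score is commonly the harmonic mean of accuracy and completeness at a given distance tolerance; the function below assumes that standard definition:

```python
def f1_score(accuracy: float, completeness: float) -> float:
    """Harmonic mean of accuracy (fraction of reconstructed points within
    the tolerance of the ground truth) and completeness (fraction of
    ground-truth points covered by the reconstruction)."""
    if accuracy + completeness == 0.0:
        return 0.0
    return 2.0 * accuracy * completeness / (accuracy + completeness)

# Example: 90% accuracy and 80% completeness.
print(round(f1_score(0.9, 0.8), 4))  # → 0.8471
```

The harmonic mean penalizes imbalance: a method that is very accurate but very incomplete (or vice versa) scores low.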

The datasets are grouped into categories, and an average is computed for a method over a category only if results of the method are available for all datasets within that category. Note that the category "all" includes both the high-res multi-view and the low-res many-view scenarios.
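The averaging rule can be sketched as follows. The dataset grouping is inferred from the table's own category averages (storage room and storage room 2 count as indoor; lakeside, sand box, and tunnel as outdoor); the dict-based layout is purely illustrative, not the benchmark's actual data format:

```python
# Dataset grouping for the low-res many-view scenario, inferred from the
# category averages in the table below.
CATEGORIES = {
    "indoor": ["storage room", "storage room 2"],
    "outdoor": ["lakeside", "sand box", "tunnel"],
}

def category_average(results, datasets):
    """Average a method's scores over a category, but only if the method
    has a result for every dataset in the category; otherwise report
    nothing (None), as the benchmark leaves the cell empty."""
    if not all(d in results for d in datasets):
        return None
    return sum(results[d] for d in datasets) / len(datasets)

# A method missing 'tunnel' gets an indoor average but no outdoor average.
scores = {"storage room": 52.57, "storage room 2": 77.19,
          "lakeside": 97.33, "sand box": 93.46}
print(round(category_average(scores, CATEGORIES["indoor"]), 2))  # → 64.88
print(category_average(scores, CATEGORIES["outdoor"]))           # → None
```

Requiring complete results before averaging keeps category scores comparable: a method cannot improve its average by skipping its hardest dataset.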

Methods with the suffix _ROB may participate in the Robust Vision Challenge.

Click a dataset result cell to show a visualization of the reconstruction. For training datasets, ground truth and accuracy / completeness visualizations are also available. The visualizations may not work with mobile browsers.




Each cell shows the method's score, with its rank in that column in parentheses. Rows are sorted by the lakeside column. License tags (copyleft, permissive, binary) follow the method name.

| Method | Info | all | low-res many-view | indoor | outdoor | lakeside | sand box | storage room | storage room 2 | tunnel |
|---|---|---|---|---|---|---|---|---|---|---|
| CasMVSNet(SR_B) | | | 82.29 (11) | 64.88 (29) | 93.90 (1) | 97.33 (1) | 93.46 (2) | 52.57 (33) | 77.19 (26) | 90.91 (2) |
| AttMVS | | | 82.46 (10) | 75.33 (12) | 87.21 (11) | 96.89 (2) | 75.50 (32) | 65.76 (14) | 84.90 (8) | 89.24 (7) |
| MVSNet | | | 83.40 (7) | 73.23 (16) | 90.18 (3) | 96.40 (3) | 96.26 (1) | 60.66 (19) | 85.80 (7) | 77.89 (38) |
| test_1124 | | | 85.71 (3) | 89.49 (2) | 83.20 (19) | 93.00 (4) | 69.67 (43) | 90.66 (1) | 88.31 (4) | 86.91 (14) |
| test_1120 (copyleft) | | | 85.87 (2) | 90.81 (1) | 82.58 (24) | 92.77 (5) | 71.70 (39) | 90.04 (2) | 91.57 (1) | 83.26 (25) |
| 3Dnovator | | 92.54 (3) | 87.14 (1) | 82.53 (4) | 90.22 (2) | 92.38 (6) | 87.90 (7) | 76.07 (3) | 88.99 (3) | 90.37 (3) |
| LTVRE_ROB | Andreas Kuhn, Heiko Hirschmüller, Daniel Scharstein, Helmut Mayer: A TV Prior for High-Quality Scalable Multi-View Stereo Reconstruction. International Journal of Computer Vision 2016 | 93.87 (1) | 84.10 (5) | 76.06 (10) | 89.46 (5) | 92.03 (7) | 89.46 (3) | 74.68 (5) | 77.44 (25) | 86.88 (16) |
| tm-dncc | | | 80.42 (17) | 68.45 (23) | 88.41 (7) | 91.55 (8) | 83.34 (17) | 58.49 (24) | 78.40 (24) | 90.33 (4) |
| tmmvs | | | 83.24 (8) | 76.41 (9) | 87.80 (9) | 91.55 (8) | 83.34 (17) | 70.47 (8) | 82.35 (14) | 88.50 (11) |
| Pnet-new- | | | 83.71 (6) | 82.83 (3) | 84.29 (17) | 91.11 (10) | 71.44 (41) | 75.11 (4) | 90.55 (2) | 90.33 (4) |
| 3Dnovator+ | | 92.74 (2) | 85.26 (4) | 79.20 (5) | 89.30 (6) | 91.06 (11) | 85.88 (12) | 71.70 (7) | 86.71 (5) | 90.95 (1) |
| test_1126 | | | 81.50 (13) | 76.94 (7) | 84.53 (16) | 90.07 (12) | 80.09 (22) | 72.16 (6) | 81.72 (15) | 83.44 (24) |
| mvs_zhu_1030 | | | 81.21 (15) | 70.41 (20) | 88.40 (8) | 89.88 (13) | 88.12 (5) | 56.56 (28) | 84.27 (11) | 87.20 (13) |
| PMVS (copyleft) | Y. Furukawa, J. Ponce: Accurate, dense, and robust multiview stereopsis. PAMI 2010 | 87.21 (14) | 68.59 (38) | 37.06 (57) | 89.61 (4) | 89.58 (14) | 88.93 (4) | 43.78 (45) | 30.34 (68) | 90.31 (6) |
| P-MVSNet | | | 77.22 (24) | 69.44 (21) | 82.40 (25) | 88.76 (15) | 72.48 (37) | 59.92 (21) | 78.95 (22) | 85.96 (19) |
| OpenMVS (copyleft) | | 89.45 (8) | 82.68 (9) | 76.80 (8) | 86.60 (13) | 87.53 (16) | 85.75 (13) | 67.54 (10) | 86.07 (6) | 86.53 (18) |
| TAPA-MVS(SR) | | | 79.05 (21) | 76.98 (6) | 80.43 (29) | 86.95 (17) | 72.03 (38) | 69.38 (9) | 84.57 (9) | 82.32 (26) |
| DeepPCF-MVS | | 90.46 (6) | 81.47 (14) | 72.84 (17) | 87.22 (10) | 86.87 (18) | 85.94 (11) | 64.61 (17) | 81.08 (18) | 88.86 (9) |
| DeepC-MVS_fast | Andreas Kuhn, Christian Sormann, Mattia Rossi, Oliver Erdler, Friedrich Fraundorfer: DeepC-MVS: Deep Confidence Prediction for Multi-View Stereo Reconstruction. 3DV 2020 | 89.96 (7) | 80.90 (16) | 71.96 (18) | 86.87 (12) | 86.16 (19) | 85.65 (14) | 63.68 (18) | 80.23 (19) | 88.79 (10) |
| OpenMVS_ROB (copyleft) | | 85.12 (16) | 74.56 (28) | 71.88 (19) | 76.34 (36) | 86.01 (20) | 62.96 (47) | 65.18 (16) | 78.58 (23) | 80.05 (32) |
| Pnet-blend++ | | | 79.10 (19) | 73.55 (14) | 82.79 (22) | 85.61 (21) | 87.39 (8) | 65.83 (12) | 81.27 (16) | 75.38 (40) |
| Pnet-blend | | | 79.10 (19) | 73.55 (14) | 82.79 (22) | 85.61 (21) | 87.39 (8) | 65.83 (12) | 81.27 (16) | 75.38 (40) |
| DeepC-MVS | Andreas Kuhn, Christian Sormann, Mattia Rossi, Oliver Erdler, Friedrich Fraundorfer: DeepC-MVS: Deep Confidence Prediction for Multi-View Stereo Reconstruction. 3DV 2020 | 91.39 (4) | 81.68 (12) | 75.54 (11) | 85.78 (14) | 85.54 (23) | 84.87 (15) | 66.52 (11) | 84.56 (10) | 86.91 (14) |
| LPCS | | | 77.52 (22) | 66.30 (27) | 85.00 (15) | 84.90 (24) | 83.49 (16) | 59.63 (22) | 72.97 (34) | 86.62 (17) |
| PCF-MVS | Andreas Kuhn, Shan Lin, Oliver Erdler: Plane Completion and Filtering for Multi-View Stereo Reconstruction. GCPR 2019 | 84.52 (17) | 73.49 (30) | 62.30 (36) | 80.95 (28) | 84.42 (25) | 78.59 (24) | 55.89 (30) | 68.72 (45) | 79.83 (33) |
| TAPA-MVS | Andrea Romanoni, Matteo Matteucci: TAPA-MVS: Textureless-Aware PAtchMatch Multi-View Stereo. ICCV 2019 | 88.58 (10) | 79.17 (18) | 74.71 (13) | 82.14 (26) | 83.86 (26) | 80.69 (21) | 65.33 (15) | 84.10 (12) | 81.86 (27) |
| GSE | | | 75.15 (26) | 62.89 (35) | 83.32 (18) | 83.05 (27) | 81.11 (20) | 55.75 (31) | 70.03 (42) | 85.80 (20) |
| vp_mvsnet | | | 66.63 (40) | 50.71 (47) | 77.24 (35) | 80.40 (28) | 74.20 (34) | 17.94 (74) | 83.49 (13) | 77.11 (39) |
| CasMVSNet(base) | | | 74.99 (27) | 62.90 (34) | 83.05 (21) | 79.99 (29) | 88.10 (6) | 49.26 (38) | 76.54 (27) | 81.07 (30) |
| COLMAP(base) | | | 75.61 (25) | 66.44 (26) | 81.73 (27) | 78.81 (30) | 78.86 (23) | 58.69 (23) | 74.19 (30) | 87.52 (12) |
| HY-MVS | | 82.50 (18) | 72.16 (33) | 66.92 (25) | 75.64 (37) | 78.74 (31) | 69.74 (42) | 60.03 (20) | 73.81 (31) | 78.45 (37) |
| COLMAP_ROB (copyleft) | Johannes L. Schönberger, Enliang Zheng, Marc Pollefeys, Jan-Michael Frahm: Pixelwise View Selection for Unstructured Multi-View Stereo. ECCV 2016 | 91.06 (5) | 77.39 (23) | 68.75 (22) | 83.16 (20) | 77.98 (32) | 82.59 (19) | 57.72 (26) | 79.78 (20) | 88.89 (8) |
| MVSNet_plusplus | | | 53.83 (53) | 41.31 (50) | 62.18 (53) | 77.12 (33) | 27.65 (63) | 11.93 (80) | 70.69 (40) | 81.76 (29) |
| COLMAP(SR) | | | 73.18 (32) | 64.70 (30) | 78.83 (31) | 76.92 (34) | 75.11 (33) | 58.32 (25) | 71.09 (39) | 84.45 (22) |
| ACMH | Qingshan Xu, Wenbing Tao: Multi-Scale Geometric Consistency Guided Multi-View Stereo. CVPR 2019 | 88.36 (12) | 68.62 (37) | 63.35 (33) | 72.13 (42) | 76.30 (35) | 61.45 (50) | 52.32 (34) | 74.39 (29) | 78.63 (36) |
| ACMH+ | | 88.43 (11) | 69.12 (36) | 64.25 (31) | 72.37 (40) | 75.42 (36) | 62.09 (48) | 55.28 (32) | 73.23 (33) | 79.59 (35) |
| CasMVSNet(SR_A) | | | 73.59 (29) | 63.86 (32) | 80.08 (30) | 73.85 (37) | 86.72 (10) | 51.68 (36) | 76.04 (28) | 79.66 (34) |
| ACMM | Qingshan Xu, Wenbing Tao: Multi-Scale Geometric Consistency Guided Multi-View Stereo. CVPR 2019 | 88.83 (9) | 70.91 (34) | 60.87 (37) | 77.60 (33) | 73.74 (38) | 78.44 (25) | 50.09 (37) | 71.64 (36) | 80.61 (31) |
| PLC (copyleft) | Jie Liao, Yanping Fu, Qingan Yan, Chunxia Xiao: Pyramid Multi-View Stereo with Local Consistency. Pacific Graphics 2019 | 85.34 (15) | 73.20 (31) | 65.63 (28) | 78.25 (32) | 73.53 (39) | 77.42 (27) | 57.50 (27) | 73.77 (32) | 83.78 (23) |
| ANet-0.75 | | | 55.72 (50) | 29.52 (65) | 73.18 (39) | 72.65 (40) | 77.24 (28) | 22.81 (62) | 36.23 (63) | 69.65 (47) |
| ANet | | | 64.16 (43) | 49.90 (48) | 73.66 (38) | 72.65 (40) | 77.24 (28) | 43.69 (46) | 56.11 (49) | 71.09 (44) |
| ACMP | Qingshan Xu, Wenbing Tao: Planar Prior Assisted PatchMatch Multi-View Stereo. AAAI 2020 | 88.15 (13) | 70.01 (35) | 59.11 (40) | 77.27 (34) | 72.09 (42) | 77.94 (26) | 48.49 (39) | 69.73 (43) | 81.78 (28) |
| IB-MVS | Christian Sormann, Mattia Rossi, Andreas Kuhn, Friedrich Fraundorfer: IB-MVS: An Iterative Algorithm for Deep Multi-View Stereo based on Binary Decisions. BMVC 2021 | 77.21 (19) | 63.04 (44) | 59.74 (39) | 65.23 (48) | 69.93 (43) | 62.02 (49) | 47.98 (40) | 71.51 (37) | 63.73 (51) |
| PVSNet_LR | | | 65.63 (42) | 55.66 (42) | 72.28 (41) | 69.26 (44) | 73.17 (35) | 40.76 (47) | 70.56 (41) | 74.42 (42) |
| Pnet_fast | | | 66.28 (41) | 58.61 (41) | 71.39 (43) | 67.96 (45) | 76.81 (31) | 46.02 (43) | 71.20 (38) | 69.40 (48) |
| CIDER | Qingshan Xu, Wenbing Tao: Learning Inverse Depth Regression for Multi-View Stereo with Correlation Cost Volume. AAAI 2020 | | 68.12 (39) | 67.69 (24) | 68.40 (44) | 67.10 (46) | 67.39 (44) | 56.32 (29) | 79.06 (21) | 70.71 (46) |
| test_1205 | | | 60.45 (46) | 51.48 (45) | 66.42 (45) | 66.80 (47) | 73.02 (36) | 46.79 (42) | 56.17 (48) | 59.45 (54) |
| R-MVSNet | | | 62.92 (45) | 59.84 (38) | 64.98 (49) | 66.63 (48) | 67.11 (46) | 47.38 (41) | 72.30 (35) | 61.19 (52) |
| MVSCRF | | | 58.45 (49) | 50.89 (46) | 63.49 (51) | 65.53 (49) | 67.21 (45) | 32.48 (52) | 69.31 (44) | 57.74 (55) |
| BP-MVSNet | Christian Sormann, Patrick Knöbelreiter, Andreas Kuhn, Mattia Rossi, Thomas Pock, Friedrich Fraundorfer: BP-MVSNet: Belief-Propagation-Layers for Multi-View-Stereo. 3DV 2020 | | 54.95 (52) | 44.08 (49) | 62.20 (52) | 64.58 (50) | 55.08 (54) | 35.81 (49) | 52.35 (52) | 66.94 (49) |
| PVSNet_0 | | 70.34 (21) | 60.14 (48) | 51.55 (44) | 65.87 (47) | 64.30 (51) | 59.84 (51) | 45.76 (44) | 57.35 (47) | 73.47 (43) |
| PVSNet | | 76.22 (20) | 60.21 (47) | 55.24 (43) | 63.53 (50) | 62.99 (52) | 56.76 (52) | 51.86 (35) | 58.62 (46) | 70.83 (45) |
| A-TVSNet + Gipuma (copyleft) | | | 55.66 (51) | 39.87 (51) | 66.19 (46) | 61.25 (53) | 71.47 (40) | 39.48 (48) | 40.27 (61) | 65.86 (50) |
| Pnet-eth | | | 41.50 (55) | 37.56 (54) | 44.12 (56) | 59.98 (54) | 35.12 (60) | 25.76 (59) | 49.36 (53) | 37.26 (64) |
| MVS_test_1 | | | 41.26 (56) | 33.59 (60) | 46.37 (55) | 57.31 (55) | 36.32 (58) | 14.34 (79) | 52.84 (51) | 45.48 (57) |
| unsupervisedMVS_cas | | | 45.33 (54) | 35.07 (58) | 52.17 (54) | 49.98 (56) | 55.86 (53) | 26.75 (58) | 43.39 (56) | 50.68 (56) |
| CMPMVS (binary) | M. Jancosek, T. Pajdla: Multi-View Reconstruction Preserving Weakly-Supported Surfaces. CVPR 2011 | 68.83 (22) | 24.56 (75) | 0.66 (83) | 40.49 (59) | 44.64 (57) | 76.82 (30) | 1.32 (85) | 0.00 (84) | 0.00 (85) |
| Snet | | | 35.89 (60) | 38.02 (53) | 34.46 (64) | 42.09 (58) | 25.34 (71) | 31.63 (53) | 44.41 (55) | 35.96 (67) |
| F/T MVSNet+Gipuma | | | 37.77 (57) | 38.70 (52) | 37.15 (61) | 41.21 (59) | 26.31 (64) | 34.11 (50) | 43.29 (57) | 43.92 (58) |
| UnsupFinetunedMVSNet | | | | | | 41.21 (59) | | | | |
| MVSNet_++ | | | 36.74 (58) | 29.99 (64) | 41.25 (57) | 41.08 (61) | 21.58 (76) | 4.20 (84) | 55.79 (50) | 61.07 (53) |
| example | | | 22.96 (77) | 15.83 (82) | 27.71 (71) | 39.92 (62) | 34.87 (61) | 16.64 (77) | 15.02 (82) | 8.35 (82) |
| MVSNet + Gipuma | | | 36.49 (59) | 37.36 (55) | 35.92 (63) | 39.51 (63) | 25.68 (70) | 33.78 (51) | 40.94 (59) | 42.57 (61) |
| hgnet | | | 28.77 (65) | 28.46 (67) | 28.98 (67) | 37.72 (64) | 36.88 (56) | 22.34 (63) | 34.57 (64) | 12.35 (80) |
| DPSNet | | | 28.77 (65) | 28.46 (67) | 28.98 (67) | 37.72 (64) | 36.88 (56) | 22.34 (63) | 34.57 (64) | 12.35 (80) |
| test_mvsss | | | 29.61 (64) | 19.32 (79) | 36.47 (62) | 33.95 (66) | 44.00 (55) | 10.69 (81) | 27.94 (72) | 31.45 (70) |
| CCVNet | | | 31.72 (63) | 23.36 (76) | 37.29 (60) | 32.94 (67) | 35.83 (59) | 23.09 (61) | 23.62 (79) | 43.10 (59) |
| CPR_FA | | | 31.79 (62) | 33.68 (59) | 30.53 (65) | 26.52 (68) | 27.95 (62) | 26.79 (57) | 40.57 (60) | 37.13 (65) |
| Cas-MVS_preliminary | | | 34.22 (61) | 23.72 (72) | 41.22 (58) | 24.86 (69) | 13.53 (81) | 21.84 (66) | 25.60 (77) | 85.27 (21) |
| firsttry | | | 22.43 (79) | 18.70 (80) | 24.92 (75) | 24.56 (70) | 21.07 (77) | 17.26 (75) | 20.14 (80) | 29.14 (71) |
| A1Net | | | 28.59 (67) | 31.33 (62) | 26.77 (73) | 23.32 (71) | 20.44 (79) | 25.11 (60) | 37.56 (62) | 36.53 (66) |
| TVSNet | | | 26.87 (70) | 26.31 (69) | 27.24 (72) | 22.04 (72) | 25.97 (66) | 21.82 (67) | 30.81 (67) | 33.71 (68) |
| unMVSmet | | | 27.29 (68) | 37.09 (56) | 20.76 (79) | 21.57 (73) | 19.92 (80) | 29.37 (54) | 44.81 (54) | 20.79 (75) |
| test3 | | | 25.90 (74) | 25.56 (70) | 26.12 (74) | 20.86 (74) | 25.97 (66) | 21.87 (65) | 29.25 (71) | 31.54 (69) |
| QQQNet | | | 27.27 (69) | 23.68 (73) | 29.66 (66) | 20.58 (75) | 25.28 (72) | 20.44 (70) | 26.93 (73) | 43.10 (59) |
| RMVSNet | | | 22.89 (78) | 28.83 (66) | 18.93 (80) | 20.40 (76) | 21.96 (75) | 28.10 (56) | 29.56 (69) | 14.42 (79) |
| SGNet | | | 23.30 (76) | 22.41 (77) | 23.89 (76) | 19.97 (77) | 24.65 (73) | 18.80 (73) | 26.01 (76) | 27.05 (72) |
| PSD-MVSNet | | | 22.04 (80) | 20.75 (78) | 22.90 (78) | 19.67 (78) | 23.59 (74) | 17.26 (75) | 24.24 (78) | 25.46 (74) |
| SVVNet | | | 26.60 (71) | 23.52 (74) | 28.65 (69) | 19.11 (79) | 25.79 (68) | 20.11 (71) | 26.93 (73) | 41.05 (62) |
| ternet | | | 26.60 (71) | 23.52 (74) | 28.65 (69) | 19.11 (79) | 25.79 (68) | 20.11 (71) | 26.93 (73) | 41.05 (62) |
| unMVSv1 | | | 17.38 (83) | 16.06 (81) | 18.27 (81) | 19.03 (81) | 20.72 (78) | 15.92 (78) | 16.19 (81) | 15.05 (77) |
| confMetMVS | | | 20.09 (82) | 25.00 (71) | 16.82 (82) | 18.50 (82) | 12.38 (82) | 20.47 (69) | 29.53 (70) | 19.57 (76) |
| MVE (permissive) | Simon Fuhrmann, Fabian Langguth, Michael Goesele: MVE - A Multi-View Reconstruction Environment. EUROGRAPHICS Workshops on Graphics and Cultural Heritage 2014 | 59.87 (23) | 26.31 (73) | 31.05 (63) | 23.15 (77) | 16.99 (83) | 26.25 (65) | 28.39 (55) | 33.71 (66) | 26.21 (73) |
| metmvs_fine | | | 20.53 (81) | 32.00 (61) | 12.88 (83) | 12.30 (84) | 11.56 (83) | 21.14 (68) | 42.86 (58) | 14.77 (78) |
| test_robustmvs | | | 4.01 (84) | 6.76 (85) | 3.49 (84) | 6.41 (83) | 1.77 (83) | | | |
| FADENet | | | 0.40 (84) | 0.50 (84) | 0.33 (85) | 0.58 (86) | 0.22 (85) | 0.62 (86) | 0.38 (83) | 0.18 (84) |
| dnet | | | 0.00 (85) | 0.00 (85) | 0.00 (86) | 0.00 (87) | 0.00 (86) | 0.00 (87) | 0.00 (84) | 0.00 (85) |
| test_MVS1 | | | 0.18 (82) | | | | | | | |