This table lists the benchmark results for the low-res many-view scenario. For exact definitions of the evaluated metrics, detailing how potentially incomplete ground truth is taken into account, see our paper.

The datasets are grouped into categories, and a category average is computed for a method only if results of that method are available for all datasets within the category. Note that the category "all" includes both the high-res multi-view and the low-res many-view scenarios.
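The averaging rule above can be illustrated with a minimal Python sketch; the function, the scores, and the dataset groupings it is called with are hypothetical and serve only to show when a category average is or is not reported:

def category_average(scores, category_datasets):
    # A category average is reported only if every dataset in the category
    # has a result for this method; otherwise no value is shown.
    if any(dataset not in scores for dataset in category_datasets):
        return None
    return sum(scores[dataset] for dataset in category_datasets) / len(category_datasets)

# Hypothetical per-dataset scores for a method that submitted only three results.
method_scores = {"lakeside": 97.5, "sand box": 96.1, "tunnel": 99.0}

print(category_average(method_scores, ["lakeside", "sand box", "tunnel"]))    # 97.53...
print(category_average(method_scores, ["storage room", "storage room 2"]))    # None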

Methods with the suffix _ROB may participate in the Robust Vision Challenge.

Click a dataset result cell to show a visualization of the reconstruction. For training datasets, ground truth and accuracy / completeness visualizations are also available. The visualizations may not work with mobile browsers.




Each result cell shows the score followed by the method's rank in parentheses; "-" marks a category or dataset for which the method has no result. Where available, the method's publication is listed on the line directly below its results.

Method | all | low-res many-view | indoor | outdoor | lakeside | sand box | storage room | storage room 2 | tunnel
test_1124 | - | 97.51 (1) | 96.38 (1) | 98.27 (8) | 99.58 (2) | 96.12 (27) | 95.79 (1) | 96.96 (11) | 99.10 (1)
test_1120 (copyleft) | - | 97.11 (2) | 96.16 (2) | 97.74 (13) | 98.72 (7) | 96.98 (22) | 94.90 (2) | 97.43 (5) | 97.50 (18)
3Dnovator | 98.27 (2) | 96.99 (3) | 94.47 (3) | 98.67 (3) | 97.83 (10) | 99.24 (6) | 92.01 (3) | 96.93 (12) | 98.94 (4)
Pnet-new | - | 96.49 (4) | 93.65 (6) | 98.39 (7) | 99.16 (5) | 97.29 (21) | 90.27 (7) | 97.02 (10) | 98.71 (5)
AttMVS | - | 96.22 (5) | 91.52 (16) | 99.36 (1) | 99.59 (1) | 99.50 (3) | 87.68 (15) | 95.36 (22) | 98.98 (2)
MVSNet | - | 96.11 (6) | 92.12 (13) | 98.77 (2) | 99.42 (3) | 99.78 (2) | 86.41 (18) | 97.84 (4) | 97.10 (22)
HY-MVS | 95.94 (13) | 96.04 (7) | 93.13 (7) | 97.98 (10) | 97.11 (18) | 98.27 (15) | 88.06 (13) | 98.20 (2) | 98.56 (7)
3Dnovator+ | 97.89 (3) | 95.95 (8) | 92.88 (8) | 97.99 (9) | 96.94 (19) | 98.52 (13) | 89.88 (8) | 95.89 (16) | 98.50 (8)
test_1126 | - | 95.78 (9) | 92.58 (10) | 97.92 (11) | 98.89 (6) | 98.35 (14) | 89.40 (9) | 95.76 (19) | 96.51 (31)
OpenMVS (copyleft) | 96.65 (7) | 95.59 (10) | 92.21 (11) | 97.84 (12) | 96.27 (22) | 98.97 (8) | 88.62 (10) | 95.80 (18) | 98.29 (9)
LTVRE_ROB | 98.40 (1) | 95.34 (11) | 92.18 (12) | 97.44 (17) | 97.15 (17) | 98.03 (17) | 91.46 (6) | 92.90 (32) | 97.14 (21)
Andreas Kuhn, Heiko Hirschmüller, Daniel Scharstein, Helmut Mayer: A TV Prior for High-Quality Scalable Multi-View Stereo Reconstruction. International Journal of Computer Vision 2016
TAPA-MVS(SR) | - | 95.29 (12) | 92.68 (9) | 97.03 (20) | 96.70 (21) | 96.89 (23) | 88.16 (12) | 97.21 (7) | 97.50 (18)
tmmvs | - | 95.25 (13) | 91.86 (15) | 97.52 (15) | 97.62 (13) | 96.79 (25) | 88.50 (11) | 95.23 (23) | 98.14 (12)
TAPA-MVS | 96.21 (11) | 95.20 (14) | 92.05 (14) | 97.30 (18) | 96.93 (20) | 98.17 (16) | 87.93 (14) | 96.17 (14) | 96.81 (27)
Andrea Romanoni, Matteo Matteucci: TAPA-MVS: Textureless-Aware PAtchMatch Multi-View Stereo. ICCV 2019
mvs_zhu_1030 | - | 95.16 (15) | 90.25 (19) | 98.44 (6) | 97.68 (12) | 99.49 (4) | 83.22 (25) | 97.28 (6) | 98.16 (10)
P-MVSNet | - | 94.83 (16) | 89.24 (23) | 98.55 (5) | 97.36 (16) | 99.33 (5) | 83.98 (22) | 94.50 (24) | 98.97 (3)
Pnet-blend++ | - | 94.52 (17) | 94.40 (4) | 94.61 (30) | 98.03 (8) | 98.91 (9) | 91.73 (4) | 97.06 (8) | 86.88 (54)
Pnet-blend | - | 94.52 (17) | 94.40 (4) | 94.61 (30) | 98.03 (8) | 98.91 (9) | 91.73 (4) | 97.06 (8) | 86.88 (54)
DeepC-MVS | 97.60 (4) | 94.32 (19) | 91.48 (17) | 96.21 (24) | 95.12 (26) | 95.94 (30) | 86.92 (17) | 96.03 (15) | 97.57 (17)
Andreas Kuhn, Christian Sormann, Mattia Rossi, Oliver Erdler, Friedrich Fraundorfer: DeepC-MVS: Deep Confidence Prediction for Multi-View Stereo Reconstruction. 3DV 2020
LPCS | - | 94.29 (20) | 89.33 (22) | 97.59 (14) | 95.31 (24) | 98.86 (12) | 86.05 (19) | 92.61 (34) | 98.60 (6)
OpenMVS_ROB (copyleft) | 95.38 (14) | 94.28 (21) | 91.29 (18) | 96.27 (23) | 97.52 (15) | 94.78 (36) | 87.11 (16) | 95.47 (20) | 96.52 (30)
tm-dncc | - | 93.68 (22) | 87.92 (27) | 97.51 (16) | 97.62 (13) | 96.79 (25) | 83.04 (28) | 92.81 (33) | 98.14 (12)
DeepPCF-MVS | 96.93 (5) | 93.60 (23) | 89.75 (21) | 96.17 (26) | 95.24 (25) | 95.47 (32) | 85.12 (20) | 94.38 (26) | 97.79 (15)
CasMVSNet(SR_B) | - | 93.45 (24) | 85.74 (39) | 98.59 (4) | 99.20 (4) | 99.92 (1) | 79.66 (34) | 91.83 (38) | 96.66 (29)
DeepC-MVS_fast | 96.85 (6) | 93.36 (25) | 89.11 (24) | 96.20 (25) | 94.70 (28) | 95.90 (31) | 84.83 (21) | 93.38 (28) | 98.01 (14)
Andreas Kuhn, Christian Sormann, Mattia Rossi, Oliver Erdler, Friedrich Fraundorfer: DeepC-MVS: Deep Confidence Prediction for Multi-View Stereo Reconstruction. 3DV 2020
GSE | - | 92.65 (26) | 86.87 (31) | 96.50 (22) | 93.80 (33) | 97.95 (20) | 83.16 (26) | 90.59 (44) | 97.74 (16)
PVSNet_LR | - | 91.95 (27) | 87.47 (29) | 94.93 (29) | 94.42 (29) | 95.25 (33) | 79.53 (35) | 95.42 (21) | 95.13 (38)
COLMAP(SR) | - | 91.87 (28) | 87.99 (26) | 94.46 (32) | 92.04 (39) | 94.57 (37) | 83.60 (24) | 92.38 (35) | 96.76 (28)
CasMVSNet(SR_A) | - | 91.32 (29) | 84.85 (40) | 95.63 (28) | 93.01 (35) | 98.91 (9) | 78.76 (37) | 90.94 (42) | 94.97 (40)
CasMVSNet(base) | - | 91.29 (30) | 83.46 (43) | 96.51 (21) | 94.80 (27) | 99.23 (7) | 76.50 (45) | 90.42 (45) | 95.49 (37)
COLMAP(base) | - | 90.88 (31) | 86.46 (33) | 93.83 (34) | 90.50 (47) | 93.89 (38) | 81.31 (30) | 91.60 (39) | 97.10 (22)
ACMH+ | 96.62 (9) | 90.69 (32) | 86.75 (32) | 93.32 (37) | 91.07 (42) | 92.46 (45) | 81.20 (32) | 92.31 (36) | 96.43 (33)
R-MVSNet | - | 90.65 (33) | 88.39 (25) | 92.16 (43) | 92.01 (40) | 91.05 (48) | 80.93 (33) | 95.85 (17) | 93.43 (41)
COLMAP_ROB (copyleft) | 96.50 (10) | 90.54 (34) | 85.96 (37) | 93.59 (35) | 88.41 (53) | 94.98 (34) | 78.87 (36) | 93.05 (30) | 97.37 (20)
Johannes L. Schönberger, Enliang Zheng, Marc Pollefeys, Jan-Michael Frahm: Pixelwise View Selection for Unstructured Multi-View Stereo. ECCV 2016
ACMH | 96.65 (7) | 90.42 (35) | 85.83 (38) | 93.48 (36) | 90.64 (46) | 92.90 (41) | 78.75 (38) | 92.91 (31) | 96.89 (25)
Qingshan Xu and Wenbing Tao: Multi-Scale Geometric Consistency Guided Multi-View Stereo. CVPR 2019
PLC (copyleft) | 94.65 (16) | 90.20 (36) | 86.30 (34) | 92.80 (39) | 89.29 (50) | 92.80 (42) | 81.48 (29) | 91.12 (41) | 96.30 (34)
Jie Liao, Yanping Fu, Qingan Yan, Chunxia Xiao: Pyramid Multi-View Stereo with Local Consistency. Pacific Graphics 2019
IB-MVS | 91.63 (19) | 90.18 (37) | 87.85 (28) | 91.73 (46) | 94.00 (31) | 92.76 (43) | 81.28 (31) | 94.41 (25) | 88.44 (51)
Christian Sormann, Mattia Rossi, Andreas Kuhn and Friedrich Fraundorfer: IB-MVS: An Iterative Algorithm for Deep Multi-View Stereo based on Binary Decisions. BMVC 2021
Pnet_fast | - | 89.94 (38) | 85.97 (36) | 92.58 (41) | 92.52 (38) | 94.98 (34) | 78.33 (40) | 93.61 (27) | 90.25 (47)
CIDER | - | 89.94 (38) | 90.13 (20) | 89.81 (48) | 89.08 (51) | 90.51 (49) | 83.89 (23) | 96.37 (13) | 89.84 (49)
Qingshan Xu and Wenbing Tao: Learning Inverse Depth Regression for Multi-View Stereo with Correlation Cost Volume. AAAI 2020
ACMM | 96.08 (12) | 89.28 (40) | 84.14 (42) | 92.70 (40) | 88.68 (52) | 93.12 (40) | 78.33 (40) | 89.95 (49) | 96.30 (34)
Qingshan Xu and Wenbing Tao: Multi-Scale Geometric Consistency Guided Multi-View Stereo. CVPR 2019
PCF-MVS | 92.86 (18) | 89.27 (41) | 82.32 (44) | 93.90 (33) | 94.10 (30) | 92.56 (44) | 78.41 (39) | 86.23 (54) | 95.05 (39)
Andreas Kuhn, Shan Lin, Oliver Erdler: Plane Completion and Filtering for Multi-View Stereo Reconstruction. GCPR 2019
ANet | - | 88.09 (42) | 81.47 (49) | 92.50 (42) | 87.52 (55) | 97.97 (18) | 72.29 (48) | 90.64 (43) | 92.00 (44)
MVSCRF | - | 87.97 (43) | 86.10 (35) | 89.22 (49) | 85.61 (58) | 92.04 (46) | 73.48 (47) | 98.71 (1) | 89.99 (48)
PVSNet | 93.40 (17) | 87.96 (44) | 87.30 (30) | 88.40 (52) | 91.26 (41) | 84.79 (53) | 83.16 (26) | 91.43 (40) | 89.14 (50)
ACMP | 95.32 (15) | 87.91 (45) | 81.54 (47) | 92.16 (43) | 86.95 (57) | 93.34 (39) | 75.21 (46) | 87.87 (53) | 96.18 (36)
Qingshan Xu and Wenbing Tao: Planar Prior Assisted PatchMatch Multi-View Stereo. AAAI 2020
BP-MVSNet | - | 87.27 (46) | 80.28 (51) | 91.92 (45) | 92.62 (37) | 89.82 (50) | 71.21 (50) | 89.35 (50) | 93.32 (42)
Christian Sormann, Patrick Knöbelreiter, Andreas Kuhn, Mattia Rossi, Thomas Pock, Friedrich Fraundorfer: BP-MVSNet: Belief-Propagation-Layers for Multi-View-Stereo. 3DV 2020
PVSNet_0 | 89.98 (21) | 87.18 (47) | 84.23 (41) | 89.14 (51) | 89.98 (49) | 85.88 (52) | 78.21 (43) | 90.25 (48) | 91.56 (45)
test_1205 | - | 87.14 (48) | 81.84 (46) | 90.68 (47) | 93.01 (35) | 96.01 (28) | 77.62 (44) | 86.06 (56) | 83.02 (59)
vp_mvsnet | - | 86.36 (49) | 70.17 (60) | 97.15 (19) | 97.73 (11) | 96.84 (24) | 42.33 (78) | 98.02 (3) | 96.86 (26)
A-TVSNet + Gipuma (copyleft) | - | 85.81 (50) | 80.78 (50) | 89.16 (50) | 85.45 (59) | 91.39 (47) | 78.22 (42) | 83.34 (59) | 90.64 (46)
ANet-0.75 | - | 82.50 (51) | 67.03 (62) | 92.82 (38) | 87.52 (55) | 97.97 (18) | 56.76 (63) | 77.30 (65) | 92.96 (43)
F/T MVSNet+Gipuma | - | 79.48 (52) | 82.20 (45) | 77.68 (57) | 90.98 (43) | 55.72 (65) | 71.27 (49) | 93.13 (29) | 86.32 (56)
Pnet-eth | - | 79.24 (53) | 78.85 (53) | 79.51 (56) | 93.83 (32) | 64.76 (58) | 68.57 (55) | 89.13 (52) | 79.93 (61)
MVSNet + Gipuma | - | 78.76 (54) | 81.53 (48) | 76.91 (58) | 90.19 (48) | 55.24 (66) | 71.20 (51) | 91.86 (37) | 85.28 (57)
unMVSmet | - | 75.98 (55) | 76.68 (54) | 75.52 (59) | 88.21 (54) | 63.35 (60) | 67.51 (57) | 85.84 (57) | 74.99 (62)
PMVS (copyleft) | 91.26 (20) | 75.40 (56) | 44.56 (82) | 95.95 (27) | 93.75 (34) | 95.97 (29) | 55.22 (67) | 33.90 (82) | 98.15 (11)
Y. Furukawa, J. Ponce: Accurate, dense, and robust multiview stereopsis. PAMI (2010)
MVSNet_plusplus | - | 75.31 (57) | 60.60 (74) | 85.11 (53) | 95.60 (23) | 62.72 (61) | 30.90 (82) | 90.30 (47) | 97.02 (24)
MVS_test_1 | - | 75.15 (58) | 66.26 (63) | 81.08 (55) | 90.91 (45) | 64.69 (59) | 42.15 (79) | 90.37 (46) | 87.64 (53)
unsupervisedMVS_cas | - | 74.92 (59) | 64.56 (66) | 81.83 (54) | 75.67 (62) | 86.23 (51) | 53.74 (71) | 75.37 (66) | 83.59 (58)
Cas-MVS_preliminary | - | 70.90 (60) | 65.38 (64) | 74.58 (60) | 74.23 (63) | 53.08 (70) | 52.36 (72) | 78.40 (63) | 96.45 (32)
confMetMVS | - | 70.86 (61) | 79.56 (52) | 65.06 (65) | 79.46 (61) | 52.64 (71) | 69.86 (53) | 89.25 (51) | 63.08 (73)
Snet | - | 70.24 (62) | 76.61 (55) | 66.00 (64) | 81.52 (60) | 47.72 (73) | 70.02 (52) | 83.20 (60) | 68.76 (69)
CPR_FA | - | 69.65 (63) | 71.00 (58) | 68.75 (63) | 66.27 (70) | 59.56 (64) | 64.16 (58) | 77.85 (64) | 80.42 (60)
RMVSNet | - | 65.81 (64) | 74.09 (56) | 60.29 (69) | 71.48 (66) | 53.56 (67) | 68.62 (54) | 79.56 (61) | 55.82 (77)
test_mvsss | - | 64.12 (65) | 49.98 (78) | 73.54 (61) | 73.37 (64) | 73.88 (55) | 34.02 (80) | 65.95 (77) | 73.36 (63)
CCVNet | - | 63.49 (66) | 64.42 (67) | 62.86 (66) | 63.31 (71) | 53.21 (68) | 57.30 (61) | 71.54 (74) | 72.07 (64)
hgnet | - | 62.55 (67) | 64.18 (68) | 61.47 (67) | 70.70 (68) | 65.58 (56) | 56.76 (63) | 71.59 (72) | 48.14 (79)
DPSNet | - | 62.55 (67) | 64.18 (68) | 61.47 (67) | 70.70 (68) | 65.58 (56) | 56.76 (63) | 71.59 (72) | 48.14 (79)
MVSNet_++ | - | 61.90 (69) | 49.37 (79) | 70.26 (62) | 71.25 (67) | 51.13 (72) | 13.23 (84) | 85.51 (58) | 88.40 (52)
MVE (permissive) | 83.40 (22) | 61.54 (70) | 70.38 (59) | 55.65 (71) | 44.73 (82) | 62.23 (62) | 67.58 (56) | 73.18 (68) | 60.00 (75)
Simon Fuhrmann, Fabian Langguth, Michael Goesele: MVE - A Multi-View Reconstruction Environment. EUROGRAPHICS Workshops on Graphics and Cultural Heritage (2014)
metmvs_fine | - | 60.40 (71) | 73.46 (57) | 51.70 (77) | 59.47 (72) | 43.63 (75) | 60.85 (59) | 86.07 (55) | 52.00 (78)
TVSNet | - | 58.57 (72) | 65.04 (65) | 54.25 (73) | 50.35 (75) | 43.60 (76) | 56.78 (62) | 73.30 (67) | 68.81 (68)
A1Net | - | 58.27 (73) | 69.56 (61) | 50.74 (79) | 48.60 (77) | 35.72 (83) | 60.70 (60) | 78.42 (62) | 67.89 (70)
QQQNet | - | 58.00 (74) | 63.43 (70) | 54.38 (72) | 48.14 (78) | 42.94 (79) | 53.98 (68) | 72.88 (69) | 72.07 (64)
SVVNet | - | 57.08 (75) | 63.37 (71) | 52.89 (75) | 44.50 (83) | 43.08 (77) | 53.86 (69) | 72.88 (69) | 71.08 (66)
ternet | - | 57.08 (75) | 63.37 (71) | 52.89 (75) | 44.50 (83) | 43.08 (77) | 53.86 (69) | 72.88 (69) | 71.08 (66)
test3 | - | 56.14 (77) | 63.17 (73) | 51.45 (78) | 47.91 (79) | 42.37 (80) | 55.62 (66) | 70.72 (75) | 64.07 (71)
example | - | 53.61 (78) | 46.43 (81) | 58.40 (70) | 72.19 (65) | 61.90 (63) | 45.83 (77) | 47.03 (81) | 41.10 (82)
firsttry | - | 53.54 (79) | 54.29 (77) | 53.04 (74) | 49.77 (76) | 45.66 (74) | 49.70 (74) | 58.88 (79) | 63.70 (72)
SGNet | - | 53.33 (80) | 58.49 (75) | 49.90 (81) | 47.35 (80) | 42.30 (81) | 50.65 (73) | 66.33 (76) | 60.04 (74)
PSD-MVSNet | - | 51.39 (81) | 55.47 (76) | 48.66 (82) | 47.23 (81) | 41.20 (82) | 47.60 (75) | 63.35 (78) | 57.55 (76)
unMVSv1 | - | 49.24 (82) | 47.80 (80) | 50.21 (80) | 52.05 (73) | 53.15 (69) | 47.48 (76) | 48.13 (80) | 45.41 (81)
CMPMVS (binary) | 75.91 (23) | 27.01 (83) | 1.74 (84) | 43.86 (83) | 51.27 (74) | 80.30 (54) | 3.48 (85) | 0.00 (84) | 0.00 (85)
M. Jancosek, T. Pajdla: Multi-View Reconstruction Preserving Weakly-Supported Surfaces. CVPR 2011
FADENet | - | 2.66 (84) | 2.75 (83) | 2.60 (85) | 4.76 (86) | 1.96 (85) | 3.23 (86) | 2.27 (83) | 1.09 (84)
dnet | - | 0.00 (85) | 0.00 (85) | 0.00 (86) | 0.00 (87) | 0.00 (86) | 0.00 (87) | 0.00 (84) | 0.00 (85)
test_MVS | - | - | - | - | - | - | 32.87 (81) | - | -
test_robustmvs | - | - | - | 14.79 (84) | 24.07 (85) | 13.33 (84) | 22.91 (83) | - | 6.99 (83)
UnsupFinetunedMVSNet | - | - | - | - | 90.98 (43) | - | - | - | -