This table lists the benchmark results for the low-res two-view scenario, evaluated with the Middlebury stereo metrics.

The mask determines whether the metric is evaluated for all pixels with ground truth, or only for pixels that are visible in both images (non-occluded).
The coverage selector lets you limit the table to results for all pixels (dense), or to a given minimum fraction of pixels.
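To make the mask concrete, here is a minimal sketch of a Middlebury-style "bad pixel" rate with an optional non-occlusion mask. The function name, the NaN convention for missing ground truth, and the 2 px default threshold are illustrative assumptions, not the benchmark's exact implementation:

```python
import numpy as np

def bad_pixel_rate(pred, gt, noc_mask=None, threshold=2.0):
    """Percentage of pixels whose absolute disparity error exceeds
    `threshold` (a Middlebury-style 'bad pixel' metric).

    pred, gt : disparity maps; NaN in `gt` marks pixels without ground truth.
    noc_mask : optional boolean mask of pixels visible in both views
               (non-occluded); if given, the metric is restricted to it.
    """
    valid = ~np.isnan(gt)          # pixels that have ground truth
    if noc_mask is not None:
        valid &= noc_mask          # keep only non-occluded pixels
    err = np.abs(pred[valid] - gt[valid])
    return 100.0 * np.mean(err > threshold)

# Toy example: 2x2 disparity maps, one pixel without ground truth.
gt = np.array([[10.0, 20.0], [30.0, np.nan]])
pred = np.array([[10.5, 25.0], [30.1, 7.0]])
print(bad_pixel_rate(pred, gt))  # one of three valid pixels is off by > 2 px
```

Passing a `noc_mask` that excludes occluded pixels yields the "non-occluded" variant of the same metric.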

Click one or more dataset result cells or column headers to show visualizations. Most visualizations are only available for training datasets. The visualizations may not work with mobile browsers.




Each cell shows the metric value for the method on that dataset, with the method's rank for that column in parentheses. All methods were evaluated in the two-view setting; the License column shows the code-license tag where one was given.

| Method | License | all | lakeside 1l | lakeside 1s | sand box 1l | sand box 1s | storage room 1l | storage room 1s | storage room 2l | storage room 2s | storage room 2 1l | storage room 2 1s | storage room 2 2l | storage room 2 2s | storage room 3l | storage room 3s | tunnel 1l | tunnel 1s | tunnel 2l | tunnel 2s | tunnel 3l | tunnel 3s |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| DN-CSS_ROB | – | 1.91 (1) | 1.55 (11) | 11.13 (5) | 1.82 (11) | 0.88 (5) | 3.99 (7) | 0.61 (1) | 4.01 (4) | 1.67 (1) | 1.64 (1) | 1.89 (3) | 1.39 (1) | 1.06 (1) | 3.23 (2) | 1.50 (1) | 0.31 (1) | 0.19 (1) | 0.44 (3) | 0.29 (2) | 0.37 (2) | 0.31 (2) |
| LaLa_ROB | – | 1.95 (2) | 1.48 (10) | 3.11 (1) | 1.05 (1) | 0.98 (6) | 3.99 (7) | 1.82 (5) | 4.44 (10) | 2.28 (3) | 3.31 (5) | 1.85 (2) | 2.76 (5) | 1.67 (2) | 4.04 (3) | 3.23 (5) | 0.46 (6) | 0.41 (6) | 0.55 (8) | 0.52 (9) | 0.60 (7) | 0.57 (7) |
| PWCDC_ROB | binary | 2.26 (3) | 2.25 (15) | 9.78 (4) | 2.19 (16) | 1.56 (10) | 3.74 (4) | 0.90 (2) | 4.01 (4) | 2.27 (2) | 1.99 (3) | 2.34 (5) | 2.19 (2) | 2.07 (4) | 2.29 (1) | 3.17 (4) | 0.75 (11) | 0.41 (6) | 1.13 (15) | 0.42 (7) | 0.88 (12) | 0.80 (12) |
| PWC_ROB | binary | 2.69 (4) | 2.01 (14) | 8.36 (3) | 1.33 (6) | 1.90 (14) | 3.23 (1) | 1.18 (3) | 4.33 (8) | 3.25 (8) | 3.11 (4) | 3.41 (7) | 2.71 (3) | 2.18 (5) | 9.15 (7) | 4.48 (6) | 0.56 (8) | 0.31 (3) | 0.61 (10) | 0.36 (4) | 0.69 (10) | 0.74 (11) |
| PSMNet_ROB | – | 2.95 (5) | 1.38 (8) | 18.46 (10) | 1.19 (3) | 1.20 (8) | 3.80 (5) | 1.65 (4) | 3.54 (1) | 2.35 (4) | 1.98 (2) | 1.71 (1) | 2.75 (4) | 1.79 (3) | 11.43 (11) | 2.78 (3) | 0.42 (3) | 0.36 (4) | 0.50 (6) | 0.60 (11) | 0.59 (6) | 0.49 (4) |
| SGM-Forest | – | 2.96 (6) | 0.62 (1) | 3.49 (2) | 1.09 (2) | 0.80 (2) | 4.06 (9) | 3.30 (9) | 4.37 (9) | 3.09 (7) | 4.18 (9) | 3.27 (6) | 4.67 (7) | 3.86 (9) | 11.69 (13) | 7.91 (10) | 0.51 (7) | 0.46 (9) | 0.49 (5) | 0.39 (6) | 0.48 (3) | 0.49 (4) |
| CBMV | permissive | 3.32 (7) | 0.96 (4) | 21.16 (14) | 1.32 (5) | 0.83 (3) | 3.81 (6) | 3.91 (13) | 4.47 (11) | 2.53 (5) | 3.34 (6) | 2.24 (4) | 4.45 (6) | 2.58 (6) | 10.07 (9) | 1.96 (2) | 0.45 (5) | 0.43 (8) | 0.48 (4) | 0.38 (5) | 0.52 (5) | 0.53 (6) |
| SPS-STEREO | copyleft | 3.97 (8) | 1.44 (9) | 20.85 (13) | 1.72 (10) | 1.71 (12) | 3.24 (2) | 2.30 (8) | 3.55 (2) | 2.68 (6) | 4.79 (10) | 3.66 (8) | 5.44 (8) | 2.97 (7) | 9.11 (6) | 9.35 (13) | 1.15 (15) | 1.10 (13) | 1.06 (14) | 1.02 (15) | 1.16 (14) | 1.18 (13) |
| SGM_ROB | binary | 4.57 (9) | 0.72 (2) | 17.46 (9) | 1.84 (12) | 0.58 (1) | 4.23 (11) | 3.45 (10) | 4.66 (13) | 3.66 (10) | 7.99 (12) | 8.82 (13) | 7.56 (10) | 8.74 (13) | 12.30 (15) | 6.84 (8) | 0.43 (4) | 0.38 (5) | 0.39 (2) | 0.35 (3) | 0.51 (4) | 0.48 (3) |
| WCMA_ROB | – | 5.12 (10) | 0.90 (3) | 21.33 (15) | 1.50 (8) | 1.33 (9) | 4.09 (10) | 3.49 (11) | 4.03 (6) | 4.59 (12) | 6.57 (11) | 10.95 (14) | 12.92 (15) | 8.41 (12) | 9.86 (8) | 8.78 (12) | 0.75 (11) | 0.58 (10) | 0.51 (7) | 0.50 (8) | 0.68 (9) | 0.57 (7) |
| MeshStereo | permissive | 5.35 (11) | 1.32 (7) | 21.34 (16) | 1.34 (7) | 1.15 (7) | 4.40 (12) | 3.90 (12) | 5.18 (15) | 4.55 (11) | 10.14 (15) | 11.82 (15) | 11.47 (13) | 5.84 (11) | 13.84 (16) | 7.03 (9) | 0.61 (9) | 0.58 (10) | 0.60 (9) | 0.58 (10) | 0.72 (11) | 0.59 (9) |
| pmcnn | – | 5.45 (12) | 1.22 (6) | 17.40 (8) | 1.60 (9) | 2.41 (17) | 3.48 (3) | 2.12 (7) | 4.52 (12) | 3.26 (9) | 3.67 (7) | 13.15 (16) | 22.10 (16) | 3.46 (8) | 11.92 (14) | 17.16 (17) | 0.34 (2) | 0.21 (2) | 0.23 (1) | 0.19 (1) | 0.36 (1) | 0.28 (1) |
| MSMD_ROB | – | 5.61 (13) | 1.08 (5) | 16.68 (7) | 1.21 (4) | 0.83 (3) | 10.14 (15) | 2.07 (6) | 3.93 (3) | 6.22 (15) | 10.37 (16) | 8.03 (11) | 22.22 (17) | 9.45 (14) | 11.66 (12) | 4.52 (7) | 0.66 (10) | 0.58 (10) | 0.66 (11) | 0.68 (12) | 0.65 (8) | 0.63 (10) |
| ELAS_ROB | copyleft | 5.72 (14) | 1.64 (13) | 20.40 (12) | 2.09 (15) | 2.01 (15) | 4.95 (13) | 11.47 (16) | 4.66 (13) | 6.31 (16) | 8.09 (13) | 8.01 (10) | 8.06 (12) | 11.65 (16) | 8.36 (5) | 10.22 (14) | 0.99 (14) | 1.17 (14) | 0.81 (13) | 0.76 (13) | 1.14 (13) | 1.58 (15) |
| PWCK | – | 5.85 (15) | 9.95 (17) | 15.22 (6) | 2.90 (17) | 1.56 (10) | 5.66 (14) | 7.69 (14) | 5.27 (17) | 4.74 (13) | 3.88 (8) | 7.51 (9) | 8.04 (11) | 5.28 (10) | 15.18 (17) | 8.40 (11) | 3.08 (17) | 1.51 (17) | 2.24 (17) | 1.37 (17) | 5.55 (17) | 1.96 (17) |
| ELAS | copyleft | 5.99 (16) | 1.62 (12) | 20.29 (11) | 1.96 (13) | 2.01 (15) | 15.09 (17) | 8.38 (15) | 5.23 (16) | 5.54 (14) | 9.02 (14) | 8.61 (12) | 6.74 (9) | 9.75 (15) | 7.48 (4) | 11.40 (15) | 0.96 (13) | 1.18 (15) | 0.72 (12) | 0.93 (14) | 1.30 (16) | 1.67 (16) |
| SGM+DAISY | – | 8.36 (17) | 3.82 (16) | 24.83 (17) | 2.02 (14) | 1.75 (13) | 14.80 (16) | 17.22 (17) | 4.27 (7) | 11.79 (17) | 13.88 (17) | 14.38 (17) | 11.74 (14) | 13.83 (17) | 10.56 (10) | 15.12 (16) | 1.21 (16) | 1.19 (16) | 1.14 (16) | 1.13 (16) | 1.17 (15) | 1.37 (14) |
| AVERAGE_ROB | – | 44.46 (18) | 45.91 (18) | 46.59 (18) | 40.59 (19) | 38.66 (18) | 32.99 (18) | 30.52 (18) | 46.19 (18) | 43.26 (18) | 49.59 (18) | 48.97 (18) | 44.62 (18) | 39.89 (18) | 45.04 (18) | 43.94 (18) | 48.74 (18) | 50.69 (18) | 50.80 (18) | 49.79 (18) | 45.54 (18) | 46.97 (18) |
| MEDIAN_ROB | – | 47.37 (19) | 49.08 (19) | 49.92 (19) | 40.32 (18) | 39.97 (19) | 34.95 (19) | 33.43 (19) | 49.17 (19) | 46.39 (19) | 53.06 (19) | 52.65 (19) | 47.65 (19) | 42.23 (19) | 48.35 (19) | 47.29 (19) | 52.10 (19) | 54.04 (19) | 54.28 (19) | 53.48 (19) | 48.76 (19) | 50.33 (19) |

Publications linked from the table:

- CBMV: Konstantinos Batsos, Changjiang Cai, Philippos Mordohai: CBMV: A Coalesced Bidirectional Matching Volume for Disparity Estimation. CVPR 2018
- SPS-STEREO: K. Yamaguchi, D. McAllester, R. Urtasun: Efficient Joint Segmentation, Occlusion Labeling, Stereo and Flow Estimation. ECCV 2014
- SGM_ROB: Heiko Hirschmueller: Stereo Processing by Semiglobal Matching and Mutual Information. TPAMI 2008, Volume 30(2), pp. 328–341
- MeshStereo: C. Zhang, Z. Li, Y. Cheng, R. Cai, H. Chao, Y. Rui: MeshStereo: A Global Stereo Model with Mesh Alignment Regularization for View Interpolation. ICCV 2015
- ELAS_ROB, ELAS: A. Geiger, M. Roser, R. Urtasun: Efficient Large-Scale Stereo Matching. ACCV 2010