The following metrics are evaluated (for these metrics, lower is better):

**ATE RMSE [cm]:** Absolute trajectory root-mean-square-error in centimeters. This is the RMSE of the estimated pose translations vs. the ground truth pose translations.
**Rel. translation [%] for different evaluation thresholds *t*:** Relative trajectory translation error in percent. This is the average translation error, as a percentage of distance traveled, after traveling *t* meters.
**Rel. rotation [deg/m] for different evaluation thresholds *t*:** Relative trajectory rotation error in degrees per meter. This is the average rotation error per meter after traveling *t* meters.
**Time [s]:** The runtime of the method.
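
As an illustration of the first metric, the following sketch computes an ATE RMSE in centimeters from paired position sequences. It assumes the trajectories have already been aligned and the poses associated one-to-one; the benchmark's exact pose association may differ, and the function name is our own.

```python
import numpy as np

def ate_rmse_cm(estimated_xyz, ground_truth_xyz):
    """ATE RMSE in centimeters (illustrative sketch).

    Assumes both inputs are (N, 3) arrays of corresponding pose
    translations in meters, already aligned to a common frame.
    """
    est = np.asarray(estimated_xyz, dtype=float)
    gt = np.asarray(ground_truth_xyz, dtype=float)
    errors = np.linalg.norm(est - gt, axis=1)    # per-pose translation error
    return 100.0 * np.sqrt(np.mean(errors ** 2))  # meters -> centimeters
```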

The error metrics are evaluated for different ways of aligning the estimated trajectory and the ground truth trajectory:

**SE(3) alignment:** The trajectories are aligned with a rigid-body transformation. SLAM methods that do not return results in metric scale (such as most monocular SLAM methods) are excluded from these evaluations.
**Sim(3) alignment:** The trajectories are aligned with a similarity transformation, i.e., a rigid-body transformation plus a uniform scale factor.
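
Both alignments can be computed in closed form from corresponding trajectory positions, e.g. with Umeyama's method. The sketch below is illustrative and not necessarily the benchmark's own implementation: `with_scale=True` yields a Sim(3) similarity, `with_scale=False` a rigid SE(3) transform.

```python
import numpy as np

def umeyama_alignment(src, dst, with_scale=True):
    """Closed-form least-squares alignment of two (N, 3) point sets
    (Umeyama, 1991). Returns (scale, R, t) such that
    dst ~= scale * R @ src_i + t for each point. Illustrative sketch."""
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    xs, xd = src - mu_s, dst - mu_d
    cov = xd.T @ xs / len(src)            # cross-covariance matrix
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1.0                    # avoid reflections
    R = U @ S @ Vt
    var_s = (xs ** 2).sum() / len(src)    # variance of the source points
    scale = np.trace(np.diag(D) @ S) / var_s if with_scale else 1.0
    t = mu_d - scale * R @ mu_s
    return scale, R, t
```

With `with_scale=False` the same routine gives the rigid alignment used for methods that do report metric scale.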

The graph on the top is a cumulative error visualization over all datasets. For a given error threshold on the x axis, the plots show on how many datasets a method achieves an error below that threshold.
The table on the bottom gives the error values for each dataset. Results larger than 10 (centimeters, degrees per meter, ...) are shown as "fail".

The method ranking is determined by the area under the curve of the graph on the top (up to the selected maximum error).
In contrast to the other metrics, larger is better for this measure.
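
Since the cumulative curve is a step function that counts datasets with error below each threshold, the area under it up to a maximum error can be computed as a simple sum. A minimal sketch, assuming failed datasets simply contribute no error value (the function name and interface are our own):

```python
import numpy as np

def ranking_auc(errors, max_error):
    """Area under the cumulative-error step curve up to max_error.

    Illustrative sketch: `errors` holds one error value per dataset
    on which the method succeeded; failures are omitted. Each
    successful dataset with error e <= max_error contributes
    (max_error - e) to the area.
    """
    errs = np.asarray([e for e in errors if e <= max_error], dtype=float)
    return float(np.sum(max_error - errs))
```

This also makes the weighting discussed below concrete: one additional successful dataset with error just under the maximum adds nearly `max_error` to the area, while improving an already-successful result by some small amount adds only that amount.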

Please note that every choice of a single error metric implies some kind of weighting between different quality aspects:
robustness (on how many datasets a SLAM method works) and accuracy (how well it works on those datasets where it does not fail).
The chosen metric weights robustness highly, since having an additional result on a dataset usually adds much more area under the curve than slightly improving the results on many datasets.

Click a dataset result cell in the table to open a 3D visualization. Ground truth visualization is only available for training datasets.