This table lists the benchmark results for the low-res many-view scenario.
The following metrics are evaluated:
Accuracy [%]: The fraction of the reconstruction that is closer to the ground truth than the evaluation threshold distance (*). Larger is better.
Completeness [%]: The fraction of the ground truth that is closer to the reconstruction than the evaluation threshold distance (*). Larger is better.
F1 score [%]: The harmonic mean of accuracy and completeness, used to rank methods based on both metrics (see the sketch after these definitions). Larger is better.
Time [s]: The runtime of the method. Smaller is better.
(*) For exact definitions, detailing how potentially incomplete ground truth is taken into account, see our paper.
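As a rough illustration of how these metrics relate, the following is a minimal sketch of computing accuracy, completeness, and the F1 score from two point clouds at a given threshold. It is not the benchmark's official evaluation code: in particular it ignores the handling of potentially incomplete ground truth described in the paper, and the function and parameter names are made up for this example.

```python
# Simplified illustration of accuracy / completeness / F1 at a distance threshold.
# NOT the official evaluation; incomplete-ground-truth handling (see the paper) is ignored.
import numpy as np
from scipy.spatial import cKDTree

def accuracy_completeness_f1(reconstruction, ground_truth, threshold):
    """reconstruction: (N, 3) points, ground_truth: (M, 3) points, threshold: distance in meters."""
    gt_tree = cKDTree(ground_truth)
    rec_tree = cKDTree(reconstruction)

    # Accuracy: fraction of reconstruction points closer to the ground truth than the threshold.
    dist_rec_to_gt, _ = gt_tree.query(reconstruction)
    accuracy = 100.0 * np.mean(dist_rec_to_gt < threshold)

    # Completeness: fraction of ground-truth points closer to the reconstruction than the threshold.
    dist_gt_to_rec, _ = rec_tree.query(ground_truth)
    completeness = 100.0 * np.mean(dist_gt_to_rec < threshold)

    # F1 score: harmonic mean of accuracy and completeness.
    f1 = 2.0 * accuracy * completeness / (accuracy + completeness) if accuracy + completeness > 0 else 0.0
    return accuracy, completeness, f1
```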
The datasets are grouped into categories, and the average result for a category and method is computed only if results of that method are available for all datasets within the category.
Note that the category "all" includes both the high-res multi-view and the low-res many-view scenarios.
Click a dataset result cell to show a visualization of the reconstruction. For training datasets, ground truth and accuracy / completeness visualizations are also available. The visualizations may not work with mobile browsers.