Abstract
High-quality labeled data is essential for training robust machine learning models, yet obtaining annotations at scale remains expensive. AI-assisted annotation has therefore become standard in large-scale labeling workflows. However, in tasks where model predictions carry two independent components (a class label and spatial boundaries), a model may classify an object with high confidence while mislocalizing it. Existing AI-assisted workflows offer annotators no signal about where spatial errors are most likely. Without such guidance, humans may systematically under-inspect subtly misplaced boxes.
We address this by studying the effect of visualizing spatial uncertainty via a purpose-built interface. In a controlled study with 120 participants, those receiving uncertainty cues achieve higher label quality while being faster overall. A box-level analysis confirms that the cues redirect annotator effort toward high-uncertainty predictions and away from well-localized boxes. These findings establish localization uncertainty as a lever to improve human-in-the-loop annotation.
Interactive viewer
Step through the 97 KITTI images used in the experiment and compare the three annotation layers reported in the paper. Toggle layers on or off, jump between difficulty bins, or use the ← / → keys to navigate.
What am I looking at?
- Original KITTI labels are the boxes that ship with the dataset. They are sometimes imprecise, missing for occluded/distant objects, or mis-labelled.
- Detector predictions come from the probabilistic EfficientDet-D0 model. Toggle “Colour by uncertainty” to map the per-coordinate aleatoric localization uncertainty onto the box border (blue = certain, red = uncertain).
- Re-annotated ground truth is the gold standard produced by the three authors, each of whom independently re-labelled one third of the experimental pool. It corrects 41 erroneous labels and adds 430 missed annotations.
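The "Colour by uncertainty" toggle described above could be implemented roughly as below. This is a minimal sketch, not the viewer's actual code: the normalisation bounds `u_min`/`u_max` and the linear blue-to-red blend are assumptions.

```python
def uncertainty_to_rgb(u, u_min=0.0, u_max=1.0):
    """Map a scalar localization uncertainty to an RGB border colour.

    Blue = certain, red = uncertain. The bounds are assumed; the real
    viewer may normalise per image or per dataset percentile instead.
    """
    # Normalise into [0, 1], clamping out-of-range values.
    t = min(max((u - u_min) / (u_max - u_min), 0.0), 1.0)
    # Linear blend from blue (t = 0) to red (t = 1).
    return (int(255 * t), 0, int(255 * (1 - t)))
```

For example, a fully certain box (`u = 0`) maps to pure blue and a maximally uncertain one to pure red, with intermediate values shading through purple.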
Paper
A pre-print PDF will be linked here once available. In the meantime please use the BibTeX entry below.
@unpublished{KaSbHoSpNaSa2026,
title = {From Model Uncertainty to Human Attention:
Localization-Aware Visual Cues for Scalable Annotation Review},
author = {Kassem Sbeyti, Moussa and Holstein, Joshua and Spitzer, Philipp
and Klein, Nadja and Satzger, Gerhard},
note = {Manuscript under review},
year = {2026}
}
Data & code
- Labeling tool — a Streamlit app that participants used in the study, with a relabel_* mode used by the three authors to produce the re-annotated ground truth. View on GitHub →
- Detector predictions with per-coordinate aleatoric localization uncertainty (data/detector_predictions.txt).
- Re-annotated ground truth covering the experimental image pool (data/relabeled_ground_truth/).
- Difficulty bins stratifying the candidate pool by mean per-image localization uncertainty (data/{low,mid,high}_uncertainty.txt).
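The difficulty bins above could be reproduced along these lines. This is a sketch under stated assumptions: the input shape (image id mapped to a list of per-box uncertainties) and the even three-way split are illustrative, not the repository's actual binning procedure.

```python
import statistics

def stratify_by_uncertainty(image_uncertainties):
    """Split images into low/mid/high bins by mean per-image uncertainty.

    `image_uncertainties` maps image id -> list of localization
    uncertainties for all predicted boxes in that image. The even
    tertile split is an assumption; see the repository for the
    actual thresholds.
    """
    means = {img: statistics.fmean(vals) for img, vals in image_uncertainties.items()}
    ordered = sorted(means, key=means.get)  # least to most uncertain
    n = len(ordered)
    low = ordered[: n // 3]
    mid = ordered[n // 3 : 2 * n // 3]
    high = ordered[2 * n // 3 :]
    return low, mid, high
```

Each returned list could then be written to the corresponding data/{low,mid,high}_uncertainty.txt file, one image id per line.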
The KITTI subset shipped with this repository is redistributed under the CC BY-NC-SA 3.0 license. For commercial use, please obtain the data directly from the KITTI website. See LICENSE-DATA.md in the repository for the full attribution.