---
license: cc-by-nc-sa-4.0
pretty_name: Overhead Traffic Anomalies (OTA)
language: en
tags:
  - traffic
  - computer-vision
  - anomaly-detection
  - surveillance
  - trajectories
  - ranking
  - ndcg
task_categories:
  - video-classification
  - time-series-forecasting
  - object-detection
---

# Overhead Traffic Anomalies (OTA)

- **Developed by:** H. Lichtenberg, Starwit Technologies GmbH, 2025
- **License:** Creative Commons Attribution Non Commercial Share Alike 4.0
- **Master Thesis:** H. Lichtenberg, *Anomaly Detection in Traffic Applications: A Probabilistic Forecasting Approach Based on Object Tracking*, 2025

This dataset contains traffic anomalies from **static overhead cameras** (intersections/roundabouts). It includes **32 anomaly categories** with a **relevance mapping** for severity-aware evaluation. For each camera, we provide video frames with YOLOv8 detections, including 24 h of training footage and 48 h of test footage with **1027 anomaly annotations**. Please see the master thesis for details.

*This project was made possible by the city of Carmel, Indiana. Thanks to the mayor and the citizens of Carmel for supporting scientific research. Special thanks go to the city's IT department for technical support as well as for helping to understand how traffic in Carmel works :) To find out more about Carmel, look here: https://www.carmel.in.gov/*

## What’s inside

- **Three camera scenes** (each with `traindata` and `testdata`).
- **Frames** (320×180) as WebDataset `.tar` shards (`frames-xxxxx.tar`) plus a global `index.csv`.
- **Object detections** (YOLOv8) with tracking IDs and geo-coordinates in `object_detections.json`.
- **Anomaly annotations** in `anomaly-labels.csv` (test split only).
- **Label dictionary** in `event_labels.txt` and **relevance mapping** in `relevance_mapping.json`.

#### Anomaly labels

- `label ∈ {-1, 0, 1, …, 32}`
  - `-1` = detection/tracking artifact (e.g., spurious box, ID switch); marked so these don’t bias evaluation
  - `1…32` = anomaly categories (see `event_labels.txt`)
  - unlabeled trajectories are treated as normal (label `0`)
- **Relevance degrees** (0–4) for severity-aware evaluation in `relevance_mapping.json`. The category→relevance mapping is subjective and may be adapted to match different application priorities (a minimal usage sketch follows the category examples below).
  - `0`: FP/uninteresting
  - `1`: rather uninteresting anomaly
  - `2`: relevant anomaly
  - `3`: high relevance
  - `4`: critical relevance (dangerous behavior)

#### Example anomaly categories

- **Wrong-way driving** *(IDs: 22, 23)* — vehicle travels against the permitted direction
- **Fast driving** *(ID: 11)* — reckless speeding relative to scene context
- **Traffic tie-up** *(ID: 16)* — blockage or standstill due to congestion/obstruction
- **Cutting off another vehicle** *(IDs: 18, 19)* — failing to yield / forcing an agent to brake
- **Broken-down vehicle** *(ID: 25)* — stationary/disabled vehicle on a public road
- **Getting off the road** *(IDs: 28, 29)* — leaving the roadway / parking on the sidewalk
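For severity-aware evaluation, trajectory labels can be mapped to relevance degrees and ranked with NDCG. A minimal sketch, assuming `relevance_mapping.json` maps category IDs to degrees (check the file for its exact layout) and using placeholder random scores in place of a real detector:

```python
import json

import numpy as np
import pandas as pd
from sklearn.metrics import ndcg_score

# Assumption: relevance_mapping.json maps category IDs (as strings) to
# relevance degrees, e.g. {"22": 4, "11": 3, ...}; check the actual layout.
with open("OTA/relevance_mapping.json") as f:
    relevance = {int(k): v for k, v in json.load(f).items()}

labels = pd.read_csv("OTA/MononElmStreetNB/testdata/anomaly-labels.csv")
labels = labels[labels["label"] != -1]           # drop tracking artifacts
labels["relevance"] = labels["label"].map(relevance).fillna(0)

# Severity-aware ranking: NDCG over per-trajectory anomaly scores.
# Placeholder random scores; substitute your detector's outputs.
scores = np.random.default_rng(0).random(len(labels))
print(ndcg_score([labels["relevance"].to_numpy()], [scores]))
```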

*Example clips (anomaly category ID in parentheses): Wrong-way driving (23), Cutting off (almost collision) (19), Parking on sidewalk (29), Broken-down vehicle moved by people (25).*

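Because `object_id` is stable across frames, the per-timestamp records in `object_detections.json` can be regrouped into per-object trajectories. A minimal sketch, assuming the record layout described under *File formats* below; sort by `timestamp` if the records are not already time-ordered:

```python
import json
from collections import defaultdict

# Group per-timestamp detection records into per-object trajectories.
with open("OTA/MononElmStreetNB/testdata/object_detections.json") as f:
    records = json.load(f)

trajectories = defaultdict(list)
for record in records:                        # assumed time-ordered
    for det in record["detections"]:
        trajectories[det["object_id"]].append({
            "timestamp": record["timestamp"],     # UNIX ms
            "lon": det["longitude"],
            "lat": det["latitude"],
            "bbox": det["boundingbox"],           # normalized to [0, 1]
            "confidence": det["confidence"],
        })

# Each value is now an ordered list of observations for one tracked object.
print(len(trajectories), "trajectories in this split")
```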
#### File formats

- **Frames**: WebDataset shards `frames-*.tar` with JPEGs named `frame__.jpg`.
- **index.csv**: maps each frame to its timestamp (`timestamp_utc_ms`) and `shard`, enabling alignment.
- **object_detections.json** (per scene/split): array of per-timestamp records with `timestamp` (UNIX ms), `frame_index`, `frame_key` (JPEG filename), `shard` (tar file), and a `detections` list. Each detection has `class_id` (YOLOv8), `object_id` (stable track ID), `longitude`/`latitude`, `boundingbox` normalized to `[0,1]`, and `confidence`. The `object_id` persists across frames, enabling trajectory-level analyses (see the sketch above).
- **anomaly-labels.csv** (test only): CSV with columns `object_id,start_timestamp,end_timestamp,label`. Contains labels for true anomalies (1–32) and input errors (−1).

## Intended use

- **Tasks:** anomaly detection; severity-aware ranking; robustness to detection/tracking noise.
- **Metrics:** NDCG (severity-aware ranking); AU-PR, AU-ROC.

## Quick start

Some ways to download the data:

```python
# Full dataset (all scenes + train/test + mappings)
from huggingface_hub import snapshot_download

snapshot_download(
    "HannaLicht/overhead-traffic-anomalies",
    repo_type="dataset",
    local_dir="OTA",
)

# Only test data (with anomaly labels)
snapshot_download(
    "HannaLicht/overhead-traffic-anomalies",
    repo_type="dataset",
    allow_patterns=[
        "MononElmStreetNB/testdata/**",
        "RangelineS116thSt/testdata/**",
        "RangelineSMedicalDr/testdata/**",
        "event_labels.txt",
        "relevance_mapping.json",
    ],
    local_dir="OTA-test-only",
)
```

You can align frames, detections, and labels using `index.csv` timestamps and `object_id`s.

```python
import pandas as pd

# Example: load test annotations for a scene
root = "MononElmStreetNB/testdata"
labels = pd.read_csv(f"{root}/anomaly-labels.csv")  # object_id, start_timestamp, end_timestamp, label
index_df = pd.read_csv(f"{root}/index.csv")         # frame_key, timestamp_utc_ms, shard

# Get all frames within an anomaly interval (timestamps are UNIX ms)
def frames_for_interval(ts_start, ts_end, index_df):
    return index_df[(index_df["timestamp_utc_ms"] >= ts_start) &
                    (index_df["timestamp_utc_ms"] <= ts_end)]

rows = frames_for_interval(labels.iloc[0].start_timestamp,
                           labels.iloc[0].end_timestamp,
                           index_df)
print(rows.head())
```

If you prefer streaming frames from the `.tar` shards, consider the [webdataset](https://github.com/webdataset/webdataset) library (a short sketch appears in the appendix at the end of this card).

## License & Attribution

Data © 2025 Starwit Technologies GmbH. Licensed under [CC BY-NC-SA 4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/).

**Please cite:**

> H. Lichtenberg, *Overhead Traffic Anomalies (OTA)*, Starwit Technologies GmbH, 2025

**Acknowledgments:** Created by **Starwit Technologies GmbH**. Detections use YOLOv8; geo-mapping via the **Starwit Awareness Engine**. Data was provided by the city of **Carmel, Indiana**.
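#### Appendix: streaming frames with `webdataset`

A minimal streaming sketch, assuming a local shard path and that the JPEGs inside the tar decode under the `jpg` key; adjust the path to the shards actually present:

```python
import webdataset as wds

# Stream frames directly from a shard without unpacking it.
# Assumption: local shard path; adjust to your download location.
url = "OTA/MononElmStreetNB/testdata/frames-00000.tar"
dataset = wds.WebDataset(url).decode("pil")

for sample in dataset:
    image = sample["jpg"]       # PIL.Image, 320x180
    key = sample["__key__"]     # frame name without extension
    print(key, image.size)
    break
```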