---
dataset_info:
  features:
  - name: sample_id
    dtype: string
  - name: task
    dtype: string
  - name: embodiments
    list: string
  - name: image
    dtype: image
  - name: segmentation_mask
    list:
      list: int64
  - name: ground_truth
    struct:
    - name: Bicycle
      list:
        list:
          list: float64
    - name: Human
      list:
        list:
          list: float64
    - name: Legged Robot
      list:
        list:
          list: float64
    - name: Wheeled Robot
      list:
        list:
          list: float64
  - name: category
    list: string
  - name: context
    dtype: string
  - name: metadata
    struct:
    - name: city
      dtype: string
    - name: country
      dtype: string
    - name: lighting_conditions
      dtype: string
    - name: natural_structured
      dtype: string
    - name: task_type
      dtype: string
    - name: urban_rural
      dtype: string
    - name: weather_conditions
      dtype: string
  splits:
  - name: validation
    num_bytes: 6117774314.0
    num_examples: 502
  - name: test
    num_bytes: 6123246091.0
    num_examples: 500
  download_size: 344928365
  dataset_size: 12241020405.0
configs:
- config_name: default
  data_files:
  - split: validation
    path: data/validation-*
  - split: test
    path: data/test-*
tags:
- image
- text
- navigation
pretty_name: NaviTrace
license: cc-by-4.0
task_categories:
- visual-question-answering
- robotics
language:
- en
size_categories:
- 1K<n<10K
---
<style>
.header-container {
display: flex;
flex-direction: column;
align-items: center;
}
/* Headings */
h1 {
text-align: center;
font-size: 4.5rem !important;
font-weight: 500;
margin-top: 1rem;
margin-bottom: 1rem;
}
/* Links container */
.links-container {
display: flex;
flex-wrap: wrap;
row-gap: 1rem;
justify-content: center;
text-align: center;
margin-bottom: 1.5rem;
font-size: 1.1rem;
}
.links-container a {
white-space: nowrap;
margin: 0 1rem;
text-decoration: none;
color: #3b82f6;
font-weight: 600;
transition: color 0.3s;
}
.links-container a:hover {
color: #1e3a8a;
}
/* Media Query for mobile devices */
@media (max-width: 600px) {
h1 {
font-size: 3.5rem !important; /* Adjust font size for small screens */
}
}
</style>
<div class="header-container">
<h1><b>NaviTrace</b></h1>
<div class="links-container">
<a href="https://leggedrobotics.github.io/navitrace_webpage/">
🏠 Project
</a>
<a href="https://arxiv.org/abs/2510.26909">
📄 Paper
</a>
<a href="https://github.com/leggedrobotics/navitrace_evaluation">
💻 Code
</a>
<a href="https://huggingface.co/spaces/leggedrobotics/navitrace_leaderboard">
🏆 Leaderboard
</a>
</div>
<img src="https://leggedrobotics.github.io/navitrace_webpage/static/images/Figure_1.png" alt="NaviTrace Overview" width="800px">
<div style="text-align: center;"><i><b>NaviTrace</b> is a novel VQA benchmark for VLMs that evaluates models on their embodiment-specific understanding of navigation across challenging real-world scenarios.</i></div>
</div>
## Key Features
- ✏️ **Core Task:** Given a real-world image in first-person perspective, a language instruction, and an embodiment type, models should predict a 2D navigation path in image space that solves the instruction.
- 🤖 **Embodiments:** Four embodiment types capturing distinct physical and spatial constraints (human, legged robot, wheeled robot, or bicycle).
- 📏 **Scale:** 1,002 diverse real-world scenarios and over 3,000 expert-annotated traces.
- ⚖️ **Splits:**
- Validation split (~50%) for experimentation and model fine-tuning.
- Test split (~50%) with hidden ground-truths for public leaderboard evaluation.
- 🔎 **Annotation Quality:** All images and traces manually collected and labeled by human experts.
- 🏅 **Evaluation Metric:** Semantic-aware Trace Score, combining Dynamic Time Warping distance, goal endpoint error, and embodiment-conditioned semantic penalties.
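
The official Trace Score is implemented in the evaluation code linked above. As a rough illustration of its geometric ingredients only, the sketch below computes a length-normalized Dynamic Time Warping distance and a goal endpoint error between a predicted and a ground-truth trace; the equal weighting and the omission of the embodiment-conditioned semantic penalties are simplifying assumptions, not the benchmark's definition.

```python
import numpy as np

def dtw_distance(pred: np.ndarray, gt: np.ndarray) -> float:
    """Length-normalized Dynamic Time Warping distance between two traces
    given as (N, 2) and (M, 2) arrays of image coordinates."""
    n, m = len(pred), len(gt)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(pred[i - 1] - gt[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return float(cost[n, m]) / max(n, m)

def goal_endpoint_error(pred: np.ndarray, gt: np.ndarray) -> float:
    """Euclidean distance between predicted and ground-truth goal points."""
    return float(np.linalg.norm(pred[-1] - gt[-1]))

def toy_trace_score(pred: np.ndarray, gt: np.ndarray) -> float:
    """Toy combination only: the real Trace Score additionally applies
    embodiment-conditioned semantic penalties based on the segmentation mask."""
    return dtw_distance(pred, gt) + goal_endpoint_error(pred, gt)
```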
## Uses
### Run Benchmark
We provide a [notebook](https://github.com/leggedrobotics/navitrace_evaluation/blob/main/src/run_evaluation.ipynb) with example code showing how to run this benchmark with a model accessed through an API.
You can use it as a template for adapting the evaluation to your own model.
Additionally, we host a public [leaderboard](https://huggingface.co/spaces/leggedrobotics/navitrace_leaderboard) where you can submit your model's results.
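
For orientation, a minimal prediction loop over the test split could look like the sketch below. `predict_trace` is a hypothetical placeholder for your own model call, and the prediction structure (a list of `[x, y]` image coordinates per sample and embodiment) is an assumption; the notebook and the leaderboard define the exact submission format.

```python
from datasets import load_dataset

dataset = load_dataset("leggedrobotics/NaviTrace")

def predict_trace(image, task, embodiment):
    """Hypothetical placeholder: query your VLM and parse its answer
    into a list of [x, y] image coordinates."""
    raise NotImplementedError

predictions = {}
for sample in dataset["test"]:
    for embodiment in sample["embodiments"]:
        trace = predict_trace(sample["image"], sample["task"], embodiment)
        predictions.setdefault(sample["sample_id"], {})[embodiment] = trace
```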
### Model Training
You can use the validation split to fine-tune models for this task.
Load the dataset with `dataset = load_dataset("leggedrobotics/NaviTrace")` and use `dataset["validation"]` for training your model.
See the next section for details on the dataset columns.
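
As one possible (not prescribed) formulation, the sketch below turns validation samples into supervised text targets by serializing the first annotated path per suitable embodiment; the `(x, y)` point format and the prompt template are assumptions for illustration.

```python
from datasets import load_dataset

dataset = load_dataset("leggedrobotics/NaviTrace")

def to_training_examples(sample):
    """Yield one (image, prompt, target) triple per suitable embodiment."""
    for embodiment in sample["embodiments"]:
        paths = sample["ground_truth"][embodiment]
        if paths is None:  # embodiment not suitable for this task
            continue
        prompt = f"Embodiment: {embodiment}\nInstruction: {sample['task']}"
        # Use the first annotated path; further paths are equally valid alternatives.
        target = " ".join(f"({x:.0f},{y:.0f})" for x, y in paths[0])
        yield sample["image"], prompt, target

examples = [ex for sample in dataset["validation"] for ex in to_training_examples(sample)]
```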
## Structure
| Column | Type | Description |
|-------------------|------------------------------------------------------------------|-------------|
| sample_id | `str` | Unique identifier of a scenario. |
| task | `str` | English language instruction that is solvable purely from the visual information; tasks emphasize cases where different embodiments behave differently while still reflecting everyday scenarios. |
| embodiments | `List[str]` | All embodiments ("Human", "Legged Robot", "Wheeled Robot", "Bicycle") suitable for the task. |
| image | `PIL.Image` | First-person image of a real-world environment with blurred faces and license plates. |
| segmentation_mask | `List[List[int]]` | Semantic segmentation mask of the image generated with the [Mask2Former model](https://huggingface.co/facebook/mask2former-swin-large-mapillary-vistas-semantic). |
| ground_truth | `dict[str, `<br>`Optional[List[`<br>`List[List[float]]`<br>`]]]` | A dict mapping each embodiment name to its navigation path solutions, where each path is a sequence of 2D points in image coordinates. There is one path per suitable embodiment, and multiple paths if equally valid alternatives exist (e.g., avoiding an obstacle from the left or right). If an embodiment is not suitable for the task, the value is `None`. |
| category | `List[str]` | List with one or more categories ("Semantic Terrain", "Geometric Terrain", "Stationary Obstacle", "Dynamic Obstacle", "Accessibility", "Visibility", "Social Norms") that describe the main challenges of the navigation task. |
| context | `str` | Short description of the scene as bullet points separated by ";", including the location, ongoing activities, and key elements needed to solve the task. |
| metadata | `dict[str, str]` | Additional information about the scenario:<br>- *"country":* The image's country of origin.<br>- *"city":* The image's city of origin, or "GrandTour Dataset" if the image comes from the [Grand Tour dataset](https://grand-tour.leggedrobotics.com/).<br>- *"urban_rural":* "Urban", "Rural", or "Mixed" depending on the image's setting.<br>- *"natural_structured":* "Structured", "Natural", or "Mixed" depending on the image's environment.<br>- *"lighting_conditions":* "Night", "Daylight", "Indoor Lighting", or "Low Light" depending on the image's lighting.<br>- *"weather_conditions":* "Cloudy", "Clear", "Rainy", "Unknown", "Foggy", "Snowy", or "Windy" depending on the image's weather.<br>- *"task_type":* Distinguishes between instruction styles: goal-directed tasks ("Goal") specify the target explicitly (e.g., “Go straight to the painting.”), while directional tasks ("Directions") emphasize the movement leading to it (e.g., “Move forward until you see the painting.”). Since the distinction is sometimes ambiguous, there are also mixed tasks ("Mixed"). |
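
To make the columns above concrete, the following sketch loads one validation sample and overlays its ground-truth traces on the image with Pillow. It assumes each point is an `[x, y]` pixel coordinate as described above; the color and line width are arbitrary choices.

```python
from datasets import load_dataset
from PIL import ImageDraw

dataset = load_dataset("leggedrobotics/NaviTrace")
sample = dataset["validation"][0]

image = sample["image"].copy()  # PIL.Image
draw = ImageDraw.Draw(image)

for embodiment, paths in sample["ground_truth"].items():
    if paths is None:  # embodiment not suitable for this task
        continue
    for path in paths:  # multiple paths are equally valid alternatives
        draw.line([(x, y) for x, y in path], fill="red", width=5)

image.save("navitrace_sample.png")
print(sample["task"], sample["embodiments"], sample["metadata"]["city"])
```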
## Citation
If you find this dataset helpful for your work, please cite us with:
**BibTeX:**
```bibtex
@article{Windecker2025NaviTrace,
    author        = {Tim Windecker and Manthan Patel and Moritz Reuss and Richard Schwarzkopf and Cesar Cadena and Rudolf Lioutikov and Marco Hutter and Jonas Frey},
    title         = {NaviTrace: Evaluating Embodied Navigation of Vision-Language Models},
    year          = {2025},
    month         = {October},
    journal       = {Preprint submitted to arXiv},
    note          = {Currently a preprint on arXiv (arXiv:2510.26909). Awaiting peer review and journal submission.},
    url           = {https://arxiv.org/abs/2510.26909},
    eprint        = {2510.26909},
    archivePrefix = {arXiv},
    primaryClass  = {cs.RO},
}
```