---
license: apache-2.0
task_categories:
- question-answering
size_categories:
- 1K<n<10K

configs:
- config_name: SPBench-SI
  data_files:
  - split: test
    path: SPBench-SI.parquet
- config_name: SPBench-MV
  data_files:
  - split: test
    path: SPBench-MV.parquet
---

<a href="https://arxiv.org/pdf/2510.08531" target="_blank">
    <img alt="arXiv" src="https://img.shields.io/badge/arXiv-SpatialLadder-red?logo=arxiv" height="20" />
</a>
<a href="https://zju-real.github.io/SpatialLadder/" target="_blank">
    <img alt="Website" src="https://img.shields.io/badge/🌎_Website-SpaitalLadder-blue.svg" height="20" />
</a>
<a href="https://github.com/ZJU-REAL/SpatialLadder" target="_blank">
    <img alt="Code" src="https://img.shields.io/badge/Code-SpaitalLadder-white?logo=github" height="20" />
</a>

<a href="https://huggingface.co/hongxingli/SpatialLadder-3B" target="_blank">
    <img alt="Model" src="https://img.shields.io/badge/%F0%9F%A4%97%20_Model-SpatialLadder--3B-ffc107?color=ffc107&logoColor=white" height="20" />
</a>
<a href="https://huggingface.co/datasets/hongxingli/SpatialLadder-26k" target="_blank">
    <img alt="Data" src="https://img.shields.io/badge/%F0%9F%A4%97%20_Data-SpatialLadder--26k-ffc107?color=ffc107&logoColor=white" height="20" />
</a>


# Spatial Perception and Reasoning Benchmark (SPBench)

This repository contains the Spatial Perception and Reasoning Benchmark (SPBench), introduced in [SpatialLadder: Progressive Training for Spatial Reasoning in Vision-Language Models](https://arxiv.org/abs/2510.08531).

## Dataset Description

SPBench is a comprehensive evaluation suite for assessing the spatial perception and reasoning capabilities of Vision-Language Models (VLMs). It consists of two complementary benchmarks, SPBench-SI and SPBench-MV, covering the single-image and multi-view settings, respectively. Both benchmarks are constructed with a standardized pipeline applied to the ScanNet validation set, ensuring systematic coverage of diverse spatial reasoning tasks.

- SPBench-SI is a single-image benchmark that measures a model's ability to understand and reason about space from an individual viewpoint. It covers four task categories (absolute distance, object size, relative distance, and relative direction) and contains 1,009 samples in total.
- SPBench-MV targets multi-view spatial reasoning, requiring models to jointly reason over spatial relationships across multiple viewpoints. It additionally includes an object counting task that evaluates a model's ability to identify and enumerate objects across views, and contains 319 samples in total.

Both benchmarks undergo rigorous quality control, combining the pipeline's filtering strategies with manual curation, to ensure unambiguous questions and high-quality annotations suitable for reliable evaluation.

## Usage

You can load the dataset directly from Hugging Face with the `datasets` library.
SPBench provides two configurations, SPBench-SI and SPBench-MV, which can be loaded together or individually:

```python
from datasets import load_dataset

# Load both benchmarks
dataset = load_dataset("hongxingli/SPBench")

# Load SPBench-SI only
dataset = load_dataset("hongxingli/SPBench", name="SPBench-SI")

# Load SPBench-MV only
dataset = load_dataset("hongxingli/SPBench", name="SPBench-MV")
```

The image files required by the benchmarks are provided in `SPBench-SI-images.zip`
and `SPBench-MV-images.zip`, which contain the complete image sets for SPBench-SI and SPBench-MV, respectively.
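
As a quick sketch of wiring samples to their images (the column name `image_path` below is an assumption, not the documented schema; inspect the real columns via `dataset.features` or `sample.keys()` before relying on it):

```python
import zipfile
from pathlib import Path

from datasets import load_dataset
from PIL import Image

# Unpack the single-image archive once (assumed to be in the working directory).
with zipfile.ZipFile("SPBench-SI-images.zip") as zf:
    zf.extractall("SPBench-SI-images")

dataset = load_dataset("hongxingli/SPBench", name="SPBench-SI", split="test")
sample = dataset[0]
print(sample.keys())  # check the actual column names first

# Hypothetical column: replace "image_path" with whatever the schema exposes.
image = Image.open(Path("SPBench-SI-images") / sample["image_path"])
```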

## Evaluation

SPBench reports two metrics. For multiple-choice questions we use `Accuracy`, computed by exact match; for numerical questions we use `MRA (Mean Relative Accuracy)`, introduced by [Thinking in Space](https://github.com/vision-x-nyu/thinking-in-space), which measures how closely model predictions align with the ground-truth values.
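
For reference, here is a minimal sketch of MRA under our reading of the Thinking in Space definition: a numerical prediction counts as correct at confidence threshold θ when its relative error is below 1 − θ, and this indicator is averaged over the ten thresholds θ ∈ {0.50, 0.55, ..., 0.95}.

```python
import numpy as np

def mean_relative_accuracy(pred: float, gt: float) -> float:
    """Sketch of MRA following Thinking in Space (VSI-Bench):
    average, over thresholds theta in {0.50, 0.55, ..., 0.95}, of the
    indicator that relative error |pred - gt| / |gt| is below 1 - theta."""
    thresholds = np.linspace(0.50, 0.95, 10)
    rel_err = abs(pred - gt) / abs(gt)
    return float(np.mean(rel_err < (1.0 - thresholds)))

# A prediction within 5% relative error passes 9 of the 10 thresholds.
print(mean_relative_accuracy(pred=1.9, gt=2.0))  # 0.9
```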

The evaluation code and usage guidelines are available in our [GitHub repository](https://github.com/ZJU-REAL/SpatialLadder). For comprehensive details, please refer to our paper and the repository documentation.

## Citation

```bibtex
@misc{li2025spatialladderprogressivetrainingspatial,
      title={SpatialLadder: Progressive Training for Spatial Reasoning in Vision-Language Models}, 
      author={Hongxing Li and Dingming Li and Zixuan Wang and Yuchen Yan and Hang Wu and Wenqi Zhang and Yongliang Shen and Weiming Lu and Jun Xiao and Yueting Zhuang},
      year={2025},
      eprint={2510.08531},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2510.08531}, 
}
```