---
license: apache-2.0
task_categories:
- question-answering
size_categories:
- 1K<n<10K
configs:
- config_name: SPBench-SI
  data_files:
  - split: test
    path: SPBench-SI.parquet
- config_name: SPBench-MV
  data_files:
  - split: test
    path: SPBench-MV.parquet
---
# Spatial Perception and Reasoning Benchmark (SPBench)

This repository contains the Spatial Perception and Reasoning Benchmark (SPBench), introduced in [SpatialLadder: Progressive Training for Spatial Reasoning in Vision-Language Models](https://arxiv.org/abs/2510.08531).

## Dataset Description

SPBench is a comprehensive evaluation suite designed to assess the spatial perception and reasoning capabilities of Vision-Language Models (VLMs). It consists of two complementary benchmarks, SPBench-SI and SPBench-MV, covering the single-image and multi-view settings, respectively. Both benchmarks are constructed with the same standardized pipeline applied to the ScanNet validation set, ensuring systematic coverage of diverse spatial reasoning tasks.

- **SPBench-SI** is a single-image benchmark that measures a model's ability to perform spatial understanding and reasoning from an individual viewpoint. It covers four task categories (absolute distance, object size, relative distance, and relative direction) for a total of 1,009 samples.
- **SPBench-MV** focuses on multi-view spatial reasoning, requiring models to jointly reason about spatial relationships across multiple viewpoints. It additionally includes object counting tasks that evaluate a model's ability to identify and enumerate objects in multi-view scenes, for a total of 319 samples.

Both benchmarks undergo rigorous quality control, combining the pipeline's filtering strategies with manual curation, to remove ambiguous samples and ensure annotations of a quality suitable for reliable evaluation.

## Usage

You can load the dataset directly from Hugging Face with the `datasets` library. The two benchmarks can be loaded together, or each can be loaded individually by configuration name:
```python
from datasets import load_dataset

# Load the two benchmarks directly
dataset = load_dataset("hongxingli/SPBench")

# Load SPBench-SI only
dataset = load_dataset("hongxingli/SPBench", name="SPBench-SI")

# Load SPBench-MV only
dataset = load_dataset("hongxingli/SPBench", name="SPBench-MV")
```
The image resources required by the benchmarks are provided in `SPBench-SI-images.zip` and `SPBench-MV-images.zip`, which contain the complete image sets for SPBench-SI and SPBench-MV, respectively.
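
As a minimal sketch of how the archives can be fetched programmatically, the snippet below downloads and unpacks the SPBench-SI images with `huggingface_hub` and prints one sample's schema. The columns that link a sample to its image file are not spelled out here, so inspect `dataset.features` rather than assuming field names:

```python
import zipfile

from datasets import load_dataset
from huggingface_hub import hf_hub_download

# Download the SPBench-SI image archive from this dataset repository
zip_path = hf_hub_download(
    repo_id="hongxingli/SPBench",
    filename="SPBench-SI-images.zip",
    repo_type="dataset",
)

# Unpack the images into a local directory
with zipfile.ZipFile(zip_path) as zf:
    zf.extractall("SPBench-SI-images")

# Inspect the schema and one sample; check dataset.features for the
# actual column that references a sample's image file
dataset = load_dataset("hongxingli/SPBench", name="SPBench-SI")["test"]
print(dataset.features)
print(dataset[0])
```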

## Evaluation

SPBench reports two metrics. For multiple-choice questions, we use **Accuracy**, computed by exact match between the predicted and ground-truth options. For numerical questions, we use **Mean Relative Accuracy (MRA)**, introduced in Thinking in Space, which measures how closely model predictions align with the ground-truth values.
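
For reference, here is a minimal sketch of both metrics, assuming MRA follows the definition in Thinking in Space: a prediction counts as correct at confidence threshold theta when its relative error is below 1 - theta, averaged over theta from 0.50 to 0.95 in steps of 0.05. Treat this as an assumption and defer to the evaluation code in our GitHub repository for the authoritative implementation:

```python
import numpy as np

def accuracy(preds, golds):
    """Exact-match accuracy for multiple-choice questions."""
    return float(np.mean([p == g for p, g in zip(preds, golds)]))

def mean_relative_accuracy(pred, gold, thresholds=np.arange(0.50, 1.00, 0.05)):
    """MRA for one numerical question (assumed definition, per Thinking in
    Space): the fraction of confidence thresholds theta at which the
    relative error |pred - gold| / |gold| falls below 1 - theta."""
    rel_err = abs(pred - gold) / abs(gold)
    return float(np.mean([rel_err < (1.0 - t) for t in thresholds]))

# Example: predicting 2.1 m for a ground truth of 2.0 m gives a relative
# error of 0.05, which passes 9 of the 10 thresholds, so MRA = 0.9
print(mean_relative_accuracy(2.1, 2.0))
```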
The evaluation code and usage guidelines are available in our GitHub repository. For comprehensive details, please refer to our paper and the repository documentation.

## Citation

```bibtex
@misc{li2025spatialladderprogressivetrainingspatial,
      title={SpatialLadder: Progressive Training for Spatial Reasoning in Vision-Language Models},
      author={Hongxing Li and Dingming Li and Zixuan Wang and Yuchen Yan and Hang Wu and Wenqi Zhang and Yongliang Shen and Weiming Lu and Jun Xiao and Yueting Zhuang},
      year={2025},
      eprint={2510.08531},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2510.08531},
}
```