---
license: apache-2.0
task_categories:
- multiple-choice
language:
- en
- zh
tags:
- audio-visual
- omnimodality
- multi-modality
- benchmark
pretty_name: XModBench
size_categories:
- 10K<n<100K
---

<h1 align="center">
XModBench: Benchmarking Cross-Modal Capabilities and Consistency in Omni-Language Models
</h1>

<p align="center">
  <img src="https://xingruiwang.github.io/projects/XModBench/static/images/teaser.png" width="90%" alt="XModBench teaser">
</p>

<p align="center">
  <a href="https://arxiv.org/abs/2510.15148">
    <img src="https://img.shields.io/badge/Arxiv-Paper-b31b1b.svg" alt="Paper">
  </a>
  <a href="https://xingruiwang.github.io/projects/XModBench/">
    <img src="https://img.shields.io/badge/Website-Page-0a7aca?logo=globe&logoColor=white" alt="Website">
  </a>
  <a href="https://huggingface.co/datasets/RyanWW/XModBench">
    <img src="https://img.shields.io/badge/Huggingface-Dataset-FFD21E?logo=huggingface" alt="Dataset">
  </a>
<a href="https://github.com/XingruiWang/XModBench">
  <img src="https://img.shields.io/badge/Github-Code-181717?logo=github&logoColor=white" alt="GitHub Repo">
</a>
  <a href="https://opensource.org/licenses/MIT">
    <img src="https://img.shields.io/badge/License-MIT-green.svg" alt="License: MIT">
  </a>
</p>



XModBench is a comprehensive benchmark designed to evaluate the cross-modal capabilities and consistency of omni-language models. It systematically assesses model performance across multiple modalities (text, vision, audio) and various cognitive tasks, revealing critical gaps in current state-of-the-art models.

### Key Features

- **🎯 Multi-Modal Evaluation**: Comprehensive testing across text, vision, and audio modalities
- **🧩 5 Task Dimensions**: Perception, Spatial, Temporal, Linguistic, and Knowledge tasks
- **📊 13 SOTA Models Evaluated**: Including Gemini 2.5 Pro, Qwen2.5-Omni, EchoInk-R1, and more
- **🔄 Consistency Analysis**: Measures performance stability across different modal configurations
- **👥 Human Performance Baseline**: Establishes human-level benchmarks for comparison


## 🚀 Quick Start

### Installation

```bash
# Clone the repository
git clone https://github.com/XingruiWang/XModBench.git
cd XModBench

# Install dependencies
pip install -r requirements.txt
```

## 📂 Dataset Structure

### Download and Setup

After cloning the dataset repository from Hugging Face, extract the data archive:

```bash
# Download the dataset from HuggingFace
git clone https://huggingface.co/datasets/RyanWW/XModBench

cd XModBench

# Extract the Data.zip file
unzip Data.zip

# Now you have the following structure:
```
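
If you prefer not to use `git clone`, the `huggingface_hub` Python client can fetch the dataset as well. A minimal sketch (the `local_dir` name is just a suggestion):

```python
# pip install huggingface_hub
from huggingface_hub import snapshot_download

# Download the full dataset repository to a local folder.
# repo_id and repo_type match the Hugging Face dataset page above;
# local_dir is an arbitrary choice.
snapshot_download(
    repo_id="RyanWW/XModBench",
    repo_type="dataset",
    local_dir="XModBench",
)
# Data.zip still needs to be extracted afterwards, e.g.:
#   cd XModBench && unzip Data.zip
```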

### Directory Structure

```
XModBench/
├── Data/                              # Unzipped from Data.zip
│   ├── landscape_audiobench/          # Nature sound scenes
│   ├── emotions/                      # Emotion classification data
│   ├── solos_processed/               # Musical instrument solos
│   ├── gtzan-dataset-music-genre-classification/  # Music genre data
│   ├── singers_data_processed/        # Singer identification
│   ├── temporal_audiobench/           # Temporal reasoning tasks
│   ├── urbansas_samples_videos_filtered/  # Urban 3D movements
│   ├── STARSS23_processed_augmented/  # Spatial audio panorama
│   ├── vggss_audio_bench/             # Fine-grained audio-visual
│   ├── URMP_processed/                # Musical instrument arrangements
│   ├── ExtremCountAV/                 # Counting tasks
│   ├── posters/                       # Movie posters
│   └── trailer_clips/                 # Movie trailers
│
└── tasks/                             # Task configurations (ready to use)
    ├── 01_perception/                 # Perception tasks
    │   ├── finegrained/               # Fine-grained recognition
    │   ├── natures/                   # Nature scenes
    │   ├── instruments/               # Musical instruments
    │   ├── instruments_comp/          # Instrument compositions
    │   └── general_activities/        # General activities
    ├── 02_spatial/                    # Spatial reasoning tasks
    │   ├── 3D_movements/              # 3D movement tracking
    │   ├── panaroma/                  # Panoramic spatial audio
    │   └── arrangements/              # Spatial arrangements
    ├── 03_speech/                     # Speech and language tasks
    │   ├── recognition/               # Speech recognition
    │   └── translation/               # Translation
    ├── 04_temporal/                   # Temporal reasoning tasks
    │   ├── count/                     # Temporal counting
    │   ├── order/                     # Temporal ordering
    │   └── calculation/               # Temporal calculations
    └── 05_Exteral/                    # Additional classification tasks
        ├── emotion_classification/    # Emotion recognition
        ├── music_genre_classification/ # Music genre
        ├── singer_identification/     # Singer identification
        └── movie_matching/            # Movie matching
```

**Note**: All file paths in the task JSON files use relative paths (`./benchmark/Data/...`), so ensure your working directory is set correctly when running evaluations.
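
As a quick sanity check, a short Python sketch like the one below can walk a task JSON file and confirm that every `./benchmark/Data/...` path it references resolves from your current working directory. The task file name is hypothetical and the exact JSON schema is not documented here, so the walker is deliberately schema-agnostic:

```python
import json
from pathlib import Path

def iter_strings(node):
    """Yield every string value found anywhere in a nested JSON structure."""
    if isinstance(node, str):
        yield node
    elif isinstance(node, dict):
        for value in node.values():
            yield from iter_strings(value)
    elif isinstance(node, list):
        for item in node:
            yield from iter_strings(item)

# Hypothetical task file name; substitute any JSON file under tasks/.
task_file = Path("tasks/01_perception/finegrained/task.json")
data = json.loads(task_file.read_text())

missing = [p for p in iter_strings(data)
           if p.startswith("./benchmark/Data") and not Path(p).exists()]
print(f"{len(missing)} referenced files are missing from the working directory")
```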



### Basic Usage

```bash
#!/bin/bash
#SBATCH --job-name=VLM_eval
#SBATCH --output=log/job_%j.out
#SBATCH --error=log/job_%j.log
#SBATCH --ntasks-per-node=1
#SBATCH --gpus-per-node=4

echo "Running on host: $(hostname)"
echo "CUDA_VISIBLE_DEVICES=$CUDA_VISIBLE_DEVICES"

module load conda
conda activate omni

# Path to your local checkout of the evaluation code; adjust as needed.
export audioBench='/home/xwang378/scratch/2025/AudioBench'

# Evaluate one model on one task configuration.
# --model selects the model (e.g. gemini, qwen2.5_omni);
# --task_name selects a task, e.g. perception/vggss_audio_vision,
#   perception/vggss_vision_audio, perception/vggss_vision_text,
#   or perception/vggss_audio_text;
# --sample limits the number of evaluated examples.
python $audioBench/scripts/run.py \
    --model qwen2.5_omni \
    --task_name perception/vggss_vision_text \
    --sample 1000
```
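
To compare modal configurations of the same task, which is the cross-modal consistency the benchmark targets, the run script can be looped over the four perception variants listed above. A minimal Python wrapper, assuming the same CLI flags as in the SLURM script:

```python
import os
import subprocess

# Path to the evaluation code; reuses the audioBench variable from the script above.
audio_bench = os.environ.get("audioBench", ".")

tasks = [
    "perception/vggss_audio_vision",
    "perception/vggss_vision_audio",
    "perception/vggss_vision_text",
    "perception/vggss_audio_text",
]

for task in tasks:
    # Run the same evaluation entry point once per modal configuration.
    subprocess.run(
        ["python", f"{audio_bench}/scripts/run.py",
         "--model", "qwen2.5_omni",
         "--task_name", task,
         "--sample", "1000"],
        check=True,
    )
```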



## 📈 Benchmark Results

### Overall Performance Comparison

| Model | Perception | Spatial | Temporal | Linguistic | Knowledge | Average |
|-------|------------|---------|----------|------------|-----------|---------|
| **Gemini 2.5 Pro** | 75.9% | 50.1% | 60.8% | 76.8% | 89.3% | 70.6% |
| **Human Performance** | 91.0% | 89.7% | 88.9% | 93.9% | 93.9% | 91.5% |

### Key Findings

#### 1️⃣ Task Competence Gaps
- **Strong Performance**: Perception and linguistic tasks (~75% for best models)
- **Weak Performance**: Spatial (50.1%) and temporal reasoning (60.8%)
- **Performance Drop**: a 15-25 point decrease on spatial/temporal tasks relative to perception tasks

#### 2️⃣ Modality Disparity
- **Audio vs. Text**: 20-49 point performance drop
- **Audio vs. Vision**: 33-point average gap
- **Vision vs. Text**: ~15-point disparity
- **Consistency**: even the best models show a 10-12 point standard deviation across modal configurations (see the sketch below)

#### 3️⃣ Directional Imbalance
- **Vision↔Text**: 9-17 point gaps between directions
- **Audio↔Text**: 6-8 point asymmetries
- **Root Cause**: Training data imbalance favoring image-to-text over inverse directions
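
As an illustration of how these disparity and consistency numbers can be read, the sketch below derives pairwise modality gaps and the standard deviation across modal configurations from a table of per-configuration accuracies. The numbers in the dict are placeholders, not results from the paper:

```python
from statistics import mean, stdev

# Placeholder accuracies (%) for one model on the same task posed in
# different modal configurations; not actual XModBench results.
accuracy = {"text": 80.0, "vision": 65.0, "audio": 47.0}

# Pairwise modality gaps, e.g. how far audio lags behind text.
gaps = {f"{a} vs. {b}": accuracy[b] - accuracy[a]
        for a in accuracy for b in accuracy if a != b}

# Consistency summarized as the spread across configurations.
print("mean accuracy:", round(mean(accuracy.values()), 1))
print("std dev across configurations:", round(stdev(accuracy.values()), 1))
print("audio vs. text gap:", gaps["audio vs. text"])
```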

## πŸ“ Citation

If you use XModBench in your research, please cite our paper:

```bibtex
@article{wang2025xmodbench,
  title={XModBench: Benchmarking Cross-Modal Capabilities and Consistency in Omni-Language Models},
  author={Wang, Xingrui and others},
  journal={arXiv preprint arXiv:2510.15148},
  year={2025}
}
```

## 📄 License

This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.

## πŸ™ Acknowledgments

We thank all contributors and the research community for their valuable feedback and suggestions.

## 📧 Contact

- **Project Lead**: Xingrui Wang
- **Email**: [xwang378@jh.edu](mailto:xwang378@jh.edu)
- **Website**: [https://xingruiwang.github.io/projects/XModBench/](https://xingruiwang.github.io/projects/XModBench/)

## 🔗 Links

- [Project Website](https://xingruiwang.github.io/projects/XModBench/)
- [Paper](https://arxiv.org/abs/2510.15148)
- [Leaderboard](https://xingruiwang.github.io/projects/XModBench/leaderboard)
- [Documentation](https://xingruiwang.github.io/projects/XModBench/docs)


## Todo

- [ ] Release Huggingface data
- [x] Release data processing code
- [x] Release data evaluation code

---

**Note**: XModBench is actively maintained and regularly updated with new models and evaluation metrics. For the latest updates, please check our [releases](https://github.com/XingruiWang/XModBench/releases) page.