RyanWW committed
Commit d82c458 · verified · 1 Parent(s): abbe20e

Update README.md

Files changed (1): README.md (+6 -5)
README.md CHANGED
@@ -25,23 +25,24 @@ XModBench: Benchmarking Cross-Modal Capabilities and Consistency in Omni-Language Models
 
 <p align="center">
   <a href="https://arxiv.org/abs/2510.15148">
-    <img src="https://img.shields.io/badge/Paper-arXiv-red.svg" alt="Paper">
+    <img src="https://img.shields.io/badge/Arxiv-Paper-b31b1b.svg" alt="Paper">
   </a>
   <a href="https://xingruiwang.github.io/projects/XModBench/">
-    <img src="https://img.shields.io/badge/Website-XModBench-0a7aca?logo=globe&logoColor=white" alt="Website">
+    <img src="https://img.shields.io/badge/Website-Page-0a7aca?logo=globe&logoColor=white" alt="Website">
   </a>
   <a href="https://huggingface.co/datasets/RyanWW/XModBench">
-    <img src="https://img.shields.io/badge/Dataset-XModBench-FFD21E?logo=huggingface" alt="Dataset">
+    <img src="https://img.shields.io/badge/Huggingface-Dataset-FFD21E?logo=huggingface" alt="Dataset">
   </a>
   <a href="https://github.com/XingruiWang/XModBench">
-    <img src="https://img.shields.io/badge/Code-XModBench-181717?logo=github&logoColor=white" alt="GitHub Repo">
+    <img src="https://img.shields.io/badge/Github-Code-181717?logo=github&logoColor=white" alt="GitHub Repo">
   </a>
   <a href="https://opensource.org/licenses/MIT">
-    <img src="https://img.shields.io/badge/License-MIT-yellow.svg" alt="License: MIT">
+    <img src="https://img.shields.io/badge/License-MIT-green.svg" alt="License: MIT">
   </a>
 </p>
 
 
+
 XModBench is a comprehensive benchmark designed to evaluate the cross-modal capabilities and consistency of omni-language models. It systematically assesses model performance across multiple modalities (text, vision, audio) and various cognitive tasks, revealing critical gaps in current state-of-the-art models.
 
 ### Key Features
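
For context on what the diff changes: every badge here is a shields.io static badge, whose URL path encodes `badge/<label>-<message>-<color>`, with optional `logo` and `logoColor` query parameters; the commit swaps the label/message pairs and two colors. A minimal sketch of the pattern, using the arXiv badge from this commit (the comments are explanatory, not part of the README):

```markdown
<!-- shields.io static badge anatomy: https://img.shields.io/badge/<label>-<message>-<color> -->
<!-- a literal dash inside <label> or <message> must be doubled ("--") -->
<a href="https://arxiv.org/abs/2510.15148">
  <img src="https://img.shields.io/badge/Arxiv-Paper-b31b1b.svg" alt="Paper">
</a>
```

Here `b31b1b` is the hex color of the arXiv logo red, which is why the commit replaces the generic `red` with it.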