---
license: apache-2.0
task_categories:
- visual-question-answering
language:
- en
pretty_name: BenchLMM
size_categories:
- n<1K
---
# Dataset Card for BenchLMM
BenchLMM is a benchmarking dataset focusing on the cross-style visual capability of large multimodal models. It evaluates these models' performance in various visual contexts.
## Dataset Details
### Dataset Description
- **Curated by:** Rizhao Cai, Zirui Song, Dayan Guan, Zhenhao Chen, Xing Luo, Chenyu Yi, and Alex Kot.
- **Funded by:** Supported in part by the Rapid-Rich Object Search (ROSE) Lab of Nanyang Technological University and the NTU-PKU Joint Research Institute.
- **Shared by:** AIFEG.
- **Language(s) (NLP):** English.
- **License:** Apache-2.0.
### Dataset Sources
- **Repository:** [GitHub - AIFEG/BenchLMM](https://github.com/AIFEG/BenchLMM)
- **Paper:** Cai, R., Song, Z., Guan, D., et al. (2023). BenchLMM: Benchmarking Cross-style Visual Capability of Large Multimodal Models. arXiv:2312.02896.
## Uses
### Direct Use
The dataset can be used to benchmark large multimodal models, with a focus on their capability to interpret and respond to different visual styles.
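As a minimal sketch of this use, the snippet below loads the benchmark with the Hugging Face `datasets` library. The repository id `AIFEG/BenchLMM`, the `test` split name, and the availability of a default configuration are assumptions rather than details confirmed by this card.

```python
# Minimal sketch: loading BenchLMM for evaluation.
# Assumptions: the data is hosted on the Hugging Face Hub under "AIFEG/BenchLMM"
# and exposes a "test" split; adjust the repo id and split name as needed.
from datasets import load_dataset

dataset = load_dataset("AIFEG/BenchLMM", split="test")

for example in dataset:
    # Field names depend on the actual JSONL schema; inspect a record first.
    print(example)
    break
```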
## Dataset Structure
- **Directory Structure:**
- `baseline/`: Baseline code for LLaVA and InstructBLIP.
- `evaluate/`: Python code for model evaluation.
- `evaluate_results/`: Evaluation results of baseline models.
- `jsonl/`: JSONL files containing questions, image locations, and answers (see the loading example below).
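To make the structure concrete, here is a minimal sketch of reading one of the JSONL files directly. The file name and the field names (`question`, `image`, `answer`) are assumptions about the schema, not taken from this card; check the actual files in the repository.

```python
import json
from pathlib import Path

# Minimal sketch: iterate over one JSONL file from the jsonl/ directory.
# The file name and the field names below are illustrative assumptions.
jsonl_path = Path("jsonl") / "example.jsonl"

with jsonl_path.open() as f:
    for line in f:
        record = json.loads(line)
        question = record.get("question")
        image_path = record.get("image")
        answer = record.get("answer")
        print(question, image_path, answer)
```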
## Dataset Creation
### Curation Rationale
BenchLMM was developed to assess large multimodal models' performance in diverse visual contexts, helping to understand their capabilities and limitations.
### Source Data
#### Data Collection and Processing
The dataset consists of various visual questions and corresponding answers, structured to evaluate multimodal model performance.
## Bias, Risks, and Limitations
Users should consider the specific visual contexts and question types included in the dataset when interpreting model performance.
## Citation
**BibTeX:**
@misc{cai2023benchlmm,
  title={BenchLMM: Benchmarking Cross-style Visual Capability of Large Multimodal Models},
  author={Rizhao Cai and Zirui Song and Dayan Guan and Zhenhao Chen and Xing Luo and Chenyu Yi and Alex Kot},
  year={2023},
  eprint={2312.02896},
  archivePrefix={arXiv},
  primaryClass={cs.CV}
}
**APA:**
Cai, R., Song, Z., Guan, D., Chen, Z., Luo, X., Yi, C., & Kot, A. (2023). BenchLMM: Benchmarking Cross-style Visual Capability of Large Multimodal Models. arXiv preprint arXiv:2312.02896.
## Acknowledgements
This research is supported in part by the Rapid-Rich Object Search (ROSE) Lab of Nanyang Technological University and the NTU-PKU Joint Research Institute.