---
license: mit
configs:
- config_name: table
data_files: wtq_table.jsonl
- config_name: test_query
data_files: wtq_query.jsonl
task_categories:
- table-question-answering
---
📄 Paper | 👨🏻‍💻 Code
## Introduction
Retrieval-Augmented Generation (RAG) has become a key paradigm for enhancing Large Language Models (LLMs) with external knowledge. While most RAG systems focus on text corpora, real-world information is often stored in tables across web pages, Wikipedia, and relational databases, and existing methods struggle to retrieve and reason across multiple heterogeneous tables.
We release MultiTableQA, a comprehensive benchmark comprising five datasets that cover table fact-checking, single-hop QA, and multi-hop QA:
| Dataset | Link |
|---|---|
| MultiTableQA-TATQA | 🤗 dataset link |
| MultiTableQA-TabFact | 🤗 dataset link |
| MultiTableQA-SQA | 🤗 dataset link |
| MultiTableQA-WTQ | 🤗 dataset link |
| MultiTableQA-HybridQA | 🤗 dataset link |
MultiTableQA extends the traditional single-table QA setting into a multi-table retrieval and question answering benchmark, enabling more realistic and challenging evaluations.
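As a quick-start sketch, the two configs declared in the YAML header (`table` and `test_query`) can be loaded with the 🤗 `datasets` library. The repo id `ORG/MultiTableQA-WTQ` below is a placeholder; substitute the actual dataset path from the links above.

```python
# Minimal loading sketch, assuming the Hugging Face `datasets` library.
# "ORG/MultiTableQA-WTQ" is a placeholder repo id -- replace it with the
# real dataset path from the table above.
from datasets import load_dataset

# Each config maps to a single JSONL file (see the YAML header),
# so `datasets` exposes its contents under the default "train" split.
tables = load_dataset("ORG/MultiTableQA-WTQ", "table")        # wtq_table.jsonl
queries = load_dataset("ORG/MultiTableQA-WTQ", "test_query")  # wtq_query.jsonl

print(tables["train"][0])   # one serialized table record
print(queries["train"][0])  # one query record
```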
## Citation
If you find our work useful, please cite:
```bibtex
@misc{zou2025rag,
  title={RAG over Tables: Hierarchical Memory Index, Multi-Stage Retrieval, and Benchmarking},
  author={Jiaru Zou and Dongqi Fu and Sirui Chen and Xinrui He and Zihao Li and Yada Zhu and Jiawei Han and Jingrui He},
  year={2025},
  eprint={2504.01346},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2504.01346},
}
```