---
license: mit
configs:
  - config_name: table
    data_files: wtq_table.jsonl
  - config_name: test_query
    data_files: wtq_query.jsonl
task_categories:
  - table-question-answering
---

# MultiTableQA-WTQ

πŸ“„ Paper | πŸ‘¨πŸ»β€πŸ’» Code

## 🔍 Introduction

Retrieval-Augmented Generation (RAG) has become a key paradigm to enhance Large Language Models (LLMs) with external knowledge. While most RAG systems focus on text corpora, real-world information is often stored in tables across web pages, Wikipedia, and relational databases. Existing methods struggle to retrieve and reason across multiple heterogeneous tables.

For MultiTableQA, we release a comprehensive benchmark, including five different datasets covering table fact-checking, single-hop QA, and multi-hop QA:

| Dataset | Link |
|:--|:--|
| MultiTableQA-TATQA | 🤗 dataset link |
| MultiTableQA-TabFact | 🤗 dataset link |
| MultiTableQA-SQA | 🤗 dataset link |
| MultiTableQA-WTQ | 🤗 dataset link |
| MultiTableQA-HybridQA | 🤗 dataset link |

MultiTableQA extends the traditional single-table QA setting into a multi-table retrieval and question answering benchmark, enabling more realistic and challenging evaluations.
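The two configs above point at JSON Lines files (`wtq_table.jsonl` for tables, `wtq_query.jsonl` for test queries). As a minimal sketch of working with the raw files, here is a generic JSONL reader; the field names in the demo records are placeholders, not the dataset's actual schema:

```python
import json
import tempfile

def read_jsonl(path):
    """Read a JSON Lines file (one JSON object per line) into a list of dicts."""
    with open(path, encoding="utf-8") as f:
        return [json.loads(line) for line in f if line.strip()]

# Demo with a throwaway file standing in for wtq_query.jsonl;
# the real field names are defined by the dataset files themselves.
with tempfile.NamedTemporaryFile("w", suffix=".jsonl", delete=False) as f:
    f.write('{"id": "q1"}\n{"id": "q2"}\n')
    demo_path = f.name

rows = read_jsonl(demo_path)
print(len(rows))  # 2
```

Alternatively, the named configs can be loaded through the `datasets` library's `load_dataset` with the config name (`"table"` or `"test_query"`) as the second argument.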


## Citation

If you find our work useful, please cite:

```bibtex
@misc{zou2025rag,
      title={RAG over Tables: Hierarchical Memory Index, Multi-Stage Retrieval, and Benchmarking},
      author={Jiaru Zou and Dongqi Fu and Sirui Chen and Xinrui He and Zihao Li and Yada Zhu and Jiawei Han and Jingrui He},
      year={2025},
      eprint={2504.01346},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2504.01346},
}
```