jiaruz2 committed on
Commit c89bcc4 · verified · 1 Parent(s): 9410791

Update README.md

Files changed (1)
  1. README.md +6 -14
README.md CHANGED
@@ -18,11 +18,13 @@ configs:
   data_files: tabfact_query.jsonl
 ---
 
-📄 [RAG over Tables: Hierarchical Memory Index, Multi-Stage Retrieval, and Benchmarking](https://arxiv.org/abs/2504.01346) | 👨🏻‍💻 [Code](https://github.com/jiaruzouu/T-RAG) | 🤗 [MultiTableQA Hub Collection](https://huggingface.co/collections/jiaruz2/multitableqa-68dc8d850ea7e168f47cecd8)
+📄 [Paper](https://arxiv.org/abs/2504.01346) | 👨🏻‍💻 [Code](https://github.com/jiaruzouu/T-RAG)
 
-This repository contains **MultiTableQA-TabFact**, one of the five datasets released as part of the comprehensive **MultiTableQA** benchmark. MultiTableQA-TabFact focuses on **table fact-checking** tasks. The overall MultiTableQA benchmark extends the traditional single-table QA setting into a multi-table retrieval and question answering benchmark, enabling more realistic and challenging evaluations.
+## 🔍 Introduction
 
-Other datasets in the MultiTableQA benchmark include:
+Retrieval-Augmented Generation (RAG) has become a key paradigm for enhancing Large Language Models (LLMs) with external knowledge. While most RAG systems focus on **text corpora**, real-world information is often stored in **tables** across web pages, Wikipedia, and relational databases. Existing methods struggle to retrieve and reason across **multiple heterogeneous tables**.
+
+For MultiTableQA, we release a comprehensive benchmark comprising five datasets that cover table fact-checking, single-hop QA, and multi-hop QA:
 | Dataset | Link |
 |-----------------------|------|
 | MultiTableQA-TATQA | 🤗 [dataset link](https://huggingface.co/datasets/jiaruz2/MultiTableQA_TATQA) |
@@ -31,18 +33,8 @@ Other datasets in the MultiTableQA benchmark include:
 | MultiTableQA-WTQ | 🤗 [dataset link](https://huggingface.co/datasets/jiaruz2/MultiTableQA_WTQ) |
 | MultiTableQA-HybridQA | 🤗 [dataset link](https://huggingface.co/datasets/jiaruz2/MultiTableQA_HybridQA) |
 
----
-
-### Sample Usage
-
-To download and preprocess the **MultiTableQA** benchmark, navigate to the `table2graph` directory in the code repository and run the `prepare_data.sh` script:
-
-```bash
-cd table2graph
-bash scripts/prepare_data.sh
-```
 
-This script will automatically fetch the source tables, apply decomposition (row/column splitting), and generate the benchmark splits.
+MultiTableQA extends the traditional single-table QA setting into a multi-table retrieval and question answering benchmark, enabling more realistic and challenging evaluations.
 
 ---
 # Citation
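As a minimal sketch of how the datasets referenced in the card above could be fetched: the repo ids for the sibling datasets come from the table in the diff, while the id for this TabFact repo and the `load_multitableqa` helper are illustrative assumptions (the card says the benchmark has five datasets, but only four repo ids are visible in this diff).

```python
# Illustrative sketch, not part of the commit: MultiTableQA repo ids on the
# Hugging Face Hub. Only four ids are visible in the diff above; the TabFact
# id is assumed by analogy with the sibling repos.
MULTITABLEQA_REPOS = {
    "tabfact": "jiaruz2/MultiTableQA_TabFact",  # this repo: table fact-checking (assumed id)
    "tatqa": "jiaruz2/MultiTableQA_TATQA",
    "wtq": "jiaruz2/MultiTableQA_WTQ",
    "hybridqa": "jiaruz2/MultiTableQA_HybridQA",
}


def load_multitableqa(name: str):
    """Download one MultiTableQA dataset from the Hub (network required).

    Uses the Hugging Face `datasets` library with the default config; the
    TabFact card declares `data_files: tabfact_query.jsonl`.
    """
    from datasets import load_dataset  # third-party: pip install datasets

    return load_dataset(MULTITABLEQA_REPOS[name])


# Offline part of the sketch: list the known benchmark members.
print(sorted(MULTITABLEQA_REPOS))  # → ['hybridqa', 'tabfact', 'tatqa', 'wtq']
```

The actual `load_multitableqa("tabfact")` call needs network access and the `datasets` package installed; the split and field names it returns are not documented in this diff.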