Datasets:
Update README.md
README.md CHANGED

@@ -18,11 +18,13 @@ configs:
   data_files: tabfact_query.jsonl
 ---
 
-📃 [
+📃 [Paper](https://arxiv.org/abs/2504.01346) | 👨🏻‍💻 [Code](https://github.com/jiaruzouu/T-RAG)
 
-
+## 📖 Introduction
 
-
+Retrieval-Augmented Generation (RAG) has become a key paradigm for enhancing Large Language Models (LLMs) with external knowledge. While most RAG systems focus on **text corpora**, real-world information is often stored in **tables** across web pages, Wikipedia, and relational databases. Existing methods struggle to retrieve and reason across **multiple heterogeneous tables**.
+
+For MultiTableQA, we release a comprehensive benchmark, including five different datasets covering table fact-checking, single-hop QA, and multi-hop QA:
 | Dataset | Link |
 |-----------------------|------|
 | MultiTableQA-TATQA | 🤗 [dataset link](https://huggingface.co/datasets/jiaruz2/MultiTableQA_TATQA) |
@@ -31,18 +33,8 @@ Other datasets in the MultiTableQA benchmark include:
 | MultiTableQA-WTQ | 🤗 [dataset link](https://huggingface.co/datasets/jiaruz2/MultiTableQA_WTQ) |
 | MultiTableQA-HybridQA | 🤗 [dataset link](https://huggingface.co/datasets/jiaruz2/MultiTableQA_HybridQA)|
 
----
-
-### Sample Usage
-
-To download and preprocess the **MultiTableQA** benchmark, navigate to the `table2graph` directory in the code repository and run the `prepare_data.sh` script:
-
-```bash
-cd table2graph
-bash scripts/prepare_data.sh
-```
 
-
+MultiTableQA extends the traditional single-table QA setting into a multi-table retrieval and question answering benchmark, enabling more realistic and challenging evaluations.
 
 ---
 # Citation