RAG QA Logs & Corpus: Multi-Table Dataset for Retrieval QA Evaluation
#3 by tarekmasryo
Production-style synthetic RAG evaluation data: corpus → chunks → retrieval/rerank → answers → judgments.
📦 Dataset snapshot
- Multi-table package with joinable IDs (doc/chunk/query/run-style keys)
- Includes evaluation outcomes (correctness / hallucination / faithfulness-style signals)
What's inside
- Corpus + chunked text tables
- Query/log tables (runs, scenarios, strategies)
- Retrieval and ranking outputs (top-k style results)
- Answer + evaluation/judgment tables (a join sketch follows this list)
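To make the joinable keys concrete, here is a minimal join sketch. The file names and key columns (`corpus.csv`, `chunks.csv`, `doc_id`, `chunk_id`, `query_id`, `run_id`) are illustrative assumptions, not the dataset's actual schema; check the table files on the card for the real names.

```python
# Minimal join sketch -- file and column names below are assumptions, not the real schema.
import pandas as pd

corpus    = pd.read_csv("corpus.csv")     # doc_id, title, source, ...
chunks    = pd.read_csv("chunks.csv")     # chunk_id, doc_id, text, ...
retrieval = pd.read_csv("retrieval.csv")  # query_id, chunk_id, rank, score, ...
answers   = pd.read_csv("answers.csv")    # query_id, run_id, answer_text, latency_ms, ...
judgments = pd.read_csv("judgments.csv")  # query_id, run_id, correct, hallucinated, ...

# Walk the pipeline: retrieval hit -> chunk -> source document.
hits = (retrieval
        .merge(chunks, on="chunk_id", how="left")
        .merge(corpus, on="doc_id", how="left"))

# Attach judgments to the answers they evaluate.
evaluated = answers.merge(judgments, on=["query_id", "run_id"], how="left")

print(hits.head())
print(evaluated.head())
```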
✅ Good for
RAG dashboards (quality × latency × cost trade-offs)
🧪 Comparing retrieval strategies (BM25 vs dense vs hybrid; with/without rerank); see the comparison sketch after this list
🤖 Risk scoring prototypes (predict likely failure/hallucination from telemetry)
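As one example of a strategy comparison, the sketch below aggregates judgments per retrieval strategy. Column names such as `strategy`, `rerank`, `correct`, `hallucinated`, and `latency_ms` are assumptions about the logs, so adapt them to the actual schema.

```python
# Strategy comparison sketch -- table/column names are assumptions, not the real schema.
import pandas as pd

retrieval = pd.read_csv("retrieval.csv")  # query_id, strategy, rerank, ...
answers   = pd.read_csv("answers.csv")    # query_id, run_id, latency_ms, ...
judgments = pd.read_csv("judgments.csv")  # query_id, run_id, correct, hallucinated, ...

runs = (answers
        .merge(judgments, on=["query_id", "run_id"], how="left")
        .merge(retrieval[["query_id", "strategy", "rerank"]].drop_duplicates(),
               on="query_id", how="left"))

# Quality x latency per strategy, with and without rerank.
summary = (runs
           .groupby(["strategy", "rerank"])
           .agg(accuracy=("correct", "mean"),
                hallucination_rate=("hallucinated", "mean"),
                p50_latency_ms=("latency_ms", "median"))
           .sort_values("accuracy", ascending=False))
print(summary)
```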
⚠️ Notes
- Fully synthetic benchmark: well suited to workflow demos and regression-testing examples.
- Keep strict group-safe splits when training/validating (all rows from a given document or query on one side) to avoid contamination; a split sketch follows below.
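A minimal sketch of a group-safe split, assuming a chunk-level table with a `doc_id` column (the file and column names are assumptions): all rows from the same document stay on one side of the split, so near-duplicate text cannot leak from train into validation.

```python
# Group-safe split sketch -- chunks.csv / doc_id are assumed names, not the real schema.
import pandas as pd
from sklearn.model_selection import GroupShuffleSplit

chunks = pd.read_csv("chunks.csv")  # chunk_id, doc_id, text, ...

splitter = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=42)
train_idx, valid_idx = next(splitter.split(chunks, groups=chunks["doc_id"]))
train, valid = chunks.iloc[train_idx], chunks.iloc[valid_idx]

# Sanity check: no document contributes rows to both splits.
assert set(train["doc_id"]).isdisjoint(set(valid["doc_id"]))
```

The same idea applies at the query level: group by `query_id` if you train on per-query telemetry.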
🧾 Not for high-stakes decisions (clinical/legal/financial).