RAG QA Logs & Corpus: Multi-Table Dataset for Retrieval QA Evaluation

Discussion #3 · opened by tarekmasryo

🔎 Production-style synthetic RAG evaluation data: corpus → chunks → retrieval/rerank → answers → judgments.

📦 Dataset snapshot

  • Multi-table package with joinable IDs (doc/chunk/query/run-style keys)
  • Includes evaluation outcomes (correctness / hallucination / faithfulness-style signals)

🔎 What's inside

  • Corpus + chunked text tables
  • Query/log tables (runs, scenarios, strategies)
  • Retrieval and ranking outputs (top-k style results)
  • Answer + evaluation/judgment tables
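The joinable IDs above can be exercised with a small pandas sketch. The column names here (`doc_id`, `chunk_id`, `query_id`, `rank`) are assumptions standing in for the dataset's actual keys, and the tiny frames stand in for the real tables:

```python
import pandas as pd

# Toy stand-ins for the corpus, chunk, and retrieval tables.
# Column names are hypothetical -- adjust to the dataset's actual schema.
docs = pd.DataFrame({"doc_id": ["d1", "d2"], "title": ["A", "B"]})
chunks = pd.DataFrame({
    "chunk_id": ["c1", "c2", "c3"],
    "doc_id": ["d1", "d1", "d2"],
    "text": ["...", "...", "..."],
})
retrieval = pd.DataFrame({
    "query_id": ["q1", "q1"],
    "chunk_id": ["c1", "c3"],
    "rank": [1, 2],
})

# Join retrieved chunks back to their chunk text and source documents.
joined = (retrieval
          .merge(chunks, on="chunk_id", how="left")
          .merge(docs, on="doc_id", how="left"))
print(joined[["query_id", "rank", "doc_id", "title"]])
```

The same two-step merge pattern extends to the answer and judgment tables via the query/run keys.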

✅ Good for
📊 RAG dashboards (quality × latency × cost trade-offs)
🧪 Comparing retrieval strategies (BM25 vs dense vs hybrid; with/without rerank)
🤖 Risk scoring prototypes (predict likely failure/hallucination from telemetry)
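A strategy comparison can be sketched as a per-strategy aggregation over the query logs. The field names here (`strategy`, `hit_at_k`, `latency_ms`) are hypothetical placeholders, not the dataset's real columns:

```python
import pandas as pd

# Hypothetical per-query log rows: which strategy ran, whether a gold
# chunk appeared in the top-k, and retrieval latency in milliseconds.
logs = pd.DataFrame({
    "strategy":   ["bm25", "bm25", "dense", "dense", "hybrid", "hybrid"],
    "hit_at_k":   [1, 0, 1, 1, 1, 1],
    "latency_ms": [12, 15, 40, 38, 55, 60],
})

# Quality x latency summary per retrieval strategy.
summary = logs.groupby("strategy").agg(
    hit_rate=("hit_at_k", "mean"),
    median_latency_ms=("latency_ms", "median"),
)
print(summary)
```

Adding a cost column to the same `agg` call yields the quality × latency × cost view a dashboard would plot.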

⚠️ Notes

  • Fully synthetic benchmark, so it is well suited for demonstrating workflows and regression-testing examples.
  • Keep strict group-safe splits (e.g., grouped by document or query ID) when training/validating to avoid contamination.
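One way to enforce a group-safe split is scikit-learn's `GroupShuffleSplit`, grouping on a document key so no document's rows leak across train and test. The `doc_id` column here is a hypothetical stand-in for the dataset's actual grouping key:

```python
import pandas as pd
from sklearn.model_selection import GroupShuffleSplit

# Toy frame: each row is one training example; doc_id is the grouping key.
df = pd.DataFrame({
    "doc_id": ["d1", "d1", "d2", "d2", "d3", "d3", "d4", "d4"],
    "label":  [0, 1, 0, 1, 1, 0, 1, 0],
})

# Hold out whole documents, not individual rows.
splitter = GroupShuffleSplit(n_splits=1, test_size=0.25, random_state=0)
train_idx, test_idx = next(splitter.split(df, groups=df["doc_id"]))

train_docs = set(df.iloc[train_idx]["doc_id"])
test_docs = set(df.iloc[test_idx]["doc_id"])
assert train_docs.isdisjoint(test_docs)  # no document appears in both splits
```

A plain row-level `train_test_split` would scatter chunks of the same document across both sides, which is exactly the contamination the note warns about.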

🧾 Not for high-stakes decisions (clinical/legal/financial).
