63: Benchmarking Graph-based RAG for Open-domain Question Answering
Tuesday, Aug 5: 2:00 PM - 3:50 PM
1048
Contributed Posters
Music City Center
We benchmark graph-based retrieval-augmented generation (RAG) systems across a broad spectrum of query types, including OLTP-style (fact-based) and OLAP-style (thematic) queries, to address the complex demands of open-domain question answering (QA). Traditional RAG methods often fall short on nuanced, multi-document synthesis tasks. By structuring knowledge as graphs, retrieval can surface context with greater semantic depth, giving the language model richer grounding for synthesis. We survey several graph-based RAG methodologies and introduce TREX, a novel, cost-effective alternative that combines graph-based and vector-based retrieval techniques. Our extensive benchmarking across four diverse datasets highlights scenarios where each approach excels and reveals the limitations of current evaluation methods, motivating new metrics for assessing answer correctness. In a real-world technical support case study, we demonstrate how graph-based RAG can surpass conventional vector-based RAG in efficiently synthesizing data from heterogeneous sources.
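The abstract states only that TREX combines graph-based and vector-based retrieval; the published details are not given here. A minimal sketch of that general pattern, under assumed data structures (embedding dicts and an adjacency-list graph, both hypothetical), is to rank documents by vector similarity first and then expand the top seeds with their graph neighbors, so that documents linked to a strong match are retrieved even when their own embeddings score poorly:

```python
import math

def cosine(a, b):
    # Cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def hybrid_retrieve(query_vec, doc_vecs, graph, k=3):
    """Illustrative hybrid retrieval (not the published TREX algorithm):
    vector-rank documents, then pull in 1-hop graph neighbors of the
    top-k seeds so structurally related context joins the result set."""
    # Stage 1: vector retrieval -- top-k seeds by cosine similarity.
    ranked = sorted(doc_vecs, key=lambda d: cosine(query_vec, doc_vecs[d]),
                    reverse=True)
    seeds = ranked[:k]
    # Stage 2: graph expansion -- add neighbors of each seed, preserving order.
    selected = list(seeds)
    for seed in seeds:
        for neighbor in graph.get(seed, []):
            if neighbor not in selected:
                selected.append(neighbor)
    return selected

# Tiny usage example: "c" has an embedding dissimilar to the query but is
# linked to the top seed "a" in the graph, so graph expansion recovers it.
docs = {"a": [1.0, 0.0], "b": [0.9, 0.1], "c": [0.0, 1.0]}
graph = {"a": ["c"]}
print(hybrid_retrieve([1.0, 0.0], docs, graph, k=1))  # -> ['a', 'c']
```

This captures why a hybrid can beat pure vector RAG on thematic (OLAP-style) queries: graph edges carry relationships that embedding similarity alone misses.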
Keywords: GraphRAG, TREX, question answering, large language models (LLMs), benchmarking
Main Sponsor: Section on Text Analysis