Diagnosing and Addressing Pitfalls in KG-RAG Datasets: Toward More Reliable Benchmarking

Published in NeurIPS 2025 Proceedings, Datasets and Benchmarks Track, 2025

Download paper here

In this work, we diagnose key pitfalls in widely used KGQA datasets. In addition, we propose a universal framework that leverages LLMs to automatically construct high-quality benchmarks for KGQA.
