GLOW-QA: Leveraging LLM-GNN Integration for Open-World Question Answering over Knowledge Graphs
Abstract
Open-world Question Answering (OW-QA) over knowledge graphs (KGs) aims to answer questions when the underlying KG is incomplete or evolving.
Traditional KGQA assumes a closed world where answers must exist in the KG, limiting real-world applicability.
In contrast, open-world QA requires inferring missing knowledge based on graph structure and context.
Large language models (LLMs) excel at language understanding but lack structured reasoning.
Graph neural networks (GNNs) model graph topology but struggle with semantic interpretation.
Existing systems integrate LLMs with GNNs or graph retrievers. Some support open-world QA but rely on structural embeddings without semantic grounding.
Most assume observed paths or complete graphs, making them unreliable under missing links or multi-hop reasoning.
We present GLOW, a hybrid system that combines a pre-trained GNN and an LLM for open-world KGQA.
The GNN predicts top-$k$ candidate answers from the graph structure. These, along with relevant KG facts, are serialized into a structured prompt (e.g., triples and candidates) to guide the LLM's reasoning.
This enables joint reasoning over symbolic and semantic signals, without relying on retrieval or fine-tuning.
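The candidate-ranking and prompt-serialization step described above can be sketched as follows. This is a minimal illustrative sketch, not GLOW's actual implementation: the function name `build_prompt`, the triple notation, and the prompt layout are all assumptions for exposition.

```python
# Hypothetical sketch of GLOW's prompt-construction step: rank the GNN's
# candidate scores, keep the top-k, and serialize KG triples plus candidates
# into a structured prompt for the LLM. Names and prompt format are
# illustrative assumptions, not the paper's exact design.

def build_prompt(question, triples, candidate_scores, k=3):
    """Serialize KG facts and top-k GNN candidates into an LLM prompt."""
    # Rank candidate answers by GNN score, descending, and keep the top k.
    top_k = sorted(candidate_scores, key=candidate_scores.get, reverse=True)[:k]
    # Serialize KG facts as (head, relation, tail) triples, one per line.
    fact_lines = "\n".join(f"({h}, {r}, {t})" for h, r, t in triples)
    # List candidates with their GNN scores so the LLM sees both signals.
    cand_lines = "\n".join(
        f"{i + 1}. {c} (score: {candidate_scores[c]:.2f})"
        for i, c in enumerate(top_k)
    )
    return (
        f"Question: {question}\n"
        f"Known facts:\n{fact_lines}\n"
        f"Candidate answers (GNN-ranked):\n{cand_lines}\n"
        f"Answer:"
    )


# Example usage with toy data:
triples = [("Paris", "capital_of", "France"), ("France", "located_in", "Europe")]
scores = {"France": 0.92, "Europe": 0.41, "Paris": 0.10}
print(build_prompt("Which country is Paris the capital of?", triples, scores, k=2))
```

Because the prompt carries both symbolic facts (triples) and the GNN's structural ranking, the LLM can weigh graph evidence against its own semantic knowledge without any retrieval or fine-tuning step.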
To evaluate generalization, we introduce GLOW-Bench, a 1,000-question benchmark over incomplete KGs across diverse domains.
GLOW outperforms existing LLM–GNN systems on standard benchmarks and GLOW-Bench, with improvements of up to 53.3% and 38% on average.
Figure 1: The GLOW-QA pipeline phases.
Table 1: Average accuracy (%) on open-world KGQA tasks, grouped by reasoning hop count.
GLOW-GN significantly outperforms the baseline methods on both 1-hop and 2-hop questions.
All methods use Qwen3-8B as the underlying LLM.