RAPTOR: Recursive Abstractive Processing for Tree-Organized Retrieval

February 19, 2024

A new information retrieval paper was published recently: RAPTOR (Recursive Abstractive Processing for Tree-Organized Retrieval) represents a leap forward in retrieval-augmented language models. Developed by a team from Stanford University, RAPTOR addresses a critical limitation of existing approaches, which struggle to incorporate comprehensive document context during retrieval, hindering their ability to adapt to new information and access detailed knowledge.

RAPTOR introduces a novel method that recursively embeds, clusters, and summarizes text chunks, constructing a hierarchical tree that captures information at various levels of abstraction. This tree structure, rich in layered summaries, allows the model to efficiently retrieve information that spans a document, ensuring that even complex, multi-step reasoning tasks benefit from a holistic understanding of the content. The paper summarizes it thus:

“Building on the idea that long texts often present subtopics and hierarchical structures (Cao & Wang, 2022; Dong et al., 2023b), RAPTOR addresses the issue of semantic depth and connection in reading by building a recursive tree structure that balances broader thematic comprehension with granular details and which allows nodes to be grouped based on semantic similarity not just order in the text.”

Said more simply, this approach relies on the high likelihood that a given document or corpus is made up of subtopics, with related topics and subtopics spread throughout. RAPTOR chunks the text, clusters related chunks together, summarizes each cluster, and then summarizes those summaries, enriching the resulting tree and making searches more effective.
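To make that loop concrete, here is a minimal sketch of the recursive embed–cluster–summarize cycle. A few hedges: the paper clusters with Gaussian mixture models over UMAP-reduced embeddings, while this sketch substitutes scikit-learn's k-means for brevity, and `embed` and `summarize` are trivial placeholders standing in for a real embedding model and a summarization LLM.

```python
# Minimal sketch of RAPTOR-style tree construction (not the authors' code).
from dataclasses import dataclass, field
from sklearn.cluster import KMeans

@dataclass
class Node:
    text: str
    embedding: list[float]
    children: list["Node"] = field(default_factory=list)

def embed(text: str) -> list[float]:
    # Placeholder: a character-frequency vector. Swap in a real
    # sentence-embedding model in practice.
    vec = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - ord("a")] += 1.0
    return vec

def summarize(texts: list[str]) -> str:
    # Placeholder: concatenate and truncate. In RAPTOR this is an LLM
    # abstracting a cluster into a new, shorter passage.
    return " ".join(texts)[:200]

def build_tree(chunks: list[str], branching: int = 5) -> list[Node]:
    """Recursively embed, cluster, and summarize until one layer remains."""
    layer = [Node(text=c, embedding=embed(c)) for c in chunks]
    while len(layer) > 1:
        k = max(1, len(layer) // branching)
        labels = KMeans(n_clusters=k, n_init="auto").fit_predict(
            [n.embedding for n in layer]
        )
        next_layer = []
        for cluster_id in range(k):
            members = [n for n, lab in zip(layer, labels) if lab == cluster_id]
            if not members:
                continue
            summary = summarize([m.text for m in members])
            next_layer.append(
                Node(text=summary, embedding=embed(summary), children=members)
            )
        layer = next_layer  # summaries feed the next round of clustering
    return layer
```

The key design point is in that last comment: each summary node is re-embedded so the next round of clustering operates on the summaries themselves, which is what lets themes scattered throughout a document end up grouped under the same node.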

The significance of this approach is underscored by its benchmark results: coupled with GPT-4, RAPTOR demonstrates a 20% increase in accuracy on the QuALITY question-answering benchmark.

The advent of large language models (LLMs) has undeniably revolutionized NLP, offering impressive capabilities in generating human-like text and answering queries with a high degree of accuracy. However, their reliance on static knowledge encoded during training limits their adaptability to new information or specific domain knowledge. RAPTOR, and Retrieval-Augmented Generation (RAG) more generally, circumvents this issue by dynamically integrating external, up-to-date information into the LLM's processing, enhancing both the model's flexibility and its depth of understanding.
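The retrieval side can then query the tree built above. The sketch below, reusing `Node` and `embed` from the earlier snippet, follows the "collapsed tree" strategy described in the paper: nodes from every level compete in a single flat pool, so a query can match a granular leaf chunk or a high-level summary, whichever is closest. The cosine-similarity scoring and `top_k` cutoff here are conventional choices, not a claim about the authors' exact implementation.

```python
# "Collapsed tree" retrieval: flatten the tree and rank every node,
# leaf or summary, against the query. Reuses Node/embed from above.
import math

def flatten(nodes: list[Node]) -> list[Node]:
    """Collect every node at every level into one flat candidate pool."""
    out: list[Node] = []
    for n in nodes:
        out.append(n)
        out.extend(flatten(n.children))
    return out

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm if norm else 0.0

def retrieve(roots: list[Node], query: str, top_k: int = 5) -> list[str]:
    """Return the top_k node texts most similar to the query embedding."""
    q = embed(query)
    ranked = sorted(
        flatten(roots), key=lambda n: cosine(q, n.embedding), reverse=True
    )
    return [n.text for n in ranked[:top_k]]

# The retrieved texts are then prepended to the prompt of a generation
# model (GPT-4 in the paper's strongest configuration) to form the answer.
```

The paper also evaluates a top-down tree-traversal query strategy, but reports the collapsed-tree approach performing better in its comparisons, since it does not force the query to commit to a branch early.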


RAPTOR: Recursive Abstractive Processing for Tree-Organized Retrieval (arxiv.org)