fastRAG is a research framework for efficient and optimized retrieval augmented generative pipelines
fastRAG is a research framework that extends Haystack with the ability to build efficient and optimized retrieval augmented generative pipelines (with an emphasis on Intel hardware), incorporating state-of-the-art LLMs and Information Retrieval modules.
- Optimized RAG: Build RAG pipelines with SOTA efficient components for greater compute efficiency.
- Optimized for Intel Hardware: Leverage Intel® Extension for PyTorch (IPEX), 🤗 Optimum Intel and 🤗 Optimum Habana to run as optimally as possible on Intel® Xeon® processors and Intel® Gaudi® AI accelerators.
- Customizable: fastRAG is built using Haystack and Hugging Face. All of fastRAG’s components are 100% Haystack compatible, as the sketch below illustrates.
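As a rough illustration of that compatibility, the following minimal sketch (assuming Haystack 2.x) builds an ordinary Haystack pipeline; a fastRAG component is registered in exactly the same way, so the fastRAG-specific import shown in the comments is hypothetical and only marks where such a component would plug in:

```python
# Minimal sketch of a Haystack 2.x pipeline; the fastRAG-specific import in the
# comments below is hypothetical and only marks where an optimized component plugs in.
from haystack import Document, Pipeline
from haystack.components.builders import PromptBuilder
from haystack.components.retrievers.in_memory import InMemoryBM25Retriever
from haystack.document_stores.in_memory import InMemoryDocumentStore

store = InMemoryDocumentStore()
store.write_documents([Document(content="fastRAG extends Haystack with optimized RAG components.")])

template = """Answer the question using the context.
Context: {% for doc in documents %}{{ doc.content }} {% endfor %}
Question: {{ question }}"""

pipe = Pipeline()
pipe.add_component("retriever", InMemoryBM25Retriever(document_store=store))
pipe.add_component("prompt", PromptBuilder(template=template))
# A fastRAG generator (e.g. an IPEX- or Gaudi-backed LLM) would be added the same way:
#   from fastrag.generators import SomeOptimizedGenerator   # hypothetical import path
#   pipe.add_component("llm", SomeOptimizedGenerator(model="..."))
#   pipe.connect("prompt.prompt", "llm.prompt")
pipe.connect("retriever.documents", "prompt.documents")

query = "What does fastRAG add to Haystack?"
result = pipe.run({"retriever": {"query": query}, "prompt": {"question": query}})
print(result["prompt"]["prompt"])
```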
For a brief overview of the various unique components in fastRAG, refer to the Components Overview page.
| Component | Description |
|---|---|
| Intel Gaudi Accelerators | Running LLMs on Gaudi 2 |
| ONNX Runtime | Running LLMs with optimized ONNX Runtime |
| Llama-CPP | Running RAG pipelines with LLMs on a Llama CPP backend |
| Embedders | Optimized int8 bi-encoders |
| ColBERT | Token-based late interaction |
| Fusion-in-Decoder (FiD) | Generative multi-document encoder-decoder |
| REPLUG | Improved multi-document decoder |
| PLAID | Incredibly efficient indexing engine |
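To make the "token-based late interaction" entry above concrete, here is a minimal, self-contained sketch of ColBERT-style MaxSim scoring in plain PyTorch; it illustrates the idea only and is not fastRAG's implementation:

```python
# Token-based late interaction (ColBERT-style MaxSim): queries and documents are
# kept as per-token embedding matrices, and each query token contributes the
# similarity of its best-matching document token to the final score.
import torch

def late_interaction_score(q_emb: torch.Tensor, d_emb: torch.Tensor) -> torch.Tensor:
    """q_emb: [num_query_tokens, dim], d_emb: [num_doc_tokens, dim], both L2-normalized."""
    sim = q_emb @ d_emb.T                 # token-to-token similarity matrix
    return sim.max(dim=1).values.sum()    # MaxSim per query token, summed over the query

torch.manual_seed(0)
q = torch.nn.functional.normalize(torch.randn(8, 128), dim=-1)    # 8 query tokens
d = torch.nn.functional.normalize(torch.randn(120, 128), dim=-1)  # 120 document tokens
print(late_interaction_score(q, d))
```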
- Python 3.8 or higher.
- PyTorch 2.0 or higher.
To set up the software, clone the project and run the following, preferably in a newly created virtual environment:
```bash
git clone https://github.com/IntelLabs/fastRAG.git
```
There are several dependencies to consider, depending on your specific usage:

Basic installation:

```bash
pip install .
```

fastRAG with an Intel-optimized backend:

```bash
pip install .[intel]
```
Other installation options can be found here.
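A quick sanity check after installation, assuming the top-level Python module is named `fastrag`:

```bash
# Should exit silently if the installation succeeded (module name assumed to be fastrag).
python -c "import fastrag"
```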