Hybrid RAG Pipeline with Breakpoints
Last Updated: July 31, 2025
This notebook demonstrates how to set up breakpoints in a Haystack pipeline. In this case, we will set breakpoints in a hybrid retrieval-augmented generation (RAG) pipeline. The pipeline combines BM25 and embedding-based retrieval, then uses a transformer-based reranker and an LLM to generate answers.
Install packages
%%bash
pip install "haystack-ai>=2.16.0"
pip install "transformers[torch,sentencepiece]"
pip install "sentence-transformers>=3.0.0"
Set up the OpenAI API key
import os
from getpass import getpass
if "OPENAI_API_KEY" not in os.environ:
os.environ["OPENAI_API_KEY"] = getpass("Enter OpenAI API key:")
Import Required Libraries
First, let’s import all the necessary components from Haystack.
from haystack import Document, Pipeline
from haystack.components.builders import AnswerBuilder, ChatPromptBuilder
from haystack.components.embedders import SentenceTransformersDocumentEmbedder, SentenceTransformersTextEmbedder
from haystack.components.generators.chat import OpenAIChatGenerator
from haystack.components.joiners import DocumentJoiner
from haystack.components.rankers import TransformersSimilarityRanker
from haystack.components.retrievers.in_memory import InMemoryBM25Retriever, InMemoryEmbeddingRetriever
from haystack.components.writers import DocumentWriter
from haystack.dataclasses import ChatMessage
from haystack.document_stores.in_memory import InMemoryDocumentStore
from haystack.document_stores.types import DuplicatePolicy
Document Store Initialization
Let’s create a simple document store with some sample documents and their embeddings.
def indexing():
    """
    Index documents in a DocumentStore.
    """
    print("Indexing documents...")

    # Create sample documents
    documents = [
        Document(content="My name is Jean and I live in Paris. The weather today is 25°C."),
        Document(content="My name is Mark and I live in Berlin. The weather today is 15°C."),
        Document(content="My name is Giorgio and I live in Rome. The weather today is 30°C."),
    ]

    # Initialize document store and components
    document_store = InMemoryDocumentStore()
    doc_writer = DocumentWriter(document_store=document_store, policy=DuplicatePolicy.SKIP)
    doc_embedder = SentenceTransformersDocumentEmbedder(model="intfloat/e5-base-v2", progress_bar=False)

    # Build and run the ingestion pipeline
    ingestion_pipe = Pipeline()
    ingestion_pipe.add_component(instance=doc_embedder, name="doc_embedder")
    ingestion_pipe.add_component(instance=doc_writer, name="doc_writer")
    ingestion_pipe.connect("doc_embedder.documents", "doc_writer.documents")
    ingestion_pipe.run({"doc_embedder": {"documents": documents}})

    return document_store
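As an optional sanity check (not part of the pipeline itself), you can confirm that ingestion worked; count_documents() is part of the Haystack document store interface:
# Optional: index the sample documents and confirm they were written to the store
doc_store = indexing()
print(doc_store.count_documents())  # expected: 3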
A Hybrid Retrieval Pipeline
Now let’s build a hybrid RAG pipeline.
def hybrid_retrieval(doc_store):
    """
    A simple pipeline for hybrid retrieval using BM25 and embeddings.
    """

    # Initialize query embedder
    query_embedder = SentenceTransformersTextEmbedder(model="intfloat/e5-base-v2", progress_bar=False)

    # Define the prompt template for the LLM
    template = [
        ChatMessage.from_system(
            "You are a helpful AI assistant. Answer the following question based on the given context information only. If the context is empty or just a '\n' answer with None, example: 'None'."
        ),
        ChatMessage.from_user(
            """
            Context:
            {% for document in documents %}
                {{ document.content }}
            {% endfor %}

            Question: {{question}}
            """
        ),
    ]

    # Build the RAG pipeline
    rag_pipeline = Pipeline()

    # Add components to the pipeline
    rag_pipeline.add_component(instance=InMemoryBM25Retriever(document_store=doc_store), name="bm25_retriever")
    rag_pipeline.add_component(instance=query_embedder, name="query_embedder")
    rag_pipeline.add_component(instance=InMemoryEmbeddingRetriever(document_store=doc_store), name="embedding_retriever")
    rag_pipeline.add_component(instance=DocumentJoiner(sort_by_score=False), name="doc_joiner")
    rag_pipeline.add_component(instance=TransformersSimilarityRanker(model="intfloat/simlm-msmarco-reranker", top_k=5), name="ranker")
    rag_pipeline.add_component(instance=ChatPromptBuilder(template=template, required_variables=["question", "documents"]), name="prompt_builder")
    rag_pipeline.add_component(instance=OpenAIChatGenerator(), name="llm")
    rag_pipeline.add_component(instance=AnswerBuilder(), name="answer_builder")

    # Connect the components
    rag_pipeline.connect("query_embedder", "embedding_retriever.query_embedding")
    rag_pipeline.connect("embedding_retriever", "doc_joiner.documents")
    rag_pipeline.connect("bm25_retriever", "doc_joiner.documents")
    rag_pipeline.connect("doc_joiner", "ranker.documents")
    rag_pipeline.connect("ranker", "prompt_builder.documents")
    rag_pipeline.connect("prompt_builder", "llm")
    rag_pipeline.connect("llm.replies", "answer_builder.replies")
    rag_pipeline.connect("doc_joiner", "answer_builder.documents")

    return rag_pipeline
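Before adding breakpoints, it can help to verify the wiring visually. Recent Haystack releases let a pipeline render its own graph: show() displays it inline in a notebook, while draw() writes an image file (the default Mermaid-based renderer may require an internet connection):
# Optional: render the pipeline graph to verify the connections
doc_store = indexing()
rag_pipeline = hybrid_retrieval(doc_store)
rag_pipeline.show()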
Running the pipeline with breakpoints
Now we demonstrate how to set breakpoints in a Haystack pipeline to inspect and debug the pipeline execution at specific points. Breakpoints allow you to pause execution, save the current state of the pipeline, and later resume from where you left off.
We’ll run the pipeline with a breakpoint set at the query_embedder component. This will save the pipeline state before executing the query_embedder and raise a BreakpointException to stop execution.
from haystack.dataclasses.breakpoints import Breakpoint
break_point = Breakpoint(component_name="query_embedder", visit_count=0, snapshot_file_path="snapshots/")
# Initialize document store and pipeline
doc_store = indexing()
pipeline = hybrid_retrieval(doc_store)
# Define the query
question = "Where does Mark live?"
data = {
"query_embedder": {"text": question},
"bm25_retriever": {"query": question},
"ranker": {"query": question, "top_k": 10},
"prompt_builder": {"question": question},
"answer_builder": {"query": question},
}
pipeline.run(data, break_point=break_point)
Indexing documents...
TransformersSimilarityRanker is considered legacy and will no longer receive updates. It may be deprecated in a future release, with removal following after a deprecation period. Consider using SentenceTransformersSimilarityRanker instead, which provides the same functionality along with additional features.
---------------------------------------------------------------------------
BreakpointException Traceback (most recent call last)
Cell In[6], line 15
6 question = "Where does Mark live?"
7 data = {
8 "query_embedder": {"text": question},
9 "bm25_retriever": {"query": question},
(...)
12 "answer_builder": {"query": question},
13 }
---> 15 pipeline.run(data, break_point=break_point)
File ~/haystack-cookbook/.venv/lib/python3.12/site-packages/haystack/core/pipeline/pipeline.py:378, in Pipeline.run(self, data, include_outputs_from, break_point, pipeline_snapshot)
376 # trigger the breakpoint if needed
377 if should_trigger_breakpoint:
--> 378 _trigger_break_point(
379 pipeline_snapshot=new_pipeline_snapshot, pipeline_outputs=pipeline_outputs
380 )
382 component_outputs = self._run_component(
383 component_name=component_name,
384 component=component,
(...)
387 parent_span=span,
388 )
390 # Updates global input state with component outputs and returns outputs that should go to
391 # pipeline outputs.
File ~/haystack-cookbook/.venv/lib/python3.12/site-packages/haystack/core/pipeline/breakpoint.py:299, in _trigger_break_point(pipeline_snapshot, pipeline_outputs)
297 component_visits = pipeline_snapshot.pipeline_state.component_visits
298 msg = f"Breaking at component {component_name} at visit count {component_visits[component_name]}"
--> 299 raise BreakpointException(
300 message=msg, component=component_name, inputs=pipeline_snapshot.pipeline_state.inputs, results=pipeline_outputs
301 )
BreakpointException: Breaking at component query_embedder at visit count 0
This run is interrupted with BreakpointException: Breaking at component query_embedder at visit count 0, and it generates a JSON file in the "snapshots" directory containing a snapshot of the pipeline state before the query_embedder component ran.
The snapshot files, named after the component associated with the breakpoint, can be inspected and edited, and later injected into a pipeline to resume execution from the point where the breakpoint was triggered.
!ls snapshots/
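Snapshot files are plain JSON, so you can peek inside with the standard library before resuming; the file name below matches the snapshot produced in this run and will differ on your machine:
import json

# Load the snapshot file written when the breakpoint fired
with open("snapshots/query_embedder_2025_07_26_12_58_26.json") as f:
    snapshot_data = json.load(f)

# The top-level keys reveal the snapshot schema
print(list(snapshot_data.keys()))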
Resuming from a breakpoint
We can then resume a pipeline from its saved pipeline_snapshot by passing it to the Pipeline.run() method. This will run the pipeline to the end.
# Load the pipeline_snapshot and continue execution
from haystack.core.pipeline.breakpoint import load_pipeline_snapshot
snapshot = load_pipeline_snapshot("snapshots/query_embedder_2025_07_26_12_58_26.json")
result = pipeline.run(data={}, pipeline_snapshot=snapshot)
# Print the results
print(result['answer_builder']['answers'][0].data)
print(result['answer_builder']['answers'][0].meta)
Mark lives in Berlin.
{'model': 'gpt-4o-mini-2024-07-18', 'index': 0, 'finish_reason': 'stop', 'usage': {'completion_tokens': 5, 'prompt_tokens': 124, 'total_tokens': 129, 'completion_tokens_details': {'accepted_prediction_tokens': 0, 'audio_tokens': 0, 'reasoning_tokens': 0, 'rejected_prediction_tokens': 0}, 'prompt_tokens_details': {'audio_tokens': 0, 'cached_tokens': 0}}, 'all_messages': [ChatMessage(_role=<ChatRole.ASSISTANT: 'assistant'>, _content=[TextContent(text='Mark lives in Berlin.')], _name=None, _meta={'model': 'gpt-4o-mini-2024-07-18', 'index': 0, 'finish_reason': 'stop', 'usage': {'completion_tokens': 5, 'prompt_tokens': 124, 'total_tokens': 129, 'completion_tokens_details': {'accepted_prediction_tokens': 0, 'audio_tokens': 0, 'reasoning_tokens': 0, 'rejected_prediction_tokens': 0}, 'prompt_tokens_details': {'audio_tokens': 0, 'cached_tokens': 0}}})]}
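The meta dictionary also carries token accounting, which is useful when comparing prompt variants; for example:
# Inspect the token usage reported by the LLM for this run
usage = result["answer_builder"]["answers"][0].meta["usage"]
print(usage["prompt_tokens"], usage["completion_tokens"], usage["total_tokens"])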
Advanced Use Cases for Pipeline Breakpoints
Here are some advanced scenarios where pipeline breakpoints can be particularly valuable:
- Set a breakpoint at the LLM to try the results of different prompts and iterate in real time.
- Place a breakpoint after the document retriever to examine and modify the retrieved documents (see the sketch after this list).
- Set a breakpoint before a component to inject gold-standard inputs and isolate whether issues stem from input quality or downstream logic.
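For the second use case, a loaded snapshot exposes the saved state programmatically; component_visits and inputs are the same pipeline_state attributes that appear in the tracebacks above. A minimal sketch, with a hypothetical snapshot file name and a breakpoint assumed to be set at the ranker:
from haystack.core.pipeline.breakpoint import load_pipeline_snapshot

# Hypothetical file name: use a snapshot produced by a breakpoint on the ranker
snapshot = load_pipeline_snapshot("snapshots/ranker_2025_07_26_13_00_00.json")

# component_visits records how often each component has run;
# inputs holds the pending inputs, including the retrieved documents to edit
print(snapshot.pipeline_state.component_visits)
print(snapshot.pipeline_state.inputs)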
To demonstrate the first use case, we reuse the same query pipeline with a new question. First, we run the pipeline with the prompt that we originally passed to the prompt_builder. Then, we set a breakpoint at the prompt_builder to try an alternative prompt. This allows us to compare the results generated by different prompts without running the whole pipeline again.
# Initialize document store and pipeline
doc_store = indexing()
pipeline = hybrid_retrieval(doc_store)
# Define the query
question = "What's the temperature difference between the warmest and coldest city?"
data = {
"query_embedder": {"text": question},
"bm25_retriever": {"query": question},
"ranker": {"query": question, "top_k": 10},
"prompt_builder": {"question": question},
"answer_builder": {"query": question},
}
break_point = Breakpoint(component_name="prompt_builder", visit_count=0, snapshot_file_path="snapshots/")
pipeline.run(data, break_point=break_point)
TransformersSimilarityRanker is considered legacy and will no longer receive updates. It may be deprecated in a future release, with removal following after a deprecation period. Consider using SentenceTransformersSimilarityRanker instead, which provides the same functionality along with additional features.
Indexing documents...
---------------------------------------------------------------------------
BreakpointException Traceback (most recent call last)
Cell In[11], line 18
7 data = {
8 "query_embedder": {"text": question},
9 "bm25_retriever": {"query": question},
(...)
12 "answer_builder": {"query": question},
13 }
16 break_point = Breakpoint(component_name="prompt_builder", visit_count=0, snapshot_file_path="snapshots/")
---> 18 pipeline.run(data, break_point=break_point)
File ~/haystack-cookbook/.venv/lib/python3.12/site-packages/haystack/core/pipeline/pipeline.py:378, in Pipeline.run(self, data, include_outputs_from, break_point, pipeline_snapshot)
376 # trigger the breakpoint if needed
377 if should_trigger_breakpoint:
--> 378 _trigger_break_point(
379 pipeline_snapshot=new_pipeline_snapshot, pipeline_outputs=pipeline_outputs
380 )
382 component_outputs = self._run_component(
383 component_name=component_name,
384 component=component,
(...)
387 parent_span=span,
388 )
390 # Updates global input state with component outputs and returns outputs that should go to
391 # pipeline outputs.
File ~/haystack-cookbook/.venv/lib/python3.12/site-packages/haystack/core/pipeline/breakpoint.py:299, in _trigger_break_point(pipeline_snapshot, pipeline_outputs)
297 component_visits = pipeline_snapshot.pipeline_state.component_visits
298 msg = f"Breaking at component {component_name} at visit count {component_visits[component_name]}"
--> 299 raise BreakpointException(
300 message=msg, component=component_name, inputs=pipeline_snapshot.pipeline_state.inputs, results=pipeline_outputs
301 )
BreakpointException: Breaking at component prompt_builder at visit count 0
Now we can manually insert a different template into the prompt_builder and inspect the results. To do this, we update the template input of the prompt_builder component in the snapshot file.
# Alternative system prompt to place into the snapshot's prompt_builder inputs
template = ChatMessage.from_system(
"""You are a mathematical analysis assistant. Follow these steps:
1. Identify all temperatures mentioned
2. Find the maximum and minimum values
3. Calculate their difference
4. Format response as: 'The temperature difference is X°C (max Y°C in [city] - min Z°C in [city])'
Use ONLY the information provided in the context."""
)
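If you prefer to locate the template programmatically rather than searching the JSON by hand, here is a sketch under the assumption that the saved inputs are keyed by component name (confirm against your snapshot file before editing; the file name is the one listed by ls below):
from haystack.core.pipeline.breakpoint import load_pipeline_snapshot

# Assumption: pipeline_state.inputs is keyed by component name; verify in your file
snapshot = load_pipeline_snapshot("snapshots/prompt_builder_2025_07_26_13_01_23.json")
print(snapshot.pipeline_state.inputs["prompt_builder"])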
Now we just load the snapshot file and resume the pipeline with the updated snapshot.
!ls snapshots/prompt_builder*
snapshots/prompt_builder_2025_07_26_13_01_23.json
snapshot = load_pipeline_snapshot("snapshots/prompt_builder_2025_07_26_13_01_23.json")
result = pipeline.run(data={}, pipeline_snapshot=snapshot)
print(result['answer_builder']['answers'][0].data)
The temperature in Rome is 30°C and in Berlin is 15°C. The temperature difference between the warmest (Rome) and the coldest (Berlin) city is 30°C - 15°C = 15°C.