Integration: Thunderbolt
Use Thunderbolt as a cross-platform AI client for your Haystack pipelines through Hayhooks
Overview
Thunderbolt is an open-source, cross-platform AI client developed by MZLA Technologies (Thunderbird). It runs on web, iOS, Android, Mac, Linux, and Windows, and works with any OpenAI-compatible model endpoint, including self-hosted ones.
By exposing your Haystack pipeline through Hayhooks as an OpenAI-compatible endpoint, you can connect Thunderbolt to your pipeline and interact with it from any device, without building a frontend yourself.
Thunderbolt is designed for enterprise on-prem deployments but can be self-hosted locally for development and testing.
Setup
1. Expose your Haystack pipeline with Hayhooks
Install Hayhooks:
pip install hayhooks
Create a pipeline wrapper that implements run_chat_completion:
# pipelines/my_rag/pipeline_wrapper.py
from typing import Generator

from haystack import Pipeline
from haystack.components.builders import ChatPromptBuilder
from haystack.components.generators.chat import OpenAIChatGenerator
from haystack.dataclasses import ChatMessage
from hayhooks import BasePipelineWrapper, streaming_generator


class PipelineWrapper(BasePipelineWrapper):
    def setup(self) -> None:
        self.system_message = ChatMessage.from_system("You are a helpful assistant.")
        prompt_builder = ChatPromptBuilder()
        llm = OpenAIChatGenerator(model="gpt-4o-mini")

        self.pipeline = Pipeline()
        self.pipeline.add_component("prompt_builder", prompt_builder)
        self.pipeline.add_component("llm", llm)
        self.pipeline.connect("prompt_builder.prompt", "llm.messages")

    def run_chat_completion(self, model: str, messages: list[dict], body: dict) -> Generator:
        chat_messages = [self.system_message] + [
            ChatMessage.from_openai_dict_format(msg) for msg in messages
        ]
        return streaming_generator(
            pipeline=self.pipeline,
            pipeline_run_args={"prompt_builder": {"template": chat_messages}},
        )
Start Hayhooks:
hayhooks run --pipelines-dir ./pipelines
This exposes your pipeline at http://localhost:1416/v1 as an OpenAI-compatible endpoint. See the Hayhooks OpenAI compatibility docs for details.
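Before wiring up a client, you can sanity-check the endpoint with a plain OpenAI-style chat completions request. A minimal sketch using only the Python standard library; it builds the request without sending it, and assumes the pipeline was deployed under the name my_rag (the deployed pipeline name is what clients pass as the model):

```python
import json
import urllib.request

# OpenAI-compatible chat completions payload. "my_rag" stands in for
# your deployed pipeline name (an assumption for this example).
payload = {
    "model": "my_rag",
    "messages": [{"role": "user", "content": "What does this pipeline do?"}],
    "stream": True,
}

# Build the request against the local Hayhooks server. Sending it with
# urllib.request.urlopen(req) requires Hayhooks to be running.
req = urllib.request.Request(
    "http://localhost:1416/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
print(req.full_url)
```

Any client that can emit this request shape, Thunderbolt included, can talk to the pipeline.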
2. Deploy Thunderbolt
Follow the Thunderbolt deployment guide to self-host Thunderbolt with Docker Compose or Kubernetes, or run it locally for development. See the development guide to get started quickly.
Usage
Once Hayhooks is running and Thunderbolt is deployed:
- Open Thunderbolt and go to Settings → Model Providers.
- Add a new provider with a custom OpenAI-compatible base URL pointing to your Hayhooks server (e.g. http://localhost:1416/v1).
- Select your Haystack pipeline as the model.
- Start chatting: your messages are routed through Hayhooks to your Haystack pipeline.
This gives you a polished, cross-platform chat interface backed by whatever Haystack pipeline you choose: RAG, agents, or a custom workflow.
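Because run_chat_completion returns a streaming generator, responses arrive as server-sent events in the OpenAI streaming format, which is what the client renders token by token. A small sketch of how one such event line is decoded (the example line is illustrative, not captured from a real response):

```python
import json

def extract_delta(line: str) -> str:
    """Return the text delta from one OpenAI-style SSE line, or '' otherwise."""
    if not line.startswith("data: ") or line == "data: [DONE]":
        return ""
    event = json.loads(line[len("data: "):])
    return event["choices"][0]["delta"].get("content", "")

# One event line in the OpenAI streaming format (example data).
chunk = 'data: {"choices": [{"delta": {"content": "Hello"}}]}'
print(extract_delta(chunk))  # prints "Hello"
```

The stream ends with a literal "data: [DONE]" line, which carries no content.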
License
Thunderbolt is licensed under the Mozilla Public License 2.0. Hayhooks is licensed under the Apache-2.0 license.
