73 Total Integrations
Amazon Bedrock
Use Models from AI21 Labs, Anthropic, Cohere, Meta, and Amazon via Amazon Bedrock with Haystack
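A minimal sketch of generating text with a Bedrock-hosted model via this integration (the package name, import path, and model ID below are assumptions based on the usual Haystack 2.x integration layout; check the integration page for the current ones):

```python
# pip install amazon-bedrock-haystack  (assumed package name)
from haystack_integrations.components.generators.amazon_bedrock import AmazonBedrockGenerator

# AWS credentials are picked up from the environment / standard AWS config.
# The model ID is only an example; use any model enabled in your Bedrock account.
generator = AmazonBedrockGenerator(model="anthropic.claude-3-haiku-20240307-v1:0")
result = generator.run(prompt="In one sentence, what is Amazon Bedrock?")
print(result["replies"][0])
```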
Amazon SageMaker
Use Models from Hugging Face, Anthropic, AI21 Labs, Cohere, Meta, and Amazon via Amazon SageMaker with Haystack
Anthropic
Use Anthropic Models with Haystack
Apify
Extract data from the web and automate web tasks using the Apify-Haystack integration.
Arize Phoenix
Trace your Haystack pipelines with Arize Phoenix
Arize AI
Trace and Monitor your Haystack pipelines with Arize AI
AssemblyAI
Use AssemblyAI transcription, summarization and speaker diarization models with Haystack
AstraDB
A Document Store for storing and retrieving documents from AstraDB - built for Haystack 2.0.
Azure AI Search
Use Azure AI Search with Haystack
Azure CosmosDB
Use Azure CosmosDB with Haystack
Azure Translate Nodes
TranslateAnswer and TranslateQuery Nodes that use the Azure Translate endpoint
Azure
Use OpenAI models deployed through Azure services with Haystack
Basic Agent Memory Tool
A working memory tool that stores the Agent's conversation history
Cerebras
Use LLMs served by Cerebras API
Chainlit Agent UI
Visualise and debug your agent's intermediate steps!
Chroma
A Document Store for storing and retrieving documents from Chroma
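As a rough sketch (the package name and import path are assumed from the Haystack 2.x integration naming), the store is used like any other Haystack Document Store:

```python
# pip install chroma-haystack  (assumed package name)
from haystack import Document
from haystack_integrations.document_stores.chroma import ChromaDocumentStore

# By default the store runs locally; a persistence path can typically be configured.
document_store = ChromaDocumentStore()
document_store.write_documents([Document(content="Chroma keeps embeddings close to your pipeline.")])
print(document_store.count_documents())
```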
Cohere
Use Cohere models with Haystack
Context AI
A component to log conversations for analytics by Context.ai - built for Haystack 2.0.
Couchbase
Use the Couchbase database with Haystack
DeepEval
Use the DeepEval evaluation framework to calculate model-based metrics
DeepL
Use DeepL translation services with Haystack
Document Threshold
Filters documents based on a minimum Confidence Score percentage, ensuring that only documents above the threshold are passed down the pipeline.
DuckDuckGo
Uses the DuckDuckGo API for web searches
Elasticsearch
Use an Elasticsearch database with Haystack
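A minimal sketch of writing documents to Elasticsearch through the integration (package name, import path, and the `hosts` parameter are assumptions; it expects a running Elasticsearch instance):

```python
# pip install elasticsearch-haystack  (assumed package name)
from haystack import Document
from haystack_integrations.document_stores.elasticsearch import ElasticsearchDocumentStore

# Point the store at a running Elasticsearch instance.
document_store = ElasticsearchDocumentStore(hosts="http://localhost:9200")
document_store.write_documents([Document(content="Elasticsearch backs full-text and vector search in Haystack.")])
print(document_store.count_documents())
```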
ElevenLabs
ElevenLabs Text-to-Speech components for Haystack.
Entailment Checker
Haystack node for checking the entailment between a statement and a list of Documents
FAISS
Use a FAISS vector database with Haystack
FastEmbed
Use the FastEmbed embedding models
fastRAG
fastRAG is a research framework for efficient and optimized retrieval augmented generative pipelines
Flow Judge
Evaluate Haystack pipelines using Flow Judge
Google AI
Use Google AI Models with Haystack
Google Vertex AI
Use Google Vertex AI Models with Haystack
Groq
Use open Language Models served by Groq
Hugging Face
Use Models on Hugging Face with Haystack
INSTRUCTOR Embedders
A component for computing embeddings using INSTRUCTOR embedding models - built for Haystack 2.0.
Jina AI
Use the latest Jina AI embedding models
LanceDB Haystack
A DocumentStore backed by LanceDB
Langfuse
Monitor and trace your Haystack requests.
Document Lemmatizer
A lemmatizing node for documents which can potentially reduce token use by up to 30%.
Llama.cpp
Use Llama.cpp models with Haystack.
llamafile
Run LLMs locally with llamafile
LM Format Enforcer
Use the LM Format Enforcer to enforce JSON Schema / Regex output of your Local Models.
Marqo
A Document Store for storing and retrieving documents from Marqo - built for Haystack 2.0
Mastodon Fetcher
A custom component to fetch a Mastodon username's latest posts
Milvus
Use the Milvus vector database with Haystack
Mistral
Use the Mistral API for embedding and text generation models.
mixedbread ai
Use mixedbread's models as well as top open-source models in seconds
MongoDB
Use a MongoDB Atlas database with Haystack
MonsterAPI
Use open Language Models served by MonsterAPI
Needle
Use the Needle document store and retriever in Haystack.
Neo4j
Use the Neo4j database with Haystack
Newspaper3k Wrapper Nodes
Newspaper3k wrapper nodes that allow you to scrape articles directly using the scraper Node or crawl many pages using the crawler Node.
Notion Extractor
A component to extract pages from Notion to Haystack Documents. Useful for indexing Pipelines.
NVIDIA
Use NVIDIA models with Haystack.
Ollama
Use Ollama models with Haystack. Ollama allows you to get up and running with large language models, locally.
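A minimal sketch of generating text against a local Ollama server (package name and import path are assumptions based on the Haystack 2.x integration naming; it assumes the model has already been pulled with `ollama pull`):

```python
# pip install ollama-haystack  (assumed package name)
from haystack_integrations.components.generators.ollama import OllamaGenerator

# Assumes an Ollama server is running locally on its default port.
generator = OllamaGenerator(model="llama3.2")  # model name is an example
result = generator.run(prompt="Why is the sky blue?")
print(result["replies"][0])
```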
OpenAI
Use OpenAI Models with Haystack
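For illustration, a minimal sketch using Haystack's built-in OpenAI generator (the model name is an example; the API key is read from the environment):

```python
from haystack.components.generators import OpenAIGenerator

# Reads the API key from the OPENAI_API_KEY environment variable by default.
generator = OpenAIGenerator(model="gpt-4o-mini")  # example model name
result = generator.run(prompt="Summarize what Haystack is in one sentence.")
print(result["replies"][0])
```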
OpenSearch
A Document Store for storing and retrieving documents from OpenSearch
Optimum
High-performance inference using Hugging Face Optimum
pgvector
A Document Store for storing and retrieving documents from pgvector
Pinecone
Use a Pinecone database with Haystack
Qdrant
Use the Qdrant vector database with Haystack
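A rough sketch of an embedded, in-memory Qdrant store (package name, import path, and the `":memory:"`/`embedding_dim` parameters are assumptions; point it at a Qdrant server for real deployments):

```python
# pip install qdrant-haystack  (assumed package name)
from haystack import Document
from haystack_integrations.document_stores.qdrant import QdrantDocumentStore

# ":memory:" runs an embedded, in-memory Qdrant instance for quick experiments.
document_store = QdrantDocumentStore(":memory:", embedding_dim=3)
document_store.write_documents(
    [Document(content="Qdrant stores vectors for retrieval.", embedding=[0.1, 0.2, 0.3])]
)
print(document_store.count_documents())
```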
Ragas
Use the Ragas evaluation framework to calculate model-based metrics
Ray
Run and scale Haystack Pipelines with Ray in a distributed manner
ReadMeDocs Fetcher
Fetch documentation pages from ReadMe docs sites.
Snowflake
A Snowflake integration that allows table retrieval from a Snowflake database.
AnswerToSpeech & DocumentToSpeech
Convert Haystack Answers and Documents to audio files
Titan Takeoff Inference Server
Use Titan Takeoff to run local open-source LLMs with Haystack. Titan Takeoff allows you to run the latest models from Meta, Mistral, and Alphabet directly on your laptop.
Traceloop
Evaluate and monitor the quality of your LLM apps and agents
Unstructured File Converter
A component to easily convert files and directories into Documents using the Unstructured API
UpTrain
Use the UpTrain evaluation framework to calculate model-based metrics
vLLM Invocation Layer
Use the vLLM inference engine with Haystack
Voyage AI
A component for computing embeddings using Voyage AI embedding models - built for Haystack 2.0.
Weaviate
Use a Weaviate database with Haystack