
HuggingFaceTEITextEmbedder

This component computes embeddings for strings using models served through a Text Embeddings Inference (TEI) endpoint.

Name: HuggingFaceTEITextEmbedder
Folder path: /embedders/
Most common position in a pipeline: before an embedding Retriever in a query/RAG pipeline
Mandatory input variables: "text": a string
Output variables: "embedding": a list of floats

This component should be used to embed plain text. To embed a list of Documents, you should use HuggingFaceTEIDocumentEmbedder.

Overview

This component is designed to compute embeddings using the Text Embeddings Inference (TEI) library. TEI is a toolkit for deploying and serving open source text embedding models with high performance on both GPU and CPU.
TEI has a permissive but not fully open source license.

The component uses the HF_API_TOKEN environment variable by default. Otherwise, you can pass a Hugging Face API token at initialization with the token parameter – see the code examples below.
The token is needed:

  • If you use the Inference API
  • If you use Inference Endpoints
  • If you use a self-hosted TEI endpoint with a private/gated model

If you use a self-hosted TEI endpoint with a totally open model, the token is not required.
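
Both ways of supplying the token look like this. This is a minimal sketch, assuming the same model used in the examples below:

from haystack.components.embedders import HuggingFaceTEITextEmbedder
from haystack.utils import Secret

# Option 1: read the token from the HF_API_TOKEN environment variable (the default source)
embedder = HuggingFaceTEITextEmbedder(
    model="BAAI/bge-small-en-v1.5", token=Secret.from_env_var("HF_API_TOKEN")
)

# Option 2: pass the token directly; avoid hard-coding real tokens in code you share
embedder = HuggingFaceTEITextEmbedder(
    model="BAAI/bge-small-en-v1.5", token=Secret.from_token("<your-api-token>")
)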

Key Features

  • Hugging Face Inference Endpoints. Supports embedding models deployed on Hugging Face Inference Endpoints.
  • Inference API support. Supports embedding models hosted on the rate-limited Inference API tier. Discover available embedding models with the following command: wget -qO- https://api-inference.huggingface.co/framework/sentence-transformers (or see the Python sketch after this list), and use the model ID as the model parameter for this component. You'll also need to provide a valid Hugging Face API token as the token parameter. This solution is only suitable for experimental purposes.
  • Custom TEI endpoints. Supports embedding models deployed on custom TEI endpoints. A custom TEI endpoint can easily be run using Docker (see the TEI documentation).
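
For reference, here is the same model discovery done from Python instead of wget. This is a sketch, not part of the component's API; it only assumes the endpoint returns a JSON array:

import requests

# List the sentence-transformers models served by the Inference API.
response = requests.get(
    "https://api-inference.huggingface.co/framework/sentence-transformers", timeout=30
)
response.raise_for_status()

# The exact schema of each entry may change, so print a few raw entries and inspect them.
for entry in response.json()[:10]:
    print(entry)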


Usage

On its own

You can use this component with embedding models hosted on the rate-limited Inference API tier:

from haystack.components.embedders import HuggingFaceTEITextEmbedder
from haystack.utils import Secret

text_to_embed = "I love pizza!"

text_embedder = HuggingFaceTEITextEmbedder(
    model="BAAI/bge-small-en-v1.5", token=Secret.from_token("<your-api-key>")
)

print(text_embedder.run(text_to_embed))

# {'embedding': [0.017020374536514282, -0.023255806416273117, ...]}

For embedding models hosted on paid Inference Endpoints (https://huggingface.co/inference-endpoints) or on your own custom TEI endpoint, you'll need to provide the URL of the endpoint. If you use Inference Endpoints, or a self-hosted endpoint with a private/gated model, you also need to pass a valid token.

from haystack.components.embedders import HuggingFaceTEITextEmbedder
from haystack.utils import Secret

text_to_embed = "I love pizza!"

text_embedder = HuggingFaceTEITextEmbedder(
    model="BAAI/bge-small-en-v1.5", url="<your-tei-endpoint-url>", token=Secret.from_token("<your-api-key>")
)

print(text_embedder.run(text_to_embed))

# {'embedding': [0.017020374536514282, -0.023255806416273117, ...]}
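
Note that secrets created with Secret.from_token cannot be serialized; if you plan to save your pipeline to disk, prefer Secret.from_env_var("HF_API_TOKEN") so the raw token never ends up in a pipeline file.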

In a Pipeline

from haystack import Document
from haystack import Pipeline
from haystack.document_stores.in_memory import InMemoryDocumentStore
from haystack.components.embedders import HuggingFaceTEITextEmbedder, HuggingFaceTEIDocumentEmbedder
from haystack.components.retrievers.in_memory import InMemoryEmbeddingRetriever

document_store = InMemoryDocumentStore(embedding_similarity_function="cosine")

documents = [Document(content="My name is Wolfgang and I live in Berlin"),
             Document(content="I saw a black horse running"),
             Document(content="Germany has many big cities")]

document_embedder = HuggingFaceTEIDocumentEmbedder()
documents_with_embeddings = document_embedder.run(documents)['documents']
document_store.write_documents(documents_with_embeddings)

query_pipeline = Pipeline()
query_pipeline.add_component("text_embedder", HuggingFaceTEITextEmbedder())
query_pipeline.add_component("retriever", InMemoryEmbeddingRetriever(document_store=document_store))
query_pipeline.connect("text_embedder.embedding", "retriever.query_embedding")

query = "Who lives in Berlin?"

result = query_pipeline.run({"text_embedder":{"text": query}})

print(result['retriever']['documents'][0])

# Document(id=..., content: 'My name is Wolfgang and I live in Berlin')
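
One detail the example glosses over: the Document Embedder and the Text Embedder must use the same model (and, for custom endpoints, the same url), or the query and document embeddings won't live in the same vector space. A minimal sketch, assuming the model from the earlier examples:

from haystack.components.embedders import HuggingFaceTEITextEmbedder, HuggingFaceTEIDocumentEmbedder

# Use one model for both indexing and querying so the embeddings are comparable.
model = "BAAI/bge-small-en-v1.5"
document_embedder = HuggingFaceTEIDocumentEmbedder(model=model)
text_embedder = HuggingFaceTEITextEmbedder(model=model)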
