
OpenAIGenerator

OpenAIGenerator enables text generation using OpenAI's large language models (LLMs).

Name: OpenAIGenerator
Folder Path: /generators/openai
Most common Position in a Pipeline: After a PromptBuilder
Mandatory Input variables: "prompt": a string containing the prompt for the LLM
Output variables: "replies": a list of strings with all the replies generated by the LLM; "meta": a list of dictionaries with the metadata associated with each reply, such as token count, finish reason, and so on

Overview

OpenAIGenerator supports OpenAI models from gpt-3.5-turbo onward (gpt-4, gpt-4-turbo, and so on).

OpenAIGenerator needs an OpenAI API key to work. It reads the OPENAI_API_KEY environment variable by default. Otherwise, you can pass an API key at initialization with api_key:

from haystack.utils import Secret

generator = OpenAIGenerator(api_key=Secret.from_token("<your-api-key>"), model="gpt-3.5-turbo")

The component needs a prompt to operate. You can also pass any text generation parameters valid for the openai.ChatCompletion.create method directly to it through the generation_kwargs parameter, both at initialization and in the run() method. For more details on the parameters supported by the OpenAI API, refer to the OpenAI documentation.
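For example, here is a minimal sketch of both options (the specific parameter values are illustrative):

from haystack.components.generators import OpenAIGenerator

# Set default generation parameters at initialization...
generator = OpenAIGenerator(model="gpt-3.5-turbo", generation_kwargs={"max_tokens": 100})

# ...or override them for a single call in run()
response = generator.run("Explain NLP briefly.", generation_kwargs={"temperature": 0.2})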

OpenAIGenerator supports custom deployments of your OpenAI models through the api_base_url init parameter.
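For example, a minimal sketch (the URL below is a placeholder for your own deployment):

from haystack.components.generators import OpenAIGenerator

# Point the client at a custom, OpenAI-compatible endpoint (placeholder URL)
generator = OpenAIGenerator(api_base_url="https://my-deployment.example.com/v1")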

Streaming

OpenAIGenerator supports streaming the tokens from the LLM directly in the output. To do so, pass a function to the streaming_callback init parameter. Note that streaming is only compatible with generating a single response, so n must be set to 1 for it to work.
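For example, a minimal sketch with a named callback function (n=1 is shown explicitly only to highlight the constraint; it is the default):

from haystack.components.generators import OpenAIGenerator

def print_chunk(chunk):
    # Each chunk carries a fragment of the reply as it is generated
    print(chunk.content, end="", flush=True)

generator = OpenAIGenerator(streaming_callback=print_chunk, generation_kwargs={"n": 1})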

πŸ“˜

This component is designed for text generation, not for chat. If you want to use OpenAI LLMs for chat, use OpenAIChatGenerator instead.

Usage

On its own

Basic usage:

from haystack.components.generators import OpenAIGenerator
from haystack.utils import Secret

client = OpenAIGenerator(model="gpt-4", api_key=Secret.from_token("<your-api-key>"))
response = client.run("What's Natural Language Processing? Be brief.")
print(response)

>>> {'replies': ['Natural Language Processing, often abbreviated as NLP, is a field 
    of artificial intelligence that focuses on the interaction between computers 
    and humans through natural language. The primary aim of NLP is to enable 
    computers to understand, interpret, and generate human language in a valuable way.'], 
    'meta': [{'model': 'gpt-4-0613', 'index': 0, 'finish_reason': 
    'stop', 'usage': {'prompt_tokens': 16, 'completion_tokens': 53, 
    'total_tokens': 69}}]}

With streaming:

from haystack.components.generators import OpenAIGenerator

client = OpenAIGenerator(streaming_callback=lambda chunk: print(chunk.content, end="", flush=True))
response = client.run("What's Natural Language Processing? Be brief.")
print(response)

>>> Natural Language Processing (NLP) is a branch of artificial
    intelligence that focuses on the interaction between computers and human
    language. It involves enabling computers to understand, interpret, and respond
    to natural human language in a way that is both meaningful and useful.
>>> {'replies': ['Natural Language Processing (NLP) is a branch of artificial
    intelligence that focuses on the interaction between computers and human
    language. It involves enabling computers to understand, interpret, and respond
    to natural human language in a way that is both meaningful and useful.'],
    'meta': [{'model': 'gpt-3.5-turbo-0613', 'index': 0, 'finish_reason':
    'stop', 'usage': {'prompt_tokens': 16, 'completion_tokens': 49,
    'total_tokens': 65}}]}

In a Pipeline

Here's an example of a RAG pipeline:

from haystack import Pipeline
from haystack.components.retrievers.in_memory import InMemoryBM25Retriever
from haystack.components.builders.prompt_builder import PromptBuilder
from haystack.components.generators import OpenAIGenerator
from haystack.document_stores.in_memory import InMemoryDocumentStore
from haystack import Document
from haystack.utils import Secret

docstore = InMemoryDocumentStore()
docstore.write_documents([Document(content="Rome is the capital of Italy"), Document(content="Paris is the capital of France")])

query = "What is the capital of France?"

template = """
Given the following information, answer the question.

Context: 
{% for document in documents %}
    {{ document.content }}
{% endfor %}

Question: {{ query }}
"""
pipe = Pipeline()

pipe.add_component("retriever", InMemoryBM25Retriever(document_store=docstore))
pipe.add_component("prompt_builder", PromptBuilder(template=template))
pipe.add_component("llm", OpenAIGenerator(api_key=Secret.from_token("<your-api-key>"))
pipe.connect("retriever", "prompt_builder.documents")
pipe.connect("prompt_builder", "llm")

res = pipe.run({
    "prompt_builder": {
        "query": query
    },
    "retriever": {
        "query": query
    }
})

print(res)
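To extract just the generated answer from the result, index into the llm component's output; as listed above, it exposes "replies" and "meta":

# The pipeline output is keyed by component name
print(res["llm"]["replies"][0])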

Related Links

See parameter details in our API reference: