
LocalWhisperTranscriber

Use LocalWhisperTranscriber to transcribe audio files with OpenAI's Whisper model, using a local installation of Whisper.

Name: LocalWhisperTranscriber
Folder Path: /audio/
Most common Position in a Pipeline: As the first component in an indexing pipeline
Mandatory Input variables: "sources": a list of file paths or binary streams that you want to transcribe
Output variables: "documents": a list of Documents

Overview

The component needs to know which Whisper model to work with. Specify the model in the model parameter when initializing the component.

See our API documentation for other optional parameters you can specify.
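For example, here is a minimal sketch of initializing the component with a different checkpoint (Whisper's standard model sizes include tiny, base, small, medium, and large; pick the size that fits your hardware):

from haystack.components.audio import LocalWhisperTranscriber

# "medium" is one of Whisper's standard model sizes
transcriber = LocalWhisperTranscriber(model="medium")
transcriber.warm_up()  # loads the model weights locally before the first run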

See the Whisper API documentation and the official Whisper GitHub repo for the supported audio formats and languages.

To work with the LocalWhisperTranscriber, install torch and Whisper first with the following commands:

pip install transformers[torch]
pip install -U openai-whisper
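To check that both packages are importable before wiring up the component, you can run a quick sanity check like this (a minimal sketch; available_models is part of the openai-whisper package):

import torch
import whisper

# List the Whisper checkpoints that can be downloaded and run locally
print(whisper.available_models())

# Whisper runs much faster on a GPU if one is available
print("CUDA available:", torch.cuda.is_available())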

Usage

On its own

Here’s an example of how to use LocalWhisperTranscriber on its own:

from haystack.components.audio import LocalWhisperTranscriber

whisper = LocalWhisperTranscriber(model="small")
whisper.warm_up()  # download and load the Whisper model before the first run
transcription = whisper.run(sources=["path/to/audio/file"])
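The run method returns a dictionary with a documents key, so you can read the transcription from the returned Documents, for example:

# Each transcribed file becomes one Document; the transcription is in its content field
for doc in transcription["documents"]:
    print(doc.content)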

In a Pipeline

This example shows an indexing pipeline that takes audio files, transcribes them, and then stores the text as documents in a document store. The "." path passed to the pipeline must be a directory that contains only audio files.

from pathlib import Path
from haystack import Pipeline
from haystack.components.audio import LocalWhisperTranscriber
from haystack.components.preprocessors import DocumentSplitter, DocumentCleaner
from haystack.components.writers import DocumentWriter
from haystack.document_stores.in_memory import InMemoryDocumentStore

document_store = InMemoryDocumentStore()
p = Pipeline()
p.add_component(instance=LocalWhisperTranscriber(model="small"), name="transcriber")
p.add_component(instance=DocumentCleaner(), name="cleaner")
p.add_component(
    instance=DocumentSplitter(split_by="sentence", split_length=10), name="splitter"
)
p.add_component(instance=DocumentWriter(document_store=document_store), name="writer")

p.connect("transcriber.documents", "cleaner.documents")
p.connect("cleaner.documents", "splitter.documents")
p.connect("splitter.documents", "writer.documents")

p.run({"transcriber": {"sources": list(Path(".").iterdir())}})

Related Links

See the parameter details in our API reference.