
AzureOCRDocumentConverter

AzureOCRDocumentConverter converts files to Documents using Azure's Document Intelligence service. It supports the following file formats: PDF (both searchable and image-only), JPEG, PNG, BMP, TIFF, DOCX, XLSX, PPTX, and HTML.

Name: AzureOCRDocumentConverter
Folder Path: /converters/
Position in a Pipeline: Before PreProcessors, or right at the beginning of an indexing Pipeline
Inputs: "sources": a list of file paths
Outputs: "documents": a list of Documents; "raw_azure_response": a list of raw responses from Azure

Overview

AzureOCRDocumentConverter takes a list of file paths or ByteStream objects as input and uses Azure services to convert the files to a list of Documents. Optionally, metadata can be attached to the Documents through the meta input parameter. You need an active Azure account and a Document Intelligence or Cognitive Services resource to use this integration. Follow the steps described in the Azure documentation to set up your resource.
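
For example, here is a minimal sketch of attaching metadata through the meta parameter (the file name and metadata values are placeholders, and it assumes the same dictionary should be applied to every resulting Document):

from pathlib import Path

from haystack.components.converters import AzureOCRDocumentConverter
from haystack.utils import Secret

converter = AzureOCRDocumentConverter(
    endpoint="azure_resource_url",
    api_key=Secret.from_token("<your-api-key>")
)

# Attach the same metadata to every Document produced from these sources
result = converter.run(
    sources=[Path("invoice.pdf")],
    meta={"category": "invoices"}
)
print(result["documents"][0].meta)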

The component uses the AZURE_AI_API_KEY environment variable by default. Alternatively, you can pass an api_key at initialization, as shown in the code examples below.
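
For instance, a minimal sketch that relies on the default environment variable (this assumes AZURE_AI_API_KEY is already set in your environment):

from haystack.components.converters import AzureOCRDocumentConverter

# The API key is read from the AZURE_AI_API_KEY environment variable by default
converter = AzureOCRDocumentConverter(endpoint="azure_resource_url")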

When you initialize the component, you can optionally set model_id to choose the model you want to use. Refer to the Azure documentation for the list of available models. The default model is "prebuilt-read".
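
As a sketch, this is how you could select a different prebuilt model ("prebuilt-layout" is one of Azure's prebuilt models; check the Azure documentation for the full, current list):

from haystack.components.converters import AzureOCRDocumentConverter

# Use Azure's layout model instead of the default "prebuilt-read"
converter = AzureOCRDocumentConverter(
    endpoint="azure_resource_url",
    model_id="prebuilt-layout"
)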

The AzureOCRDocumentConverter doesn't extract the tables from a file as plain text but generates separate Document objects of type table that maintain the two-dimensional structure of the tables.
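
Here is a minimal sketch of inspecting the output to find those table Documents (exactly how a table Document is marked, for example through its meta, can differ between Haystack versions, so check the fields on your own output):

from pathlib import Path

from haystack.components.converters import AzureOCRDocumentConverter

converter = AzureOCRDocumentConverter(endpoint="azure_resource_url")
result = converter.run(sources=[Path("report_with_tables.pdf")])

for doc in result["documents"]:
    # Print each Document's metadata and a content snippet to spot the table Documents
    print(doc.meta, str(doc.content)[:100])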

Usage

You need to install the azure-ai-formrecognizer package to use the AzureOCRDocumentConverter:

pip install "azure-ai-formrecognizer>=3.2.0b2"

On its own

from pathlib import Path

from haystack.components.converters import AzureOCRDocumentConverter
from haystack.utils import Secret

converter = AzureOCRDocumentConverter(
    endpoint="azure_resource_url",
    api_key=Secret.from_token("<your-api-key>")
)

docs = converter.run(sources=[Path("my_file.pdf")])
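
The returned dictionary holds the converted Documents under "documents" and the raw service responses under "raw_azure_response", so you can, for example, print the first Document's content:

print(docs["documents"][0].content)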

In a Pipeline

from haystack import Pipeline
from haystack.components.converters import AzureOCRDocumentConverter
from haystack.components.preprocessors import DocumentCleaner, DocumentSplitter
from haystack.components.writers import DocumentWriter
from haystack.document_stores.in_memory import InMemoryDocumentStore
from haystack.utils import Secret

document_store = InMemoryDocumentStore()

pipeline = Pipeline()
pipeline.add_component("converter", AzureOCRDocumentConverter(endpoint="azure_resource_url", api_key=Secret.from_token("<your-api-key>")))
pipeline.add_component("cleaner", DocumentCleaner())
pipeline.add_component("splitter", DocumentSplitter(split_by="sentence", split_length=5))
pipeline.add_component("writer", DocumentWriter(document_store=document_store))
pipeline.connect("converter", "cleaner")
pipeline.connect("cleaner", "splitter")
pipeline.connect("splitter", "writer")

file_names = ["my_file.pdf"]  # paths to the files you want to convert and index
pipeline.run({"converter": {"sources": file_names}})
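
To check that the indexing run worked, you can, for example, count the Documents written to the store:

print(document_store.count_documents())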

Related Links

See the parameter details in our API reference.