Extracting Metadata with an LLM
Last Updated: March 10, 2025
Notebook by David S. Batista
This notebook shows how to use LLMMetadataExtractor: we will use a Large Language Model to perform metadata extraction from a Document.
Setting Up
!pip install haystack-ai
!pip install "sentence-transformers>=3.0.0"
Let’s define what kind of metadata we want to extract from our documents. We will do it through an LLM prompt, which will then be used by the LLMMetadataExtractor component. In this case we want to extract named entities from our documents.
NER_PROMPT = '''
-Goal-
Given text and a list of entity types, identify all entities of those types from the text.
-Steps-
1. Identify all entities. For each identified entity, extract the following information:
- entity_name: Name of the entity, capitalized
- entity_type: One of the following types: [organization, product, service, industry]
Format each entity as a JSON like: {"entity": <entity_name>, "entity_type": <entity_type>}
2. Return output in a single list with all the entities identified in step 1.
-Examples-
######################
Example 1:
entity_types: [organization, person, partnership, financial metric, product, service, industry, investment strategy, market trend]
text: Another area of strength is our co-brand issuance. Visa is the primary network partner for eight of the top
10 co-brand partnerships in the US today and we are pleased that Visa has finalized a multi-year extension of
our successful credit co-branded partnership with Alaska Airlines, a portfolio that benefits from a loyal customer
base and high cross-border usage.
We have also had significant co-brand momentum in CEMEA. First, we launched a new co-brand card in partnership
with Qatar Airways, British Airways and the National Bank of Kuwait. Second, we expanded our strong global
Marriott relationship to launch Qatar's first hospitality co-branded card with Qatar Islamic Bank. Across the
United Arab Emirates, we now have exclusive agreements with all the leading airlines marked by a recent
agreement with Emirates Skywards.
And we also signed an inaugural Airline co-brand agreement in Morocco with Royal Air Maroc. Now newer digital
issuers are equally
------------------------
output:
{"entities": [{"entity": "Visa", "entity_type": "company"}, {"entity": "Alaska Airlines", "entity_type": "company"}, {"entity": "Qatar Airways", "entity_type": "company"}, {"entity": "British Airways", "entity_type": "company"}, {"entity": "National Bank of Kuwait", "entity_type": "company"}, {"entity": "Marriott", "entity_type": "company"}, {"entity": "Qatar Islamic Bank", "entity_type": "company"}, {"entity": "Emirates Skywards", "entity_type": "company"}, {"entity": "Royal Air Maroc", "entity_type": "company"}]}
#############################
-Real Data-
######################
entity_types: [company, organization, person, country, product, service]
text: {{ document.content }}
######################
output:
'''
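Note the {{ document.content }} placeholder near the end of the prompt: LLMMetadataExtractor treats the prompt as a Jinja2 template and renders it once per document. The snippet below is only a rough sketch of that idea, not the component’s actual internals:

# Illustration only: how the Jinja2 placeholder gets filled per document.
# LLMMetadataExtractor does this internally; this sketch is not its real code.
from jinja2 import Template
from haystack import Document

example_doc = Document(content="deepset was founded in 2018 in Berlin.")
rendered_prompt = Template(NER_PROMPT).render(document=example_doc)
print(rendered_prompt[-250:])  # the tail now contains the document text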
Let’s initialise an instance of the LLMMetadataExtractor, using OpenAI as the LLM provider and the prompt defined above to perform the metadata extraction.
from haystack.components.extractors.llm_metadata_extractor import LLMMetadataExtractor
We will also need to set the OPENAI_API_KEY environment variable.
import os
from getpass import getpass

if "OPENAI_API_KEY" not in os.environ:
    os.environ["OPENAI_API_KEY"] = getpass("Enter OpenAI API key:")
We will instantiate an LLMMetadataExtractor using OpenAI as the LLM provider. Notice that the parameter prompt is set to the prompt we defined above, and that we also need to set which keys should be present in the JSON output, in this case “entities”.
Another important aspect is raise_on_failure=False: if the LLM fails for some document (e.g. a network error, or an invalid JSON response), we continue processing the remaining documents in the input.
metadata_extractor = LLMMetadataExtractor(
    prompt=NER_PROMPT,
    generator_api="openai",
    generator_api_params={
        "generation_kwargs": {
            "max_tokens": 500,
            "temperature": 0.0,
            "seed": 0,
            "response_format": {"type": "json_object"},
        },
        "max_retries": 1,
        "timeout": 60.0,
    },
    expected_keys=["entities"],
    raise_on_failure=False,
)
Let’s define the documents from which the component will extract metadata, i.e. named entities:
from haystack import Document
docs = [
    Document(content="deepset was founded in 2018 in Berlin, and is known for its Haystack framework"),
    Document(content="Hugging Face is a company founded in Paris, France and is known for its Transformers library"),
    Document(content="Google was founded in 1998 by Larry Page and Sergey Brin"),
    Document(content="Pegeout is a French automotive manufacturer that was founded in 1810 by Jean-Pierre Peugeot"),
    Document(content="Siemens is a German multinational conglomerate company headquartered in Munich and Berlin, founded in 1847 by Werner von Siemens")
]
and let’s extract :)
result = metadata_extractor.run(documents=docs)
result
{'documents': [Document(id=05fe6674dd4faf3dcaa991f9e6d520c9185d5644c4ac2b8b52276e6b70a831f2, content: 'deepset was founded in 2018 in Berlin, and is known for its Haystack framework', meta: {'entities': [{'entity': 'Deepset', 'entity_type': 'company'}, {'entity': 'Berlin', 'entity_type': 'country'}, {'entity': 'Haystack', 'entity_type': 'product'}]}),
Document(id=0327a8b44d20635b39aae701df27fdaf4d0f0a71ac1419171cde052c12305738, content: 'Hugging Face is a company founded in Paris, France and is known for its Transformers library', meta: {'entities': [{'entity': 'Hugging Face', 'entity_type': 'company'}, {'entity': 'Paris', 'entity_type': 'city'}, {'entity': 'France', 'entity_type': 'country'}, {'entity': 'Transformers', 'entity_type': 'product'}]}),
Document(id=eb4e2410115dfb7edc47b84853d0cdc845699120509346383896ed7d47354e2d, content: 'Google was founded in 1998 by Larry Page and Sergey Brin', meta: {'entities': [{'entity': 'Google', 'entity_type': 'company'}, {'entity': 'Larry Page', 'entity_type': 'person'}, {'entity': 'Sergey Brin', 'entity_type': 'person'}]}),
Document(id=6baec5b8ab9a93c62469d1bff6d1034957782e8caf85f45112ebf350249e53e6, content: 'Pegeout is a French automotive manufacturer that was founded in 1810 by Jean-Pierre Peugeot', meta: {'entities': [{'entity': 'Peugeot', 'entity_type': 'company'}, {'entity': 'Jean-Pierre Peugeot', 'entity_type': 'person'}]}),
Document(id=0a56bf794d37839113a73634cc0f3ecab33744eeea7b682b49fd2dc51737aed8, content: 'Siemens is a German multinational conglomerate company headquartered in Munich and Berlin, founded i...', meta: {'entities': [{'entity': 'Siemens', 'entity_type': 'company'}, {'entity': 'Germany', 'entity_type': 'country'}, {'entity': 'Munich', 'entity_type': 'city'}, {'entity': 'Berlin', 'entity_type': 'city'}, {'entity': 'Werner von Siemens', 'entity_type': 'person'}]})],
'failed_documents': []}
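The extracted entities live in each document’s meta field; any document the LLM could not process would show up under failed_documents instead (empty in this run). A minimal sketch of working with the output programmatically:

# Print each extracted entity with its type, straight from the meta field.
for doc in result["documents"]:
    print(doc.content)
    for entity in doc.meta["entities"]:
        print(f"  - {entity['entity']} ({entity['entity_type']})")

# Thanks to raise_on_failure=False, failures are not dropped silently;
# they are returned here instead (an empty list in this run).
for doc in result["failed_documents"]:
    print("failed:", doc.id)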
Let’s now build an indexing pipeline, where we simply give the Documents as input and end up with a Document Store holding the documents indexed together with their metadata.
from haystack import Pipeline
from haystack.document_stores.in_memory import InMemoryDocumentStore
from haystack.components.embedders import SentenceTransformersDocumentEmbedder
from haystack.components.writers import DocumentWriter
doc_store = InMemoryDocumentStore()
p = Pipeline()
p.add_component(instance=metadata_extractor, name="metadata_extractor")
p.add_component(instance=SentenceTransformersDocumentEmbedder(model="sentence-transformers/all-MiniLM-L6-v2"), name="embedder")
p.add_component(instance=DocumentWriter(document_store=doc_store), name="writer")
p.connect("metadata_extractor.documents", "embedder.documents")
p.connect("embedder.documents", "writer.documents")
<haystack.core.pipeline.pipeline.Pipeline object at 0x1603e4140>
🚅 Components
- metadata_extractor: LLMMetadataExtractor
- embedder: SentenceTransformersDocumentEmbedder
- writer: DocumentWriter
🛤️ Connections
- metadata_extractor.documents -> embedder.documents (List[Document])
- embedder.documents -> writer.documents (List[Document])
p.run(data={"metadata_extractor": {"documents": docs}})
Batches: 100%|██████████| 1/1 [00:02<00:00, 2.01s/it]
{'metadata_extractor': {'failed_documents': []},
'writer': {'documents_written': 5}}
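The writer reports five documents written; a quick sanity check against the store itself:

# The store should now hold all five documents, each carrying an embedding
# and the extracted entities in its metadata.
print(doc_store.count_documents())  # 5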
Let’s inspect the documents’ metadata in the document store
for doc in doc_store.storage.values():
    print(doc.content)
    print(doc.meta)
    print("\n---------")
deepset was founded in 2018 in Berlin, and is known for its Haystack framework
{'entities': [{'entity': 'Deepset', 'entity_type': 'company'}, {'entity': 'Berlin', 'entity_type': 'city'}, {'entity': 'Haystack', 'entity_type': 'product'}]}
---------
Hugging Face is a company founded in Paris, France and is known for its Transformers library
{'entities': [{'entity': 'Hugging Face', 'entity_type': 'company'}, {'entity': 'Paris', 'entity_type': 'city'}, {'entity': 'France', 'entity_type': 'country'}, {'entity': 'Transformers', 'entity_type': 'product'}]}
---------
Google was founded in 1998 by Larry Page and Sergey Brin
{'entities': [{'entity': 'Google', 'entity_type': 'company'}, {'entity': 'Larry Page', 'entity_type': 'person'}, {'entity': 'Sergey Brin', 'entity_type': 'person'}]}
---------
Pegeout is a French automotive manufacturer that was founded in 1810 by Jean-Pierre Peugeot
{'entities': [{'entity': 'Peugeot', 'entity_type': 'company'}, {'entity': 'Jean-Pierre Peugeot', 'entity_type': 'person'}]}
---------
Siemens is a German multinational conglomerate company headquartered in Munich and Berlin, founded in 1847 by Werner von Siemens
{'entities': [{'entity': 'Siemens', 'entity_type': 'company'}, {'entity': 'Germany', 'entity_type': 'country'}, {'entity': 'Munich', 'entity_type': 'city'}, {'entity': 'Berlin', 'entity_type': 'city'}, {'entity': 'Werner von Siemens', 'entity_type': 'person'}]}
---------
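Since the documents were embedded during indexing, the store can now also serve semantic queries, with the extracted entities riding along in the results. A minimal sketch against the doc_store built above (the query text is just an illustration):

from haystack.components.embedders import SentenceTransformersTextEmbedder
from haystack.components.retrievers.in_memory import InMemoryEmbeddingRetriever

# Embed the query with the same model used at indexing time, then retrieve.
query_embedder = SentenceTransformersTextEmbedder(model="sentence-transformers/all-MiniLM-L6-v2")
query_embedder.warm_up()
retriever = InMemoryEmbeddingRetriever(document_store=doc_store)

query_embedding = query_embedder.run(text="Who founded Siemens?")["embedding"]
for doc in retriever.run(query_embedding=query_embedding, top_k=2)["documents"]:
    print(doc.content)
    print(doc.meta["entities"])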