The Generator reads a set of documents and generates an answer to a question, word by word. While extractive QA highlights the span of text that answers a query, generative QA can return a novel text answer that it has composed.
The best current approaches, such as Retrieval-Augmented Generation (RAG) and LFQA, can draw upon both the knowledge they gained during language model pretraining (parametric memory) and the passages supplied by a Retriever (non-parametric memory). With the advent of transformer-based retrieval methods such as Dense Passage Retrieval, Retriever and Generator can be trained concurrently from a single loss signal.
|   |   |
|---|---|
| Position in a Pipeline | After the Retriever. You can use it as a substitute for the Reader. |
- More appropriately phrased answers.
- Able to synthesize information from different texts.
- Can draw on latent knowledge stored in the language model.
- Hard to trace which pieces of information the Generator based its answer on.
RAGenerator: Retrieval-Augmented Generator based on Hugging Face's transformers library. Its main advantages are a manageable model size and the fact that answer generation is conditioned on retrieved documents. This means that the model can easily adapt to domain-specific documents even after training has finished.
Seq2SeqGenerator: A generic sequence-to-sequence generator based on Hugging Face's transformers. You can use it with any Hugging Face language model that extends GenerationMixin. See also How to Generate Text.
To initialize a Generator, run:
```python
from haystack.nodes import RAGenerator

generator = RAGenerator(
    model_name_or_path="facebook/rag-sequence-nq",
    retriever=dpr_retriever,
    top_k=1,
    min_length=2,
)
```
To run a Generator in a pipeline, run:
```python
from haystack.pipelines import GenerativeQAPipeline

pipeline = GenerativeQAPipeline(generator=generator, retriever=dpr_retriever)
result = pipeline.run(query="What are the best party games for adults?", top_k_retriever=20)
```
To run a stand-alone Generator, run:
```python
result = generator.predict(
    query="What are the best party games for adults?",
    documents=[doc1, doc2, doc3, ...],
    top_k=top_k,
)
```
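The `predict` call returns a dictionary containing the generated answers. As a minimal sketch of how you might read them out, the `result` below is a hand-written stand-in (the query is taken from the examples above; the answer text and the exact field layout are illustrative assumptions, not actual model output):

```python
# Hypothetical stand-in for a Generator result dictionary; in practice this
# is what generator.predict() or pipeline.run() would return.
result = {
    "query": "What are the best party games for adults?",
    "answers": [
        {"answer": "charades"},  # illustrative answer text, not real output
    ],
}

# Collect and print each generated answer for the query.
generated = [answer["answer"] for answer in result["answers"]]
for text in generated:
    print(text)
```

Because the Generator composes free-form text rather than extracting a span, each entry under `"answers"` is a novel string and carries no start/end offsets into the source documents.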