While extractive QA highlights the span of text that answers a query, generative QA returns a novel answer that the model has composed itself.
The best current approaches, such as Retrieval-Augmented Generation (RAG) and LFQA, can draw on both the knowledge gained during language model pretraining (parametric memory) and the passages supplied by a retriever (non-parametric memory).
With the advent of Transformer-based retrieval methods such as Dense Passage Retrieval (DPR), the retriever and generator can be trained concurrently from a single loss signal.
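The retrieval step can be illustrated with a toy sketch (this is conceptual pseudocode, not Haystack's implementation, and the embeddings below are made up): in DPR-style dense retrieval, the query and each passage are encoded as vectors, and passages are ranked by dot-product similarity with the query vector. The generator then conditions on the top-ranked passages (non-parametric memory) in addition to its pretrained weights (parametric memory).

```python
# Toy illustration of DPR-style dense retrieval. The vectors are hypothetical
# stand-ins for the outputs of a query encoder and a passage encoder.

def dot(u, v):
    """Dot-product similarity between two equal-length vectors."""
    return sum(a * b for a, b in zip(u, v))

query_embedding = [0.9, 0.1, 0.3]
passage_embeddings = {
    "Board games like Codenames suit large groups.": [0.8, 0.2, 0.4],
    "The capital of France is Paris.": [0.1, 0.9, 0.0],
    "Charades is a classic party game.": [0.7, 0.1, 0.5],
}

# Rank passages by similarity to the query and keep the top k.
top_k = 2
ranked = sorted(
    passage_embeddings,
    key=lambda p: dot(query_embedding, passage_embeddings[p]),
    reverse=True,
)
print(ranked[:top_k])  # the two party-game passages outscore the off-topic one
```

Because both encoders are differentiable, the same similarity score can carry gradient back into the retriever during joint training.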
Pros:
- More appropriately phrased answers
- Able to synthesize information from different texts
- Can draw on latent knowledge stored in the language model

Cons:
- Not easy to track which piece of information the generator is basing its response on
Initialize a Generator as follows:
```python
from haystack.nodes import RAGenerator

generator = RAGenerator(
    model_name_or_path="facebook/rag-sequence-nq",
    retriever=dpr_retriever,
    top_k=1,
    min_length=2,
)
```
Running a Generator in a pipeline:
```python
from haystack.pipelines import GenerativeQAPipeline

pipeline = GenerativeQAPipeline(generator=generator, retriever=dpr_retriever)
result = pipeline.run(query="What are the best party games for adults?", top_k_retriever=20)
```
Running a stand-alone Generator:
```python
result = generator.predict(
    query="What are the best party games for adults?",
    documents=[doc1, doc2, doc3, ...],
    top_k=top_k,
)
```
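In either mode, the returned `result` can be inspected for the generated answers. A minimal sketch, assuming a dict-shaped result with an `answers` key (the exact schema, e.g. plain dicts vs. `Answer` objects, depends on your Haystack version, so check before relying on it); the `result` value below is a fabricated example, not real model output:

```python
# Illustrative result shape only (hypothetical values); recent Haystack
# versions may return Answer objects with an .answer attribute instead.
result = {
    "query": "What are the best party games for adults?",
    "answers": [
        {"answer": "Charades and trivia games.", "score": 0.91},
    ],
}

# Print each generated answer string.
for ans in result["answers"]:
    print(ans["answer"])  # prints "Charades and trivia games."
```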