The Reader, known in Machine Learning speak as an Open-Domain QA system, is the core component that enables Haystack to find the answers you need. Haystack’s Readers are:

- built on the latest transformer-based language models
- strong in their grasp of semantics
- sensitive to syntactic structure
- state-of-the-art in QA tasks like SQuAD and Natural Questions
Tip: The Finder class has been deprecated and replaced by the more powerful Pipelines class.
While these models can run on CPU, we recommend running them on a GPU to keep query times low.
Choosing the Right Model
In Haystack, you can start using a pretrained QA model simply by providing its HuggingFace Model Hub name to the Reader. Haystack handles the loading of model weights, and you have the option of using the QA pipeline from deepset FARM or HuggingFace Transformers (see FARM vs Transformers below for details).
There are currently many different models out there, and picking the one that fits your use case can be rather overwhelming. To get you started, here are a few recommendations to try out.
All-rounder: In the class of base-sized models trained on SQuAD, RoBERTa has shown better performance than BERT and can be capably handled by any machine equipped with a single NVIDIA V100 GPU. We recommend it as the starting point for anyone wanting to create a performant and computationally reasonable instance of Haystack.
Built for Speed: If speed and GPU memory matter more to you than accuracy, try the MiniLM model. It is a smaller model trained to mimic larger models through distillation, and it outperforms BERT base on SQuAD even though it is about 40% smaller.
State of the Art Accuracy: For most, ALBERT XXL will be too large to work with feasibly. But if accuracy is your sole concern and you have the computational resources, you might like to try ALBERT XXL, which has set state-of-the-art performance on SQuAD 2.0.
Deeper Dive: FARM vs Transformers
Apart from the model weights, Haystack Readers contain all the components found in end-to-end open domain QA systems. This includes tokenization, embedding computation, span prediction and candidate aggregation. While the handling of model weights is the same between the FARM and Transformers libraries, their QA pipelines differ in some ways. The major points are:
- The TransformersReader will sometimes predict the same span twice, while duplicates are removed in the FARMReader
- The FARMReader currently uses the tokenizers from the HuggingFace Transformers library, while the TransformersReader uses the tokenizers from the HuggingFace Tokenizers library
- Start and end logits are normalized per passage and multiplied in the TransformersReader, while they are summed and not normalized in the FARMReader
If you’re interested in the finer details of these points, have a look at this GitHub comment.
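To make the scoring difference concrete, here is a toy sketch of the two schemes. The logits below are invented for illustration, not real model outputs, and the code is a simplified stand-in for what either library actually does internally:

```python
import math

# Invented start/end logits for the tokens of a single passage.
start_logits = [0.2, 3.1, 0.5]
end_logits = [0.1, 0.4, 2.8]

def softmax(xs):
    exps = [math.exp(x) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

# TransformersReader-style score: normalize the logits per passage,
# then multiply the start and end probabilities of the span.
start_probs, end_probs = softmax(start_logits), softmax(end_logits)
transformers_score = start_probs[1] * end_probs[2]  # span from token 1 to token 2

# FARMReader-style score: sum the raw, unnormalized start and end logits.
farm_score = start_logits[1] + end_logits[2]

print(transformers_score, farm_score)
```

Because the Transformers-style score is a product of probabilities, it is bounded between 0 and 1 and only comparable within a passage; the FARM-style sum of raw logits can be compared directly across passages.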
We see value in maintaining both kinds of Readers: Transformers is a library familiar to many of Haystack’s users, while we at deepset can more easily update and optimise the FARM pipeline for speed and performance.
Haystack is also closely integrated with FARM, which means you can further fine-tune your Readers on labelled data using a FARMReader. See our tutorials for an end-to-end example, or the shortened example below.
```python
from haystack.reader.farm import FARMReader

# Initialise Reader
model = "deepset/roberta-base-squad2"
reader = FARMReader(model)

# Perform fine-tuning
train_data = "PATH/TO_YOUR/TRAIN_DATA"
train_filename = "train.json"
save_dir = "finetuned_model"
reader.train(train_data, train_filename, save_dir=save_dir)

# Load the fine-tuned model
finetuned_reader = FARMReader(save_dir)
```
Deeper Dive: From Language Model to Haystack Reader
Language models form the core of most modern NLP systems, and that includes the Readers in Haystack. They build a general understanding of language by performing training tasks such as Masked Language Modeling or Replaced Token Detection on large amounts of text. Well-trained language models capture the word distribution in one or more languages and, more importantly, convert input text into a set of word vectors that capture elements of syntax and semantics.
In order to convert a language model into a Reader model, it first needs to be trained on a Question Answering dataset. Doing so requires adding a question answering prediction head on top of the language model. The task can be thought of as a token classification task where every input token is assigned a probability of being either the start or the end token of the correct answer. In cases where the answer is not contained within the passage, the prediction head is also expected to return a no_answer prediction.
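As a minimal illustration of span prediction, the sketch below scores every valid (start, end) pair and picks the best one. The tokens and scores are invented; a real prediction head derives such scores from the language model's token vectors via a learned layer:

```python
# Invented per-token start and end scores for a short passage.
tokens = ["The", "capital", "is", "Paris", "."]
start_scores = [0.1, 0.3, 0.2, 2.5, 0.0]
end_scores = [0.0, 0.1, 0.4, 2.9, 0.3]

# Score each valid span (start <= end) and keep the best one.
best_span, best_score = None, float("-inf")
for s in range(len(tokens)):
    for e in range(s, len(tokens)):
        score = start_scores[s] + end_scores[e]
        if score > best_score:
            best_span, best_score = (s, e), score

answer = " ".join(tokens[best_span[0]:best_span[1] + 1])
print(answer)  # prints "Paris"
```

Real implementations also constrain the maximum answer length and compare the best span's score against the no_answer score.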
Since language models are limited in the number of tokens that they can process in a single forward pass, a sliding window mechanism is implemented to handle variable length documents. This functions by slicing the document into overlapping passages of (approximately) max_seq_len tokens that are each offset by doc_stride tokens. Both parameters can be set when the Reader is initialized.
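The sliding window can be sketched in plain Python. The function below is a simplified stand-in for illustration, not Haystack's actual implementation (which also accounts for question tokens and special tokens within the window):

```python
# Split a token sequence into overlapping windows of max_seq_len tokens,
# each offset from the previous one by doc_stride tokens.
def split_into_passages(tokens, max_seq_len, doc_stride):
    passages = []
    start = 0
    while True:
        passages.append(tokens[start:start + max_seq_len])
        if start + max_seq_len >= len(tokens):
            break  # the last window reached the end of the document
        start += doc_stride
    return passages

doc = list(range(10))  # stand-in for a 10-token document
windows = split_into_passages(doc, max_seq_len=6, doc_stride=3)
print(windows)  # three overlapping windows covering the whole document
```

A smaller doc_stride means more overlap between passages, which reduces the risk of an answer being cut in half at a window boundary, at the cost of more forward passes.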
Predictions are made on each individual passage and the process of aggregation picks the best candidates across all passages. If you’d like to learn more about what is happening behind the scenes, have a look at this article.
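The aggregation step can be sketched as picking the highest-scoring candidate across all passages. The candidate strings and scores below are invented for illustration:

```python
# Candidate answers predicted independently per passage, as (text, score) pairs.
passage_candidates = [
    [("in 1889", 3.2), ("1889", 2.7)],  # candidates from passage 1
    [("Gustave Eiffel", 1.1)],          # candidates from passage 2
    [("in 1889", 4.0)],                 # candidates from passage 3
]

# Flatten the per-passage candidates and keep the best-scoring one overall.
all_candidates = [c for passage in passage_candidates for c in passage]
best_answer, best_score = max(all_candidates, key=lambda c: c[1])
print(best_answer)  # prints "in 1889"
```

Note that this comparison across passages is only meaningful when the per-passage scores are on a comparable scale, which is one reason the logit handling described in FARM vs Transformers above matters.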