
Evaluation

Haystack has all the tools needed to evaluate whole pipelines or individual nodes, such as Retrievers, Readers, and Generators. Evaluation and the metrics it generates are vital for:

  • Judging how well your system is performing on a given domain.
  • Comparing the performance of different models.
  • Identifying underperforming components in your pipeline.

Tutorial: This documentation page is meant to give an in-depth understanding of the concepts involved in evaluation. To get started using Haystack for evaluation, we recommend having a look at our evaluation tutorial.

Integrated vs Isolated Node Evaluation

We distinguish between two evaluation modes for Pipelines: integrated and isolated node evaluation.

In integrated evaluation, a node receives the predictions of the preceding node as input. It shows what performance users can expect when running the pipeline, and it is the default mode when you call pipeline.eval().

In isolated evaluation, a node is isolated from the predictions of the preceding node. Instead, it receives ground-truth labels as input. This mode shows the maximum performance a node can reach if it receives perfect input from the preceding node. You can activate it with pipeline.eval(add_isolated_node_eval=True).

For example, in an ExtractiveQAPipeline made up of a Retriever and a Reader, isolated evaluation measures the upper bound of the Reader's performance, that is, the Reader's performance assuming that the Retriever passes on all relevant documents.

If the isolated evaluation result of a given node differs significantly from its integrated evaluation result, we suggest that you focus on improving the preceding node to get the best results from your pipeline. If the difference is small, focus on improving the node itself to improve the pipeline's overall result quality.
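As a toy illustration of this rule of thumb (plain Python with hypothetical scores, not the Haystack API):

```python
def node_to_improve(integrated_score: float, isolated_score: float,
                    threshold: float = 0.05) -> str:
    """Suggest where to invest effort, given one node's score (e.g. the
    Reader's F1) measured in integrated and in isolated evaluation mode."""
    if isolated_score - integrated_score > threshold:
        # The node does much better with perfect input:
        # the preceding node is the bottleneck.
        return "preceding node"
    # The node is the limiting factor even with perfect input.
    return "this node"

print(node_to_improve(integrated_score=0.55, isolated_score=0.80))  # preceding node
print(node_to_improve(integrated_score=0.72, isolated_score=0.74))  # this node
```

The threshold of 0.05 is an arbitrary example value; what counts as a "significant" gap depends on your metric and use case.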

Comparison to Open and Closed Domain Question Answering

Integrated evaluation on an ExtractiveQAPipeline is equivalent to open domain question answering. In this setting, QA is performed over multiple documents, typically an entire database, and the relevant documents first need to be identified.

In contrast, isolated evaluation of a Reader node is equivalent to closed domain question answering. Here, the question is asked about a single document. There is no retrieval step involved, as the relevant document is already given.

When using pipeline.eval() in either integrated or isolated mode, Haystack evaluates the correctness of an extracted answer by looking for a match or overlap between the answer and prediction strings. Even if the predicted answer is extracted from a different position than the correct answer, it still counts as correct as long as the strings match.

Evaluating Information Retrieval with BEIR

Haystack has an integration with the Benchmarking Information Retrieval (BEIR) tool that you can use to assess the quality of your information retrieval pipelines. BEIR contains preprocessed datasets for evaluating retrieval models in 17 languages.

To use BEIR, install Haystack with the BEIR dependency:

pip install farm-haystack[beir]

After that, you can evaluate your pipelines by calling Pipeline.eval_beir(). Here is an example of how to run a BEIR evaluation for a DocumentSearchPipeline:

from haystack.pipelines import DocumentSearchPipeline, Pipeline
from haystack.nodes import TextConverter, ElasticsearchRetriever
from haystack.document_stores.elasticsearch import ElasticsearchDocumentStore

# Indexing pipeline: convert raw text files and write them to Elasticsearch
text_converter = TextConverter()
document_store = ElasticsearchDocumentStore(search_fields=["content", "name"], index="scifact_beir")
index_pipeline = Pipeline()
index_pipeline.add_node(text_converter, name="TextConverter", inputs=["File"])
index_pipeline.add_node(document_store, name="DocumentStore", inputs=["TextConverter"])

# Query pipeline: retrieve up to 1,000 candidate documents per query
retriever = ElasticsearchRetriever(document_store=document_store, top_k=1000)
query_pipeline = DocumentSearchPipeline(retriever=retriever)

# Downloads and indexes the "scifact" dataset, runs its queries, and computes the metrics
ndcg, _map, recall, precision = Pipeline.eval_beir(
    index_pipeline=index_pipeline, query_pipeline=query_pipeline, dataset="scifact"
)

For more information, see BEIR.

Metrics: Retrieval

Recall

Recall measures how many times the correct document was among the retrieved documents over a set of queries. For a single query, the output is binary: either the correct document is contained in the selection, or it is not. Over the entire dataset, the recall score amounts to a number between zero (no query retrieved the right document) and one (all queries retrieved the right documents).

In some scenarios, there can be multiple correct documents for one query. The metric recall_single_hit considers whether at least one of the correct documents is retrieved, whereas recall_multi_hit takes into account how many of the multiple correct documents for one query are retrieved.

Note that recall is affected by the number of documents that the Retriever returns: the fewer documents it returns, the harder it is for the correct ones to be among them. Make sure to set the Retriever's top_k to an appropriate value in the pipeline that you evaluate.
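The difference between the two variants can be sketched in a few lines of plain Python (the function names mirror the metric names for illustration; this is not Haystack's internal implementation):

```python
def recall_single_hit(retrieved: list, relevant: set) -> float:
    """1.0 if at least one relevant document was retrieved, else 0.0."""
    return 1.0 if any(doc in relevant for doc in retrieved) else 0.0

def recall_multi_hit(retrieved: list, relevant: set) -> float:
    """Fraction of all relevant documents that were retrieved."""
    return len(relevant.intersection(retrieved)) / len(relevant) if relevant else 0.0

# One query with three relevant documents; the Retriever returned top_k=3
retrieved = ["doc1", "doc5", "doc7"]
relevant = {"doc1", "doc2", "doc7"}
print(recall_single_hit(retrieved, relevant))  # 1.0 - at least one hit
print(recall_multi_hit(retrieved, relevant))   # 0.666... - two of three found
```

Averaging these per-query values over all queries yields the dataset-level recall score described above.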

Mean Reciprocal Rank (MRR)

In contrast to the recall metric, mean reciprocal rank takes the position of the top correctly retrieved document (the “rank”) into account. It does this to account for the fact that a query can elicit multiple responses of varying relevance. Like recall, MRR can be a value between zero (no matches) and one (the system retrieved a correct document as the top result for all queries). For more details, check out this page.
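A minimal sketch of the computation, taking one list of per-rank relevance flags per query (illustrative only):

```python
def mean_reciprocal_rank(rankings: list) -> float:
    """Average of 1/rank of the first relevant document, over all queries."""
    total = 0.0
    for relevance_flags in rankings:
        for rank, is_relevant in enumerate(relevance_flags, start=1):
            if is_relevant:
                total += 1.0 / rank
                break  # only the first relevant document counts
    return total / len(rankings)

# Query 1: correct document at rank 1; query 2: correct document at rank 4
print(mean_reciprocal_rank([[True, False], [False, False, False, True]]))  # (1 + 1/4) / 2 = 0.625
```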

Mean Average Precision (mAP)

Mean average precision is similar to mean reciprocal rank but takes into account the position of every correctly retrieved document. Like MRR, mAP can be a value between zero (no matches) and one (the system retrieved correct documents for all top results). mAP is particularly useful in cases where there is more than one correct document to be retrieved. For more details, check out this page.
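In the same style, a simplified sketch of the metric (this version averages precision over the relevant documents that were actually retrieved; production implementations may normalize differently):

```python
def average_precision(relevance_flags: list) -> float:
    """Average of the precision values at each rank where a relevant document appears."""
    hits = 0
    precisions = []
    for rank, is_relevant in enumerate(relevance_flags, start=1):
        if is_relevant:
            hits += 1
            precisions.append(hits / rank)
    return sum(precisions) / len(precisions) if precisions else 0.0

def mean_average_precision(all_rankings: list) -> float:
    """Mean of the per-query average precision values."""
    return sum(average_precision(flags) for flags in all_rankings) / len(all_rankings)

# Relevant documents at ranks 1 and 3: AP = (1/1 + 2/3) / 2 = 0.833...
print(average_precision([True, False, True]))
```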

Metrics: Question Answering

Exact Match (EM)

Exact match measures the proportion of cases where the predicted answer is identical to the correct answer. For example, for the annotated question-answer pair “What is Haystack?” + “A question answering library in Python”, even a predicted answer like “A Python question answering library” would yield a score of zero because it does not match the expected answer 100 percent.
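A simplified sketch of the metric (real EM implementations typically also lowercase the strings and strip punctuation and articles, SQuAD-style):

```python
def exact_match(prediction: str, answer: str) -> float:
    """1.0 only if the two strings are identical after whitespace normalization."""
    normalize = lambda s: " ".join(s.split())
    return 1.0 if normalize(prediction) == normalize(answer) else 0.0

print(exact_match("A Python question answering library",
                  "A question answering library in Python"))  # 0.0 - not identical
print(exact_match("A question answering library in Python",
                  "A question answering library in Python"))  # 1.0
```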

F1

The F1 score is more forgiving and measures the word overlap between the labeled and the predicted answer. Whenever the EM is 1, the F1 score is also 1. To learn more about the F1 score, check out this guide.
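A sketch of SQuAD-style token-level F1, simplified to whitespace tokenization:

```python
from collections import Counter

def f1_score(prediction: str, answer: str) -> float:
    """Harmonic mean of token precision and recall between the two strings."""
    pred_tokens = prediction.lower().split()
    ans_tokens = answer.lower().split()
    # Number of shared tokens, counting duplicates at most as often as they
    # appear in either string
    overlap = sum((Counter(pred_tokens) & Counter(ans_tokens)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(ans_tokens)
    return 2 * precision * recall / (precision + recall)

# All five predicted tokens appear in the label, so F1 is high (10/11 = 0.909...)
# even though EM for the same pair is 0.
print(f1_score("A Python question answering library",
               "A question answering library in Python"))
```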

Semantic Answer Similarity (SAS)

Semantic Answer Similarity uses a transformer-based cross-encoder architecture to evaluate the semantic similarity of two answers rather than their lexical overlap. While F1 and EM would both score “one hundred percent” as sharing zero similarity with “100 %”, SAS is trained to assign this pair a high score. SAS is particularly useful for seeking out cases where F1 doesn't give a good indication of the validity of a predicted answer.

You can try out SAS in our Evaluation Tutorial. You can read more about SAS in this paper.

Datasets

Annotated datasets are crucial for evaluating the retrieval as well as the question answering capabilities of your system. Haystack is designed to work with question answering datasets that follow the SQuAD format. Please check out our annotation tool if you're interested in creating your own dataset.

Data Tool: Have a look at our SquadData object in haystack/squad_data.py if you'd like to manipulate SQuAD-style data using Pandas dataframes.