🎄 Day 3: Making Answers Bright with Multi-Query RAG Magic ✨
Santa Claus realized his centuries-old tradition of spreading joy was missing something in the modern age—knowledge about the world beyond his snowy workshop. While he knew about children’s hopes and dreams from their letters, Santa felt increasingly out of touch with the broader events shaping their lives.
“How can I be a true global gift-giver,” Santa wondered, “if I don’t understand what’s happening in the world?”
He called an emergency meeting with tech-savvy Elf David, who had been itching for a project beyond sleigh upgrades. Together, they decided to build a Retrieval-Augmented Generation (RAG) system using Haystack, which Elf David had previously used to streamline toy inventory tracking. This time, they connected it to the BBC News feed to create a personal RAG assistant for Santa.
“One last thing,” Santa added, “I know about advanced retrieval methods. Use multi-query retrieval to improve recall—I want the most relevant answers!”
For this challenge, help Elf David create custom components for multi-query retrieval in the RAG pipeline.
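To make the idea concrete, here is a framework-agnostic sketch of multi-query retrieval, assuming nothing about the starter notebook: expand the question into paraphrases, retrieve for each one, then merge the results. In the actual challenge you would wrap the expansion and merging steps in Haystack custom components; the names `multi_query_retrieve`, `fake_expand`, and `fake_retrieve` below are hypothetical stand-ins for an LLM-based query expander and a real retriever.

```python
# Multi-query retrieval, step by step:
# 1) expand the user question into several paraphrases,
# 2) retrieve documents for each paraphrase,
# 3) merge the result lists, keeping the best score per document.
from typing import Callable, Dict, List, Tuple

Doc = Tuple[str, float]  # (document id, relevance score)

def multi_query_retrieve(
    question: str,
    expand: Callable[[str], List[str]],    # stand-in for an LLM paraphraser
    retrieve: Callable[[str], List[Doc]],  # stand-in for a retriever
    top_k: int = 3,
) -> List[Doc]:
    queries = [question] + expand(question)
    best: Dict[str, float] = {}
    for q in queries:
        for doc_id, score in retrieve(q):
            # keep the highest score seen for each document across queries
            if score > best.get(doc_id, float("-inf")):
                best[doc_id] = score
    ranked = sorted(best.items(), key=lambda d: d[1], reverse=True)
    return ranked[:top_k]

# Toy stand-ins so the sketch runs without an API key or document store:
def fake_expand(q: str) -> List[str]:
    return [f"{q} news", f"latest {q}"]

def fake_retrieve(q: str) -> List[Doc]:
    corpus = {"climate summit": 0.9, "toy market": 0.4, "sleigh tech": 0.2}
    return list(corpus.items())

results = multi_query_retrieve("world events", fake_expand, fake_retrieve)
print(results)
```

Firing several reworded queries at the retriever widens the net (better recall), and the merge step keeps the final context list small and de-duplicated before it reaches the generator.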
🎯 Requirements:
- An OpenAI API Key if you'd like to use OpenAIGenerator, but you can choose any other LLM that is supported with Generators
💡 Some Hints:
- Check out the Creating Custom Components guide to learn how to create custom Haystack components
- Some of the blog post links we used on Day 1 can be helpful 😉
🧡 Here’s the Starter Colab