2025-06-16 – Palais Atelier
Mixture of Encoders is a vector-native alternative to hybrid search that models both structured and unstructured data in a unified embedding space. We will introduce the method, show how it powers natural language search and real-time recommendations, and share open-source tools and benchmarks for replacing complex hybrid stacks.
Filters, hybrid search, rank fusion, re-ranking. Most retrieval systems today are stitched together from separate components, each tuned in isolation. There is no systematic way to integrate structured data into vector search. Ask anyone maintaining a mature Elasticsearch deployment with 100+ boosts and hand-written scoring logic whether they can still evaluate retrieval quality end to end and iterate quickly. The answer is almost always no.
To address this, you need models that understand both your unstructured and your structured data. That includes numeric, categorical, relational, spatial, and temporal metadata, all of which are critical for powering modern search, recommendations, and agentic retrieval systems. These signals drive both end-user precision and business impact. At M&S (Marks & Spencer), we solved this problem using a set of custom pipelines, but the process required significant development effort and lacked a unified framework. There is a better way.
We call our approach the Mixture of Encoders. It is a vector-native alternative to hybrid search that brings structure to retrieval by embedding each data type with a specialised encoder and composing them into a unified vector space. Text, images, categories, numerical features, and contextual signals all become searchable through a single query. This enables nuanced, real-time retrieval across modalities without relying on filters or post-processing stages.
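To make the composition idea concrete, here is a minimal sketch of embedding each attribute with its own encoder and concatenating the results into one searchable vector. The encoder choices, attribute names, and value ranges below are illustrative assumptions for a retail-style catalogue, not the Superlinked API or the exact production setup described in the talk.

```python
# Illustrative sketch only: each attribute gets its own encoder, and the
# per-attribute vectors are composed into a single unified embedding.
import numpy as np
from sentence_transformers import SentenceTransformer

text_encoder = SentenceTransformer("all-MiniLM-L6-v2")  # any text embedding model

def encode_number(value: float, lo: float, hi: float) -> np.ndarray:
    """Map a numeric attribute onto a quarter circle so similarity varies smoothly."""
    t = np.clip((value - lo) / (hi - lo), 0.0, 1.0) * np.pi / 2
    return np.array([np.cos(t), np.sin(t)])

def encode_category(value: str, vocabulary: list[str]) -> np.ndarray:
    """One-hot encode a categorical attribute over a fixed vocabulary."""
    vec = np.zeros(len(vocabulary))
    if value in vocabulary:
        vec[vocabulary.index(value)] = 1.0
    return vec

def encode_item(item: dict) -> np.ndarray:
    """Compose per-attribute vectors into one normalised embedding."""
    parts = [
        text_encoder.encode(item["description"]),          # unstructured text
        encode_number(item["price"], lo=0, hi=500),         # numeric signal
        encode_category(item["category"],
                        ["shoes", "dresses", "knitwear"]),  # categorical signal
    ]
    vec = np.concatenate([p / (np.linalg.norm(p) + 1e-9) for p in parts])
    return vec / np.linalg.norm(vec)
```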
In this talk, we will introduce the technique and show how it supports natural language query decomposition, dynamic modality weighting, and session-aware ranking, all within a single retrieval step. We will share how this approach has been deployed in production, powering retrieval in high-churn environments and contributing over $10M in incremental revenue through improved discovery and recommendation quality. To support adoption, we are also releasing open source datasets for benchmarking real-world information retrieval tasks, along with open source demo implementations that show how to apply the Mixture of Encoders to your own data and use cases.
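Dynamic modality weighting can be sketched in the same illustrative setting: scaling each encoder's segment of the concatenated query vector changes how strongly that attribute influences a single nearest-neighbour search, with no filters, rank fusion, or post-processing. The function and slice layout below are assumptions for the sketch above, not the production system presented in the talk.

```python
# Illustrative query-time modality weighting over a concatenated embedding.
import numpy as np

def weight_segments(query_vec: np.ndarray,
                    segment_slices: dict[str, slice],
                    weights: dict[str, float]) -> np.ndarray:
    """Scale each encoder's slice of the query vector, then renormalise."""
    out = query_vec.copy()
    for name, seg in segment_slices.items():
        out[seg] *= weights.get(name, 1.0)
    return out / (np.linalg.norm(out) + 1e-9)

# A query parsed as "affordable knitwear" might up-weight price and category
# (slice boundaries match the 384-dim text, 2-dim price, 3-dim category sketch):
# weighted = weight_segments(
#     q,
#     {"text": slice(0, 384), "price": slice(384, 386), "category": slice(386, 389)},
#     {"price": 2.0, "category": 1.5},
# )
```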
This talk is sponsored by Superlinked
Search, Data Science
Level: Intermediate
Filip Makraduli is a machine learning engineer and developer advocate with a strong background in AI systems, vector search, and large language models (LLMs). He holds a Master’s degree in Biomedical Data Science from Imperial College London. Currently, Filip works as a founding developer relations engineer at Superlinked, where he focuses on building real-time, multi-attribute search and recommendation systems. His work emphasizes the use of multi-encoder architectures to enhance retrieval quality and reduce reliance on reranking strategies.

In the past, Filip worked as a data scientist at Marks & Spencer, where he contributed to AI-driven solutions for retail. He has also held machine learning engineering roles across several UK-based startups, focusing on applied AI and product-oriented ML development. In addition to his industry work, Filip has been active in the open-source community, particularly around LLM tooling and pipelines. He has delivered various talks on practical machine learning applications, including a presentation on AI-powered music recommendation systems titled “When music just doesn’t match our vibe, can AI help?”

Filip is passionate about bridging the gap between cutting-edge AI research and real-world applications, particularly in the areas of personalization, search, and recommendation systems. He also has a strong interest in the business side of technology, especially how product, research, and engineering decisions align with go-to-market strategies, developer adoption, and long-term commercial value.