2025-06-16 – Palais Atelier
Modern applications demand search capabilities that go beyond basic text matching—they need to be fast, accurate, personalized, and context-aware. This session demonstrates how OpenSearch's latest AI/ML enhancements and engine improvements enable organizations to build intelligent, scalable search experiences that meet these evolving needs.
Observability, log analytics, GenAI systems, and RAG pipelines must query massive volumes of semantic embeddings to retrieve relevant content instantly. Today’s search systems often fall short in handling high-dimensional vector data and similarity search.
OpenSearch 3.0 brings significant architectural improvements to address these challenges. By integrating Apache Lucene 10 and adopting JDK 21, the platform delivers queries roughly 20% faster than its 2.x predecessor and up to 10× the throughput of 1.x versions. New features like GPU-accelerated vector indexing and concurrent segment search dramatically improve k-NN query performance while reducing operational costs.
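To make the concurrent segment search feature concrete: it is exposed as a dynamic cluster setting that can be toggled through the cluster-settings API. The sketch below builds the request bodies only; the field name `embedding` and dimension 384 are illustrative assumptions, not OpenSearch defaults, and sending the requests to a live cluster is left out.

```python
import json

# Dynamic cluster setting enabling concurrent segment search
# (PUT _cluster/settings on a running OpenSearch cluster).
concurrent_search_settings = {
    "persistent": {
        "search.concurrent_segment_search.enabled": True
    }
}

# Index body for a k-NN-enabled index. "embedding" and dimension 384
# are placeholder choices for this sketch.
knn_index_body = {
    "settings": {"index": {"knn": True}},
    "mappings": {
        "properties": {
            "embedding": {"type": "knn_vector", "dimension": 384}
        }
    },
}

print(json.dumps(concurrent_search_settings))
```

In practice these bodies would be sent with an HTTP client (or the opensearch-py SDK) to `PUT _cluster/settings` and `PUT /<index-name>` respectively.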
The platform's expanded AI capabilities now include an advanced Vector Engine for approximate k-NN searches and neural sparse search for efficient text indexing. These innovations, combined with optimized embedding ingestion and query-time pruning, enable organizations to build performant, cost-effective search solutions that scale with their needs.
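As a rough sketch of the two query styles mentioned above, the request bodies below show an approximate k-NN query against a `knn_vector` field and a neural sparse query that expands the query text at search time. The field names (`embedding`, `passage_embedding`), the 4-dimensional vector, and the model ID placeholder are all illustrative assumptions, not values from this session.

```python
import json

# Approximate k-NN: return the 10 nearest neighbors to a query vector.
# Field name "embedding" and the short vector are placeholders.
knn_query = {
    "size": 10,
    "query": {
        "knn": {
            "embedding": {
                "vector": [0.12, -0.45, 0.91, 0.33],
                "k": 10,
            }
        }
    },
}

# Neural sparse search: the query text is expanded into weighted terms
# by a sparse encoding model referenced via model_id (placeholder here).
neural_sparse_query = {
    "query": {
        "neural_sparse": {
            "passage_embedding": {
                "query_text": "how do I scale vector search?",
                "model_id": "<sparse-encoder-model-id>",
            }
        }
    },
}

print(json.dumps(knn_query))
```

Either body would be POSTed to the index's `_search` endpoint; the sparse variant additionally assumes a deployed sparse encoding model.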
We'll explore practical applications of these features, demonstrating how OpenSearch 3.0 powers the next generation of AI-driven search experiences.
This session is sponsored by OpenSearch
Search
Level: Intermediate
Saurabh is a Software Development Manager at AWS, leading the core search, release, and benchmarking areas of the OpenSearch Project. His passion lies in finding solutions for intricate challenges within large-scale distributed systems.