Our Sessions

Have a look at our sessions for the next edition of the Haystack Conference on May 5-7, 2026 in Charlottesville, VA. If you like what you see, you can get a ticket here.

Search rankers coded by agents

Talk
May 6, 2026, 3:20 pm - 4:05 pm
Main Stage
Could AI code generation replace Learning to Rank? AI coding tools can generate rankers, but only up to a point. What techniques matter when building an agent-coded ranker? And where do traditional search techniques still work?

Learning to Understand: A Missing Stage of Modern Retrieval

Talk
May 6, 2026, 10:20 am - 11:05 am
Main Stage
We introduce “Learning to Understand” as a corollary to the well-known “Learning to Rank” process. By using evals to learn domain-specific query interpretation and rewriting rules, and combining them with semantic statistics from your index, it’s possible to significantly improve search quality beyond typical BM25, vector, and hybrid search techniques.

Agentic Tuning: Search Relevance on Autopilot

Talk
May 7, 2026, 9:15 am - 10:00 am
Main Stage
Search relevance tuning is notoriously difficult, often requiring a deep understanding of Lucene scoring, complex query DSLs, and iterative manual testing. This session introduces Agentic Relevance Tuning, a framework that leverages LLM-based agents to automate the full search lifecycle, making search tuning faster, more accurate, and more accessible.

From 0 to Production with BBQ at GitHub

Talk
May 6, 2026, 11:15 am - 12:00 pm
Main Stage
Rolling out semantic search is easy, right? Just turn on some vectors and bim bam boom, you have vector search... Right? It turns out that when you're GitHub-sized, it's not quite that easy. We'll walk through the process we took, the lessons we've learned, and how you can build a plan to deploy vector search more easily.

Adaptive Relevance with Agentic Search

Talk
May 7, 2026, 3:05 pm - 3:50 pm
Main Stage
Traditional search pipelines rely heavily on static query parsing and after-the-fact relevance analysis. In this session, we present a new paradigm: using LangGraph with OpenSearch to create an agentic system that can tune hybrid search in real time.

Managing Search Teams: Field Stories & Practical Takeaways

Talk
May 7, 2026, 2:05 pm - 2:50 pm
Main Stage
Even though search teams are structured differently across the industry, they share common challenges like balancing learning with delivery and nurturing a culture built for continuous iteration. This talk distills a decade of organizational lessons from building Yelp’s AI-powered search into repeatable patterns for any team facing similar hurdles.

Lightning Talks

Talk
May 6, 2026, 4:10 pm - 5:25 pm
Main Stage
Lightning Talks

Closing

Talk
May 7, 2026, 4:45 pm - 5:00 pm
Main Stage
Closing

Welcome

Talk
May 6, 2026, 9:00 am - 9:30 am
Main Stage
Welcome

AI Governance: Crafting Your Own AI Experiences

Talk
May 6, 2026, 9:30 am - 10:15 am
Main Stage
In a world where AI shapes what we see, think, and do, true engineering lies not in using tools, but in designing them.

Why your B2B search engine doesn’t understand your users

Talk
May 6, 2026, 2:20 pm - 3:05 pm
Main Stage
This talk uses a real-world B2B search case to show how a decision tree helps quickly diagnose why search fails, and how to improve relevance without rebuilding the system.

When BM25 Scores Disagree: A Corpus-Independent Alternative

Talk
May 7, 2026, 4:00 pm - 4:45 pm
Main Stage
In distributed search, BM25 returns different results across nodes because IDF and average document length vary with each node's corpus state. StableTfl replaces these with a term-length rarity heuristic, eliminating all corpus dependency. On 22 BEIR datasets, it retains ~90% of BM25's NDCG@10 while guaranteeing identical rankings across nodes.
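The divergence this abstract describes follows directly from how BM25 is computed. As a minimal illustration (this is the standard Lucene-style BM25 IDF, not the talk's StableTfl heuristic, and the shard statistics below are invented for the example), the same term earns a different IDF on two nodes whose local corpora differ:

```python
import math

def bm25_idf(doc_count: int, doc_freq: int) -> float:
    # Lucene-style BM25 IDF. Both inputs are *per-node* corpus
    # statistics, which is why scores drift across shards.
    return math.log(1 + (doc_count - doc_freq + 0.5) / (doc_freq + 0.5))

# Two shards holding different slices of an index: the same query
# term gets a different weight on each, so rankings can disagree.
shard_a = bm25_idf(doc_count=100_000, doc_freq=1_200)
shard_b = bm25_idf(doc_count=80_000, doc_freq=400)
print(f"shard A idf={shard_a:.3f}, shard B idf={shard_b:.3f}")
```

A corpus-independent scheme such as the one proposed here would replace `doc_count` and `doc_freq` (and the average document length in BM25's length normalization) with statistics that do not depend on which node answers the query.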

Do we still need search engines?

Talk
May 7, 2026, 1:15 pm - 2:00 pm
Main Stage
Search has a new User Interface! All search will be Agentic/RAG and delivered through a chat interface! The days of the monolithic search engine are over!

LLMs as Rerankers: A Case Study on Hybrid Email Search

Talk
May 7, 2026, 11:00 am - 11:45 am
Main Stage
Purpose-built rerankers are faster and cheaper, but are they better? We argue LLM rerankers win on what matters most in production: instruction-following and iteration speed, with more-than-acceptable tradeoffs on cost and latency. Our discussion is backed by a case study from Superhuman's production hybrid email search system.

Evolution of Relevance Engineering to Context Engineering

Talk
May 7, 2026, 10:05 am - 10:50 am
Main Stage
As search powers RAG and agentic systems, relevance goals shift from ranking documents to assembling effective context. This talk explores how traditional lexical, semantic, and hybrid relevance changes when feeding LLMs, with lessons on chunking and snippet extraction, diversification, evaluation, and more.