Search rankers coded by agents

Session Abstract

Could AI code generation replace Learning to Rank? AI coding tools can generate rankers, but only up to a point. What techniques matter when building an agent-coded ranker? And where do traditional search techniques still work?

Session Description

Why can’t I just go to Claude Code and say:

“build a function that returns the most relevant results possible.”

In this talk we’ll give an AI coding agent a few basic primitives – BM25 retrieval, vector retrieval, and query-to-category similarity. Then we let the agent code search functions built on these tools, measure whether relevance improves on test and holdout sets, and continue to iterate.
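To make the setup concrete, here is a minimal sketch of the kind of ranking function such an agent might emit, assuming the three primitive scores are precomputed per query-document pair. The `Candidate` type, field names, and blend weights are all hypothetical, not from the talk itself:

```python
from dataclasses import dataclass

# Hypothetical candidate document: the three primitive scores
# (BM25, vector similarity, query-to-category similarity) are
# assumed to be precomputed by the retrieval layer.
@dataclass
class Candidate:
    doc_id: str
    bm25: float          # lexical retrieval score
    vector_sim: float    # embedding cosine similarity
    category_sim: float  # query-to-category similarity

def rank(candidates, w_bm25=1.0, w_vec=2.0, w_cat=0.5):
    """One plausible agent-written ranker: a weighted blend of
    the three primitives, sorted descending. Weights here are
    illustrative; the agent would tune them against a test set."""
    def score(c):
        return (w_bm25 * c.bm25
                + w_vec * c.vector_sim
                + w_cat * c.category_sim)
    return sorted(candidates, key=score, reverse=True)

# Usage: two toy candidates, one lexically strong, one semantically strong.
docs = [
    Candidate("a", bm25=8.0, vector_sim=0.2, category_sim=0.1),
    Candidate("b", bm25=3.0, vector_sim=0.9, category_sim=0.8),
]
print([c.doc_id for c in rank(docs)])  # → ['a', 'b']
```

Each round of agent iteration amounts to rewriting `rank` (or its weights) and re-measuring relevance.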

Informed by data, we’ll discuss the best techniques found on several open datasets. We’ll see the promise and limitations of agentic rerankers. Where does traditional search experience still matter? Where does it fall apart? Can an approach like this replace learning to rank?

We’ll see where code generation stops being vibe coding and evolves to become actual model training – with code as the model.
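The training analogy rests on scoring each code revision against a relevance metric on held-out queries. A common choice (an assumption here, not named in the abstract) is NDCG; a minimal sketch:

```python
import math

def dcg_at_k(relevances, k=10):
    """Discounted cumulative gain over graded relevances in ranked order."""
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances[:k]))

def ndcg_at_k(relevances, k=10):
    """NDCG: DCG normalized by the ideal (sorted-descending) ordering."""
    ideal = dcg_at_k(sorted(relevances, reverse=True), k)
    return dcg_at_k(relevances, k) / ideal if ideal > 0 else 0.0

# The loop, conceptually: the agent proposes a ranker, we score its
# output ordering on a test split, and keep the change only if the
# holdout split agrees -- the holdout plays the role of a validation
# set, and the code revision plays the role of a training step.
print(ndcg_at_k([3, 2, 3, 0, 1]))
```

When the metric on the holdout, not taste, decides which code revision survives, the process starts to look like model training with code as the model.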

Main Stage
06.May 2026
15:20 - 16:05
Talk