Haystack Agenda

Talks from the Search & Relevance Community at the Haystack Conference!

Day 1, Wednesday, April 24th

At-a-Glance (scroll down for a detailed schedule)

8:00-9:00am Registration and Continental Breakfast
9:00-9:45am Opening Keynote
10:00-10:45am Concurrent Session 1
11:00-11:45am Concurrent Session 2
11:45am-1:15pm Lunch (on your own)
1:15-2:00pm Concurrent Session 3
2:15-3:00pm Concurrent Session 4
3:15-4:45pm Lightning Talks
5:30-6:30pm Reception (optional, included)
6:30-8:00pm Dinner (included)
8:00-9:00
Registration and Continental Breakfast

Violet Crown Charlottesville  •  200 West Main St, Charlottesville, VA 22902

9:00-9:45
What is Search Relevance?

Max Irwin

Theater 5 Keynote

Track One | Track Two
10:00-10:45
Rated Ranking Evaluator: an Open Source Approach for Search Quality Evaluation

Alessandro Benedetti (Sease)

Every team working on Information Retrieval software struggles with the task of evaluating how well their system performs in terms of search quality...

Theater 5 Evaluation

Ontology and Oncology: NLP for Precision Medicine

Sean Mullane (University of Virginia)

This session gives an overview of the importance of precision medicine in cancer treatment and describes an approach used by UVA in the TREC 2018 Precision Medicine workshop.

Theater 4 NLP

11:00-11:45
Making the case for human judgement relevance testing

Tara Diedrichsen and Tito Sierra (LexisNexis)

Supporting a robust relevance testing programme using human judges represents a significant investment in terms of time and resources. However, when executed well the outputs can quickly become...

Theater 5 Evaluation

Autocomplete as Relevancy

Rimple Shah

Autocomplete is a staple feature for search applications. This feature (also called auto-suggest, search-as-you-type, or type-ahead) has become an expected part of an engaging, user-friendly search experience.

Theater 4 NLP

11:45-1:15
Lunch on your own
1:15-2:00
Towards a Learning To Rank Ecosystem @ Snag - We've got LTR to work! Now what?

Xun Wang (Snag)

As the largest online marketplace for hourly jobs in the US, Snag strives to connect millions of job seekers with part/full time, hourly and on-demand employment opportunities.

Theater 5 Learning to Rank

Query relaxation - a rewriting technique between search and recommendations

Rene Kriegler

In search quality optimisation, various techniques are used to improve recall, especially in order to avoid empty search result sets.

Theater 4 Query Rewriting

2:15-3:00
Evolution of Yelp search to a generalized ranking platform

Umesh Dangat (Yelp)

Elasticsearch forms the backbone of Yelp's core search. The Learning to Rank Elasticsearch plugin is one of the key tools that has transformed the Yelp Search team...

Theater 5 Learning to Rank

Beyond The Search Engine: Improving Relevancy through Query Expansion

Taylor Rose (Ibotta)

Due to a variable inventory and an ephemeral data set, users often search for terms that are outside of our corpus. This leads to empty search result sets, despite...

Theater 4 Query Rewriting

3:15-4:45
Lightning Talks

Informal, ad-hoc, spontaneous 5 minute lightning talks on search, relevance, information retrieval, and our community! We'll line up and count down from 5 minutes :)

Theater 5

5:30-6:30
Reception (optional, included)

Kardinal Hall  •  722 Preston Ave, Charlottesville, VA 22903

6:30-8:00
Dinner (included)

Kardinal Hall  •  722 Preston Ave, Charlottesville, VA 22903

Day 2, Thursday, April 25th

At-a-Glance (scroll down for a detailed schedule)

8:00-9:00am Continental Breakfast
9:00-9:45am Concurrent Session 5
10:00-10:45am Concurrent Session 6
11:00-11:45am Concurrent Session 7
11:45am-1:15pm Lunch (on your own)
1:15-2:00pm Concurrent Session 8
2:15-3:00pm Concurrent Session 9
3:15-4:00pm Concurrent Session 10
4:00pm Conference Closing
8:00-9:00
Breakfast service

Violet Crown Charlottesville  •  200 West Main St, Charlottesville, VA 22902

Track One | Track Two
9:00-9:45
Addressing variance in AB tests: Interleaved evaluation of rankers

Erik Bernhardson (Wikimedia)

Evaluation of search quality is essential for developing effective rankers. Interleaved comparison methods achieve statistical significance with less data than...

Theater 4 Evaluation

How The New York Times Tackles Relevance

Jeremiah Via (The New York Times)

The New York Times has had search for a long time but 2018 was the year in which the company engaged with relevance in a deep way. The aim of this talk is to share...

Theater 5 Use Case

10:00-10:45
Solving for Satisfaction: Introduction to Click Models

Elizabeth Haubert (OpenSource Connections)

Relevance metrics like NDCG or ERR require graded judgements to evaluate query relevance performance. But what happens when we don't know what 'good'...

Theater 5 Evaluation

Establishing a relevance focused culture in a large organization

Tom Burgmans (Wolters Kluwer)

For a relevance engineer one of the most difficult tasks in the tuning process is to convince others in the organization that this is a joint effort. Even the brightest...

Theater 4 Use Case

11:00-11:45
Custom Solr Query Parser Design Option, and Pros & Cons

Bertrand Rigaldies (OpenSource Connections)

Does your search application include a custom query syntax with various search operators such as Booleans, proximity, term or phrase frequency, capitalization, quoted text...

Theater 4 Misc

Architectural considerations on search relevancy in the context of e-commerce

Johannes Peter (Media Markt Saturn)

With an increasing amount of relevancy factors, relevancy fine-tuning becomes more complex as changing the impact of factors produces increasingly more...

Theater 5 Use Case

11:45-1:15
Lunch on your own
1:15-2:00
Improving Search Relevance with Numeric Features in Elasticsearch

Mayya Sharipova

Recently Elasticsearch has introduced a number of ways to improve search relevance of your documents based on numeric features. In this talk I will present the newly introduced field types...

Theater 5 Learning to Rank

Search Logs + Machine Learning = Auto-Tagging Inventory

John Berryman (Eventbrite)

For e-commerce applications, matching users with the items they want is the name of the game. If they can't find what they want then how can they buy anything?!

Theater 4 NLP

2:15-3:00
Learning to Rank Panel

Theater 5 Learning to Rank


Natural Language Search with Knowledge Graphs

Trey Grainger (Lucidworks)

To optimally interpret most natural language queries, it is necessary to understand the phrases, entities, commands, and relationships represented or implied within...

Theater 4 NLP

3:15-4:00
Search with Vectors

Simon Hughes (Dice Holdings Inc.)

With the advent of deep learning and algorithms like word2vec and doc2vec, vectors-based representations are increasingly being used in search to represent anything from documents...

Theater 5 Misc

Search-based recommendations at Politico

Ryan Kohl (Politico)

Over the past year, the POLITICO team has developed a recommendation system for our users, which recommends not only news content to read but also news topics to subscribe to.

Theater 4 Misc

4:00
Conference Wrap-up