Haystack Europe 2018

The Search Relevance Conference! Sponsored by OpenSource Connections and Flax

Haystack is the conference for improving search relevance. If you're like us, you work to understand the shiny new tools or dense academic papers out there that promise the moon. Then you puzzle over how to apply those insights to your search problem, in your search stack. But the path isn't always easy, and the promised gains don't always materialize.

Haystack is the no-holds-barred conference for organizations where search, matching, and relevance really matter to the bottom line. It's for search managers, developers & data scientists finding ways to innovate, seeing past the silver bullets, and sharing what has actually worked well for their unique problems. Please come share and learn!

Haystack Europe 2018 is organised by relevance specialists and partners OpenSource Connections and Flax.


9:00-9:30
Doors Open, Registration
9:30-9:45
Welcome
Charlie Hull, Flax
9:45-10:25
Keynote - SOLR-8542
Doug Turnbull, OpenSource Connections

Adding Learning to Rank to Solr has provided tremendous benefit to the search relevance community. Why did Bloomberg spend so much to just give it all away? See this 'one cool trick' that creates real extensibility, reuse, and maintainability for your org's software initiatives.

10:25-11:05
Getting started with search tuning and search relevance
Karen Renshaw, Grainger Global Online

Search tuning and relevance can seem daunting. Search attracts a lot of noise from around the organization: everyone has an opinion, and everyone thinks it's easy, but it's a long-term investment. Drawing on her many years of experience managing search teams, Karen will cover how to get started, set objectives, and manage expectations across the organization, how to consider the holistic search experience, and how to measure and understand results and create ongoing plans.

11:05-11:25
Coffee break
11:25-12:05
Visualizing search results
Sebastian Russ, Tudock

Demystifying onsite search by creating transparency for our clients is our main focus and motivation. Our clients face challenges like a lack of transparency in onsite search, especially in terms of ranking and search result quality. There might be a sudden change in the staff managing onsite search, a general lack of internal resources dedicated to search, and old, undocumented artifacts that can lead to confusion, frustration and wrong assumptions.

12:05-12:45
Search quality evaluation: tools and techniques
Alessandro Benedetti & Andrea Gazzarini, Sease

Every search engineer ordinarily struggles with the task of evaluating how well a search engine is performing. Improving the correctness and effectiveness of a search system requires a set of tools that help measure the direction in which the system is going. The talk will describe the Rated Ranking Evaluator (RRE) from a developer perspective. RRE is an open source search quality evaluation tool that can be used to produce a set of deliverable reports and can be integrated within a continuous integration infrastructure.

12:45-13:30
Lunch
13:30-14:10
Learning Learning To Rank
Torsten Köster & Fabian Klenk (Shopping24), René Kriegler (Freelancer)

At Shopping24, we have recently started to apply machine learning to the search result ranking on our Solr-based product search platform. We could easily train a ranking model using open source software and deploy it to Solr. However, we soon realised that this was only the easier part, and that we had to put our efforts into the tasks and processes that empower us to train a successful model, such as gathering valid training data, preparing judgement lists, feature engineering, expectation management, computing offline search quality metrics, and connecting offline and online metrics through A/B testing.

14:10-14:50
From user actions to better rankings: Challenges of using search quality feedback for learning to rank
Agnes Van Belle, TextKernel

In this talk we’ll describe how we used different types of user feedback (both implicit and explicit) to improve search products that match vacancies to CVs using Learning to Rank (LTR). We will focus on the pitfalls and surprising results we encountered when trying to leverage both types of feedback for LTR. Although there is a variety of literature on how to set up a system for explicit annotations, as well as much literature on how to model user click behaviour in search engines, the goal of using such feedback to train a reranker is rarely addressed directly.

14:50-15:10
Tea break
15:10-15:50
A visual approach to search strategy formulation
Tony Russell-Rose, UXLabs

Knowledge workers (such as healthcare information professionals, patent agents and legal researchers) need to undertake complex search tasks to identify relevant documents and insights within large domain-specific repositories and collections. The traditional solution is to use the line-by-line query builders offered by proprietary database vendors. However, these offer limited support for error checking or query optimization, and their output can often be compromised by errors and inefficiencies.

15:50-16:40
Panel
Hosted by René Kriegler
16:40-17:00
Closing Remarks
Charlie Hull, Doug Turnbull
17:15-19:15 (approx)
Drinks reception (venue to be announced)