Haystack Search Relevance Training - April 22nd and 23rd
Training to be held prior to the Haystack conference
New to relevance? Want to solidify your foundation? Or take things to an advanced level? OpenSource Connections will be offering their "Think Like a Relevance Engineer" and Learning to Rank trainings prior to the conference.
Available Pre-Conference Training:
OpenSource Connections will be offering their "Think Like a Relevance Engineer" training prior to the conference (April 22nd and 23rd). This course gives you a foundation for solving applied Information Retrieval problems with Solr or Elasticsearch. Training is hosted at the OpenSource Connections offices.
From the team behind Relevant Search - learn to improve search quality from the best experts in the field. Bust through the mystique around topics like 'cognitive search' and 'Learning to Rank'. In two days, we teach you to measure, tune, and improve search quality. You'll come to appreciate relevance tuning, from TF*IDF to boosts to Learning to Rank. You'll begin your journey toward search & discovery applications that seem to 'get' users and deliver business results.
'Think Like a Relevance Engineer' has helped me think differently about how I solve Solr & Elasticsearch relevance problems
What a positive experience! We have so many new ideas to implement after attending 'Think Like a Relevance Engineer' training.
What you'll get out of it
- How to measure search quality
- Removing the fear from relevance experimentation
- Becoming 'hypothesis driven' in relevance work
- Elasticsearch relevance tuning techniques
- Building semantic search
- Implementing machine learning to improve relevance (i.e. Learning to Rank)
- Access to the best experts in the world on Elasticsearch relevance for brainstorming your problems
Day One - Managing, Measuring, and Testing Search Relevance
This day helps the class understand how working on relevance requires different thinking than other engineering problems. We teach you to measure search quality, take a hypothesis-driven approach to search projects, and safely 'fail fast' towards ever-improving business KPIs.
- What is search?
- Holding search accountable to the business
- Search quality feedback
- Hypothesis-driven relevance tuning
- User studies for search quality
- Using analytics & clicks to understand search quality
Day Two - Engineering Relevance with Elasticsearch|Solr
This day demonstrates relevance tuning techniques that actually work. Relevance can't be achieved by just tweaking field weights: boosting strategies, synonyms, and semantic search are discussed. The day closes with an introduction to machine learning for search (aka "Learning to Rank").
- Getting a feel for Elasticsearch|Solr
- Signal Modeling (data modeling for relevance)
- Dealing with multiple, competing objectives in search relevance
- Synonym strategies that actually work
- Taxonomy-based Semantic Search
- Introduction to Learning to Rank
You'll also receive a copy of Relevant Search, written by OpenSource Connections CTO Doug Turnbull and OpenSource Connections alum John Berryman.
Learning to Rank Training
In this two-day training course offered April 22nd and 23rd, we go hands-on with machine learning tools to improve relevance. Using an open source search engine, we see our models come to life, see the pitfalls of letting machines control relevance, and work to mitigate those pitfalls. From 'Hello World' to Learning to Rank, you'll pick up lessons you can avoid learning the hard way. See how you can apply open source tooling to optimize your search results with machine learning. Ideal for production-focused data scientists, relevance engineers, machine learning engineers, or search developers. Some familiarity with a search engine is expected.
Day One: Hands on Basics
We get hands-on with movie data and a simple judgment list, and see where things can go wrong!
- Search Relevance as a Machine Learning Problem
- Cutting your Teeth With Your First Model
- What's Wrong with My Judgments?
- Iterating on Features
- Choosing the Best Learning to Rank Model
Day Two: Real-World Learning to Rank
At scale, in real search applications, these are the concerns that will rear their heads.
- Dealing with Presentation Bias
- Including Non-User Relevance Concerns (Business Rules and Marketplace concerns)
- Model Verification, Checks, and Balances
- Personalization and Recommendations with Learning to Rank
- Including Embeddings and Other Exotic Features
- The Next Frontiers of ML and Search
Community Training & Hacking Welcome!
Have training or a hackathon you'd like to offer prior to the conference? Contact Us to discuss how we can help you find a venue and advertise your event!