Stop Hallucinations and Half-Truths in Generative Search

Colin Harman • Location: Theater 5

You integrated a Large Language Model into your search system to interpret queries, answer questions, and summarize search results. Congratulations, you are now running Generative Search!

Then, disaster strikes: The LLM shows users a hallucinated, non-factual answer, even though your relevance system worked perfectly and you grounded the model with search results! The trust painstakingly built up by your search stack is gone in an instant, and users may abandon your platform.

Your generative search story doesn’t need to end like this. Many strategies, both pre- and post-generation, can mitigate the risk of showing users answers that are false or partly true with respect to search results. In this talk we will explore examples of the problem, understand the root causes, and dive into proven solutions.

Techniques covered include reranking, user warnings, fact-checking systems, LLM usage patterns, prompting, and fine-tuning.
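By way of illustration (this sketch is not from the talk; the helper names, the word-overlap heuristic, and the threshold are assumptions), here is a minimal Python example of two of those ideas: grounding the prompt in retrieved passages, and a crude post-generation check that flags answer sentences with little lexical support in the results so a user warning can be shown.

```python
# Minimal sketch: prompt grounding + a crude post-generation support check.
# The overlap heuristic and 0.5 threshold are illustrative assumptions, not
# a production fact-checking system.
import re


def build_grounded_prompt(question: str, passages: list[str]) -> str:
    """Instruct the model to answer only from the supplied passages."""
    context = "\n\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    return (
        "Answer the question using ONLY the passages below. "
        "If the passages do not contain the answer, say you don't know.\n\n"
        f"Passages:\n{context}\n\nQuestion: {question}\nAnswer:"
    )


def unsupported_sentences(answer: str, passages: list[str],
                          threshold: float = 0.5) -> list[str]:
    """Return answer sentences whose content words are mostly absent from every passage."""
    passage_words = set(re.findall(r"[a-z']+", " ".join(passages).lower()))
    flagged = []
    for sentence in re.split(r"(?<=[.!?])\s+", answer.strip()):
        words = [w for w in re.findall(r"[a-z']+", sentence.lower()) if len(w) > 3]
        if not words:
            continue
        support = sum(w in passage_words for w in words) / len(words)
        if support < threshold:
            flagged.append(sentence)
    return flagged


if __name__ == "__main__":
    passages = ["The pump was serviced in March 2022 and passed inspection."]
    answer = "The pump was serviced in March 2022. It was replaced entirely in 2023."
    for s in unsupported_sentences(answer, passages):
        print("Show a user warning for:", s)
```

A real fact-checking step would rely on entailment models or citation verification rather than word overlap, but the overall pattern of gating the generated answer before it reaches the user is the same.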


Colin Harman

Nesh

Colin is the Head of Technology at Nesh, which provides search and NLP-driven task automation to heavy industries. He loves solving "boring" problems by combining the latest NLP technologies with a deep understanding of the domain.