Zucchini or Cucumber? Benchmarking Embeddings for Similar Image Retrieval thanks to your weekly grocery shopping

Paul-Louis Nech • Location: Theater 7 • Haystack 2024

“The natural world is harsh and full of surprises, such as edible-looking food that actually kills you. For thousands of years, humans have faced hard questions for their survival, such as “is this mushroom a great source of nutrients, or the last thing I will ever eat?” Modern-day foragers browsing Instacart may be in a safer place than our unlucky ancestors, but they still face similar challenges when adding something to their cart: squinting to differentiate zucchinis from cucumbers, green apples from green limes, and other similar-looking food items.

Can artificial intelligence help us solve this age-old problem? Image-based recommendations are a good technical solution for users facing too many food choices. They rely on embedding techniques to project images into a vector space where comparisons can be made. This raises the question: can they do better than us humans on items as ambiguous as zucchinis and cucumbers?
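To make the embedding idea concrete, here is a minimal sketch of how such comparisons work. The four-dimensional vectors below are toy values standing in for real image embeddings (which a model such as CLIP would produce); only the cosine-similarity computation itself is standard.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors (1.0 = same direction)."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy "embeddings": zucchini and cucumber point in nearly the same
# direction (visually similar), while lime points elsewhere.
zucchini = [0.9, 0.8, 0.1, 0.0]
cucumber = [0.8, 0.9, 0.2, 0.1]
lime     = [0.1, 0.0, 0.9, 0.8]

print(cosine_similarity(zucchini, cucumber))  # high: hard to tell apart
print(cosine_similarity(zucchini, lime))      # low: easy to tell apart
```

A retrieval or recommendation system ranks candidate items by exactly this kind of score, which is why near-identical vegetables are the hard case a benchmark needs to probe.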

To evaluate the differentiation capabilities of different image vectorization models on such items, we built a benchmark based on the “A Hierarchical Grocery Store Image Dataset with Visual and Semantic Labels” dataset.

In this talk, we present the process of building the benchmark, the results of our evaluation, and the lessons we learned from them to improve our Image Recommendation API. The audience will learn what makes a dataset suitable for evaluating a problem, how to pick the right evaluation metrics, when to stop evaluating, and some subtle considerations in communicating benchmark results to other stakeholders on your team. They will discover several hurdles we encountered when turning this dataset into a benchmarking tool, hurdles that apply to many other datasets, along with useful tips for evaluating their own recommender systems on other specific domains with whatever data is available!”
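One of the metrics commonly used for this kind of retrieval benchmark is precision@k: the fraction of the top-k retrieved items that are actually relevant. The talk does not specify its exact metric choices, so this is an illustrative sketch with made-up grocery labels:

```python
def precision_at_k(retrieved, relevant, k):
    """Fraction of the top-k retrieved items that are in the relevant set."""
    top_k = retrieved[:k]
    return sum(1 for item in top_k if item in relevant) / k

# Hypothetical query: a photo of a zucchini. The system returned these
# items, ranked best-first; only "zucchini" results count as relevant.
retrieved = ["cucumber", "zucchini", "lime", "zucchini"]
relevant = {"zucchini"}

print(precision_at_k(retrieved, relevant, 2))  # 0.5: one of the top 2 is right
```

A cucumber ranked above the true zucchini is exactly the failure mode a similar-item benchmark is designed to surface.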

Download the Slides • Watch the Video

Paul-Louis Nech


Paul-Louis Nech is a machine learning engineer with 10 years of experience crafting software for a global audience. Throughout his career, he has leveraged advanced algorithms to empower developers and users alike. From SwiftKey to Algolia, Paul-Louis has built tools that bridge the gap between state-of-the-art algorithms (which work in the lab) and end products (which succeed in the wild). His recent focus on artificial intelligence applications has yielded libraries for voice applications, APIs to analyze user intent, and image recommendation models.