Maximizing Multimodal: Exploring the search frontier of text-image models to improve visual find-ability for creatives

Nathan Day • Location: Theater 5

Objective: Describe where and how we have improved the search experience in our product with open source multimodal models and libraries. Real-world examples from the things we have shipped (and decided not to ship) to production, including A/B test results of our relevancy changes.

Outline:

  1. Cover the architecture of our open source hybrid search stack at Eezy (Elasticsearch, FAISS, PyTorch models).
  2. Describe the capabilities and limitations of OpenCLIP (and vector embeddings at large) for retrieval tasks, along with current pain points and the workarounds we have engineered (a minimal hybrid-retrieval sketch follows this outline).
  3. Highlight meaningful stops on our product roadmap from the last 2 years of deploying features into production.
  4. Describe notable missteps and surprises uncovered along the way, so people see it’s not all roses in the AI-powered future.
  5. Demo BORGES, a novel search framework that allows users to search with multiple queries in multiple modalities for a nuanced navigation of the catalog to find exactly what they need (a generic multi-query illustration also follows below).
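
To make items 1 and 2 concrete, here is a minimal sketch, assuming an OpenCLIP ViT-B-32 checkpoint, a local Elasticsearch cluster, and an `assets` index with a `title` field. It is not Eezy’s production code; it only shows the general shape of a hybrid stack of this kind, with BM25 candidates from Elasticsearch fused with OpenCLIP nearest neighbours from FAISS via reciprocal rank fusion.

```python
# Hedged sketch of a hybrid Elasticsearch + FAISS + OpenCLIP retrieval path.
# Index names, model checkpoint, and the fusion constant are illustrative assumptions.
import faiss
import numpy as np
import open_clip
import torch
from elasticsearch import Elasticsearch

# OpenCLIP text encoder (the same model also embeds images at indexing time).
model, _, preprocess = open_clip.create_model_and_transforms(
    "ViT-B-32", pretrained="laion2b_s34b_b79k"
)
tokenizer = open_clip.get_tokenizer("ViT-B-32")

def embed_text(query: str) -> np.ndarray:
    """Encode a query string into a unit-norm OpenCLIP vector."""
    with torch.no_grad():
        vec = model.encode_text(tokenizer([query])).float().numpy()
    faiss.normalize_L2(vec)  # in place; inner product now equals cosine similarity
    return vec

dim = 512                               # ViT-B-32 embedding width
index = faiss.IndexFlatIP(dim)          # exact inner-product index over image vectors
# index.add(image_embeddings)           # (n_assets, dim) float32, pre-normalised
doc_ids: list[str] = []                 # FAISS row i -> catalog asset id

es = Elasticsearch("http://localhost:9200")  # assumed local cluster

def hybrid_search(query: str, k: int = 50, rrf_k: int = 60) -> list[str]:
    """Blend lexical and vector rankings with reciprocal rank fusion."""
    lexical = es.search(index="assets", query={"match": {"title": query}}, size=k)
    _, rows = index.search(embed_text(query), k)

    scores: dict[str, float] = {}
    for rank, hit in enumerate(lexical["hits"]["hits"]):
        scores[hit["_id"]] = scores.get(hit["_id"], 0.0) + 1.0 / (rrf_k + rank)
    for rank, row in enumerate(rows[0]):
        if row == -1:                   # FAISS pads with -1 when it runs out of hits
            continue
        asset_id = doc_ids[row]
        scores[asset_id] = scores.get(asset_id, 0.0) + 1.0 / (rrf_k + rank)
    return sorted(scores, key=scores.get, reverse=True)[:k]
```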
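
BORGES itself is demoed live in the talk and its design is not described here. Purely as a generic illustration of what “multiple queries in multiple modalities” can mean in embedding space, the sketch below (reusing the encoder, `embed_text`, and FAISS index from the previous snippet) blends weighted OpenCLIP vectors for text and image queries into a single search vector. The helper names and weights are assumptions, not the BORGES framework.

```python
# Generic multi-query, multimodal retrieval sketch -- not BORGES.
# Reuses model, preprocess, embed_text, faiss, np, torch, index, and dim from above.
from PIL import Image

def embed_image(path: str) -> np.ndarray:
    """Encode an image file into a unit-norm OpenCLIP vector."""
    tensor = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        vec = model.encode_image(tensor).float().numpy()
    faiss.normalize_L2(vec)
    return vec

def multi_query_search(queries: list[tuple[str, str, float]], k: int = 50):
    """queries: (kind, value, weight) triples, where kind is "text" or "image"."""
    combined = np.zeros((1, dim), dtype=np.float32)
    for kind, value, weight in queries:
        vec = embed_text(value) if kind == "text" else embed_image(value)
        combined += np.float32(weight) * vec
    faiss.normalize_L2(combined)        # renormalise the blended query
    _, rows = index.search(combined, k)
    return rows[0]                      # FAISS row ids; map back via doc_ids

# e.g. a text query weighted twice as heavily as a reference image (illustrative values):
# multi_query_search([("text", "watercolor texture", 2.0), ("image", "ref.png", 1.0)])
```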


Nathan Day

Eezy

Nate Day is a Data Scientist at Vecteezy/Eezy, a creative stock company, where he champions Search and Statistics. Before Eezy, Nate’s search life began at OSC, where he led a TREC track-winning team, pushed forward test-driven development, and delivered data science solutions to search problems. In his life before search, Nate worked in plant genetics, translation biology, and clinical drug development, as well as at an ice rink, a golf course, and a gas station Subway. Nate is a proud Virginian, living in Charlottesville for over a decade now and absolutely loving the `(food + fun) / size` factor this wonderful locale has to offer. AMA about maximizing your time here, either before or after the talk.