Very cool. I spent a few minutes wrapping my head around the Embedding model and specifically asking myself when I would personally deploy it. Sharing in case it's helpful for anyone else.
Core Value Proposition: The utility isn't just in making text search slightly better; it's about seamlessly linking concepts across entirely different mediums (written, video, audio, etc) without having to translate them all into text first.
Example use cases:
Better RAG for my AI board game coach project. Right now I preprocess PDFs into text for search; this model would let me ingest the raw PDFs and search across the embedded photos and text natively.
Hackathon project data is split across written submissions, video demos, and code. Using this model would let us search across all of those mediums simultaneously.
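The retrieval side of both use cases boils down to one idea: once everything (text chunks, PDF page images, video frames) is embedded into the same vector space by a multimodal model, search is just nearest-neighbor ranking over vectors. A minimal sketch of that ranking step — the vectors and item names here are toy placeholders, not real model output:

```python
from math import sqrt

def cosine_similarity(a, b):
    # Standard cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy index: (item_id, medium, embedding). In practice each embedding
# would come from the multimodal model, so a PDF page image, a video
# frame, and a text chunk all live in the same vector space.
index = [
    ("rulebook-p12",   "pdf-image", [0.9, 0.1, 0.0]),
    ("demo-video-t45", "video",     [0.2, 0.8, 0.1]),
    ("submission-doc", "text",      [0.85, 0.2, 0.05]),
]

def search(query_vec, k=2):
    # Rank every item by similarity to the query, regardless of medium.
    scored = [(cosine_similarity(query_vec, vec), item_id, medium)
              for item_id, medium, vec in index]
    return sorted(scored, reverse=True)[:k]

# Query vector standing in for an embedded question like
# "how does scoring work?"
for score, item_id, medium in search([0.9, 0.1, 0.0]):
    print(f"{score:.3f}  {item_id}  ({medium})")
```

The point of the sketch is that nothing in `search` knows or cares which medium an item came from — that is what makes the cross-medium cases above possible.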
Would love to see more concrete use case examples in future posts to really crystallize the power of the models!
Google's own @thorwebdev posted a great, illustrative example today for anyone else's future reference: dev.to/googleai/a-guide-to-embeddi...