DocArray, an Inclusive and Standard Multimodal Data Model

November is a big month for Jina AI. Starting this month, DocArray will be hosted under the Linux Foundation AI & Data, a neutral home to build and support an open AI and data community. This is the start of a new day for DocArray.

In the ten months since DocArray's first release, we've seen growing adoption and contributions from the open-source community. Today DocArray has over 150,000 downloads per month and powers hundreds of multimodal AI applications. At Jina AI, we're committed to delivering a powerful and easy-to-use tool for deep-learning engineers to represent, embed, search, store, and transfer multimodal data. Now we're sharing this commitment with you, our community and industry partners. Together with LF AI & Data, we're bringing companies and individual contributors together to build a neutral, inclusive, and standard multimodal data model. By donating DocArray to LF AI & Data, we're letting it spread its wings and fly.

What does it mean to host a project at LF? In this post we'll review the history of DocArray and unveil our future roadmap. In particular, we'll demonstrate some of the features we're already developing that will roll out in an upcoming release.
**A Brief History of DocArray**
We introduced the concept of DocArray in Jina 0.8 in late 2020 as the jina.types module, intended to complete Jina's neural search design patterns by clarifying low-level data representation. Rather than making users work with Protobuf directly, the new Document class offered a simpler and safer high-level API for representing multimodal data.
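To give a flavor of that high-level API, here's a minimal sketch using today's DocArray package (the 0.x line); the text, embedding values, and image URI are placeholders for illustration:

```python
import numpy as np
from docarray import Document, DocumentArray

# A Document wraps one piece of data behind a high-level API,
# so users never have to touch the underlying Protobuf message.
doc = Document(text='hello, world')
doc.embedding = np.array([0.1, 0.2, 0.3])  # attach an embedding directly

# Documents nest via .chunks, which is how a single Document
# can represent multimodal content (text plus an image here).
page = Document(
    chunks=[
        Document(text='a caption'),
        Document(uri='https://example.com/photo.png'),  # placeholder URI
    ]
)

# DocumentArray is a list-like container for bulk operations.
docs = DocumentArray([doc, page])
docs.summary()  # prints a structured overview of the array
```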

If you're interested in learning more, please check here.
