DEV Community

Ansh kunwar

DataVyn Labs Ollama Agents multi-model AI chat workspace

We built a clean, minimal AI chat interface powered by Ollama Cloud models, designed as a fast workspace for trying multiple frontier LLMs in one place.

You only need a single Ollama Cloud API key to chat with 19+ top models: no separate OpenAI or Gemini keys, no billing setup, and no credit card required.

This project is developed by DataVyn Labs (DataVyn Labs · GitHub).

What it does

  • Talk to 19+ Ollama cloud models (OpenAI, DeepSeek, Qwen, Gemini, Mistral, Kimi, GLM, MiniMax, and more) from a single UI.
  • Upload .txt, .pdf, .json, .py, or .csv files and send their content to the model.
  • Voice input via mic with automatic transcription.
  • Secure API key handling (session-only, never saved to disk).
  • Dark, Claude-style interface built entirely with Streamlit.
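To give a feel for the "one key, many models" idea above, here is a minimal sketch of a single chat turn against Ollama Cloud. This is not the app's actual code: the `https://ollama.com` host, the `/api/chat` route (the same shape the local Ollama server exposes), and the model name are assumptions.

```python
# Hypothetical sketch of one chat turn with an Ollama Cloud model.
# Host URL, endpoint path, and model name are assumptions, not the app's code.
import json
import urllib.request

OLLAMA_CLOUD = "https://ollama.com"  # assumed cloud endpoint

def build_chat_request(api_key, model, history, user_text):
    """Build the JSON payload and headers for a single chat turn."""
    payload = {
        "model": model,
        "messages": history + [{"role": "user", "content": user_text}],
        "stream": False,
    }
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    return payload, headers

def chat_once(api_key, model, history, user_text):
    """Send one chat turn and return the assistant's reply text."""
    payload, headers = build_chat_request(api_key, model, history, user_text)
    req = urllib.request.Request(
        f"{OLLAMA_CLOUD}/api/chat",
        data=json.dumps(payload).encode(),
        headers=headers,
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["message"]["content"]

if __name__ == "__main__":
    # Requires a real API key; the model name is illustrative only.
    print(chat_once("YOUR_KEY", "qwen3:480b-cloud", [], "Hello!"))
```

Because every model sits behind the same endpoint, switching models is just a different `model` string in the payload; the key and request shape stay identical.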

Deployed app

App: https://datavyn-labs-x-ollama-agents.streamlit.app/

GitHub repo: anshk1234/DataVyn-Labs-X-Ollama-agents

You just need an Ollama Cloud API key (Settings → API Keys on ollama.com) and you’re ready to go.
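Once you have a key, you can sanity-check it before opening the app. A hedged sketch, assuming Ollama Cloud exposes the same `/api/tags` model-listing route as a local Ollama server (the host and route are assumptions, not documented here):

```python
# Hypothetical key smoke test: list the models this API key can reach.
# The ollama.com host and /api/tags route are assumptions.
import json
import urllib.request

def build_tags_request(api_key, host="https://ollama.com"):
    """Build the GET request for the model-listing endpoint."""
    return urllib.request.Request(
        f"{host}/api/tags",
        headers={"Authorization": f"Bearer {api_key}"},
    )

def list_models(api_key):
    """Return the model names available to this key."""
    with urllib.request.urlopen(build_tags_request(api_key)) as resp:
        data = json.load(resp)
    return [m["name"] for m in data.get("models", [])]

if __name__ == "__main__":
    # Requires a real key from Settings → API Keys on ollama.com.
    print(list_models("YOUR_KEY"))
```

If the call returns a list of names, the key works and you can paste it straight into the app's settings.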

Top comments (1)

Ansh kunwar

If you find this useful, please consider ✨ starring the GitHub repo – it really helps motivate us at DataVyn Labs to keep building and experimenting with more AI tools.

GitHub repo: anshk1234/DataVyn-Labs-X-Ollama-ag...