Like many developers, my AI journey began with ChatGPT, but the landscape has evolved dramatically since then. Let me share my experience navigating through various LLM providers and tools that have transformed my workflow.
From Cloud Providers to Local Solutions
I started with ChatGPT for basic coding assistance, then moved to Claude by Anthropic for its improved reasoning and technical capabilities. While both are excellent, I wanted more control and faster response times, which led me to explore local solutions.
The Game Changers
The real breakthrough came when I discovered these powerful tools:
LM Studio:
A desktop application that lets me run open-source LLMs locally. The ability to switch between different models and run them offline has been invaluable.
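One of LM Studio's most useful features is that it can serve whatever model you have loaded through an OpenAI-compatible local HTTP API. As a minimal sketch (assuming the server is running on its default port, 1234, and that `qwen2.5-coder` is the identifier of the loaded model — substitute your own), querying it needs nothing beyond the standard library:

```python
import json
import urllib.request

# Assumption: LM Studio's local server is running at its default address.
LOCAL_BASE_URL = "http://localhost:1234/v1"

def build_chat_request(model: str, prompt: str, temperature: float = 0.2) -> dict:
    """Build an OpenAI-style chat-completion payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
    }

def ask_local(prompt: str, model: str = "qwen2.5-coder") -> str:
    """POST the prompt to the local server and return the model's reply text."""
    payload = build_chat_request(model, prompt)
    req = urllib.request.Request(
        f"{LOCAL_BASE_URL}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

Because the API shape matches OpenAI's, the same snippet works against hosted providers by swapping the base URL and adding an API key.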
Hyperbolic:
Their specialized models for coding tasks have impressed me with their performance and efficiency.
Together AI:
Their infrastructure made deploying and using various open-source models much more accessible, with competitive pricing for higher volumes.
Qwen 2.5:
My current go-to for code-specific tasks, running locally with impressive performance.
IDE Integration: The Missing Piece
The missing piece in my workflow was Continue, a VS Code extension that brings LLM capabilities directly into my development environment. It integrates with various providers and local models, offering:
- Real-time code suggestions
- Documentation generation
- Code explanations
- Refactoring assistance
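Pointing Continue at a local model is a matter of configuration. The fragment below is a hedged sketch of its JSON config — the exact field names and provider identifiers may differ between versions, and the model titles are placeholders, so check the extension's documentation against your install:

```json
{
  "models": [
    {
      "title": "Qwen 2.5 Coder (local)",
      "provider": "lmstudio",
      "model": "qwen2.5-coder"
    }
  ],
  "tabAutocompleteModel": {
    "title": "Qwen autocomplete",
    "provider": "lmstudio",
    "model": "qwen2.5-coder"
  }
}
```

With this in place, chat, edits, and autocomplete all run against the locally served model instead of a cloud endpoint.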
Current Setup
Today, I run a hybrid setup: Qwen locally for coding tasks through Continue in VS Code, Claude for complex problem-solving, and various specialized models through LM Studio for specific needs. This combination gives me the best of both worlds: the convenience of cloud services and the speed of local execution.
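The hybrid setup above amounts to a simple routing decision: send each task to the endpoint best suited for it. A minimal sketch, where every endpoint URL and model name is an illustrative placeholder rather than real configuration:

```python
# Hypothetical task router for a hybrid local/cloud setup.
# All base URLs and model names below are placeholders, not verified values.
ENDPOINTS = {
    "code": {"base_url": "http://localhost:1234/v1", "model": "qwen2.5-coder"},
    "reasoning": {"base_url": "https://api.example-cloud.com/v1", "model": "claude-model"},
    "bulk": {"base_url": "https://api.example-together.com/v1", "model": "open-source-70b"},
}

def pick_endpoint(task_type: str) -> dict:
    """Return the endpoint config for a task, defaulting to the local model."""
    return ENDPOINTS.get(task_type, ENDPOINTS["code"])
```

Defaulting to the local model keeps routine coding queries fast and offline; only tasks explicitly tagged as needing heavier reasoning or batch throughput go to the cloud.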
The LLM landscape continues to evolve, and I'm excited to see what new tools and capabilities emerge.