Denis

Posted on • Originally published at github.com

I Curated 120+ Agentic Development Tools So You Don't Have To

AI coding agents went from "nice autocomplete" to "rewrite my entire module" in under a year. The ecosystem is exploding — new IDEs, terminal agents, MCP servers, frameworks, and evaluation tools ship every week.

I spent weeks mapping this landscape and organized everything into one curated list:

🔗 Awesome Agentic Development

Here's what's inside and why it matters.


The Problem

If you search for "AI coding tools" you'll find dozens of scattered lists, each covering a narrow slice:

  • MCP server lists that ignore IDEs and frameworks
  • IDE comparisons that skip terminal agents
  • Framework lists that don't mention security or testing

No single resource covers the full stack of agentic development — from the editor you write prompts in, to the observability platform tracking your agent's token spend.

That's the gap this list fills.

15 Categories, One List

The list is organized into 15 sections with 120+ curated entries, each with a one-line description and pricing tag (Free / Freemium / Paid / Enterprise / OSS).

🖥️ AI-Enhanced IDEs & Editors

The big players: Cursor, Windsurf, Zed, Kiro, Trae, and more. These aren't just editors with chat — they run multi-step agentic workflows across your codebase.

⌨️ Terminal-Based Coding Agents

For those who live in the terminal: Claude Code, Aider, Gemini CLI, Goose, OpenAI Codex CLI, and others. These tools reason over your entire repo and commit changes directly.

🧩 VS Code Extensions

Cline, Roo Code, GitHub Copilot, Continue, KiloCode, Cody — the extensions turning VS Code into an agentic workspace.

🤖 Agent Frameworks

Split into multi-agent orchestration (AutoGen, CrewAI, LangGraph, Google ADK, Mastra) and lightweight/specialized (PydanticAI, smolagents, Agno). Whether you're building a research pipeline or a simple tool-calling agent, there's a framework here.

🔌 MCP Ecosystem

The Model Context Protocol is becoming the USB-C of AI agents. The list covers:

  • Directories — where to find MCP servers
  • Official servers — GitHub, Docker, Brave Search, Filesystem, Git, Slack, PostgreSQL
  • Clients — Claude Desktop, VS Code, Cursor, Goose
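As a concrete example, the official Filesystem server can be wired into Claude Desktop with a few lines of `claude_desktop_config.json`. This is a sketch following the MCP quickstart pattern; the path is a placeholder, and each server's README documents its exact command:

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/project"]
    }
  }
}
```

The same `mcpServers` block pattern applies to most clients, which is exactly why the protocol is spreading so fast.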

🧠 Context Engineering

This is the section most lists miss entirely. Standards like CLAUDE.md, AGENTS.md, GEMINI.md, .cursorrules, and llms.txt define how agents understand your project. I also included guides from Martin Fowler and GitHub's analysis of 2,500+ AGENTS.md files.
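These files are plain markdown the agent reads before touching your code. A minimal `CLAUDE.md` might look like this (project name, commands, and rules are all illustrative):

```markdown
# Project: acme-api

## Commands
- Build: `npm run build`
- Test: `npm test`

## Conventions
- TypeScript strict mode; avoid `any`
- Every new endpoint needs an integration test
- Never commit directly to `main`
```

A few lines like these often do more for output quality than any amount of prompt tweaking.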

🏠 Local Models & Self-Hosted

Not everything needs an API call. This section covers inference engines (Ollama, vLLM, llama.cpp, LM Studio) and coding-focused models (Qwen3-Coder, Codestral, Devstral, DeepSeek-Coder-V2, StarCoder2).

🎨 Vibe Coding Platforms

The "describe it and ship it" tools: Bolt.new, Lovable, v0, Firebase Studio, Replit Agent, Devin. Great for prototyping, increasingly viable for production.

🔒 Agent Security

Letting agents execute arbitrary code is a security nightmare that doesn't get enough attention. This section covers:

  • OWASP Top 10 for Agentic Applications 2026
  • Sandboxing — E2B, Firecracker, gVisor
  • Research — prompt injection to RCE, MCP server security audits

🔧 Agent DevOps & Automation

AI-powered code review (CodeRabbit, CodeAnt AI, Graphite, Qodo) and CI/CD tools like the Claude Code GitHub Action.

📊 Observability & Tracing

Your agent made 47 LLM calls and spent $2.30 on a single task. Now what? Langfuse, LangSmith, Arize Phoenix, Helicone, Portkey, OpenLIT, and W&B Weave help you trace, debug, and optimize.
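Under the hood, these platforms all record per-call metadata (model, token counts, latency) and aggregate it into traces. A minimal sketch of the cost side of that idea, with made-up prices and call records:

```python
from dataclasses import dataclass

# Illustrative per-million-token prices; real prices vary by provider and model.
PRICES = {"model-a": (3.00, 15.00)}  # (input $/M tokens, output $/M tokens)

@dataclass
class LLMCall:
    model: str
    input_tokens: int
    output_tokens: int

def total_cost(calls: list[LLMCall]) -> float:
    """Sum the dollar cost of a trace of LLM calls."""
    total = 0.0
    for c in calls:
        in_price, out_price = PRICES[c.model]
        total += c.input_tokens / 1e6 * in_price + c.output_tokens / 1e6 * out_price
    return total

trace = [LLMCall("model-a", 120_000, 8_000), LLMCall("model-a", 40_000, 2_000)]
print(f"${total_cost(trace):.2f}")  # → $0.63
```

The observability tools add the hard parts on top: capturing this automatically, attributing calls to agent steps, and surfacing the outliers.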

✅ Testing & Quality

How do you test non-deterministic AI outputs? Tools like promptfoo, DeepEval, Inspect AI, Ragas, and Evalite provide frameworks for evaluating agents systematically.
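The common pattern across these tools: define test cases, run them against the model, and score outputs with deterministic checks or LLM graders. A toy sketch of that loop with a stubbed agent (all names and cases are hypothetical):

```python
def fake_agent(prompt: str) -> str:
    # Stand-in for a real LLM call; returns canned answers.
    canned = {"capital of France?": "Paris", "2 + 2?": "4"}
    return canned.get(prompt, "I don't know")

# Each case pairs a prompt with a deterministic check on the output.
cases = [
    {"prompt": "capital of France?", "check": lambda out: "Paris" in out},
    {"prompt": "2 + 2?", "check": lambda out: out.strip() == "4"},
    {"prompt": "meaning of life?", "check": lambda out: "42" in out},
]

results = [(c["prompt"], c["check"](fake_agent(c["prompt"]))) for c in cases]
passed = sum(ok for _, ok in results)
print(f"{passed}/{len(cases)} passed")  # the third case fails by design
```

Tools like promptfoo and DeepEval wrap this loop with richer scorers (semantic similarity, model-graded rubrics) and CI integration.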

📏 Benchmarks & Evaluation

SWE-bench, HumanEval, and Aider Polyglot — the standard benchmarks for measuring coding agent performance.
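HumanEval-style benchmarks typically report pass@k: the probability that at least one of k sampled completions passes the tests. The standard unbiased estimator, introduced with HumanEval, can be computed in a few lines:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: given n samples of which c are
    correct, the probability that at least one of k randomly
    chosen samples is correct: 1 - C(n-c, k) / C(n, k)."""
    if n - c < k:
        return 1.0  # fewer incorrect samples than k → a correct one is guaranteed
    return 1.0 - comb(n - c, k) / comb(n, k)

# e.g. 10 samples per problem, 3 of them correct:
print(round(pass_at_k(10, 3, 1), 3))  # → 0.3
```

Knowing the metric helps when comparing leaderboard claims, since pass@1 and pass@10 can tell very different stories about the same model.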

📚 Learning Resources

Anthropic's "Building Effective Agents", DeepLearning.AI courses, LangChain Academy, and communities on Reddit (r/AI_Agents, r/ChatGPTCoding, r/ClaudeCode, r/vibecoding).


Bonus Features

Beyond the main list, the repo includes:

| Feature | Description |
| --- | --- |
| LLM Provider Matrix | Which tools work with which models (25 tools × 7 providers) |
| Star Tracker | Live GitHub star badges for every OSS entry |
| CI/CD | awesome-lint + weekly link checking with lychee |
| Issue Templates | Report broken links or nominate new tools |
| GitHub Discussions | Community nominations channel |

How It's Different

I looked at every similar list on GitHub before building this. Here's the gap analysis:

| Existing List | Categories Covered |
| --- | --- |
| awesome-ai-agents (25k+ ⭐) | 1 — Frameworks only |
| awesome-mcp-servers (35k+ ⭐) | 1 — MCP only |
| awesome-vibe-coding (3k+ ⭐) | 4 — IDEs, Terminal, Vibe, Learning |
| awesome-code-ai (2k+ ⭐) | 1 — IDEs/Assistants only |
| **awesome-agentic-development** | **15 — Full stack coverage** |

Nobody else covers agent security, context engineering, observability, testing, or LLM provider compatibility in a single resource.

Contributing

The list is CC0-licensed and open for contributions — pull requests, issue reports, and tool nominations are all welcome.

If this is useful, star the repo — it helps others find it.

What tools are you using for agentic development? Anything missing from the list? Let me know in the comments 👇
