I built a fun side project: a Go static analyzer that finds code smells. Then I made it work with Claude Code via MCP (Model Context Protocol). Now AI can roast your Go code automatically.
Repo: github.com/aqylsoft/godepvis
The Problem I Didn't Know I Had
You know that moment when you're reviewing a PR and you see a 200-line function? Or a method with 9 parameters? Or someone using context.Background() right next to a perfectly good ctx?
These things aren't bugs. They compile fine. Tests pass. But they make you go "hmm" and reach for the comment button.
I wanted a tool that catches these patterns automatically. Not to replace code review, but to handle the boring "hey, this function is too long" comments so humans can focus on actual logic.
Golangci-lint is great, but configuring it to catch architectural smells requires yoga-level flexibility. I wanted something opinionated and simple.
So I built godepvis.
What godepvis Actually Detects
I call them "sins" because "code smells" sounds too polite. Here's what the tool catches:
The Heavy Hitters
huge_function: Functions over 50 lines. Yes, 50 is arbitrary. No, I won't debate it.
// Sin detected at user_service.go:45
func (s *UserService) CreateUser(ctx context.Context, req CreateUserRequest) (*User, error) {
	// ... 87 lines of validation, database calls,
	// event publishing, logging, and existential dread
}
huge_main: The classic "I'll refactor it later" main.go that somehow handles config, DI, server setup, graceful shutdown, and your emotional baggage.
too_many_params: Functions with 6+ parameters. Usually a sign you need a struct.
// Sin: 8 parameters. Your function is not a phone number.
func processOrder(ctx context.Context, userID, orderID, productID string,
	quantity int, discount float64, notify bool, logger *slog.Logger) error
The Subtle Ones
context_misuse: Using context.Background() when you already have ctx in scope. This one's sneaky because it works, but it breaks cancellation and tracing.
func (h *Handler) Handle(ctx context.Context, event Event) error {
	// ...
	result, err := h.client.Call(context.Background()) // Sin! Use ctx
	// ...
}
no_tests: Packages without _test.go files. Sometimes intentional, often forgotten.
todo_comments: Those "I'll fix it tomorrow" comments from 2019.
The MCP Adventure
Here's where the weekend project became interesting.
What is MCP?
Model Context Protocol is Anthropic's way of letting AI assistants use external tools. Instead of the AI guessing or asking you to run commands, it can just... call your tool directly.
Think of it as giving Claude hands to use your CLI.
The Implementation
MCP is surprisingly simple. It's JSON-RPC over stdio. Your tool:
- Receives a JSON request
- Does the thing
- Returns a JSON response
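For context, a tool-call round trip looks roughly like this. The field values are illustrative and the schema is trimmed down; consult the MCP specification for the exact message format:

```jsonc
// Request arriving on stdin (JSON-RPC 2.0)
{"jsonrpc": "2.0", "id": 1, "method": "tools/call",
 "params": {"name": "analyze_code", "arguments": {"path": "./...", "max_issues": "20"}}}

// Response written to stdout
{"jsonrpc": "2.0", "id": 1,
 "result": {"content": [{"type": "text", "text": "Found 20 issues in 519 files: ..."}]}}
```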
Here's the core of my MCP handler:
func (s *MCPServer) handleToolCall(name string, args map[string]any) (any, error) {
	switch name {
	case "analyze_code":
		path, _ := args["path"].(string)
		maxIssues, _ := args["max_issues"].(string)
		return s.analyzeCode(path, maxIssues)
	case "list_sin_types":
		return s.listSinTypes()
	default:
		return nil, fmt.Errorf("unknown tool: %s", name)
	}
}
The protocol handles capability negotiation, tool discovery, and error formatting. You just implement your logic.
What Claude Sees
When you add godepvis to Claude Code's MCP config, it gets two new tools:
- analyze_code: Run the analyzer on a path
- list_sin_types: Show what sins we can detect
Now conversations like this work:
User: analyze my Go code for smells
Claude: I'll run godepvis on your project.
*calls analyze_code tool*
Found 20 issues in 519 files:
- huge_main `cmd/consumer/main.go:42`
  main() is 164 lines; consider extracting setup logic
- context_misuse `handler.go:120`
  Using context.Background() but ctx is available in scope
- too_many_params `main.go:157`
  runWorkerLoop has 8 parameters; consider a config struct
Want me to help refactor any of these?
The AI doesn't just report problems; it can actually suggest fixes because it knows the exact location and context.
Lessons Learned (The Hard Way)
1. Keep Responses Concise
My first version returned every detail about every sin. Claude's context window filled up fast on large codebases. Now I return summaries with file:line references.
2. Validate Inputs Religiously
AI will send weird stuff. Empty strings. Null values. Paths that don't exist. The number 42 as a string when you expected an int.
maxIssues := 20 // default
if v, ok := args["max_issues"].(string); ok && v != "" {
	if parsed, err := strconv.Atoi(v); err == nil {
		maxIssues = parsed
	}
}
3. Errors Should Be Helpful
When something fails, the AI will show your error to the user or try to fix it. Make errors actionable:
// Bad
return nil, errors.New("failed")
// Good
return nil, fmt.Errorf("path %q does not exist or is not a Go module", path)
4. Test With Real AI Usage
Unit tests are great. But run your tool through Claude a few times. You'll find edge cases you never imagined.
The Architecture (Such As It Is)
godepvis/
├── cmd/godepvis/   # CLI entry point
├── internal/
│   ├── analyzer/   # The actual code analysis
│   ├── mcp/        # MCP server implementation
│   └── sins/       # Sin definitions and detection
└── go.mod
The analyzer walks Go AST, the sins package defines what to look for, and the MCP server wraps it for AI consumption.
Nothing fancy. ~1500 lines of Go. The kind of codebase you can read in an afternoon.
Try It Yourself
CLI Mode
go install github.com/aqylsoft/godepvis@latest
# Analyze current directory
godepvis analyze ./...
# Analyze specific path
godepvis analyze /path/to/your/project
With Claude Code
Add to ~/.claude/mcp.json:
{
  "mcpServers": {
    "godepvis": {
      "command": "godepvis",
      "args": ["mcp"]
    }
  }
}
Restart Claude Code. Now you can ask it to analyze your Go code.
Is This Production Ready?
Let me be real: this is a weekend project.
The detections work. The MCP integration is solid. But I built this to learn, not to replace your existing tooling.
That said:
- It found real issues in my work codebase
- The MCP pattern is reusable for other tools
- It's fun to watch AI roast your code
What's Next?
Ideas I might implement (PRs welcome):
- Severity levels: Not all sins are equal
- Auto-fix suggestions: Generate the refactored code
- More sins: Naked returns, error shadowing, import cycles
- Ignore patterns: Sometimes long functions are okay
- VS Code extension: Because why not
Wrapping Up
MCP is the real discovery here. The protocol is simple, the integration is smooth, and suddenly your CLI tools become AI capabilities.
godepvis is just one example. Imagine:
- Database migration tools that AI can run
- Infrastructure linters AI can invoke
- Custom validators for your domain
The barrier between "tool I use" and "tool AI uses" just got a lot lower.
What code smells annoy you the most? I'm looking for new sins to detect. Drop ideas in the comments.
First time writing about AI tooling. First time publishing a Go analyzer. First time implementing MCP. Lots of firsts. Be gentle, but honest.