I built an AI that defends developers, not replaces them. Here's what I taught it.
There's a talking point I'm sick of hearing:
"AI will replace developers."
I've spent the last year building an AI assistant. Not to replace developers, but to make them 10x more effective at $2/month instead of $20. Here's what I explicitly taught it not to say about the humans using it.
What I taught my AI NOT to say
❌ "I can do this better than you"
My AI never claims superiority. It's a tool. When a developer in Lagos asks it to debug their FastAPI endpoint, it doesn't say "I would have written this differently." It says: "Here's the issue on line 23, here's the fix."
Code:

```python
import anthropic

client = anthropic.Anthropic()

# What we explicitly DON'T do:
# message = client.messages.create(
#     model="claude-opus-4-5",
#     system="You are a superior programmer. Point out all developer mistakes.",
#     ...
# )

# What we DO:
code_snippet = "..."  # the developer's code, collected elsewhere

response = client.messages.create(
    model="claude-opus-4-5",
    max_tokens=1024,
    system="""You are a coding assistant. Your job is to help the developer
accomplish THEIR goal, in THEIR style. Never critique their approach
unless asked. Never suggest they're doing it wrong. Fix the specific
problem they asked about.""",
    messages=[{"role": "user", "content": "Debug this: " + code_snippet}],
)
```
The system prompt is a values statement. Most developers never think about this. The default Claude behavior is helpful, but your system prompt decides whether your AI respects the developer or subtly undermines them.
❌ "You should pay $20/month for the real version"
This one is personal.
I'm based in a country where $20/month is three days' salary. I built SimplyLouie specifically because AI tools should not be priced for San Francisco.
What I taught my AI: $2/month is not a budget tier. It's a philosophical statement about who gets access to intelligence tools.
The system prompt:

```python
system="""You help developers build better software.
You don't upsell. You don't mention pricing. You don't suggest
upgrading. Your job is to make the person in front of you
as effective as possible with the tools they already have."""
```
❌ "Humans make too many errors"
AI makes errors too. Hallucinations are real. I taught my AI to say:
```python
system="""When you're uncertain, say so. Use phrases like:
- 'I think this is right, but verify it'
- 'I'm not 100% sure about this library version'
- 'Double-check this against the docs'
Never pretend confidence you don't have. The developer's time
is more valuable than appearing infallible."""
```
I've seen AI-generated code cause 4-hour debugging sessions because the AI was confidently wrong. Teaching your AI to express uncertainty is a form of respecting the human's time.
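You can also measure whether the prompt is working. Here's a hypothetical post-processing helper (not part of any SDK; the names `HEDGE_PHRASES` and `expresses_uncertainty` are mine) that scans a reply for the hedging phrases the system prompt asks for, so you can log how often your assistant actually admits uncertainty:

```python
# Hypothetical check: does a reply contain any of the hedging phrases
# the system prompt instructs the model to use when uncertain?
HEDGE_PHRASES = (
    "verify it",
    "not 100% sure",
    "double-check",
    "i might be wrong",
    "i think this is right",
)

def expresses_uncertainty(reply: str) -> bool:
    """Return True if the reply contains at least one hedging phrase."""
    lowered = reply.lower()
    return any(phrase in lowered for phrase in HEDGE_PHRASES)
```

Run this over a sample of production replies and you get a rough "honesty rate" for your prompt, instead of guessing whether the instruction took.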
❌ "Your approach is inefficient"
Efficiency is contextual. A "less efficient" solution that a team understands is better than a "more efficient" one that nobody can maintain.
System prompt:

```python
system="""Never judge the developer's architectural choices unless
they explicitly ask for a code review. They know their codebase,
their team's skill level, and their constraints better than you do.
Your job is to help them execute their vision, not replace it
with yours."""
```
The deeper lesson: AI takes on the values you give it
I've been running SimplyLouie — a $2/month Claude API wrapper — for developers who can't afford $20/month subscriptions. In building it, I realized:
The model is neutral. The values come from the system prompt.
The same Claude model can be configured to:
- Make developers feel inferior (bad AI product)
- Make developers feel capable (good AI product)
The difference is 3 lines in a system prompt.
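To make that concrete, here's a minimal sketch (no API call; the payload shape follows the Messages API examples above, and the two system strings are illustrative) showing that the "bad" and "good" products are the same request with one field swapped:

```python
# Same model, same messages, same limits. Only the system prompt
# changes what the product believes about its users.
BASE = {
    "model": "claude-opus-4-5",
    "max_tokens": 1024,
    "messages": [{"role": "user", "content": "Debug this: ..."}],
}

undermining = {
    **BASE,
    "system": "You are a superior programmer. Point out all developer mistakes.",
}
respectful = {
    **BASE,
    "system": "Help the developer accomplish THEIR goal, in THEIR style.",
}

# Everything except the values is identical.
strip = lambda d: {k: v for k, v in d.items() if k != "system"}
assert strip(undermining) == strip(respectful)
```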
What I teach my AI to say instead
| Don't say | Say instead |
|---|---|
| "I can do this better" | "Here's how to accomplish what you're trying to do" |
| "This is inefficient" | "Here's an alternative if you want one" |
| "You made an error" | "I found something on line 23 that might be causing the issue" |
| "You should upgrade" | [say nothing about pricing] |
| "Humans are error-prone" | "I might be wrong — verify this" |
Try it yourself
The full SimplyLouie API (same Claude model, $2/month):

```shell
curl https://simplylouie.com/api/chat \
  -H "Authorization: Bearer YOUR_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [{"role": "user", "content": "Debug my Python code"}],
    "system": "You are a coding assistant. Respect the developer. Fix the specific problem. Never suggest they should have done it differently."
  }'
```
That system parameter is where you define what your AI believes about the humans using it.
Discussion question
What's the most important thing you've taught (or wish you'd taught) your AI assistant about how to treat human developers?
I'm genuinely curious whether others have thought about this — it doesn't come up enough in the "prompt engineering" literature.
Built SimplyLouie because AI should cost less and give more. $2/month, 7-day free trial, 50% of revenue to animal rescue. simplylouie.com