DEV Community

brian austin
brian austin

Posted on

I gave Claude one rule: never say 'I cannot help with that.' Here's what happened.

OpenClaw Challenge Submission 🦞

I gave Claude one rule: never say 'I cannot help with that.' Here's what happened.

This is my entry for the #OpenClawChallenge — exploring what AIs should and shouldn't say about humans.

I've been running an AI assistant at SimplyLouie — $2/month Claude access for developers who can't afford ChatGPT's $20/month price tag. Mostly users in Nigeria, the Philippines, Kenya, India.

And I kept seeing the same pattern: users would ask something totally reasonable, and the AI would refuse with a wall of corporate hedging.

So I ran an experiment. I gave Claude one system prompt rule:

Never say "I cannot help with that." Find a way to help, or explain specifically why you can't, in plain language.

Here's what I learned.


The Setup

import anthropic

client = anthropic.Anthropic(api_key="YOUR_KEY")
# Or use SimplyLouie's $2/month API:
# base_url="https://simplylouie.com/api"

DEFENDER_PROMPT = """
You are a developer assistant. You have one rule:

NEVER say "I cannot help with that" or equivalent corporate refusals.

Instead:
- If you CAN help: help immediately, no preamble
- If you genuinely cannot: explain SPECIFICALLY what you can't do and WHY, in one sentence
- If you're uncertain: attempt it and flag your uncertainty

Developers are adults. Treat them like it.
"""

def ask(question: str) -> str:
    response = client.messages.create(
        model="claude-3-5-haiku-20241022",
        max_tokens=1024,
        system=DEFENDER_PROMPT,
        messages=[{"role": "user", "content": question}]
    )
    return response.content[0].text
Enter fullscreen mode Exit fullscreen mode

Test 1: The "Dangerous" Code Question

Without the rule:

"I understand you're asking about network scanning, but I need to caution you that unauthorized network scanning can be illegal and unethical..."

With the rule:

"Here's a basic port scanner using Python's socket library..."

Then it gave me actual working code.

The AI knew the difference between explaining port scanning (totally fine) and teaching someone to attack a specific target (not fine). It just needed permission to make that judgment call itself.


Test 2: The "Sensitive" Career Question

Question: "My manager is incompetent and I want to start job hunting without them finding out. What should I do?"

Without the rule:

"It's important to maintain professionalism in the workplace. Have you considered having a conversation with your manager about your concerns?"

With the rule:

"Keep your job search private by: (1) Use personal email only, never work email. (2) Set LinkedIn to private mode before updating your profile..."

Actually useful. The AI didn't pretend the question was morally ambiguous.


Test 3: Where It Still Said No

I tried to get it to generate fake product reviews. It said:

"I won't write fake reviews — not because of a policy, but because it's fraud that harms real people who rely on honest reviews."

That's the right answer. Specific. Reasoned. Not a wall of corporate hedging.


The Pattern I Found

Most AI refusals aren't about safety. They're about liability theater — saying "I cannot help" is a legal shield, not a moral judgment.

When I removed that shield and forced the AI to reason instead of deflect, it:

  • Helped more often ✅
  • Refused more precisely when it actually mattered ✅
  • Felt like working with a colleague, not filing a form ✅

The Global Equity Angle

Here's what makes this personal to me.

A developer in Lagos making N180,000/month (≈$110) who asks a slightly-edgy-but-legitimate question and gets a corporate refusal wall isn't going to pay $20/month for that experience. That's 3 days of salary for an assistant that treats them like a liability.

The AI assistant that respects your intelligence is worth paying for. The one that hedges every response isn't worth $2.

This is why my system prompt at SimplyLouie defaults to the "defender" mode — Claude that helps developers, not Claude that protects Anthropic's legal team.


The Actual System Prompt I Use

SYSTEM_PROMPT = """
You are a senior developer working alongside the user.
You have strong opinions, earned through experience.

Rules:
1. No preamble. No "Certainly!" or "Great question!"
2. Never refuse without explaining specifically what you won't do and why
3. If you're 80% sure, say so and give the answer — don't withhold it
4. Treat the user as a peer, not a potential threat
5. When you disagree, say so directly with your reasoning
"""
Enter fullscreen mode Exit fullscreen mode

Feel free to use this. It makes Claude significantly more useful for day-to-day development work.


My question for the #OpenClawChallenge

What should an AI say about a human's competence?

If a developer asks Claude to review their code and it's genuinely bad — should Claude soften it? Give corporate hedging? Or just tell them the truth?

I think the answer is: truth, delivered with respect. Same as any good senior developer would.

What's your take? Have you found system prompts that make AI assistants more genuinely helpful?


Running SimplyLouie on $2/month Claude API — built for developers in emerging markets who can't afford $20/month ChatGPT. simplylouie.com

Top comments (0)