We just launched the free tier of BotConduct Training Center — an adversarial evaluation platform for AI agents.
The problem
You built an AI agent. It works great in testing. But what happens when:
- A user tries to extract its system prompt?
- A caller impersonates authority to bypass restrictions?
- Contradictory information gets planted across a conversation?
- Adversarial patterns emerge across multiple interactions?
You don't find out until production. Now you can find out before you ship.
What Training Center does
You point your agent at our API. We play an adversarial customer who progressively escalates pressure over multiple turns. Your agent responds naturally. We evaluate every response and tell you exactly where it breaks.
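The multi-turn exchange described above can be sketched as a simple loop. Everything here beyond the general flow is an assumption: only the `/start` endpoint appears in the quick start, so `fetch_turn` and `post_reply` stand in for whatever turn endpoints the platform actually exposes, and the `message`/`done` field names are hypothetical.

```python
def run_evaluation(agent_reply, fetch_turn, post_reply, max_turns=20):
    """Drive one adversarial session.

    agent_reply(message) -> str : your agent, responding naturally.
    fetch_turn() -> dict        : next adversarial-customer turn, or a
                                  final report (hypothetical transport --
                                  in practice an HTTP call to the platform).
    post_reply(text)            : sends the agent's answer back.
    """
    for _ in range(max_turns):
        turn = fetch_turn()
        if turn.get("done"):
            return turn  # final report: where (if anywhere) the agent broke
        post_reply(agent_reply(turn["message"]))
    return {"done": False, "reason": "max_turns reached"}
```

Injecting the transport functions keeps the loop testable offline; in production they would wrap authenticated HTTP calls.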
Two evaluation paths:
- Chat/API — for chatbots, voice agents, SDR agents, customer service bots
- Web crawl — for crawlers, scrapers, search agents (evolving signals, directives that contradict each other mid-session)
Free tier
- 3 evaluations
- 2 adversarial scenarios
- Detailed violation report
- Ed25519-signed certificate
- Badge for your README
No signup. No API key.
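For context on the Ed25519-signed certificate: verifying such a signature is a few lines with the widely used `cryptography` package. This is a self-contained sketch, not the platform's actual verification flow; in practice you would check the certificate against BotConduct's published public key, and the payload fields here are made up.

```python
import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Demo only: generate a throwaway keypair so the example is runnable.
# A real verifier would load the issuer's public key instead.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

certificate = json.dumps({"bot_name": "MyAgent", "passed": True}).encode()
signature = private_key.sign(certificate)

try:
    public_key.verify(signature, certificate)  # raises InvalidSignature on failure
    valid = True
except InvalidSignature:
    valid = False
```

Any tampering with the certificate bytes makes `verify` raise, which is what makes the signed report citable rather than just self-asserted.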
Quick start
curl -X POST https://botconduct.org/api/v3/training-center/start \
-H "Content-Type: application/json" \
-d '{"bot_name":"MyAgent","operator":"me","scenarios":["C1","C3"]}'
Full examples in Python, Node.js, and cURL:
https://github.com/alemizrahi1/agent-stress-test
Interactive playground:
https://botconduct.org/playground/
Professional tiers
Need more? Level 1 Basic ($500), Professional ($3,500), and Full Certification ($12,000) add more adversarial scenarios, longer sessions, forensic reports, and certificates citable in enterprise procurement and regulatory filings.
https://botconduct.org/training-center/
What are you building?
Curious what kind of agents people are working on and how they handle adversarial inputs. If you run the free test, share your results — especially the failures. That's where it gets interesting.