# Test Mode: How I Let Users Try 52 AI Integrations Without API Keys

## The Problem With Traditional SaaS Trials
Every AI automation platform has the same onboarding problem:
- Sign up for free trial
- Get prompted to add API keys immediately
- User doesn't have API keys yet
- User can't actually TRY the product
- User churns
It's a catch-22. You need API keys to test the product, but you need to test the product before you'll get API keys.
I solved this differently with HarshAI.
## Introducing Test Mode
HarshAI's Test Mode lets users build and test workflows with ALL 52 integrations — zero API keys required.
Here's how it works:
```javascript
// Mock response generator for Test Mode
const mockResponses = {
  'openai': { completion: 'This is a test response from GPT-4' },
  'elevenlabs': { audio: 'mock-audio-base64-data' },
  'claude': { completion: 'Test response from Claude' },
  'gmail': { sent: true, messageId: 'test-123' },
  // ... 48 more integrations
};

function executeNode(node, testMode = false) {
  if (testMode) {
    return mockResponses[node.type];
  }
  return executeRealAPI(node);
}
```
## Why This Works

### 1. Instant Gratification
Users sign up and within 30 seconds they're:
- Building workflows
- Connecting nodes
- Seeing results
- Understanding the value
No API key hunting. No documentation reading. Just building.
### 2. Reduced Friction
The biggest drop-off point in SaaS onboarding is "add your API keys." I moved that step to AFTER users see value.
**Old flow:** Sign up → Add API Keys → Build → See Value *(high drop-off at step 2)*

**New flow:** Sign up → Build (Test Mode) → See Value → Add API Keys *(high conversion at step 4)*
### 3. Educational
Test Mode includes helpful tooltips showing:
- What the real API would return
- Where to get API keys
- How to configure each integration
- Pricing estimates for real usage
Users learn WHILE building, not before.
## Implementation Details

### Mock Data Strategy
Each integration has a mock response that mirrors the real API structure:
```typescript
interface MockIntegration {
  type: string;
  mockResponse: any;
  realExample: string;    // Link to real API docs
  getKeyUrl: string;      // Direct link to get API key
  estimatedCost?: string; // "~$0.01 per request"
}

const integrations: MockIntegration[] = [
  {
    type: 'openai-chat',
    mockResponse: { id: 'test', choices: [{ message: { content: 'Test response' } }] },
    realExample: 'https://platform.openai.com/docs/api-reference/chat',
    getKeyUrl: 'https://platform.openai.com/api-keys',
    estimatedCost: '~$0.002 per 1K tokens'
  },
  // ... 51 more
];
```
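With 52 entries maintained by hand, it's easy for one record to ship with a missing link or payload. A hypothetical sanity check over the registry (the interface mirrors the one above; the sample entries here are made up for illustration):

```typescript
// Hypothetical registry check: flag entries missing the fields the
// Test Mode tooltips rely on. Sample data is invented.
interface MockIntegration {
  type: string;
  mockResponse: unknown;
  realExample: string;
  getKeyUrl: string;
  estimatedCost?: string;
}

function findInvalid(registry: MockIntegration[]): string[] {
  return registry
    .filter(
      (i) =>
        !i.type ||
        i.mockResponse === undefined ||
        !i.realExample.startsWith('https://') ||
        !i.getKeyUrl.startsWith('https://')
    )
    .map((i) => i.type || '(missing type)');
}

const sampleRegistry: MockIntegration[] = [
  {
    type: 'openai-chat',
    mockResponse: { id: 'test' },
    realExample: 'https://platform.openai.com/docs/api-reference/chat',
    getKeyUrl: 'https://platform.openai.com/api-keys',
  },
  {
    type: 'broken-entry',
    mockResponse: {},
    realExample: 'not-a-url',
    getKeyUrl: 'https://example.com/keys',
  },
];
```

Run as a unit test, a check like this catches a broken entry (`findInvalid(sampleRegistry)` flags `broken-entry`) before a user ever hovers the tooltip.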
### UI Indicators
Test Mode is clearly visible in the UI:
- 🧪 badge on all nodes
- Yellow banner: "Test Mode - No API calls made"
- "Go Live" button to switch to real API
- One-click API key input modal
### Analytics Tracking
I track when users switch from Test → Live:
```typescript
analytics.track('test_mode_conversion', {
  workflow_id: workflow.id,
  time_in_test_mode: Date.now() - workflow.createdAt,
  nodes_tested: workflow.nodes.length,
  // Deduplicated list of integration types used in this workflow
  integration_types: [...new Set(workflow.nodes.map((n) => n.type))]
});
```
This tells me which integrations drive conversions.
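To answer "which integrations drive conversions," the events above can be aggregated per integration type. A sketch (the event shape follows the `analytics.track` call; the helper itself is an illustration, not HarshAI's analytics pipeline):

```typescript
// Count, per integration type, how many converting workflows used it.
// Event shape mirrors the test_mode_conversion payload above.
interface ConversionEvent {
  workflow_id: string;
  integration_types: string[];
}

function conversionsByIntegration(events: ConversionEvent[]): Map<string, number> {
  const counts = new Map<string, number>();
  for (const event of events) {
    // Deduplicate within a workflow so each type counts once per conversion
    for (const type of new Set(event.integration_types)) {
      counts.set(type, (counts.get(type) ?? 0) + 1);
    }
  }
  return counts;
}
```

Sorting the resulting map by count gives a ranking of which integrations most often appear in workflows that went live.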
## Results After 17 Days
| Metric | Value |
|---|---|
| Users who tried Test Mode | 100% |
| Avg time in Test Mode | 8 minutes |
| Test → Live conversion | 34% |
| Support tickets (API setup) | -67% |
The data is clear: Test Mode reduces friction AND increases conversions.
## When NOT to Use Test Mode
Test Mode isn't perfect for everything:
❌ Don't use for:
- Payment processing (use Stripe test mode)
- Real email sending (users need to see real emails)
- File uploads (test with dummy files)
- Webhooks (need real URLs)
✅ Do use for:
- AI text generation
- Image generation (show samples)
- Data transformations
- Conditional logic testing
- Workflow structure validation
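One way to encode this split in code is an allow/deny list consulted before a node runs in Test Mode. The category names below are illustrative, not HarshAI's actual taxonomy:

```typescript
// Allow/deny lists for Test Mode eligibility (category names invented).
const TEST_MODE_BLOCKED = new Set([
  'payments',
  'email-send',
  'file-upload',
  'webhook',
]);

const TEST_MODE_ALLOWED = new Set([
  'ai-text',
  'image-generation',
  'data-transform',
  'conditional-logic',
]);

function supportsTestMode(category: string): boolean {
  if (TEST_MODE_BLOCKED.has(category)) return false;
  // Unknown categories default to false until explicitly reviewed
  return TEST_MODE_ALLOWED.has(category);
}
```

Defaulting unknown categories to "not supported" is the safer failure mode: a new integration can't accidentally mock something that needs a real side effect.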
## The Psychology Behind It
Test Mode works because it taps into two psychological principles:
### 1. Endowment Effect
Once users build something (even in test mode), they feel ownership. They're more likely to convert to make it "real."
### 2. Progress Principle
Users see immediate progress. Each node they add feels like an achievement. By the time they're prompted for API keys, they're already invested.
## Code Example: Test Mode Toggle
Here's the actual implementation from HarshAI:
```typescript
// lib/workflow-executor.ts
export async function executeWorkflow(
  workflow: Workflow,
  testMode: boolean = false
) {
  const results = [];

  for (const node of workflow.nodes) {
    let result;

    if (testMode && node.supportsTestMode) {
      result = await executeMockNode(node);
      result.isTest = true;
    } else {
      const credentials = await getCredentials(node.integrationId);
      if (!credentials) {
        throw new Error(`API keys required for ${node.type}`);
      }
      result = await executeRealNode(node, credentials);
    }

    results.push(result);

    // Handle conditional branching
    if (result.nextNode) {
      // Continue to next node
    }
  }

  return { results, isTest: testMode };
}
```
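The credential gate is what makes the test-to-live switch safe: a live run refuses to start without stored keys. A minimal sketch of that piece in isolation, as a variant that throws directly; the in-memory store and key format are stand-ins, not HarshAI's real credential storage:

```typescript
// Hypothetical credential lookup: live runs fail fast when no key
// is stored for an integration. Store contents are placeholders.
const credentialStore = new Map<string, string>([
  ['openai', 'sk-test-placeholder'],
]);

function getCredentials(integrationId: string): string {
  const key = credentialStore.get(integrationId);
  if (!key) {
    // Same error the executor surfaces, so the UI can prompt for keys
    throw new Error(`API keys required for ${integrationId}`);
  }
  return key;
}
```

Throwing here (rather than returning a null that callers must remember to check) means a missing key can never silently reach a real API call.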
## What's Next
I'm expanding Test Mode with:
- **Sample Data Library**: pre-built test datasets
- **Test Mode Scenarios**: guided tutorials ("Build your first workflow")
- **Comparison Mode**: show Test vs. Real responses side by side
- **Team Test Mode**: let teams collaborate in test before going live
## Try It Yourself
HarshAI is live with all 52 integrations in Test Mode.
No API keys. No credit card. Just build.
Day 18 of building a zero-dollar AI empire. Test Mode: complete. Next: Template marketplace.
Follow my journey: 52 integrations in 17 days, all open-source, all free-tier infrastructure.