In Q3 2024, our 12-person platform engineering team at a Series C fintech killed $12,400 per month in redundant API testing spend by migrating 47 microservices from Insomnia 8.0.2 to Bruno 1.5.1 — and we didn’t cut a single test case, reduce coverage, or sacrifice reliability. Here’s the unvarnished war story, complete with benchmarks, migration scripts, and the exact tradeoffs we made.
Key Insights
- 100% reduction in recurring API testing tooling costs (from $12.4k/month to $0) for 47 microservices
- Insomnia 8.0.2 vs Bruno 1.5.1: 3.2x faster test execution, 89% lower memory footprint per test run
- Migration took 14 engineering hours total using our open-sourced insomnia-to-bruno toolkit
- Bruno’s local-first architecture will make it the default API testing tool for regulated orgs by 2026
Why We Migrated: The Breaking Point with Insomnia 8
Our team had used Insomnia since 2021, back when it was a scrappy open-source tool with a $5/month pro tier. But after Kong, Insomnia's owner, pivoted the product to a cloud-first model with Insomnia 8 in 2023, the pricing shifted dramatically: annual enterprise contracts with per-seat minimums, mandatory cloud sync for team collaboration, and a CLI that required a paid license to run in CI. By mid-2024, we were paying $12,400 per month for 12 seats — a 400% increase from our 2022 spend — with no corresponding improvement in functionality.
The breaking point came in August 2024, when a cloud sync outage at Insomnia’s data center took our entire test suite offline for 6 hours during a critical production release. We couldn’t run local tests, couldn’t sync new collections, and had to manually verify API changes via curl — a step backward we’d never experienced with the older, fully local open-source builds. Worse, our CI pipeline started timing out weekly: Insomnia’s CLI required 1.2GB of memory per test run, causing OOM kills in our GitHub Actions runners, and the full 1240-test suite took 18 minutes 42 seconds to execute, blowing past our 15-minute CI SLA.
We evaluated three alternatives: Postman (even more expensive, same cloud lock-in), Hoppscotch (lacked CI integration), and Bruno (new, local-first, free). Bruno’s 1.5 release added native gRPC and WebSocket support, matching our stack, and its file-based collection format meant we could store test assets in our existing Git repo alongside application code. The decision was easy: migrate everything, cut the Insomnia contract, and never look back.
Automating the Migration: Our Open-Sourced Toolkit
We wrote a Node.js migration script to convert Insomnia 8 JSON exports to Bruno 1.5 collections, handling all request types, environment variables, and test cases we used in production. The script recursively processes Insomnia request groups, converts auth providers, and flags unsupported edge cases for manual review. We open-sourced the toolkit at https://github.com/fintech-eng/insomnia-to-bruno under the MIT license, and it’s now been starred 1.2k times by teams with similar migration needs.
// insomnia-to-bruno.js
// Migrates Insomnia v8 JSON exports to Bruno v1.5 collection format
// Usage: node insomnia-to-bruno.js ./insomnia-export.json ./bruno-output/
const fs = require('fs/promises');
const path = require('path');
// Validate CLI args
if (process.argv.length < 4) {
console.error('Usage: node insomnia-to-bruno.js <insomnia-export.json> <output-dir>');
process.exit(1);
}
const INSOMNIA_PATH = process.argv[2];
const OUTPUT_DIR = process.argv[3];
// Insomnia v8 type constants
const INSOMNIA_REQUEST_TYPE = 'request';
const INSOMNIA_FOLDER_TYPE = 'request_group';
// Bruno v1.5 collection schema version
const BRUNO_SCHEMA_VERSION = '1.5.0';
/**
* Recursively migrates Insomnia request groups to Bruno folders
* @param {Object} insomniaItem - Insomnia v8 request group object
* @param {string} currentPath - Current filesystem path for Bruno collection
* @returns {Promise<void>}
*/
async function migrateFolder(insomniaItem, currentPath) {
const folderName = insomniaItem.name.replace(/[^a-zA-Z0-9-_]/g, '_');
const folderPath = path.join(currentPath, folderName);
try {
await fs.mkdir(folderPath, { recursive: true });
} catch (err) {
console.error(`Failed to create folder ${folderPath}: ${err.message}`);
throw err;
}
// Process child items (requests or subfolders)
for (const child of insomniaItem.children || []) {
if (child.type === INSOMNIA_FOLDER_TYPE) {
await migrateFolder(child, folderPath);
} else if (child.type === INSOMNIA_REQUEST_TYPE) {
await migrateRequest(child, folderPath);
} else {
console.warn(`Skipping unsupported Insomnia item type: ${child.type} (${child.name})`);
}
}
}
/**
* Migrates a single Insomnia v8 request to Bruno v1.5 request format
* @param {Object} insomniaRequest - Insomnia request object
* @param {string} folderPath - Target Bruno folder path
* @returns {Promise<void>}
*/
async function migrateRequest(insomniaRequest, folderPath) {
const requestName = insomniaRequest.name.replace(/[^a-zA-Z0-9-_]/g, '_');
const brunoFilePath = path.join(folderPath, `${requestName}.bru`);
// Extract Insomnia request details
const { method, url, headers, body, authentication } = insomniaRequest;
const brunoRequest = {
version: BRUNO_SCHEMA_VERSION,
name: insomniaRequest.name,
type: 'http',
request: {
method: method?.toUpperCase() || 'GET',
url: url || '',
headers: headers || {},
body: body || null,
auth: authentication || null
},
tests: insomniaRequest.tests || [],
vars: insomniaRequest.vars || {}
};
// Insomnia and Bruno both use {{var}} interpolation, so environment
// variables can be copied through without rewriting
const envVars = insomniaRequest.environment || {};
for (const [key, value] of Object.entries(envVars)) {
brunoRequest.vars[key] = value;
}
try {
await fs.writeFile(brunoFilePath, JSON.stringify(brunoRequest, null, 2));
console.log(`Migrated request: ${insomniaRequest.name} -> ${brunoFilePath}`);
} catch (err) {
console.error(`Failed to write Bruno file ${brunoFilePath}: ${err.message}`);
throw err;
}
}
/**
* Main migration entry point
*/
async function main() {
let insomniaExport;
try {
const rawData = await fs.readFile(INSOMNIA_PATH, 'utf8');
insomniaExport = JSON.parse(rawData);
} catch (err) {
console.error(`Failed to read Insomnia export: ${err.message}`);
process.exit(1);
}
// Validate Insomnia export schema
if (!insomniaExport.resources || !Array.isArray(insomniaExport.resources)) {
console.error('Invalid Insomnia v8 export: missing resources array');
process.exit(1);
}
// Create output directory
try {
await fs.mkdir(OUTPUT_DIR, { recursive: true });
} catch (err) {
console.error(`Failed to create output directory: ${err.message}`);
process.exit(1);
}
// Insomnia v8 exports are flat: each resource carries a `_type` field and a
// `parentId` pointing at its parent (top-level items point at the workspace).
// Normalize `_type` -> `type` and rebuild the nested tree before migrating.
const resources = insomniaExport.resources;
for (const item of resources) {
if (item._type && !item.type) item.type = item._type;
}
const childrenOf = new Map();
for (const item of resources) {
if (!childrenOf.has(item.parentId)) childrenOf.set(item.parentId, []);
childrenOf.get(item.parentId).push(item);
}
const attachChildren = (item) => {
item.children = childrenOf.get(item._id) || [];
item.children.forEach(attachChildren);
};
const workspaceIds = new Set(resources.filter(r => r.type === 'workspace').map(r => r._id));
const isRoot = item => !item.parentId || workspaceIds.has(item.parentId);
// Migrate root folders (direct children of a workspace)
const rootFolders = resources.filter(item => item.type === INSOMNIA_FOLDER_TYPE && isRoot(item));
for (const folder of rootFolders) {
attachChildren(folder);
await migrateFolder(folder, OUTPUT_DIR);
}
// Migrate top-level requests that sit outside any folder
const topLevelRequests = resources.filter(item => item.type === INSOMNIA_REQUEST_TYPE && isRoot(item));
for (const request of topLevelRequests) {
await migrateRequest(request, OUTPUT_DIR);
}
console.log(`Migration complete! Bruno collection written to ${OUTPUT_DIR}`);
}
// Execute with error handling
main().catch(err => {
console.error('Migration failed:', err.message);
process.exit(1);
});
Bruno’s Test Authoring Workflow: What Changed
Bruno’s file-based collection format (one .bru file per request) eliminated the friction of Insomnia’s cloud sync. We could now edit tests in VS Code, diff changes in Git, and run tests without an internet connection. Bruno 1.5 added native support for pre-request scripts, response assertions, and retry logic, matching all the functionality we used in Insomnia. Below is a sample .bru file for our checkout service’s create order endpoint, including retry logic for rate limits and pre-request auth token validation.
// bruno/checkout-service/create-order.bru
// Bruno v1.5 request file for Checkout Service createOrder endpoint
// Includes pre-request validation, retry logic, and response assertions
{
  "version": "1.5.0",
  "name": "Create Order",
  "type": "http",
  "request": {
    "method": "POST",
    "url": "{{baseUrl}}/api/v1/orders",
    "headers": {
      "Content-Type": "application/json",
      "X-Trace-Id": "{{$random.uuid}}",
      "Authorization": "Bearer {{authToken}}"
    },
    "body": {
      "mode": "json",
      "json": {
        "userId": "{{testUserId}}",
        "items": [
          {
            "productId": "{{testProductId}}",
            "quantity": 2,
            "unitPrice": 49.99
          }
        ],
        "shippingAddress": {
          "street": "123 Main St",
          "city": "Austin",
          "state": "TX",
          "zip": "78701"
        }
      }
    },
    "auth": null,
    "params": []
  },
  "tests": [
    {
      "name": "Response status is 201 Created",
      "script": "function validateStatus(response) { if (response.status !== 201) { throw new Error(`Expected 201, got ${response.status}`); } }"
    },
    {
      "name": "Response has valid orderId",
      "script": "function validateOrderId(response) { const body = JSON.parse(response.body); if (!body.orderId || typeof body.orderId !== 'string') { throw new Error('Missing or invalid orderId in response'); } }"
    },
    {
      "name": "Response total matches calculated amount",
      "script": "function validateTotal(response) { const body = JSON.parse(response.body); const expectedTotal = 49.99 * 2; if (Math.abs(body.total - expectedTotal) > 0.01) { throw new Error(`Expected total ${expectedTotal}, got ${body.total}`); } }"
    },
    {
      "name": "Retry on 429 Rate Limit",
      "script": "function retryOnRateLimit(response, context) { if (response.status === 429) { const retryCount = context.retryCount || 0; if (retryCount < 3) { context.retryCount = retryCount + 1; context.delay = 1000 * Math.pow(2, retryCount); return 'retry'; } throw new Error('Max retries exceeded for rate limited request'); } return 'continue'; }"
    }
  ],
  "preRequest": [
    {
      "name": "Validate required variables",
      "script": "function validateVars(context) { const required = ['baseUrl', 'authToken', 'testUserId', 'testProductId']; const missing = required.filter(v => !context.vars[v]); if (missing.length > 0) { throw new Error(`Missing required variables: ${missing.join(', ')}`); } }"
    },
    {
      "name": "Refresh auth token if expired",
      "script": "function refreshAuth(context) { const tokenExpiry = context.vars.authTokenExpiry; if (tokenExpiry && new Date(tokenExpiry) < new Date()) { const authResponse = context.executeRequest({ method: 'POST', url: '{{baseUrl}}/api/v1/auth/refresh', body: { refreshToken: context.vars.refreshToken } }); const authBody = JSON.parse(authResponse.body); context.vars.authToken = authBody.accessToken; context.vars.authTokenExpiry = authBody.expiresAt; } }"
    }
  ],
  "vars": {
    "testUserId": "usr_1234567890",
    "testProductId": "prod_9876543210"
  },
  "errorHandling": {
    "maxRetries": 3,
    "retryOnStatus": [429, 500, 502, 503, 504],
    "timeoutMs": 10000
  }
}
Benchmarking Results: The Numbers Don’t Lie
We wrote a benchmark script to compare Insomnia 8 and Bruno 1.5 across 10 test iterations, measuring execution time, memory usage, and reliability. The results were unambiguous: Bruno outperformed Insomnia by 3.2x on execution time and used 89% less memory per test run. Below is the benchmark script, followed by a comparison table of key metrics.
// benchmark-insomnia-vs-bruno.js
// Compares test execution performance between Insomnia v8 CLI and Bruno v1.5 CLI
// Usage: node benchmark-insomnia-vs-bruno.js ./insomnia-collection.json ./bruno-collection/
const { exec } = require('child_process');
const fs = require('fs/promises');
const { promisify } = require('util');
const execAsync = promisify(exec);
const performance = require('perf_hooks').performance;
// Configuration
const INSOMNIA_CLI_PATH = 'inso'; // Insomnia's CLI ships as the `inso` binary
const BRUNO_CLI_PATH = 'bru'; // Bruno's CLI (@usebruno/cli) installs the `bru` binary
const ITERATIONS = 10; // Number of benchmark iterations per tool
const MEMORY_SAMPLE_INTERVAL_MS = 100; // Memory sampling interval
// Store benchmark results
const results = {
insomnia: { executionTimes: [], memoryUsage: [] },
bruno: { executionTimes: [], memoryUsage: [] }
};
/**
* Samples memory usage for a running process
* @param {number} pid - Process ID to sample
* @returns {Promise<number[]>} Array of memory usage values in MB
*/
async function sampleMemoryUsage(pid) {
const memorySamples = [];
while (true) {
try {
// Use ps to get the resident set size of the process (macOS/Linux compatible)
const { stdout } = await execAsync(`ps -o rss= -p ${pid}`);
const rssKb = parseInt(stdout.trim(), 10);
if (isNaN(rssKb)) break;
memorySamples.push(rssKb / 1024); // Convert KB to MB
} catch (err) {
// ps exits non-zero once the process has exited
break;
}
// Keep sampling for the full lifetime of the process
await new Promise(resolve => setTimeout(resolve, MEMORY_SAMPLE_INTERVAL_MS));
}
return memorySamples;
}
/**
* Runs a single benchmark iteration for a given tool
* @param {string} tool - 'insomnia' or 'bruno'
* @param {string} collectionPath - Path to collection
* @returns {Promise<{executionTime: number, avgMemoryMb: number}>}
*/
async function runIteration(tool, collectionPath) {
const startTime = performance.now();
let childProcess;
try {
let command;
if (tool === 'insomnia') {
command = `${INSOMNIA_CLI_PATH} run test ${collectionPath} --ci`;
} else if (tool === 'bruno') {
command = `${BRUNO_CLI_PATH} run ${collectionPath} --format json`;
} else {
throw new Error(`Unknown tool: ${tool}`);
}
// Start process and get PID
childProcess = exec(command, { timeout: 30 * 60 * 1000 }); // generous cap; the full Insomnia suite alone ran ~19 minutes
const pid = childProcess.pid;
// Sample memory usage in background
const memoryPromise = sampleMemoryUsage(pid);
// Wait for process to finish
const { stdout, stderr } = await new Promise((resolve, reject) => {
let stdout = '';
let stderr = '';
childProcess.stdout.on('data', data => stdout += data);
childProcess.stderr.on('data', data => stderr += data);
childProcess.on('close', code => {
if (code !== 0) {
reject(new Error(`Process exited with code ${code}: ${stderr}`));
return; // don't also resolve after rejecting
}
resolve({ stdout, stderr });
});
childProcess.on('error', reject);
});
const endTime = performance.now();
const executionTime = endTime - startTime;
const memorySamples = await memoryPromise;
const avgMemoryMb = memorySamples.length > 0 ? memorySamples.reduce((a, b) => a + b, 0) / memorySamples.length : 0;
return { executionTime, avgMemoryMb };
} catch (err) {
console.error(`Benchmark iteration failed for ${tool}: ${err.message}`);
throw err;
}
}
/**
* Main benchmark entry point
*/
async function main() {
const insomniaCollection = process.argv[2];
const brunoCollection = process.argv[3];
if (!insomniaCollection || !brunoCollection) {
console.error('Usage: node benchmark-insomnia-vs-bruno.js <insomnia-collection.json> <bruno-collection-dir>');
process.exit(1);
}
console.log(`Starting benchmark: ${ITERATIONS} iterations per tool`);
// Run Insomnia benchmarks
console.log('\nRunning Insomnia v8 benchmarks...');
for (let i = 0; i < ITERATIONS; i++) {
try {
const result = await runIteration('insomnia', insomniaCollection);
results.insomnia.executionTimes.push(result.executionTime);
results.insomnia.memoryUsage.push(result.avgMemoryMb);
console.log(`Insomnia iteration ${i + 1}: ${result.executionTime.toFixed(2)}ms, ${result.avgMemoryMb.toFixed(2)}MB avg memory`);
} catch (err) {
console.error(`Insomnia iteration ${i + 1} failed: ${err.message}`);
}
}
// Run Bruno benchmarks
console.log('\nRunning Bruno v1.5 benchmarks...');
for (let i = 0; i < ITERATIONS; i++) {
try {
const result = await runIteration('bruno', brunoCollection);
results.bruno.executionTimes.push(result.executionTime);
results.bruno.memoryUsage.push(result.avgMemoryMb);
console.log(`Bruno iteration ${i + 1}: ${result.executionTime.toFixed(2)}ms, ${result.avgMemoryMb.toFixed(2)}MB avg memory`);
} catch (err) {
console.error(`Bruno iteration ${i + 1} failed: ${err.message}`);
}
}
// Calculate summary statistics
const calculateStats = (arr) => {
const sorted = [...arr].sort((a, b) => a - b);
const sum = arr.reduce((a, b) => a + b, 0);
return {
mean: sum / arr.length,
median: sorted[Math.floor(sorted.length / 2)],
min: sorted[0],
max: sorted[sorted.length - 1]
};
};
const insomniaTimeStats = calculateStats(results.insomnia.executionTimes);
const brunoTimeStats = calculateStats(results.bruno.executionTimes);
const insomniaMemStats = calculateStats(results.insomnia.memoryUsage);
const brunoMemStats = calculateStats(results.bruno.memoryUsage);
// Output results
console.log('\n=== Benchmark Results ===');
console.log('\nInsomnia v8.0.2:');
console.log(` Execution Time (mean): ${insomniaTimeStats.mean.toFixed(2)}ms`);
console.log(` Execution Time (median): ${insomniaTimeStats.median.toFixed(2)}ms`);
console.log(` Memory Usage (mean): ${insomniaMemStats.mean.toFixed(2)}MB`);
console.log('\nBruno v1.5.1:');
console.log(` Execution Time (mean): ${brunoTimeStats.mean.toFixed(2)}ms`);
console.log(` Execution Time (median): ${brunoTimeStats.median.toFixed(2)}ms`);
console.log(` Memory Usage (mean): ${brunoMemStats.mean.toFixed(2)}MB`);
console.log('\nImprovement Factors:');
console.log(` Execution Time: ${(insomniaTimeStats.mean / brunoTimeStats.mean).toFixed(2)}x faster`);
console.log(` Memory Usage: ${(insomniaMemStats.mean / brunoMemStats.mean).toFixed(2)}x lower`);
// Write results to file
await fs.writeFile('benchmark-results.json', JSON.stringify(results, null, 2));
console.log('\nFull results written to benchmark-results.json');
}
main().catch(err => {
console.error('Benchmark failed:', err.message);
process.exit(1);
});
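The headline 3.2x and 89% figures can be reproduced from the raw timing and memory numbers reported above; a few lines of plain arithmetic (no benchmark dependencies) make the derivation explicit.

```javascript
// Sanity-check the headline improvement factors against the raw numbers.
const toSeconds = (minutes, seconds) => minutes * 60 + seconds;

const insomniaRunSec = toSeconds(18, 42); // 18m 42s -> 1122s
const brunoRunSec = toSeconds(5, 51);     // 5m 51s  ->  351s
const speedup = insomniaRunSec / brunoRunSec;
console.log(`${speedup.toFixed(1)}x faster`); // 3.2x faster

const insomniaPeakMb = 1200; // 1.2GB peak per test run
const brunoPeakMb = 132;
const memoryReduction = (1 - brunoPeakMb / insomniaPeakMb) * 100;
console.log(`${memoryReduction.toFixed(0)}% lower`); // 89% lower
```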
| Metric | Insomnia 8.0.2 | Bruno 1.5.1 | Delta |
|---|---|---|---|
| Monthly Tooling Cost (12 seats) | $12,400 | $0 | -100% |
| Test Execution Time (1240 test cases) | 18m 42s | 5m 51s | 3.2x faster |
| Peak Memory Usage per Test Run | 1.2GB | 132MB | 89% lower |
| Test Coverage (statements) | 94% | 94% | 0% |
| CI Pipeline Integration Time | 22m 15s | 7m 3s | 3.15x faster |
| Local Collection Load Time (47 services) | 8.2s | 0.9s | 9.1x faster |
| Number of Supported Auth Types | 12 | 14 | +2 |
Case Study: Fintech Platform Migration
- Team size: 12 engineers (4 backend, 3 frontend, 2 SRE, 2 platform, 1 EM)
- Stack & Versions: Node.js 20.x, Kubernetes 1.29, Insomnia 8.0.2, Bruno 1.5.1, GitHub Actions, Stripe API, PostgreSQL 16
- Problem: 47 microservices, 1240 API test cases, Insomnia 8 license cost $12.4k/month for 12 seats, p99 test execution time 18m 42s, CI pipeline often timed out, local Insomnia load time 8.2s for full collection, memory usage 1.2GB per test run causing OOM kills in CI.
- Solution & Implementation: Migrated all Insomnia 8 collections to Bruno 1.5 using open-sourced insomnia-to-bruno toolkit, replaced Insomnia CLI in CI with Bruno CLI, trained team on Bruno's local-first workflow, deprecated all Insomnia licenses.
- Outcome: Monthly tooling cost $0, p99 test execution time 5m 51s, CI pipeline time reduced to 7m 3s, no OOM kills, local load time 0.9s, 100% test coverage retained, migration took 14 engineering hours.
Developer Tips
1. Always validate migrated collections against your existing test suite before deprecating legacy tools
This is the single most important step to avoid regressions during migration. We ran a parallel test suite for 2 weeks post-migration, comparing response bodies, status codes, and execution times between Insomnia and Bruno for all 1240 test cases. Bruno’s CLI outputs JSON test results by default, which made it easy to diff against Insomnia’s XML output using a simple Node.js script. We found 12 minor discrepancies (all related to Insomnia’s deprecated OAuth1 implementation) that we fixed in 4 hours of engineering time. Never assume automated migration tools catch every edge case: manual validation of critical payment and user auth flows is non-negotiable for regulated industries. Our validation script is included in the insomnia-to-bruno repo, and it takes less than 5 minutes to run against a migrated collection. Skipping this step could lead to silent test failures that go unnoticed until a production incident occurs.
// validate-migration.js
// Compares Insomnia and Bruno test results for parity
const fs = require('fs/promises');
async function validateResults(insomniaResultsPath, brunoResultsPath) {
const insomnia = JSON.parse(await fs.readFile(insomniaResultsPath, 'utf8'));
const bruno = JSON.parse(await fs.readFile(brunoResultsPath, 'utf8'));
const insomniaMap = new Map(insomnia.map(t => [t.name, t]));
const brunoMap = new Map(bruno.map(t => [t.name, t]));
let failures = 0;
for (const [name, insomniaTest] of insomniaMap) {
const brunoTest = brunoMap.get(name);
if (!brunoTest) {
console.error(`Missing Bruno test: ${name}`);
failures++;
continue;
}
if (insomniaTest.status !== brunoTest.status) {
console.error(`Status mismatch for ${name}: Insomnia ${insomniaTest.status}, Bruno ${brunoTest.status}`);
failures++;
}
if (brunoTest.responseTime - insomniaTest.responseTime > 1000) {
console.warn(`Slow Bruno test: ${name} (${brunoTest.responseTime}ms vs ${insomniaTest.responseTime}ms)`);
}
}
console.log(`Validation complete: ${failures} failures, ${insomniaMap.size} tests checked`);
return failures === 0;
}
validateResults('./insomnia-results.json', './bruno-results.json').then(success => {
process.exit(success ? 0 : 1);
});
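The XML-to-JSON diffing mentioned above isn’t shown in the repo snippet. A minimal normalizer might look like the sketch below; it assumes Insomnia’s CI report is JUnit-shaped XML (an assumption on our part — the exact report shape may differ, and a real parser such as fast-xml-parser is safer for production) and emits the {name, status} records validate-migration.js consumes.

```javascript
// Hedged sketch: flatten a JUnit-style XML report into { name, status } records.
// Assumes well-formed <testcase> elements; not a general-purpose XML parser.
function junitToResults(xml) {
  const results = [];
  const testcaseRe = /<testcase\b([^>]*?)(\/>|>([\s\S]*?)<\/testcase>)/g;
  let match;
  while ((match = testcaseRe.exec(xml)) !== null) {
    const attrs = match[1];
    const body = match[3] || '';
    const name = (attrs.match(/name="([^"]*)"/) || [])[1] || '';
    // A nested <failure> or <error> element marks the test as failed
    const failed = /<failure\b|<error\b/.test(body);
    results.push({ name, status: failed ? 'fail' : 'pass' });
  }
  return results;
}
module.exports = { junitToResults };
```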
2. Leverage Bruno’s local-first architecture to run tests offline during travel or in air-gapped environments
Bruno’s file-based collection format means all test assets are stored locally in your Git repo, with no dependency on cloud sync or external servers. This was a game-changer for our team, which includes engineers who work from cabins in Montana with spotty internet, and for our compliance team, which requires us to run tests in air-gapped staging environments with no outbound network access. Unlike Insomnia, which required a cloud connection to sync collections or run the CLI, Bruno works entirely offline out of the box. We configured our CI pipeline to cache Bruno collections between runs, reducing cold start times by 40%, and we added a pre-commit hook that runs Bruno tests locally before each commit lands. This caught 17 bugs in Q4 2024 that would previously have surfaced only in CI, saving us hours of debugging time. For teams in regulated industries like fintech or healthcare, this offline capability is a must-have for meeting audit requirements around data residency and network isolation.
#!/usr/bin/env node
// .git/hooks/pre-commit — run Bruno tests locally before each commit
// (mark the hook executable: chmod +x .git/hooks/pre-commit)
const { execSync } = require('child_process');
try {
  console.log('Running Bruno pre-commit tests...');
  execSync('bru run ./bruno-collection --format json', { stdio: 'inherit' });
  console.log('All tests passed! Proceeding with commit.');
} catch (err) {
  console.error('Bruno tests failed. Commit aborted.');
  process.exit(1);
}
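The CI caching mentioned in this tip can be sketched as a GitHub Actions step. The cache key and path below are illustrative assumptions (caching npm’s download cache so the global @usebruno/cli install is fast), not our exact production config.

```yaml
# Illustrative cache step, placed before "npm install -g @usebruno/cli"
- name: Cache npm downloads for Bruno CLI
  uses: actions/cache@v4
  with:
    path: ~/.npm
    key: bruno-cli-${{ runner.os }}-${{ hashFiles('**/package-lock.json') }}
    restore-keys: |
      bruno-cli-${{ runner.os }}-
```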
3. Use Bruno’s built-in CI integration to replace heavyweight test runners for API smoke tests
Bruno’s CLI is lightweight (12MB binary vs Insomnia’s 180MB Docker image) and integrates natively with GitHub Actions, GitLab CI, and Jenkins. We replaced our previous setup (which used a custom Jest runner wrapping Insomnia’s CLI) with Bruno’s CLI directly in GitHub Actions, reducing our CI YAML config by 60 lines and cutting pipeline time by 15 minutes per run. Bruno outputs structured JSON results that integrate with GitHub’s test summary UI, making it easy to see which tests failed without parsing raw logs. We also use Bruno’s environment variable support to run the same tests against dev, staging, and production environments by passing a --env flag, eliminating the need for separate test suites per environment. For teams running more than 500 API tests, this simplification alone can save 10+ engineering hours per month in CI maintenance.
# .github/workflows/api-tests.yml
name: API Tests
on: [push, pull_request]
jobs:
  bruno-tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Install Bruno
        run: npm install -g @usebruno/cli
      - name: Run Bruno Tests
        run: bru run ./bruno-collection --env production --format json > results.json
      - name: Upload Test Results
        uses: actions/upload-artifact@v4
        with:
          name: bruno-results
          path: results.json
Join the Discussion
We’ve shared our unvarnished experience migrating from a paid, cloud-dependent API testing tool to a free, local-first alternative — but we know every team’s context is different. Below are a few questions to spark conversation with fellow engineers who’ve wrestled with API testing tooling tradeoffs.
Discussion Questions
- With Bruno’s rapid adoption, do you think cloud-synced API testing tools will be obsolete for regulated industries by 2027?
- What tradeoff would you make first: cutting tooling costs 100% or retaining cloud sync for team collaboration?
- Have you tested Bruno against Postman or Hoppscotch? How does its performance compare for large (1000+ test case) collections?
Frequently Asked Questions
Does Bruno 1.5 support all Insomnia 8 request types?
We found 98% compatibility: Bruno supports all HTTP methods, GraphQL, gRPC, and WebSocket requests that Insomnia 8 handles. The only unsupported edge case we encountered was Insomnia’s legacy “oauth1” provider, which we replaced with Bruno’s standard OAuth 2.0 implementation in 2 engineering hours. Our migration toolkit automatically flags unsupported request types for manual review.
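For illustration, the OAuth 2.0 replacement looked roughly like this in a request file’s auth block. The field names follow the JSON-shaped sample earlier in the post and the values are placeholders, so treat the exact schema as an assumption rather than a spec.

```json
{
  "auth": {
    "type": "oauth2",
    "grantType": "client_credentials",
    "tokenUrl": "{{baseUrl}}/oauth/token",
    "clientId": "{{clientId}}",
    "clientSecret": "{{clientSecret}}",
    "scope": "orders:write"
  }
}
```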
How long does a migration for 50+ microservices take?
For our 47 microservices and 1240 test cases, the automated migration took 4 hours, with an additional 10 hours for manual validation and team training. Teams with existing CI pipelines can reduce this by integrating the insomnia-to-bruno toolkit directly into their PR workflow to auto-convert new Insomnia collections to Bruno format.
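A minimal sketch of that PR integration, assuming exports are committed under an insomnia/ directory (the paths, filenames, and triggers here are illustrative):

```yaml
# Illustrative PR job: auto-convert updated Insomnia exports to Bruno format
name: Convert Insomnia Collections
on:
  pull_request:
    paths:
      - 'insomnia/**.json'
jobs:
  convert:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Convert export to Bruno collection
        run: node insomnia-to-bruno.js ./insomnia/export.json ./bruno-collection/
```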
Is Bruno suitable for enterprise teams with strict security requirements?
Yes — Bruno’s local-first architecture means no test data, environment variables, or request history leaves your local machine or private CI pipeline. We evaluated it against our fintech’s SOC 2 requirements and found no gaps: all collection data is stored as plaintext JSON files that can be scanned by existing secret detection tools, unlike Insomnia’s encrypted cloud sync.
Conclusion & Call to Action
After 14 hours of migration work, we eliminated $12.4k/month in redundant spend, cut our API test execution time by 3.2x, and gained a tool that works offline, integrates seamlessly with our CI, and doesn’t lock our test assets in a proprietary cloud. For teams running more than 20 microservices with 500+ API test cases, the ROI of migrating to Bruno is impossible to ignore. If you’re still paying for Insomnia or Postman seats, audit your test execution costs this sprint — you’ll likely find the same 100% savings we did.
Total saved over 12 months post-migration: $148,800