DEV Community

Wilson Xu

Build Your Own Postman Alternative: A Terminal-First API Testing CLI

GUI-based API testing tools like Postman served us well for years, but a growing number of developers are trading their point-and-click workflows for something leaner: terminal-first API testing. The reasons are compelling — CLI tools integrate naturally with version control, play well with CI/CD pipelines, consume fewer resources, and keep developers in the flow state that context-switching to a GUI application inevitably breaks.

In this tutorial, we will build httpcli, a fully functional terminal-first API testing tool that reads .http files (the same format used by VS Code's REST Client extension), supports variables, environment management, request chaining, authentication helpers, response diffing, and syntax-highlighted output. By the end, you will have a tool that handles 90% of what Postman does — from your terminal.

Why Developers Are Moving to CLI-Based API Testing

The shift is not just about preference. It is about workflow efficiency:

  • Version control friendly. .http files are plain text. They diff cleanly, review easily in PRs, and live alongside the code they test.
  • CI/CD native. A CLI tool runs in any pipeline without headless browser hacks or desktop app installations.
  • Scriptable. Pipe output to jq, chain with shell scripts, integrate with monitoring — the Unix philosophy at work.
  • Resource efficient. No Electron app consuming 500MB of RAM to send a GET request.
  • Reproducible. Share a .http file and an .env file with a teammate. They run the exact same requests. No "import my collection" dance.

Tools like httpie, curl, and VS Code's REST Client have paved the way. We are going to build something that combines the best of all three.

Project Setup

Initialize the project and install the dependencies we need:

mkdir httpcli && cd httpcli
npm init -y
npm install undici chalk@5 dotenv yargs diff

Here is what each dependency does:

  • undici: Fast, modern HTTP client for Node.js
  • chalk: Terminal string styling and syntax highlighting
  • dotenv: Environment variable loading from .env files
  • yargs: CLI argument parsing
  • diff: Response diffing between runs

Set "type": "module" in your package.json and add a bin entry:

{
  "name": "httpcli",
  "version": "1.0.0",
  "type": "module",
  "bin": {
    "httpcli": "./bin/cli.js"
  }
}

Understanding the .http File Format

The .http file format (popularized by JetBrains' HTTP Client and VS Code's REST Client) is beautifully simple. Each request starts with a method and URL, followed by headers and an optional body; requests are separated by ###:

@baseUrl = https://api.example.com
@token = Bearer abc123

### Get all users
GET {{baseUrl}}/users
Authorization: {{token}}
Content-Type: application/json

### Create a user
POST {{baseUrl}}/users
Authorization: {{token}}
Content-Type: application/json

{
  "name": "Wilson Xu",
  "email": "wilson@example.com"
}

### Get user by ID (chained from previous response)
GET {{baseUrl}}/users/{{createUser.id}}
Authorization: {{token}}

Variables are declared with @name = value and referenced with {{name}}. Request separators (###) can include an optional name. This is the format we need to parse.

Parsing .http Files

The parser is the heart of our tool. It needs to handle variables, request separation, headers, and bodies:

// src/parser.js
import { readFileSync } from "fs";

export function parseHttpFile(filePath, envVars = {}) {
  const content = readFileSync(filePath, "utf-8");
  const lines = content.split("\n");

  const variables = { ...envVars };
  const requests = [];
  let current = null;
  let parsingBody = false;
  let bodyLines = [];

  for (const line of lines) {
    const trimmed = line.trim();

    // Variable declaration: @name = value
    const varMatch = trimmed.match(/^@(\w+)\s*=\s*(.+)$/);
    if (varMatch && !parsingBody) {
      variables[varMatch[1]] = interpolate(varMatch[2], variables);
      continue;
    }

    // Request separator
    if (trimmed.startsWith("###")) {
      if (current) {
        current.body = bodyLines.join("\n").trim() || undefined;
        requests.push(current);
      }
      const name = trimmed.replace(/^###\s*/, "").trim();
      current = { name, method: null, url: null, headers: {}, body: undefined };
      parsingBody = false;
      bodyLines = [];
      continue;
    }

    // Skip comment lines (but not inside a body, where "#" or "//" may be valid content)
    if ((trimmed.startsWith("#") || trimmed.startsWith("//")) && !parsingBody) continue;

    // HTTP method line: GET https://example.com
    const methodMatch = trimmed.match(
      /^(GET|POST|PUT|PATCH|DELETE|HEAD|OPTIONS)\s+(.+)$/i
    );
    if (methodMatch && !parsingBody) {
      if (!current) {
        current = { name: "", method: null, url: null, headers: {}, body: undefined };
      }
      current.method = methodMatch[1].toUpperCase();
      current.url = interpolate(methodMatch[2], variables);
      continue;
    }

    // Header line: Key: Value
    const headerMatch = trimmed.match(/^([\w-]+):\s*(.+)$/);
    if (headerMatch && current && current.method && !parsingBody) {
      current.headers[headerMatch[1]] = interpolate(headerMatch[2], variables);
      continue;
    }

    // Empty line signals start of body
    if (trimmed === "" && current && current.method && !parsingBody) {
      parsingBody = true;
      continue;
    }

    // Body content
    if (parsingBody) {
      bodyLines.push(interpolate(line, variables));
    }
  }

  // Push the last request
  if (current) {
    current.body = bodyLines.join("\n").trim() || undefined;
    requests.push(current);
  }

  return { requests, variables };
}

function interpolate(str, vars) {
  return str.replace(/\{\{(\w+(?:\.\w+)*)\}\}/g, (match, key) => {
    // Support nested keys like createUser.id
    const parts = key.split(".");
    let value = vars;
    for (const part of parts) {
      value = value?.[part];
    }
    return value !== undefined ? String(value) : match;
  });
}

The parser handles four distinct sections of each request: variable declarations at the file level, the method/URL line, headers, and the body (everything after the first blank line following headers). The interpolate function supports dot-notation access for chained response values, which we will wire up later.
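To make the dot-notation behavior concrete, here is the interpolate helper exercised on its own (copied verbatim from the parser so the snippet runs standalone):

```javascript
// Standalone copy of the interpolate helper from src/parser.js.
function interpolate(str, vars) {
  return str.replace(/\{\{(\w+(?:\.\w+)*)\}\}/g, (match, key) => {
    const parts = key.split(".");
    let value = vars;
    for (const part of parts) {
      value = value?.[part];
    }
    return value !== undefined ? String(value) : match;
  });
}

const vars = {
  baseUrl: "https://api.example.com",
  createUser: { id: 42 },
};

// Simple and dot-notation lookups both resolve; unknown names stay untouched.
console.log(interpolate("{{baseUrl}}/users/{{createUser.id}}", vars));
// → https://api.example.com/users/42
console.log(interpolate("{{missing}}", vars));
// → {{missing}}
```

Unknown variables are deliberately left as-is rather than replaced with an empty string, which makes missing-variable bugs visible in the request URL.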

Making Requests with Undici

With parsed requests in hand, we need an executor that fires them off and captures detailed response data:

// src/executor.js
import { request } from "undici";

export async function executeRequest(req) {
  const startTime = performance.now();

  try {
    const options = {
      method: req.method,
      headers: req.headers,
    };

    if (req.body && ["POST", "PUT", "PATCH"].includes(req.method)) {
      options.body = req.body;
    }

    const response = await request(req.url, options);
    const elapsed = performance.now() - startTime;

    const contentType = response.headers["content-type"] || "";
    let body;

    if (contentType.includes("application/json")) {
      body = await response.body.json();
    } else {
      body = await response.body.text();
    }

    return {
      status: response.statusCode,
      headers: response.headers,
      body,
      elapsed: Math.round(elapsed),
      size: JSON.stringify(body).length,
      request: req,
    };
  } catch (error) {
    return {
      status: 0,
      headers: {},
      body: null,
      size: 0,
      error: error.message,
      elapsed: Math.round(performance.now() - startTime),
      request: req,
    };
  }
}

We use undici over the built-in fetch because it gives us more control over connection handling and performs better in CLI contexts where you might fire dozens of requests in sequence.

Pretty-Printing Responses with Syntax Highlighting

Terminal output should be informative and scannable. We use chalk to add color and structure:

// src/formatter.js
import chalk from "chalk";

const STATUS_COLORS = {
  2: chalk.green,
  3: chalk.yellow,
  4: chalk.red,
  5: chalk.bgRed.white,
};

export function formatResponse(result) {
  const statusColor = STATUS_COLORS[Math.floor(result.status / 100)] || chalk.white;
  const lines = [];

  // Status line
  lines.push(
    statusColor(`  HTTP ${result.status}`) +
    chalk.gray(` | ${result.elapsed}ms | ${formatBytes(result.size)}`)
  );

  if (result.error) {
    lines.push(chalk.red(`  Error: ${result.error}`));
    return lines.join("\n");
  }

  // Response headers (condensed)
  const importantHeaders = [
    "content-type", "x-request-id", "x-ratelimit-remaining", "cache-control"
  ];
  for (const h of importantHeaders) {
    if (result.headers[h]) {
      lines.push(chalk.gray(`  ${h}: ${result.headers[h]}`));
    }
  }

  // Body
  lines.push("");
  if (typeof result.body === "object") {
    lines.push(highlightJSON(JSON.stringify(result.body, null, 2)));
  } else {
    lines.push(chalk.white(`  ${result.body}`));
  }

  return lines.join("\n");
}

function highlightJSON(json) {
  return json
    .replace(/"([^"]+)":/g, (_, key) => `  ${chalk.cyan(`"${key}"`)}:`)
    .replace(/: "([^"]+)"/g, (_, val) => `: ${chalk.green(`"${val}"`)}`)
    .replace(/: (\d+)/g, (_, num) => `: ${chalk.yellow(num)}`)
    .replace(/: (true|false)/g, (_, bool) => `: ${chalk.magenta(bool)}`)
    .replace(/: (null)/g, () => `: ${chalk.gray("null")}`);
}

function formatBytes(bytes) {
  if (bytes < 1024) return `${bytes} B`;
  if (bytes < 1024 * 1024) return `${(bytes / 1024).toFixed(1)} KB`;
  return `${(bytes / (1024 * 1024)).toFixed(1)} MB`;
}

The JSON highlighter applies distinct colors to keys, string values, numbers, booleans, and nulls — making it easy to scan large API responses without piping through external tools.
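As a quick sanity check, the formatBytes helper (copied from above) rounds to one decimal place at each unit boundary:

```javascript
// Standalone copy of formatBytes from src/formatter.js.
function formatBytes(bytes) {
  if (bytes < 1024) return `${bytes} B`;
  if (bytes < 1024 * 1024) return `${(bytes / 1024).toFixed(1)} KB`;
  return `${(bytes / (1024 * 1024)).toFixed(1)} MB`;
}

console.log(formatBytes(512));             // → 512 B
console.log(formatBytes(2048));            // → 2.0 KB
console.log(formatBytes(5 * 1024 * 1024)); // → 5.0 MB
```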

Environment Management

Real-world API testing requires different configurations for development, staging, and production. We handle this with environment-specific .env files:

// src/env.js
import { readFileSync, existsSync } from "fs";
import { join, dirname } from "path";

export function loadEnvironment(httpFilePath, envName = "dev") {
  const dir = dirname(httpFilePath);
  const envFiles = [
    join(dir, `.env`),
    join(dir, `.env.${envName}`),
    join(dir, `.env.local`),
  ];

  const vars = {};

  for (const file of envFiles) {
    if (existsSync(file)) {
      const content = readFileSync(file, "utf-8");
      for (const line of content.split("\n")) {
        const trimmed = line.trim();
        if (trimmed && !trimmed.startsWith("#")) {
          const eqIndex = trimmed.indexOf("=");
          if (eqIndex > 0) {
            const key = trimmed.slice(0, eqIndex).trim();
            let value = trimmed.slice(eqIndex + 1).trim();
            // Strip surrounding quotes
            if (
              (value.startsWith('"') && value.endsWith('"')) ||
              (value.startsWith("'") && value.endsWith("'"))
            ) {
              value = value.slice(1, -1);
            }
            vars[key] = value;
          }
        }
      }
    }
  }

  return vars;
}

The loading order matters: .env provides defaults, .env.{envName} overrides for the target environment, and .env.local adds machine-specific overrides (which you .gitignore). This mirrors the convention used by Vite, Next.js, and other modern tools — your team will already be familiar with it.
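The precedence rule can be sketched in a few lines (a condensed model of the merge semantics, not the full loadEnvironment):

```javascript
// Minimal .env-style line parser, mirroring the key=value handling above.
function parseEnv(content) {
  const vars = {};
  for (const line of content.split("\n")) {
    const trimmed = line.trim();
    if (!trimmed || trimmed.startsWith("#")) continue;
    const eq = trimmed.indexOf("=");
    if (eq > 0) vars[trimmed.slice(0, eq).trim()] = trimmed.slice(eq + 1).trim();
  }
  return vars;
}

const base = parseEnv("baseUrl=https://api.example.com\ntimeout=3000");
const staging = parseEnv("baseUrl=https://staging-api.example.com");

// Later files win: spread .env first, then the environment-specific overrides.
const merged = { ...base, ...staging };

console.log(merged.baseUrl); // → https://staging-api.example.com
console.log(merged.timeout); // → 3000
```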

Example .env.staging file:

baseUrl=https://staging-api.example.com
token=Bearer staging_token_xyz
timeout=5000

Supporting Authentication

Authentication is table stakes for API testing. We support the three most common patterns:

// src/auth.js
export function applyAuth(req, authConfig) {
  if (!authConfig) return req;

  const headers = { ...req.headers };

  switch (authConfig.type) {
    case "bearer":
      headers["Authorization"] = `Bearer ${authConfig.token}`;
      break;

    case "basic": {
      const encoded = Buffer.from(
        `${authConfig.username}:${authConfig.password}`
      ).toString("base64");
      headers["Authorization"] = `Basic ${encoded}`;
      break;
    }

    case "apikey":
      if (authConfig.in === "header") {
        headers[authConfig.name] = authConfig.value;
      } else if (authConfig.in === "query") {
        const url = new URL(req.url);
        url.searchParams.set(authConfig.name, authConfig.value);
        return { ...req, url: url.toString(), headers };
      }
      break;
  }

  return { ...req, headers };
}

Auth configuration can be specified in the .env file:

AUTH_TYPE=bearer
AUTH_TOKEN=eyJhbGciOiJIUzI1NiIs...

# Or for basic auth:
# AUTH_TYPE=basic
# AUTH_USERNAME=admin
# AUTH_PASSWORD=secret123

# Or for API key:
# AUTH_TYPE=apikey
# AUTH_KEY_NAME=X-API-Key
# AUTH_KEY_VALUE=your-api-key
# AUTH_KEY_IN=header

The auth module reads these values and injects the appropriate headers (or query parameters) before each request fires. Headers defined directly in the .http file take precedence, so you can always override on a per-request basis.
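For instance, the "basic" branch boils down to base64-encoding the credential pair (copied here as a standalone helper):

```javascript
// The "basic" case from applyAuth, isolated for illustration.
function basicAuthHeader(username, password) {
  const encoded = Buffer.from(`${username}:${password}`).toString("base64");
  return `Basic ${encoded}`;
}

console.log(basicAuthHeader("admin", "secret123"));
// → Basic YWRtaW46c2VjcmV0MTIz
```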

Request Chaining

This is where our tool gets powerful. Request chaining lets you use values from one response in subsequent requests — essential for workflows like "create a resource, then fetch it by ID":

// src/chain.js
export function buildChainContext(results) {
  const context = {};

  for (const result of results) {
    if (result.request.name && typeof result.body === "object") {
      // Convert request name to camelCase key
      const key = toCamelCase(result.request.name);
      context[key] = flattenObject(result.body);
    }
  }

  return context;
}

function toCamelCase(str) {
  return str
    .replace(/[^a-zA-Z0-9\s]/g, "")
    .trim()
    .split(/\s+/)
    .map((word, i) =>
      i === 0
        ? word.toLowerCase()
        : word.charAt(0).toUpperCase() + word.slice(1).toLowerCase()
    )
    .join("");
}

function flattenObject(obj, prefix = "") {
  const result = {};
  for (const [key, value] of Object.entries(obj)) {
    const fullKey = prefix ? `${prefix}.${key}` : key;
    if (value && typeof value === "object" && !Array.isArray(value)) {
      Object.assign(result, flattenObject(value, fullKey));
    }
    result[fullKey] = value;
  }
  return result;
}

With chaining, a .http file like this works seamlessly:

### Create user
POST {{baseUrl}}/users
Content-Type: application/json

{"name": "Test User", "email": "test@example.com"}

### Get created user
GET {{baseUrl}}/users/{{createUser.id}}

The second request automatically receives the id field from the first request's response body. The chain context is rebuilt after each request, so later requests can reference any earlier response.
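To see the name-to-key mapping in action, here are the chain helpers copied verbatim and run on a sample response:

```javascript
// Standalone copies of the helpers from src/chain.js.
function toCamelCase(str) {
  return str
    .replace(/[^a-zA-Z0-9\s]/g, "")
    .trim()
    .split(/\s+/)
    .map((word, i) =>
      i === 0 ? word.toLowerCase() : word.charAt(0).toUpperCase() + word.slice(1).toLowerCase()
    )
    .join("");
}

function flattenObject(obj, prefix = "") {
  const result = {};
  for (const [key, value] of Object.entries(obj)) {
    const fullKey = prefix ? `${prefix}.${key}` : key;
    if (value && typeof value === "object" && !Array.isArray(value)) {
      Object.assign(result, flattenObject(value, fullKey));
    }
    result[fullKey] = value;
  }
  return result;
}

// The "### Create user" separator name becomes the chain key "createUser".
const key = toCamelCase("Create user");
const flat = flattenObject({ id: 11, profile: { plan: "pro" } });

console.log(key);                  // → createUser
console.log(flat.id);              // → 11
console.log(flat["profile.plan"]); // → pro
```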

Response History and Diffing

Saving responses and comparing them across runs is invaluable for regression testing and debugging:

// src/history.js
import { writeFileSync, readFileSync, existsSync, mkdirSync } from "fs";
import { join } from "path";
import { homedir } from "os";
import { createTwoFilesPatch } from "diff";
import chalk from "chalk";

// homedir() is portable across platforms, unlike process.env.HOME
const HISTORY_DIR = join(homedir(), ".httpcli", "history");

export function saveResponse(requestName, result) {
  mkdirSync(HISTORY_DIR, { recursive: true });

  const sanitizedName = requestName.replace(/[^a-zA-Z0-9-_]/g, "_");
  const timestamp = new Date().toISOString().replace(/[:.]/g, "-");
  const filename = `${sanitizedName}_${timestamp}.json`;

  const entry = {
    timestamp: new Date().toISOString(),
    request: {
      method: result.request.method,
      url: result.request.url,
    },
    response: {
      status: result.status,
      headers: result.headers,
      body: result.body,
      elapsed: result.elapsed,
    },
  };

  writeFileSync(join(HISTORY_DIR, filename), JSON.stringify(entry, null, 2));

  // Also save as "latest" for easy diffing
  const latestPath = join(HISTORY_DIR, `${sanitizedName}_latest.json`);
  const previousPath = join(HISTORY_DIR, `${sanitizedName}_previous.json`);

  if (existsSync(latestPath)) {
    const existing = readFileSync(latestPath, "utf-8");
    writeFileSync(previousPath, existing);
  }

  writeFileSync(latestPath, JSON.stringify(entry, null, 2));

  return filename;
}

export function diffResponses(requestName) {
  const sanitizedName = requestName.replace(/[^a-zA-Z0-9-_]/g, "_");
  const latestPath = join(HISTORY_DIR, `${sanitizedName}_latest.json`);
  const previousPath = join(HISTORY_DIR, `${sanitizedName}_previous.json`);

  if (!existsSync(latestPath) || !existsSync(previousPath)) {
    return null;
  }

  const latest = JSON.parse(readFileSync(latestPath, "utf-8"));
  const previous = JSON.parse(readFileSync(previousPath, "utf-8"));

  const latestBody = JSON.stringify(latest.response.body, null, 2);
  const previousBody = JSON.stringify(previous.response.body, null, 2);

  if (latestBody === previousBody) {
    return { changed: false };
  }

  const patch = createTwoFilesPatch(
    "previous",
    "latest",
    previousBody,
    latestBody,
    previous.timestamp,
    latest.timestamp
  );

  return {
    changed: true,
    diff: colorizeDiff(patch),
    previousTimestamp: previous.timestamp,
    latestTimestamp: latest.timestamp,
  };
}

function colorizeDiff(diff) {
  return diff
    .split("\n")
    .map((line) => {
      if (line.startsWith("+") && !line.startsWith("+++")) return chalk.green(line);
      if (line.startsWith("-") && !line.startsWith("---")) return chalk.red(line);
      if (line.startsWith("@@")) return chalk.cyan(line);
      return chalk.gray(line);
    })
    .join("\n");
}

Every response is saved with a timestamp, and the tool maintains a _latest and _previous snapshot for each named request. Running with the --diff flag shows you exactly what changed between the current and last run — perfect for catching unexpected API behavior.
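The latest/previous rotation is the part that makes diffing possible. Here is a minimal model of it, run against a throwaway temp directory instead of the real history folder:

```javascript
import { writeFileSync, readFileSync, existsSync, mkdtempSync } from "fs";
import { join } from "path";
import { tmpdir } from "os";

// Throwaway directory standing in for ~/.httpcli/history.
const dir = mkdtempSync(join(tmpdir(), "httpcli-demo-"));
const latestPath = join(dir, "getUsers_latest.json");
const previousPath = join(dir, "getUsers_previous.json");

// Same rotation as saveResponse: demote the old latest, then write the new one.
function save(entry) {
  if (existsSync(latestPath)) {
    writeFileSync(previousPath, readFileSync(latestPath, "utf-8"));
  }
  writeFileSync(latestPath, JSON.stringify(entry));
}

save({ run: 1 });
save({ run: 2 });

console.log(JSON.parse(readFileSync(previousPath, "utf-8")).run); // → 1
console.log(JSON.parse(readFileSync(latestPath, "utf-8")).run);   // → 2
```

After any two runs, the pair of snapshots is exactly what diffResponses compares.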

Wiring It All Together: The CLI

Now we connect every module into a cohesive CLI experience:

#!/usr/bin/env node
// bin/cli.js
import yargs from "yargs";
import { hideBin } from "yargs/helpers";
import chalk from "chalk";
import { parseHttpFile } from "../src/parser.js";
import { executeRequest } from "../src/executor.js";
import { formatResponse } from "../src/formatter.js";
import { loadEnvironment } from "../src/env.js";
import { applyAuth } from "../src/auth.js";
import { buildChainContext } from "../src/chain.js";
import { saveResponse, diffResponses } from "../src/history.js";

const argv = yargs(hideBin(process.argv))
  .usage("Usage: httpcli <file.http> [options]")
  .option("env", {
    alias: "e",
    describe: "Environment name (loads .env.{name})",
    type: "string",
    default: "dev",
  })
  .option("request", {
    alias: "r",
    describe: "Run only the named request",
    type: "string",
  })
  .option("diff", {
    alias: "d",
    describe: "Show diff against previous response",
    type: "boolean",
    default: false,
  })
  .option("verbose", {
    alias: "v",
    describe: "Show request details and all headers",
    type: "boolean",
    default: false,
  })
  .option("dry-run", {
    describe: "Parse and display requests without executing",
    type: "boolean",
    default: false,
  })
  .help()
  .parse();

async function run() {
  const filePath = argv._[0];
  if (!filePath) {
    console.error(chalk.red("Error: Please provide an .http file path"));
    process.exit(1);
  }

  // Load environment variables
  const envVars = loadEnvironment(filePath, argv.env);
  console.log(chalk.gray(`\n  Environment: ${argv.env}\n`));

  // Build auth config from env
  const authConfig = buildAuthFromEnv(envVars);

  // Parse the .http file
  const { requests, variables } = parseHttpFile(filePath, envVars);
  console.log(chalk.white(`  Found ${requests.length} request(s)\n`));

  // Filter to specific request if --request flag used
  let toRun = requests;
  if (argv.request) {
    toRun = requests.filter(
      (r) => r.name.toLowerCase().includes(argv.request.toLowerCase())
    );
    if (toRun.length === 0) {
      console.error(chalk.red(`  No request matching "${argv.request}"`));
      process.exit(1);
    }
  }

  // Execute requests sequentially (for chaining support)
  const results = [];
  const chainContext = {};

  for (const req of toRun) {
    // Re-interpolate URL and headers with chain context
    const interpolatedReq = reinterpolate(req, { ...variables, ...chainContext });

    // Apply auth
    const authedReq = applyAuth(interpolatedReq, authConfig);

    const label = req.name || `${req.method} ${req.url}`;
    console.log(chalk.bold.white(`  ${label}`));
    console.log(chalk.gray(`  ${authedReq.method} ${authedReq.url}`));

    if (argv.dryRun) {
      console.log(chalk.yellow("  [dry-run] Skipping execution\n"));
      continue;
    }

    // Execute
    const result = await executeRequest(authedReq);
    results.push(result);

    // Display formatted response
    console.log(formatResponse(result));

    // Update chain context
    Object.assign(chainContext, buildChainContext(results));

    // Save to history
    if (req.name) {
      const histFile = saveResponse(req.name, result);

      if (argv.diff) {
        const diffResult = diffResponses(req.name);
        if (diffResult?.changed) {
          console.log(chalk.yellow("\n  Response changed since last run:"));
          console.log(diffResult.diff);
        } else if (diffResult && !diffResult.changed) {
          console.log(chalk.green("  Response unchanged since last run."));
        }
      }
    }

    console.log("");
  }

  // Summary
  const passed = results.filter((r) => r.status >= 200 && r.status < 400).length;
  const failed = results.filter((r) => r.status >= 400 || r.status === 0).length;
  const totalTime = results.reduce((sum, r) => sum + r.elapsed, 0);

  console.log(chalk.bold("  Summary"));
  console.log(
    `  ${chalk.green(`${passed} passed`)} ${chalk.red(`${failed} failed`)} ${chalk.gray(`${totalTime}ms total`)}\n`
  );
}

function buildAuthFromEnv(env) {
  if (!env.AUTH_TYPE) return null;

  switch (env.AUTH_TYPE) {
    case "bearer":
      return { type: "bearer", token: env.AUTH_TOKEN };
    case "basic":
      return {
        type: "basic",
        username: env.AUTH_USERNAME,
        password: env.AUTH_PASSWORD,
      };
    case "apikey":
      return {
        type: "apikey",
        name: env.AUTH_KEY_NAME,
        value: env.AUTH_KEY_VALUE,
        in: env.AUTH_KEY_IN || "header",
      };
    default:
      return null;
  }
}

function reinterpolate(req, vars) {
  const interpolate = (str) =>
    str.replace(/\{\{(\w+(?:\.\w+)*)\}\}/g, (match, key) => {
      const parts = key.split(".");
      let value = vars;
      for (const part of parts) {
        value = value?.[part];
      }
      return value !== undefined ? String(value) : match;
    });

  const headers = {};
  for (const [k, v] of Object.entries(req.headers)) {
    headers[k] = interpolate(v);
  }

  return {
    ...req,
    url: interpolate(req.url),
    headers,
    body: req.body ? interpolate(req.body) : undefined,
  };
}

run().catch((err) => {
  console.error(chalk.red(`\n  Fatal: ${err.message}`));
  process.exit(1);
});

Using the CLI

With everything wired up, here is what the workflow looks like.

Create a todos.http file:

@baseUrl = https://jsonplaceholder.typicode.com

### List todos
GET {{baseUrl}}/todos?_limit=3
Accept: application/json

### Get single todo
GET {{baseUrl}}/todos/1
Accept: application/json

### Create todo
POST {{baseUrl}}/todos
Content-Type: application/json

{
  "title": "Build a CLI tool",
  "completed": false,
  "userId": 1
}

Run all requests:

httpcli todos.http --env production

Run a specific request:

httpcli todos.http --request "Create todo"

Run with diffing to catch changes:

httpcli todos.http --diff

Dry-run to verify parsing without hitting the server:

httpcli todos.http --dry-run

The output is clean and scannable:

  Environment: production

  Found 3 request(s)

  List todos
  GET https://jsonplaceholder.typicode.com/todos?_limit=3
  HTTP 200 | 245ms | 1.2 KB
  content-type: application/json; charset=utf-8

  [
    {
      "id": 1,
      "title": "delectus aut autem",
      "completed": false
    }
  ]

  Get single todo
  GET https://jsonplaceholder.typicode.com/todos/1
  HTTP 200 | 89ms | 234 B

  Create todo
  POST https://jsonplaceholder.typicode.com/todos
  HTTP 201 | 312ms | 98 B

  Summary
  3 passed 0 failed 646ms total

Extending Further

The architecture we have built is deliberately modular. Here are natural extensions you can add:

Pre-request scripts. Add a // @pre comment block that runs JavaScript before the request — useful for generating timestamps, signatures, or HMAC tokens.

Assertions. Add // @assert status 200 or // @assert body.length > 0 comments that validate responses and return exit code 1 on failure — perfect for CI.
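A sketch of how @assert parsing could work; note that the directive grammar here ("status <code>") is an assumption, not something the tool above implements:

```javascript
// Hypothetical assertion-comment parser for the "@assert" extension idea.
function parseAsserts(lines) {
  return lines
    .filter((l) => l.trim().startsWith("// @assert"))
    .map((l) => l.trim().replace("// @assert", "").trim());
}

// Evaluates parsed directives against a response; only "status" is handled
// in this sketch, other directives would slot in alongside it.
function checkAsserts(asserts, response) {
  return asserts.every((a) => {
    const [field, expected] = a.split(/\s+/);
    if (field === "status") return response.status === Number(expected);
    return false;
  });
}

const asserts = parseAsserts(["GET /users", "// @assert status 200"]);
console.log(asserts);                                // → [ 'status 200' ]
console.log(checkAsserts(asserts, { status: 200 })); // → true
console.log(checkAsserts(asserts, { status: 500 })); // → false
```

In CI, a false result would map to `process.exit(1)` so the pipeline fails loudly.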

Parallel execution. When requests do not depend on each other (no chaining), fire them concurrently with Promise.all for faster test runs.

Export to curl. Generate the equivalent curl command for any request so teammates without the tool can reproduce it:

export function toCurl(req) {
  let cmd = `curl -X ${req.method}`;
  for (const [key, value] of Object.entries(req.headers)) {
    cmd += ` \\\n  -H '${key}: ${value}'`;
  }
  if (req.body) {
    cmd += ` \\\n  -d '${req.body}'`;
  }
  cmd += ` \\\n  '${req.url}'`;
  return cmd;
}
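Running the helper on a sample request (toCurl copied here so the snippet is self-contained) produces a copy-pasteable multi-line command, one -H flag per header:

```javascript
// Standalone copy of the toCurl helper above.
function toCurl(req) {
  let cmd = `curl -X ${req.method}`;
  for (const [key, value] of Object.entries(req.headers)) {
    cmd += ` \\\n  -H '${key}: ${value}'`;
  }
  if (req.body) {
    cmd += ` \\\n  -d '${req.body}'`;
  }
  cmd += ` \\\n  '${req.url}'`;
  return cmd;
}

const cmd = toCurl({
  method: "POST",
  url: "https://api.example.com/users",
  headers: { "Content-Type": "application/json" },
  body: '{"name":"Test"}',
});
console.log(cmd);
```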

Watch mode. Use fs.watch to re-run requests automatically when the .http file changes — a tight feedback loop during API development.

How It Compares

| Feature | Postman | curl | httpie | httpcli (ours) |
| --- | --- | --- | --- | --- |
| GUI | Yes | No | No | No |
| .http file support | No | No | No | Yes |
| Environment management | Yes | No | No | Yes |
| Request chaining | Yes | No | No | Yes |
| Response diffing | No | No | No | Yes |
| Version control friendly | No | Yes | Yes | Yes |
| CI/CD ready | Partial | Yes | Yes | Yes |
| Resource usage | High | Low | Low | Low |

Our tool fills a gap: it provides the workflow features of Postman (environments, chaining, response diffing) with the terminal-native simplicity of curl.

Wrapping Up

We have built a fully functional, terminal-first API testing tool in under 400 lines of JavaScript. It reads standard .http files, supports variable interpolation and request chaining, manages multiple environments, handles authentication, pretty-prints responses with syntax highlighting, and diffs responses across runs.

The key architectural decisions that make this work:

  1. Standard file format. By adopting the .http format, your test files work in VS Code REST Client too — no vendor lock-in.
  2. Modular design. Each concern (parsing, execution, formatting, auth, chaining, history) is a separate module. Swap undici for fetch, replace chalk with ansi-colors — each piece is independent.
  3. Environment-first. Different .env files for different targets means one .http file serves development, staging, and production.
  4. Chain context. Flattening response bodies into a variable context makes request chaining intuitive without complex scripting.

The source code for httpcli is available to extend, customize, and make your own. The next time someone asks you to "test this endpoint," you will reach for your terminal — not a GUI.
