DEV Community

SEN LLC

I Built a CLI Runner for VS Code's `.http` Files in ~500 Lines of TypeScript

A zero-dependency Node 20 CLI that parses and executes the .http format you already edit inside VS Code, JetBrains, or Neovim. No new syntax to learn, no GUI to open, no Rust to install — just run the file from a shell or CI.

🔗 GitHub: https://github.com/sen-ltd/http-runner

I had a pattern I kept repeating across projects. Next to src/ there would be a scratch.http file with a dozen requests I used while developing — auth flows, broken endpoints I was fixing, examples for teammates. I'd click "Send Request" inside VS Code and it worked. But then I wanted to run the same file from CI as a smoke test, and the options looked like this:

  • Rewrite in hurl. Great tool, but it's Rust, its own grammar, and means I have two sources of truth — the .http file I use interactively and the .hurl file CI runs.
  • Rewrite in Postman. Now there's a GUI and a cloud account between me and a plain text file that was working fine.
  • Rewrite in curl commands inside a shell script. By request three you've lost the shape of the file.
  • httpie. Interactive, not file-oriented.

None of these is wrong. They're just all different files. The one file I already have — the one open in my editor while I'm developing — is the .http file. I wanted a tool whose only job was to read it and run it.

So I built http-runner: a Node 20 CLI with zero runtime dependencies, written in strict TypeScript, that parses the VS Code REST Client subset of the .http format and executes the requests with the built-in fetch. About 500 lines of source, 52 vitest tests, and a 136 MB Alpine-based Docker image. In this post I'm going to walk through the surprisingly well-designed .http format itself, the parser, the {{var}} interpolator, and the one testing trick that made the whole thing pleasant to work on: injecting fetch.

The .http format is better than it looks

I'd been using .http files for years without ever thinking about the grammar. When I sat down to write a parser, the first thing I did was try to describe the format precisely, and I realized how tight it actually is:

file         := (variable | request)*
variable     := "@" name "=" value NEWLINE
request      := separator? comment* name? request-line header* blank body?
separator    := "###" rest-of-line NEWLINE
name         := "# @name" ident NEWLINE
request-line := METHOD SP URL (SP HTTP-VERSION)? NEWLINE
header       := HEADER-NAME ":" HEADER-VALUE NEWLINE
blank        := NEWLINE
body         := any-text-until-next-separator-or-eof

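For concreteness, here's a small file of my own (illustrative, not from the repo) that exercises every production above — a variable, a separator, a comment, a # @name marker, a request line with an optional HTTP version, headers, and a body:

```http
# Scratch requests — this line is a free-form comment
@host = https://api.example.com

### health check
# @name health
GET {{host}}/healthz HTTP/1.1
Accept: application/json

### create user
POST {{host}}/users
Content-Type: application/json

{ "name": "Ada" }
```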
There are three things that make this work.

### is an unambiguous record separator. No HTTP message ever legitimately starts with ###. No header value legitimately has a line starting with ###. The grammar can be recovered from any cursor position: scan forward to the next ### and you know you're at a new request. It's the same trick YAML tried to pull off with --- but with a separator that looks even less like data.

A blank line means "body starts here". This is literally how the HTTP wire format already works (CRLF CRLF ends the headers). So the parser is mirroring a rule the author already knows from other contexts. You don't have to explain it.

Everything else is a comment. #, //, lines the parser doesn't recognize before a request line — all skipped. This is incredibly forgiving. You can annotate your .http file with anything and it just works.

Two design decisions feel subtle but made the parser shorter: (1) variables can be declared anywhere in the file, not just at the top, so the parser does a single forward pass collecting variables and requests interleaved; and (2) the body is "everything up to the next ###", including blank lines inside the body, which means you don't need to parse the body content at all. JSON, XML, form-encoded, a binary stub with a ### sentinel at the end — the parser treats them identically. It's just text between markers.
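Both decisions show up in a file like this one (mine, for illustration): @noteId is declared between requests, and the multi-line body — internal blank lines included — is captured verbatim up to the next ###:

```http
@host = https://api.example.com

### create a note
POST {{host}}/notes
Content-Type: text/plain

line one

line three — the blank line above is part of the body

### variables can be declared between requests
@noteId = 42

### fetch it back
GET {{host}}/notes/{{noteId}}
```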

The parser

Here's the heart of it — a straightforward forward scan that keeps track of whether we're in header-land or body-land:

// parser.ts (abridged)

const METHODS = new Set([
  'GET', 'POST', 'PUT', 'PATCH', 'DELETE',
  'HEAD', 'OPTIONS', 'TRACE', 'CONNECT',
]);

const VAR_RE = /^@([A-Za-z_][A-Za-z0-9_-]*)\s*=\s*(.*)$/;
const NAME_RE = /^#\s*@name\s+([A-Za-z0-9_\-.]+)\s*$/;
const REQUEST_LINE_RE = /^([A-Z]+)\s+(\S+)(?:\s+(HTTP\/[0-9.]+))?\s*$/;

export function parse(source: string): ParsedFile {
  const lines = source.replace(/\r\n/g, '\n').split('\n');
  if (lines.length > 0 && lines[lines.length - 1] === '') lines.pop();

  const variables: Record<string, string> = {};
  const requests: RawRequest[] = [];
  let i = 0;

  while (i < lines.length) {
    const line = lines[i];

    if (isBlank(line) || isSeparator(line)) { i++; continue; }

    // Variable definition: collect and keep going.
    const v = VAR_RE.exec(line);
    if (v) {
      variables[v[1]] = v[2].trim();
      i++;
      continue;
    }

    // Comment-only line (but not a `# @name foo` marker): skip.
    if (isComment(line) && !NAME_RE.test(line)) { i++; continue; }

    // Otherwise: the start of a request block.
    // ...collect optional `# @name`...
    // ...parse request-line...
    // ...collect headers until blank...
    // ...collect body until `###` or EOF...
  }

  return { variables, requests };
}

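The isBlank, isSeparator, and isComment predicates are elided from the abridged listing; here's a plausible sketch of them (my guess at the implementations, not the repo's exact code):

```typescript
// Note the call-site ordering in parse(): isSeparator is checked before
// isComment, since a "###" line also starts with "#".
export function isBlank(line: string): boolean {
  return line.trim() === '';
}

export function isSeparator(line: string): boolean {
  return line.trimStart().startsWith('###');
}

export function isComment(line: string): boolean {
  const t = line.trimStart();
  return t.startsWith('#') || t.startsWith('//');
}
```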
One thing I'd like to call out: the parser does not interpolate {{var}} references. It returns the raw URL {{host}}/users unchanged. I was tempted to interpolate during parsing — seems efficient, save an extra pass — but then the parser depends on environment variables, which means a unit test of the parser depends on the environment, which means I'd have to mock process.env in parser tests, which means parser tests are no longer about parsing.

Splitting parse from interpolate turned out to cost nothing. The parser is 200 lines, pure, tested against fixtures with zero environment setup. The interpolator is another 70 lines that runs after. Each has its own vitest file and it's the cleanest code boundary in the project.

The interpolator

// interpolator.ts

const EXPR_RE = /\{\{\s*([^}]+?)\s*\}\}/g;

export function interpolate(input: string, ctx: InterpolationContext): string {
  return input.replace(EXPR_RE, (_whole, raw: string) => {
    const expr = raw.trim();

    if (expr.startsWith('$env.')) {
      const key = expr.slice('$env.'.length);
      const val = ctx.env[key];
      if (val == null) {
        throw new InterpolationError(
          `environment variable "${key}" is not set`, key,
        );
      }
      return val;
    }

    if (!/^[A-Za-z_][A-Za-z0-9_-]*$/.test(expr)) {
      throw new InterpolationError(
        `invalid variable expression "${expr}"`, expr,
      );
    }
    if (!(expr in ctx.variables)) {
      throw new InterpolationError(
        `variable "${expr}" is not defined`, expr,
      );
    }
    return ctx.variables[expr];
  });
}

This is boring on purpose. Two things the VS Code REST Client supports that I deliberately left out: random/UUID/date helpers ({{$guid}}, {{$randomInt 0 100}}) and chained-request references ({{previousRequest.response.body.$.token}}). They're real features in real tools. I didn't include them for two reasons.

First, there is no standard. Every tool does these slightly differently. If I implement the VS Code variant, my tool only parses VS Code's dialect. If I invent my own, I'm splitting the ecosystem further. Leaving them out keeps my file 100% compatible with every other .http runner in the world.

Second, {{$env.FOO}} is an escape hatch. Want a random UUID? API_ID=$(uuidgen) http-runner api.http and reference {{$env.API_ID}}. Want a token from a previous request? Run that request first with --output json, jq the token, export it, run the next file. It's more verbose on the command line, but the .http files stay simple and portable.
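The escape hatch looks like this in practice (the host and endpoint here are invented for illustration):

```http
### whoami — ACCESS_TOKEN comes from the shell environment, not the file
GET {{host}}/me
Authorization: Bearer {{$env.ACCESS_TOKEN}}
```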

The runner: one line of fetch, one testability choice

Here's the entire HTTP-execution logic:

// runner.ts (abridged)

export async function runRequest(
  req: InterpolatedRequest,
  opts: RunOptions,
): Promise<ResponseRecord> {
  const now = opts.now ?? (() => Date.now());
  const start = now();

  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), opts.timeoutMs);

  try {
    const init: RequestInit = {
      method: req.method,
      headers: req.headers,
      signal: controller.signal,
    };
    if (req.body != null && req.method !== 'GET' && req.method !== 'HEAD') {
      init.body = req.body;
    }

    const res = await opts.fetch(req.url, init);
    const text = await res.text();
    const headers: Array<[string, string]> = [];
    res.headers.forEach((value, key) => headers.push([key, value]));

    return {
      ok: res.ok,
      status: res.status,
      statusText: res.statusText,
      headers,
      body: text,
      durationMs: now() - start,
      error: null,
      // ...echo back request fields for the formatter...
    };
  } catch (e) {
    return {
      ok: false,
      status: 0,
      // ...
      error: controller.signal.aborted
        ? `timeout after ${opts.timeoutMs}ms`
        : (e as Error).message,
    };
  } finally {
    clearTimeout(timer);
  }
}

Two decisions worth flagging.

Node 20's built-in fetch is a zero-dependency superpower. In 2023 my package.json would have had node-fetch or undici pinned. In 2026 it has nothing. The string "dependencies" doesn't even appear in my package.json. The runtime Docker image doesn't have a node_modules directory — it's literally compiled dist/ and a copy of package.json. The final image is still 136 MB, all of it node:20-alpine itself, which means my code contributes almost zero bytes. That's a nice flex.

opts.fetch is injected. You can see opts.fetch(...) being called, not a global. Every test passes a fake fetch that returns a synthetic Response. Zero network, zero nock, zero msw, no fixture servers. Vitest runs all 52 tests in under 200 ms including type resolution. When I test "POST sends the body", I capture what my fake fetch was called with. When I test "errors don't throw", I make my fake fetch throw. When I test "non-2xx maps to ok=false", I return a fake Response with status: 500.

This one design choice — one extra parameter on one function — is the reason the project has 52 tests and no flakiness. I learned this pattern years ago from Haskell-style "pass the effect in" thinking and I keep coming back to it.
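The pattern is tiny. A sketch of the fake-fetch helper (the name and shapes are mine; the repo's actual tests will differ in detail) — Node 18+ exposes Response as a global, so no imports are needed:

```typescript
// A fake fetch that records every call and returns a canned Response.
type FetchLike = (url: string, init?: RequestInit) => Promise<Response>;

function makeFakeFetch(status: number, body: string) {
  const calls: Array<{ url: string; init?: RequestInit }> = [];
  const fake: FetchLike = async (url, init) => {
    calls.push({ url, init });
    return new Response(body, { status });
  };
  return { fake, calls };
}

// In a test you'd pass `fake` as opts.fetch, run the request,
// then assert on `calls` and the returned record — no sockets involved.
```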

Trying it

git clone https://github.com/sen-ltd/http-runner.git
cd http-runner
docker build -t http-runner .
docker run --rm http-runner --help

# Mount a local .http file into the container and run it
cat > /tmp/api.http << 'EOF'
### Fetch example
GET https://example.com
Accept: text/html

EOF
docker run --rm -v /tmp:/work http-runner /work/api.http

Or without Docker:

npm install
npm run build
node dist/main.js tests/fixtures/simple.http

Tradeoffs worth naming

  • No response assertions. hurl is very good at this and I'd use it if I wanted a full test framework. For http-runner, the pattern is --fail-on-error for exit-code gating or --output json | jq for anything richer.
  • No GraphQL helper. GraphQL is a POST with a JSON body; the existing format handles it fine, just more verbose than a dedicated GraphQL client.
  • No multipart or file-upload syntax. The VS Code REST Client has conventions for < ./path/to/file inside a body. I didn't implement them because my current use cases are all JSON APIs. Happy to add it later if someone actually sends me a .http file I can't run.
  • No OAuth flow helper. Same story as multipart. {{$env.ACCESS_TOKEN}} is the workaround — fetch your token with a separate script, export it, call http-runner.
  • No syntax highlighting in the CLI output. The output is colored by status code (2xx green, 4xx yellow, 5xx red), and JSON response bodies are pretty-printed, but I don't do language-aware highlighting. | bat -l json works great.

Closing

The thing I keep coming back to is how much this format gets right. ### as a record separator, blank line as header/body boundary, @var = at top-of-file scope, # @name foo as opt-in request identity. You can describe the whole grammar in ten lines of BNF. You can write a parser for it in 200 lines without tears. You can already edit it in every modern IDE. And as of Node 20 you can execute it with zero dependencies.

If you're keeping scratch API requests in your repo anyway — and you should — http-runner turns them into the smoke test you've been meaning to write. Source and Docker image on GitHub: https://github.com/sen-ltd/http-runner.
