Every time I work with an API, I go through two phases.
First, I’m exploring — sending requests, checking responses, and figuring out how things actually behave. For years, this happened in Postman, or curl, or a scratch file hidden somewhere.
Then later, when I need to make sure things keep working, I move to a test file and rewrite the same requests from scratch. By that point, I’ve already forgotten half of what I learned during the exploration phase.
I kept doing this until I realized the problem wasn't laziness — it was that these two phases happen in completely different tools with completely different formats. The exploration work is always throwaway.
That’s the gap I wanted to close.
## The problem with "UI-First" exploration
Yes, Postman already solves part of this. If all you want is to send a request and inspect the response, it works.
But once the workflow gets more real, I want the thing I write during exploration to already be code: code in git, code I can review in a PR, code that can use normal npm packages, and code that runs in CLI and CI later.
That matters even more now because code works much better with AI tools. An agent can generate, edit, and refactor TypeScript much more easily than it can maintain a click-through UI workflow.
## The idea: exploration and verification should be the same workflow
What if exploration and testing were the same thing?
Not "export your Postman collection as tests." Not "record and replay." Just: write a request and assertion in a real code file, run it with one click, and keep it.
By using TypeScript as the exploration tool, you get:
- Full npm power: Need a random UUID? `crypto.randomUUID()`. Need a mock user? Use `faker`.
- Type Safety: Your IDE tells you if you misspelled a header or a parameter before you even hit "Send."
- Refactor-ability: Want to change a URL across 50 tests? `Cmd+F` and replace. That's just normal code — no special scripting sandbox needed.
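To make the first point concrete, here's the kind of thing that's clumsy in a UI tool but trivial in a code file. This is my own sketch using only Node built-ins — `buildUser` is a hypothetical helper for illustration, not part of any SDK:

```typescript
import { randomUUID } from "node:crypto";

// A typed payload: the compiler flags a misspelled field
// before any request is ever sent.
interface NewUser {
  id: string;
  firstName: string;
  email: string;
}

// Hypothetical helper: fresh, unique test data on every run,
// no copy-pasting UUIDs into a UI form.
function buildUser(firstName: string): NewUser {
  return {
    id: randomUUID(),
    firstName,
    email: `${firstName.toLowerCase()}@example.test`,
  };
}

const user = buildUser("Ada");
console.log(user.email); // ada@example.test
```

Because it's ordinary TypeScript, the same helper can be imported into any number of exploration files and refactored like any other code.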
## What I built: Glubean
I built a workflow where the distance between "I'm just trying this" and "I should keep this in the repo" is much smaller.
One test, one click, one trace:
```typescript
import { test } from "@glubean/sdk";

export const getUser = test("get-user", async (ctx) => {
  const res = await ctx.http.get("https://dummyjson.com/users/1");
  ctx.expect(res).toHaveStatus(200);

  const body = await res.json();
  ctx.expect(body.id).toBe(1);
});
```
If you've used Jest or Vitest in VSCode, you know the play button that appears next to each test. Same thing here — I click it, the test runs, and a Result Viewer opens right inside VSCode.
You see the method, URL, status code, and the full request/response body. It feels like Postman, but it's powered by the file you're actually editing.
Real workflow — shared state, steps, npm packages:
Once I have something worth keeping, I click the file-level play button and get a real result report. This is also where the workflow stops looking like Postman.
Instead of a saved request in a UI, I have normal TypeScript: I can use the builder API for a multi-step flow and pass state from one step to the next. If one step fails, the rest are skipped automatically.
```typescript
import { test } from "@glubean/sdk";

const API = "https://dummyjson.com";
let userId: number;

// Simple test — fetch a known user
export const getUser = test("get-user", async (ctx) => {
  const res = await ctx.http.get(`${API}/users/1`);
  ctx.expect(res).toHaveStatus(200);

  const body = await res.json();
  userId = body.id;
  ctx.log(`Loaded user: ${body.firstName} ${body.lastName}`);
});

// Builder test — reads shared userId, runs multi-step verification
export const verifyAndUpdate = test("verify-and-update")
  .step("Check user profile", async (ctx) => {
    const res = await ctx.http.get(`${API}/users/${userId}`);
    ctx.expect(res).toHaveStatus(200);

    const body = await res.json();
    ctx.expect(body.firstName).toBeDefined();
    return { originalName: body.firstName };
  })
  .step("Update user name", async (ctx, { originalName }) => {
    const res = await ctx.http.put(`${API}/users/${userId}`, {
      json: { firstName: "Updated" },
    });
    ctx.expect(res).toHaveStatus(200);
    ctx.log(`Renamed ${originalName} → Updated`);
  });
```
Note: Tests in the same file share module scope and run in export order — `getUser` runs first and sets `userId` for `verifyAndUpdate`. Shared variables should be declared at module level but assigned only inside test callbacks. See Limitations for details.
The builder's .step() chain passes state between steps inside a single test. When I run the file, the result report shows each step's status and duration.
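To make the skip-on-failure behavior concrete, here's a minimal sketch of the idea — my own illustration, not Glubean's actual internals: each step receives the previous step's return value, and the first failure marks every remaining step as skipped.

```typescript
// One step in a chain: receives the accumulated state, returns new state.
type Step<S> = { name: string; fn: (state: S) => S };

interface StepResult {
  name: string;
  status: "passed" | "failed" | "skipped";
}

function runSteps<S>(steps: Step<S>[], initial: S): StepResult[] {
  const results: StepResult[] = [];
  let state = initial;
  let failed = false;

  for (const step of steps) {
    if (failed) {
      // A previous step failed: don't execute, just record as skipped.
      results.push({ name: step.name, status: "skipped" });
      continue;
    }
    try {
      state = step.fn(state); // state flows into the next step
      results.push({ name: step.name, status: "passed" });
    } catch {
      failed = true;
      results.push({ name: step.name, status: "failed" });
    }
  }
  return results;
}
```

Running three steps where the second throws produces statuses `passed`, `failed`, `skipped` — the same shape of report the post describes, just without the HTTP tracing.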
This is just a small slice of what the SDK does — schema validation, data-driven execution, custom metrics, retries, plugin support for browser, GraphQL, gRPC, and more. But the point of this post is the workflow, not the feature list.
The code I wrote while exploring was already real verification code. I didn't need to rewrite it or convert it into another format. That's the thing I was trying to get right.
## Why it matters: The "Friction" Tax
The real cost of the old workflow isn't just the time spent rewriting. It's the stuff you never bother to turn into tests because the friction is too high.
You tried an edge case during exploration (e.g., "What if the price is negative?"), it worked, you moved on. Two months later it breaks. Nobody catches it because that exploration lived in a curl command buried in your terminal history or a temporary tab in Postman you forgot to save.
If trying something useful and keeping something useful are the same action, more of that work survives.
## From VSCode to CI
Because it's just code, these files run in CLI and CI without any changes.
```shell
glubean run tests/
```
I didn't want one format for local exploration and another for automation. No more "exporting JSON" or "syncing collections." Your verification files live where they belong: in your repository, next to your source code.
The same file runs in your terminal and CI — no conversion, no export:
```shell
# Same file, same assertions — now in your terminal or CI pipeline.
# The result.json contains the exact same traces you see in the VSCode Result Viewer.
npx glubean run explore/dummyjson/smoke.test.ts --verbose --emit-full-trace --result-json ./smoke.result.json
```
Open any `.result.json` in VSCode and the Glubean extension renders it in the Result Viewer automatically — full trace inspection, assertions, and events, even when the test was run from the CLI.
That broader path is part of what I like about this workflow: start by exploring in VSCode, keep the useful parts as committed verification, run the same files in CLI/CI later, and only think about Cloud when you want help understanding failures, tracking metrics over time, and getting notified when behavior changes.
But this post is really about the first phase: exploration.
## Where this is now
I'm building this as Glubean — a free and open source SDK + CLI + VSCode extension. The local workflow is free, and Cloud upload is optional. It's also designed so the platform does not need to know your secrets, but that deserves its own post. There's an AI angle too, but I don't want to overload this one: code-first exploration turns out to be a much better surface for AI authoring than click-through UI workflows, and I'll write about that separately.
If you want to try it:
- Install the Glubean VSCode extension. You may need to reload the VSCode window after installing.
- Create a `.test.js` file (no project setup needed — just one file).
- Write a `ctx.http.get()` and click the play button.
For scratch mode limitations (TypeScript type errors, no .env support), see Limitations.
That’s the first step. Open a test file, write one request, and click play.
If you want runnable examples instead of toy snippets, I also put together a cookbook repo with patterns you can run in VSCode after a quick npm install.
If you’re still bouncing between Postman and a test suite, I’d love to hear what keeps you there.