Where It Started
It all began with yet another crazy idea: trying to write my own programming language, compiler, or transpiler…
That’s what happens when you get tired of forms and buttons.
Writing a compiler from scratch is pretty boring.
You already know the result, and what lies ahead is routine work: parsing, building an AST, and so on.
I got bored quickly.
Then I remembered Babel. I vaguely know its internals and am familiar with Babylon (now @babel/parser).
Roughly speaking, Babel parses code into an AST, transforms it, and prints it back as JavaScript. That lets us extend JavaScript with features from newer standards not yet supported by all browsers.
Some people call it a compiler, others a transpiler — at this point, it’s mostly a matter of taste.
React Fatigue and Motivation
Around the same time, I was giving internal talks about new and interesting frameworks:
Adonis, Edge, Alpine, signals, Svelte, Solid, HTMX — and about why React is a hopelessly outdated framework that hasn’t achieved its original goals since 2013
(okay, that was a hot take — I keep promising to write a separate article about this).
Despite not liking React, I still have to use it in at least five projects.
A huge amount of code is written in it, so I’ll be dealing with it for a long time.
React does have advantages — especially where the virtual DOM is used not just for markup, but for native components or alternative render targets.
Still, declaring two variables just to get one reactive value using a hook became unbearable.
I wanted to change how components are written.
Should I write another Babel plugin? Another CoffeeScript?
I wanted free syntax and simple implementation.
Why LLMs Change Everything
No matter how good your AST parser is, language syntax is still limited by… syntax.
You need strict rules: quotes, braces, indentation.
The dream is implicit returns, declarative code, config-like syntax — and that brings us back to boring ASTs.
But what if we ask an LLM to transform one text into another by examples?
We describe the syntax and get JavaScript, C++, or even assembly as output — which we can then execute.
Example idea:
sumValues args
sum args
Translated into TypeScript:
function sumValues(args: number[]): number {
  return args.reduce((acc, x) => acc + x, 0);
}
LLMs change software development as dramatically as the transition from assembly to high-level languages.
But with one key difference:
LLMs introduce non-deterministic abstraction.
We can no longer store prompts in Git and expect the same result every time.
Sketch Programming vs Vibe Coding
Many are now talking about agent abstractions — agents that:
- fetch designs from Figma,
- create Jira tickets,
- open PRs,
- deploy projects.
That’s a big topic.
Here, I’m talking about a transitional approach — something between classic programming and vibe coding (where the model writes everything for you).
React Example
Declaring state in React today:
const [counter, setCounter] = useState<number>(0);
What I want instead:
state counter number = 0;
Or even:
counter = 0
But the explicit state keyword improves readability — mostly for humans.
The LLM can infer intent from JSX anyway, but extra context may reduce hallucinations.
Full Component Example
Sketch
// @sketch:reactComponent
// @ext:tsx
Component Count
props add = 0
state count = 0
effect {
  console.log("Component mounted");
  cleanup {
    console.log("Cleanup");
  }
}
<div onclick="count += add">Will add {add}</div>
<div>
  Current count: {count}
</div>
Generated React Code
import React, { useState, useEffect } from 'react';

interface Props {
  add?: number;
}

const CountComponent: React.FC<Props> = ({ add = 0 }) => {
  const [count, setCount] = useState<number>(0);

  useEffect(() => {
    console.log("Component mounted");
    return () => {
      console.log("Cleanup");
    };
  }, []);

  const handleClick = () => {
    setCount(prev => prev + add);
  };

  return (
    <div>
      <div onClick={handleClick}>Will add {add}</div>
      <div>Current count: {count}</div>
    </div>
  );
};

export default CountComponent;
Less boilerplate.
Better readability.
Full TypeScript support.
VS Code Plugin
To make this usable, the transformation must happen inside the editor — not in a chat window.
So I built a VS Code plugin that:
- runs on save,
- sends sketch files to ChatGPT via API,
- replaces them with valid code.
It’s open source and has bugs (e.g., config not reloading). Contributions welcome.
How It Works
- Initialize a project
- A sketch/ directory contains syntax definitions (Markdown)
- A mirrored src/ directory outputs valid code
- Files include tags like:
// @sketch:reactComponent
The plugin:
- creates an OpenAI Assistant,
- uploads sketch definitions into a vector store,
- uses them as transformation rules.
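The tag-and-mirror convention above can be sketched as a couple of helpers. Note that the function names, the return shape, and the .sketch file extension here are illustrative assumptions, not the plugin's actual API:

```typescript
// Hypothetical helpers for the conventions described above:
// read the // @sketch: and // @ext: tags from a file, and map its
// path from the sketch/ tree to the mirrored src/ tree.

function parseSketchTags(source: string): { kind?: string; ext?: string } {
  // Tags sit on their own comment lines, e.g. "// @sketch:reactComponent"
  const kind = source.match(/^\/\/ @sketch:(\S+)/m)?.[1];
  const ext = source.match(/^\/\/ @ext:(\S+)/m)?.[1];
  return { kind, ext };
}

function mirrorPath(sketchPath: string, ext: string): string {
  // e.g. sketch/Count.sketch -> src/Count.tsx
  return sketchPath
    .replace(/^sketch\//, "src/")
    .replace(/\.[^.]+$/, `.${ext}`);
}
```

On save, the plugin can read the tags, look up the matching syntax definition in the vector store, and write the model's output to the mirrored path.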
CSS Sketches Example
You can even invent new “dialects”, like CSS Next:
Nested selectors
.card {
  color: black;
  .title {
    font-size: 20px;
  }
}
→
.card { color: black; }
.card .title { font-size: 20px; }
Media queries as properties
.container {
  width: 1200px;
  width@max-768: 100%;
}
→
.container { width: 1200px; }
@media (max-width: 768px) {
  .container { width: 100%; }
}
ChatGPT is surprisingly good at inventing these syntaxes.
Limitations
Yes, there are real limitations:
- LLMs are non-deterministic
- It’s slower than a local transpiler
- Syntax highlighting is hard
- Caching and reproducibility matter
But these are engineering problems, not conceptual dead ends.
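Caching, for instance, is straightforward: key the generated output by a hash of the sketch source plus the rule set, so an unchanged file never re-hits the API. A minimal Node.js sketch (class and method names are illustrative, not the plugin's actual implementation):

```typescript
import { createHash } from "node:crypto";

// Content-addressed cache: the same sketch + the same rules version
// always maps to the same stored output, which also gives you
// reproducibility when the cache is committed alongside the sketches.
class SketchCache {
  private store = new Map<string, string>();

  private key(sketchSource: string, rulesVersion: string): string {
    return createHash("sha256")
      .update(rulesVersion)
      .update("\0")
      .update(sketchSource)
      .digest("hex");
  }

  get(sketchSource: string, rulesVersion: string): string | undefined {
    return this.store.get(this.key(sketchSource, rulesVersion));
  }

  put(sketchSource: string, rulesVersion: string, generated: string): void {
    this.store.set(this.key(sketchSource, rulesVersion), generated);
  }
}
```

On save, the plugin would check the cache first and only call the LLM on a miss, which addresses both the speed and the non-determinism complaints for unchanged files.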
Conclusion
Sketch programming is not about killing compilers.
It’s about rethinking the entry point into code.
The idea is simple:
- stop rigidly binding to formal syntax;
- describe intent, structure, and patterns;
- delegate mechanical transformation to the model.
Unlike vibe coding:
- you have source text,
- explicit transformation rules,
- predictable output format.
This is a transitional layer between classic programming and an agent-driven future.
The real question is no longer:
Which language should we write in?
But:
How do we want to express ideas in code at all?

