Enchan

Posted on • Originally published at enchan.hashnode.dev

I Put AI on a 400x240 1-Bit Screen. You Read It with a Crank.

CrankBot Demo

Two in the morning. The room is dark except for a rectangle of light no bigger than a playing card. I turn the crank and feel the small resistance in my fingertips — a faint clicking, something between mechanical precision and the warmth of a music box winding down. On the screen, text scrolls up one line at a time. Black and white. Nothing else.

The AI's words arrive at the speed of my hand.

I built CrankBot because every AI interface I've ever used feels the same. Text box. Send button. Browser window. Tokens streaming across a high-resolution display. The technology advances and the experience converges — toward what, exactly, no one pauses to ask. CrankBot goes the other way: AI on a Playdate, a $229 yellow handheld with a 400x240 monochrome display and a mechanical crank on the side.

The whole thing is ~500 lines of Lua and ~80 lines of Python. MIT licensed. And it changed the way I think about what it means to read an AI's response.

The impulse

At AI-MY, we make things under the theme of "anti-innovation / rediscovery" — using technology to uncover what technology's own progress has buried. Our Lo-Fi Camera (3rd place + Anthropic Award, Claude Hackathon 2025) converts subjects into pixel art and prints them on thermal paper. High resolution took something from photography — the space where imagination used to live, the blur that invited the viewer to fill in what wasn't there. Lo-Fi Camera hands it back.

CrankBot comes from the same impulse, aimed at a different loss.

What has the progress of AI overlooked? Not capability. Not speed. Something quieter: the act of actually reading what an AI says. We skim. We copy-paste. We regenerate. Words stream past and we barely look at them. Conversation with AI became transit — information passing through us on its way to somewhere else. Efficient, frictionless, and curiously hollow.

I wanted to know what would happen if you put the friction back.

The hardware

The Playdate is a strange device. 400x240 pixels, 1-bit — every pixel either black or white, no grayscale, no anti-aliasing. A D-pad. Two face buttons. And a crank: a small mechanical handle on the right side that you rotate with your thumb and forefinger. Panic, the company behind it, designed it for games. I made it talk to AI.

Something about the physical dimensions matters. The screen is roughly the size of a large postage stamp. When an AI response arrives on a display that small, there is nowhere for your eyes to wander. The text is everything. And to move through it, you turn the crank — a gesture that maps rotation to vertical scroll, pulling each line into view the way you'd pull thread from a spool.

The architecture

The design is deliberately simple. I wanted something a single person could read end-to-end in an afternoon:

┌─────────────┐      HTTPS       ┌──────────────┐
│   Playdate  │ ──────────────▶  │  API Server  │
│   (Lua)     │ ◀──────────────  │  (FastAPI)   │
└─────────────┘                  └──────┬───────┘
                                        │
                                        ▼
                                 ┌─────────────┐
                                 │   LLM API   │
                                 │  (Claude,   │
                                 │  GPT, etc.) │
                                 └─────────────┘

You type a message on the Playdate's on-screen keyboard. The device sends it over HTTPS with Bearer token authentication to a self-hosted Python server. The server calls any OpenAI-compatible LLM — Claude, GPT, Gemini, Groq, whatever speaks the chat completions format. The response comes back as JSON. You crank through it.

Three components. Two configuration files. No framework, no build system beyond the Playdate SDK's pdc compiler. The entire client lives in one Lua file. The entire server lives in one Python file. I've worked on projects with more lines in the linting configuration than this has in its codebase.
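The post doesn't spell out the wire format, but the round trip is easy to picture. Here's a sketch of what the payloads presumably look like, using Python's standard json module — the field names are illustrative, not taken from the repo:

```python
import json

# Hypothetical request body the Playdate client might send.
# Field names ("message", "history") are illustrative; the real
# schema lives in the CrankBot repo.
request_body = json.dumps({
    "message": "What is the meaning of life?",
    "history": [
        {"role": "user", "content": "hi"},
        {"role": "assistant", "content": "Hey! Crank me a question."},
    ],
})

# The server replies with plain JSON the Lua client can parse.
response_body = '{"response": "That is a big question for a tiny screen."}'
reply = json.loads(response_body)["response"]
print(reply)
```

Everything the crank eventually scrolls through starts life as one string in a field like that.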

Building for 1-bit

The constraints of this display are immediate and total. No grayscale means no anti-aliased text rendering. Fonts either work at small sizes on a 1-bit screen or they don't — there is no middle ground, no "it's readable enough." I used Playdate's built-in Roobert at 11px, one of the few fonts that stays crisp in pure black and white.

The usable text width after margins and a thin scrollbar is about 380 pixels. Word wrapping becomes the most important piece of text rendering on the device — there is no horizontal scrolling, and a line that runs off the edge of the screen simply vanishes:

local function wrapText(text, maxWidth, font)
    local lines = {}
    -- sentinel newline so gmatch yields each line exactly once
    -- (a bare "([^\n]*)\n?" also emits a spurious empty match at the end)
    for segment in (text .. "\n"):gmatch("([^\n]*)\n") do
        if segment == "" then
            lines[#lines + 1] = ""
        else
            local line = ""
            for word in segment:gmatch("%S+") do
                local test = line == "" and word or (line .. " " .. word)
                if font:getTextWidth(test) > maxWidth and line ~= "" then
                    lines[#lines + 1] = line
                    line = word
                else
                    line = test
                end
            end
            if line ~= "" then lines[#lines + 1] = line end
        end
    end
    return lines
end

Nothing clever here. Split by newlines, then by words, measure each addition against the maximum width. But on a display where every pixel is accountable, this plain function carries the entire legibility of the application. Get it wrong and the AI's words run off the edge of the world — or rather, the edge of 400 pixels, which on this device amounts to the same thing.
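If you want to sanity-check the wrapping off-device, the same greedy algorithm ports to a few lines of Python. Here the width function is injectable, with len standing in for font:getTextWidth (the real measurement is per-font, in pixels):

```python
def wrap_text(text, max_width, text_width=len):
    """Greedy word wrap: same logic as the Lua version, with an
    injectable width function standing in for font:getTextWidth."""
    lines = []
    for segment in text.split("\n"):
        if segment == "":
            lines.append("")  # preserve blank lines
            continue
        line = ""
        for word in segment.split():
            test = word if line == "" else line + " " + word
            if text_width(test) > max_width and line != "":
                lines.append(line)  # commit the line, start a new one
                line = word
            else:
                line = test
        if line != "":
            lines.append(line)
    return lines

print(wrap_text("the quick brown fox jumps", 10))
# → ['the quick', 'brown fox', 'jumps']
```

One known gap, same as the Lua: a single word wider than the limit stays on one line. With 380 usable pixels that almost never bites, so neither version bothers with character-level breaking.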

The Playdate SDK doesn't offer background networking. HTTP requests are callback-based but run on the main thread, which means the UI has to keep updating while the device waits for the server. I show a "Sending..." animation — a dot counter that increments every 15 frames, the simplest possible indication that something is happening on the other end of the wire. The wait can be a few seconds or ten, depending on the LLM. In that silence, you hold a device that does nothing except tell you it's trying. There is an odd intimacy to it.
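The dot counter itself is trivial to sketch. Assuming a counter driven once per frame (the names here are mine, not from the repo):

```python
def sending_label(frame, period=15, max_dots=3):
    """Frame-driven 'Sending...' indicator: one more dot every
    `period` frames, cycling through 0..max_dots dots."""
    dots = (frame // period) % (max_dots + 1)
    return "Sending" + "." * dots

# At the Playdate's default 30 fps, a 15-frame period means the
# label ticks twice a second.
for f in (0, 15, 30, 45, 60):
    print(sending_label(f))
```

That's the whole animation budget: a counter, a modulo, and a string of periods.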

The crank

This is the part I didn't expect.

playdate.getCrankChange() returns degrees of rotation since the last frame. I map it to scroll offset with a damping multiplier:

local change = playdate.getCrankChange()
if change ~= 0 then
    scrollY = scrollY + change * 0.5
    scrollY = math.max(0, math.min(scrollY, maxScroll))
end

The 0.5 keeps it from being too sensitive — on a screen this small, overshooting a line means losing your place entirely. The mechanical resistance of the crank itself does the rest. There's a physical cost to each line of text, a faint effort in the joint between thumb and forefinger. A typical AI response fills fifteen to thirty lines. You feel each one arrive.
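The same update rule, written as a pure function, makes it easy to convince yourself the clamping behaves at both edges. A Python sketch, not the shipped Lua:

```python
def scroll(scroll_y, crank_change, max_scroll, damping=0.5):
    """Apply crank rotation (degrees since last frame) to the scroll
    offset, then clamp to the document bounds."""
    scroll_y += crank_change * damping
    return max(0.0, min(scroll_y, max_scroll))

print(scroll(10.0, 90, 100.0))   # 10 + 45 = 55.0
print(scroll(10.0, -90, 100.0))  # would go negative; clamped to 0.0
print(scroll(90.0, 90, 100.0))   # would overshoot; clamped to 100.0
```

A full 360° turn moves 180 pixels of text: most responses take several complete revolutions of the wrist.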

I had imagined the crank as a navigation mechanism. What I found was that it turned reading into a bodily act — the kind of thing you feel in the tendons of your hand before you understand it as an idea. Your wrist moves, text appears. The speed of comprehension becomes the speed of your body. I noticed myself slowing down at sentences I would have skimmed past in a browser. Not because the interface forced me to — the crank can spin fast — but because the physical motion created a kind of attention I hadn't asked for.

Memory on a constrained device

The Playdate has limited memory. Conversation history uses a sliding window — the last six exchanges, sent as context with each new message:

local function buildHistoryJson()
    local parts = {}
    local start = math.max(1, #history - MAX_HISTORY + 1)
    for i = start, #history do
        local h = history[i]
        parts[#parts + 1] = '{"role":"' .. h.role
            .. '","content":"' .. jsonEscape(h.content) .. '"}'
    end
    return "[" .. table.concat(parts, ",") .. "]"
end

No JSON library. The Playdate SDK doesn't include one, and importing a dependency for bracket-and-comma serialization felt wrong on a device this minimal. Just string concatenation — role, content, square brackets, done.
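The jsonEscape helper isn't shown above, but anything in its role just has to neutralize the characters that would break a hand-assembled JSON string. A minimal Python equivalent of the idea — this handles quotes, backslashes, and common whitespace escapes, not the full control-character range:

```python
import json

def json_escape(s):
    """Escape the characters that would break a hand-built JSON
    string literal: backslash first, then quote and whitespace."""
    s = s.replace("\\", "\\\\").replace('"', '\\"')
    s = s.replace("\n", "\\n").replace("\r", "\\r").replace("\t", "\\t")
    return s

# Round-trip check: the escaped string parses back as valid JSON.
raw = 'He said "hi"\nthen left'
assert json.loads('"' + json_escape(raw) + '"') == raw
```

Order matters: backslash has to be escaped before anything else, or the escapes you just added get double-escaped.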

The AI remembers what you talked about. For a while. Like someone you sat next to at a gathering who eventually forgets the thread of your conversation — not because they stopped caring, but because the room is too large and the evening too long.

Six exchanges is enough to maintain a thread but not enough to build a history. The conversations dissolve. At first this felt like a limitation. Then I stopped minding. Most of the conversations I value in my life are ones I can't reproduce word for word anyway.

The server: 80 lines

The API server is a single-file FastAPI application. The interesting decisions are small and pragmatic:

Always return HTTP 200. The Playdate SDK has a behavior — call it a quirk, call it a bug — where non-200 responses don't trigger the response callback. The request just vanishes. So errors are wrapped in successful responses:

@app.exception_handler(HTTPException)
async def always_200(request, exc):
    return JSONResponse(
        status_code=200,
        content={"response": f"[Error] {exc.detail}", "error": True},
    )

This is the kind of decision that would feel ugly in a larger system. Here, it's the only decision that works.

Any LLM provider. The server uses the OpenAI client library with a configurable base_url. Point it at Anthropic, OpenAI, Groq, OpenRouter — anything that speaks the chat completions format:

# Anthropic (default)
export LLM_BASE_URL=https://api.anthropic.com/v1
export LLM_API_KEY=sk-ant-...
export LLM_MODEL=claude-sonnet-4-20250514

# Or swap in any OpenAI-compatible provider
export LLM_BASE_URL=https://api.openai.com/v1
export LLM_API_KEY=sk-...
export LLM_MODEL=gpt-4o-mini

Enforced brevity. A system prompt tells the AI to stay under 300 characters. On a 400x240 screen, a wall of text isn't just hard to read — it's hostile:

SYSTEM_PROMPT = (
    "You are CrankBot, a friendly AI chatbot living inside a "
    "Playdate game console. Keep responses concise: UNDER 300 "
    "characters, because the screen is very small. Be casual, "
    "witty, and fun. Use only ASCII characters."
)

Token auth. The Playdate sends a Bearer token with every request. Simple, but it keeps random internet traffic from running up your API bill. You'll need HTTPS between the Playdate and your server; I use nginx with Let's Encrypt as a reverse proxy.
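The server-side token check can be this small. A sketch with Python's stdlib, using a constant-time comparison; the token value and function name here are illustrative, not from the repo:

```python
import hmac

API_TOKEN = "example-token"  # illustrative; the real token lives in config

def is_authorized(auth_header):
    """Check an Authorization header against the shared Bearer token,
    using a constant-time comparison to avoid timing leaks."""
    prefix = "Bearer "
    if not auth_header or not auth_header.startswith(prefix):
        return False
    return hmac.compare_digest(auth_header[len(prefix):], API_TOKEN)

print(is_authorized("Bearer example-token"))  # True
print(is_authorized("Bearer wrong-token"))    # False
```

hmac.compare_digest is overkill for a hobby server, but it costs nothing and is the right reflex.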

What it's like

Here's an actual conversation from my Playdate, late at night:

> What is the meaning of life?

That's a big question for a tiny screen.
Maybe meaning comes from the small
things - like cranking through a chat
with an AI on a game console at 2 AM.

> Fair point.

I try. Ask me something easier next
time. Like what's 2+2. I'm good at
that one.

The whole experience is unexpectedly quiet. The 1-bit screen, the mechanical crank, the deliberate slowness — everything that makes modern AI interfaces feel anxious is absent. No streaming tokens racing across the display. No progress bar filling up. No sidebar of previous conversations reminding you of all the other things you've asked. You wait. The response arrives. You crank through it, line by line. That's all.

And somehow it feels solemn. I didn't expect that word to apply to an AI chatbot on a game console, but there it is.

I didn't trust the feeling at first. The idea that constraints improve experience — it could be a convenient lie, the kind of story a maker tells to justify a limitation they couldn't fix. I've made that kind of excuse before. But turning the crank late at night, something kept happening: the smallness of the screen changed the weight of the words. Text that would scroll past in a browser became something my hand was physically pulling into view. The gesture was closer to turning a page than to scrolling a feed.

Then there's the keyboard. The Playdate's on-screen keyboard wasn't designed for conversation. The D-pad moves a cursor across a grid of letters, one letter at a time. Typing "What is the meaning of life?" takes real effort — maybe forty seconds of careful navigation, the pad clicking softly under your thumb as you hunt for each character.

It works. But the slowness changes the nature of the question you ask. When typing costs effort, you stop asking things you don't care about. You choose your words. You send shorter messages. The asymmetry is striking: you labor to compose a question, and then you labor — differently, with the crank — to read the answer. Both ends of the conversation have weight. Both ends cost something. I can't remember the last time that was true of a chat interface.

What constraints give back

Ivan Illich wrote about "convivial tools" — technologies whose structure invites a particular quality of use rather than merely enabling function. A bicycle is convivial in a way that a highway is not. I read that line years ago and forgot it. Then I built CrankBot and my thumb started to ache after twenty minutes of use, and the word "convivial" came back to me unbidden — because the ache was not a complaint. It was the body's way of saying: I am here, participating in this.

Not because analog is better than digital. That argument is usually nostalgia dressed in theory. But a physical intermediary between your body and an AI's language creates a rhythm that neither side controls entirely. The scrollbar is the modern default. The crank is something else — a deliberate friction, a speed limit made of metal and spring.

A low-resolution photograph sometimes feels more real than a high-resolution one. Not more accurate — more real. Because it demands that you bring something of your own to the image. Something similar happens when you read an AI's response one crank-turn at a time on a 1-bit display. Less information arrives, and in the gap, something returns. Not attention, exactly. Not patience. Something between the two, or before them — something that doesn't have a name yet, and might not need one.

Setup

If you want to try this:

  • A Playdate ($229)
  • The Playdate SDK (free)
  • Python 3 + an LLM API key (Anthropic, OpenAI, etc.)
  • A server with HTTPS (nginx + Let's Encrypt works)

Clone the repo, set your server hostname and API token in main.lua, build with pdc, sideload to your Playdate. Three files to understand, two to configure.

github.com/Narratify/CrankBot — MIT licensed.

What's ahead

  • Voice input via the Playdate's microphone — speech-to-text, then AI, then crank through the response
  • Crank-based input — rotating the crank to select from predefined prompts instead of typing
  • Local model support — running a small model on the server for faster responses

If you build something with CrankBot, I'd like to hear about it.


CrankBot is made by Enchan at AI-MY Product Publishing. The Playdate is a product of Panic.
