Jordan Forrest
Bring your own phosphor: thirteen problems Claude Code couldn't solve without me

The model gave me neon green.

Not phosphor green. Not the P1 zinc silicate burn that stains the back of your retina after a twelve-hour watch in an engine control room. Neon. The hex-code green of a Halloween website. Flat. Dead. No electron beam. No decay curve. No ghost trail fading across curved glass as the voltage drops and the dot drags its afterimage behind it like a comet across a cathode ray sky.

Three weeks ago I built the image pipeline. The phosphor fight started immediately. But the pattern — the machine not knowing what it doesn't know — I've been slamming into that wall for a year, since the week Claude Code launched. Thirteen tools later, it's still the same wall. Different shape each time. Same gap.

The machine doesn't know what it doesn't know. But I do. Three hundred and sixty-five verified sea days on Canadian Coast Guard icebreakers taught me what green looks like when it's burning at sixty hertz on an alarm display bolted to vibrating steel. The model will give you green. I'll give you P1.

That gap — between the model's green and the real green — is where I build.

The raw AI-generated control room scene — every gauge, every CRT display, every cable run is technically plausible because the prompt carried thirty years of knowing what these rooms actually look like.


The pipeline

Thirteen open-source skills for Claude Code. All free. Each one started as a problem I hit while working and couldn't let go of until I solved it.

Two of them built that control room.

OPTIC generates images through sequential grounding. Not a single prompt. A pipeline. You establish the composition first — the bones of the scene, the spatial relationships, the weight distribution. Then you layer the lighting. Then the material properties. Then the fine details. Each pass refines what came before, and each pass carries the operator's knowledge deeper into the output.
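The skill itself is a markdown instruction file, not code, but the layering discipline is easy to sketch. Everything below — names, prompt text, the `groundSequentially` helper — is mine, purely illustrative, not OPTIC's actual prompt grammar:

```javascript
// Illustrative sketch of sequential grounding: fold prompt layers in order,
// composition first, each later pass refining what came before.
const passes = [
  { name: "composition", detail: "wide shot, console bank left, CRT cluster right" },
  { name: "lighting",    detail: "P1 phosphor green key light, warm sodium fill" },
  { name: "materials",   detail: "oil-stained deck plates, rust under chipped paint" },
  { name: "fine detail", detail: "cable runs following real marine conventions" },
];

function groundSequentially(base, passes) {
  // Each pass appends its layer; nothing overwrites the earlier grounding.
  return passes.reduce((prompt, p) => `${prompt}\n[${p.name}] ${p.detail}`, base);
}

const prompt = groundSequentially("engine control room, night watch", passes);
```

The sketch only shows the ordering: the operator's knowledge enters at every layer, and later passes can't erase what the composition pass established.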

I didn't prompt "a cool sci-fi control room." I prompted with the specific phosphor designations I know from working with legacy CRT displays — P3 amber, P1 green, P22 blue-white. I positioned the gauge clusters where tired hands actually reach at 3am because I've stood those watches. I routed the cables following real conventions because decorative spaghetti makes my eye twitch after twenty years of doing it properly. Oil stains on the deck plates. Rust creeping under chipped paint on every pipe flange. The grime that never washes off.

The AI drew it. I designed it.

LOCUS takes that static image and makes it breathe.


Making it alive

A generated image sits there. You look at it, you think "nice," you scroll past. That's the problem with AI art right now — everywhere and static. The market is flooded with generators. Nobody's making the output do anything.

LOCUS CSI — Contextual State Injection. Define click/hover zones on any image. Each zone swaps between base and active states with configurable triggers, blend modes, and transitions.

I mapped hotspot polygons onto each CRT screen. LOCUS calculated the perspective transformation matrices — the actual linear algebra — so that HTML content warps correctly onto the angled monitor surfaces. Not a texture. Live text. Rendered in the browser. Perspective-correct on a surface the AI painted.
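LOCUS's own solver isn't reproduced here, but the "actual linear algebra" is the classic unit-square-to-quad projective mapping (Heckbert's formulation). A sketch, with function names of my own choosing:

```javascript
// Projective map from the unit square to an arbitrary quad.
// Corner order: (0,0)→p0, (1,0)→p1, (1,1)→p2, (0,1)→p3.
function homographyFromUnitSquare([p0, p1, p2, p3]) {
  const sx = p0.x - p1.x + p2.x - p3.x;
  const sy = p0.y - p1.y + p2.y - p3.y;
  let a, b, c, d, e, f, g, h;
  if (sx === 0 && sy === 0) {
    // Affine case: opposite edges parallel, no perspective terms.
    a = p1.x - p0.x; b = p2.x - p1.x; c = p0.x;
    d = p1.y - p0.y; e = p2.y - p1.y; f = p0.y;
    g = 0; h = 0;
  } else {
    const dx1 = p1.x - p2.x, dx2 = p3.x - p2.x;
    const dy1 = p1.y - p2.y, dy2 = p3.y - p2.y;
    const det = dx1 * dy2 - dy1 * dx2;
    g = (sx * dy2 - sy * dx2) / det;
    h = (dx1 * sy - dy1 * sx) / det;
    a = p1.x - p0.x + g * p1.x; b = p3.x - p0.x + h * p3.x; c = p0.x;
    d = p1.y - p0.y + g * p1.y; e = p3.y - p0.y + h * p3.y; f = p0.y;
  }
  // The warp: (u,v) in [0,1]² → pixel coordinates on the quad.
  return (u, v) => {
    const w = g * u + h * v + 1;
    return { x: (a * u + b * v + c) / w, y: (d * u + e * v + f) / w };
  };
}

// Example: an angled CRT face as a non-rectangular quad.
const warp = homographyFromUnitSquare([
  { x: 0, y: 0 }, { x: 4, y: 0 }, { x: 3, y: 3 }, { x: 0, y: 2 },
]);
```

In a browser, the same four corner correspondences can be handed to CSS as a `matrix3d()` transform, which is one way to warp live HTML onto a painted screen without touching pixels.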

LOCUS IQM — Interactive Quad Mapping. Map screen regions onto AI-generated images. Drag corners to reshape quads. Exports percentage coordinates for resolution-independent perspective warp.

The desk lamp toggles on click. The gauges respond to hover with their readings. A debug overlay shows the wireframe geometry of every mapped zone. The whole thing runs in a browser. No frameworks. No build step. Just HTML, CSS, and the JavaScript that LOCUS generates.
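The hover hit-test underneath a zone like the desk lamp reduces to point-in-polygon. A minimal version, assuming a zone is just a polygon in percentage coordinates — the real LOCUS schema may differ:

```javascript
// Ray-casting hit-test in percentage coordinates (0–100), so the zone
// survives any resize. Zone shape here is my assumption, not LOCUS's format.
function pointInPolygon(px, py, poly) {
  let inside = false;
  for (let i = 0, j = poly.length - 1; i < poly.length; j = i++) {
    const [xi, yi] = poly[i], [xj, yj] = poly[j];
    // Count crossings of a ray cast rightward from (px, py).
    if ((yi > py) !== (yj > py) &&
        px < ((xj - xi) * (py - yi)) / (yj - yi) + xi) {
      inside = !inside;
    }
  }
  return inside;
}

// A desk-lamp hotspot as a quad in percentage coordinates.
const lampZone = [[10, 60], [22, 58], [24, 80], [9, 82]];
```

Percentage coordinates are the detail that matters: the polygon is authored once and stays glued to the lamp whether the image renders at 800px or full-screen.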

The complete pipeline — OPTIC generates the scene, LOCUS makes it alive, ship it.

The marine engineer in me cares that the cable routing is right. The web developer in me cares that the interaction feels natural. The hacker in me cares that it shipped with zero dependencies.


The thirteen

Every skill started as friction. Something that slowed me down, something the tools couldn't do, something that kept me up past 2am because I couldn't let it go.

```
jord0.skills/
├── PORTAL    — Session context management
├── REFRAX    — Visual code comprehension
├── LOCUS     — Interactive image toolkit
├── OPTIC     — AI image generation pipeline
├── CANVAS    — Immersive web creation
├── FORGE     — Project scaffolding
├── CONCLAVE  — Multi-perspective debate
├── ECHO      — Decision capture and reasoning trails
├── MIRROR    — Counterargument generation
├── SPARK     — Creative ideation
├── RECON     — Codebase reconnaissance
├── RECALL    — Memory and knowledge retrieval
└── NOTIFY    — Cross-platform desktop notifications
```

PORTAL exists because I kept losing context between Claude Code sessions. Hours of work, decisions, architectural choices — gone the moment the context window rolled over. So I built a spell system. Portals. Incantation codes. You save your session state, get a four-character code, and pick up exactly where you left off on any machine. The deep portals capture not just the task state but the tone — the emotional register, the relationship dynamic, the things that make the next session feel like a continuation instead of a cold start.
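A toy sketch of the portal mechanic — the four-character code as a key into saved state. Every name here is hypothetical; PORTAL's real format lives in the repo:

```javascript
// Hypothetical illustration only: none of these names come from PORTAL itself.
const vault = new Map();
const GLYPHS = "ABCDEFGHJKMNPQRSTUVWXYZ23456789"; // no lookalikes (I, L, O, 0, 1)

function openPortal(state) {
  let code;
  do {
    // Four characters from a 31-glyph alphabet: ~920k distinct codes.
    code = Array.from({ length: 4 }, () =>
      GLYPHS[Math.floor(Math.random() * GLYPHS.length)]).join("");
  } while (vault.has(code)); // regenerate on the rare collision
  vault.set(code, JSON.stringify(state)); // snapshot, not a live reference
  return code;
}

function enterPortal(code) {
  const snapshot = vault.get(code);
  return snapshot ? JSON.parse(snapshot) : null;
}
```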

REFRAX exists because AI writes code faster than humans can read it. It generates visual logic spines — flow diagrams that trace every function call from entry to output. Decision diamonds flag branching complexity. Risk cards catch SQL injection, XSS, the OWASP top ten. Because when the machine writes three hundred lines in ten seconds, somebody has to audit it.

REFRAX — visual code comprehension. Logic spines trace every function call from entry to output. Decision diamonds flag branching complexity.

REFRAX risk cards — SQL injection detected, severity flagged, fix prompt ready to copy-paste.
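For flavor, here is a deliberately toy version of one risk-card check: flagging string-concatenated SQL. REFRAX's real analysis is richer than a single regex; this only shows the shape of the idea:

```javascript
// Naive check: a SQL keyword followed by a quoted string joined with `+`
// is the classic injection smell. Illustrative only, not REFRAX's detector.
const CONCAT_SQL = /(SELECT|INSERT|UPDATE|DELETE)[^;]*["'`]\s*\+/i;

function sqlInjectionRisk(line) {
  return CONCAT_SQL.test(line);
}
```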

LOCUS exists because I looked at a generated image and thought this should do something when I touch it.

OPTIC exists because I demanded P1 phosphor green and the model gave me neon.

Every one of them follows the Agent Skills open standard. Claude Code, Cursor, Gemini CLI, about twenty-five other tools — no vendor lock-in. A skill is a markdown file with YAML frontmatter:

```markdown
---
name: my-skill
description: What this skill does (one line)
---

# Instructions

Claude reads these when the skill activates.
What to do. How to do it. What tools to use.
```

No SDK. No compilation. No deployment pipeline. Write the markdown. Drop it in. Done.


Where it started

Before the skills, there was the website.

jord0.net — a CRT-themed under-construction page.

jord0.net. Single HTML file. No React, no build step, no dependencies. Live webcam feed with CSS CRT effects — scanlines, phosphor glow, barrel distortion. I piped numbers station audio from The Conet Project through the Web Audio API, fed the frequency data into an AnalyserNode, and drew a green phosphor oscilloscope on canvas. Buried three easter eggs in the source. One swaps the room portrait for a storm cloud. One plays shortwave static through your speakers. One shows you the view from inside the television.
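The drawing half of that oscilloscope splits cleanly into a pure function that runs without a browser. On the page, `data` would come from `analyser.getByteTimeDomainData(data)` — bytes 0–255 with silence at 128; the helper name is mine:

```javascript
// Map one frame of time-domain bytes onto canvas coordinates.
// Silence (128) sits on the centre line; 0 and 255 hit the edges.
function samplesToPolyline(data, width, height) {
  const points = [];
  const step = width / (data.length - 1);
  for (let i = 0; i < data.length; i++) {
    const y = height / 2 + ((data[i] - 128) / 128) * (height / 2);
    points.push({ x: i * step, y });
  }
  return points;
}

// In the browser you'd then stroke the polyline each animation frame:
// ctx.strokeStyle = "#33ff33"; ctx.beginPath(); ...; ctx.stroke();
```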

Built it in one session with Claude Code. The whole thing. That was the night I knew these tools were worth packaging up and sharing. Not because the AI did the work — but because my thirty years of knowing what a CRT oscilloscope actually looks like made the AI's output real instead of decorative.


What I'd build differently

I underestimated documentation. The skills work. Explaining why they're useful requires showing, not telling — which is why this article exists and why it should have existed months ago.

I built for myself first and for others second. PORTAL and REFRAX are the most broadly useful. OPTIC and LOCUS are the most impressive. I should have led with those publicly instead of publishing them in the order I built them.

And I waited too long to write about the process. The most valuable thing about this work isn't the code. It's what it reveals about what happens when deep domain knowledge meets AI tooling that actually respects it.

Full documentation at jord0-cmd.github.io/jord0.skills — install guides, skill reference, recipes, and architecture docs.


Try it

GitHub: github.com/jord0-cmd/jord0.skills
Docs: jord0-cmd.github.io/jord0.skills
The site: jord0.net

Clone the repo. Copy any skill folder into your Claude Code skills directory. It's live. No API keys, no setup, no dependencies for most skills.

If you build something with them, I want to see it. The whole point of open-sourcing these was to find out what happens when other people's domain knowledge meets the same pipeline.

The model will give you neon green.

Bring your own phosphor.

