<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Igor Gridel</title>
    <description>The latest articles on DEV Community by Igor Gridel (@igorgridel).</description>
    <link>https://dev.to/igorgridel</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3860701%2F5b7daac1-10ec-4472-9d1c-1cf3d13f31c7.jpg</url>
      <title>DEV Community: Igor Gridel</title>
      <link>https://dev.to/igorgridel</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/igorgridel"/>
    <language>en</language>
    <item>
      <title>The 10% nobody in AI design is solving</title>
      <dc:creator>Igor Gridel</dc:creator>
      <pubDate>Tue, 21 Apr 2026 04:24:56 +0000</pubDate>
      <link>https://dev.to/igorgridel/the-10-nobody-in-ai-design-is-solving-4p02</link>
      <guid>https://dev.to/igorgridel/the-10-nobody-in-ai-design-is-solving-4p02</guid>
      <description>&lt;p&gt;Figma is down 49% this year. Adobe is down 30%. Lovable crossed $200M ARR and raised a $330M Series B at a $6.6B valuation. AI app builder revenue hit $4.7B in 2026 and is projected to more than double by 2027.&lt;/p&gt;

&lt;p&gt;Surface reading: AI design tools won.&lt;/p&gt;

&lt;p&gt;The actual story: none of them have solved the last 10% of the work, and that's where the money lives.&lt;/p&gt;

&lt;p&gt;Look at the last sixty days.&lt;/p&gt;

&lt;p&gt;On March 19, Google's Stitch 2.0 launched. FIG dropped 12% in two days.&lt;/p&gt;

&lt;p&gt;On March 24, Figma opened the canvas to agents with &lt;code&gt;use_figma&lt;/code&gt;, &lt;code&gt;generate_figma_design&lt;/code&gt;, a skills framework, and nine community skills at launch. One of them was called &lt;code&gt;/sync-figma-token&lt;/code&gt;. They shipped the bridge everyone had been waiting for. The stock kept falling.&lt;/p&gt;

&lt;p&gt;On April 14, Mike Krieger (Anthropic's CPO) resigned from Figma's board.&lt;/p&gt;

&lt;p&gt;On April 17, Anthropic launched Claude Design, a prompts-to-prototypes tool built on Claude Opus 4.7 that reads your codebase and Figma files to extract your design system and apply it to new work. FIG dropped another 7%.&lt;/p&gt;

&lt;p&gt;Today is April 20.&lt;/p&gt;




&lt;p&gt;Everyone is reading this chart as "AI is eating Figma." Maybe. But if AI is eating Figma, what is AI replacing it with? Every tool in that timeline, from Stitch to Claude Design to Lovable to v0 to Figma's bolted-on bridge, has the same shape. You describe what you want, the tool generates, the screen looks 90% right.&lt;/p&gt;

&lt;p&gt;Then the real work starts. Make the button bigger. Warm up the green. Move that 8 pixels left. No, the other direction. A little less.&lt;/p&gt;

&lt;p&gt;You see it, then you narrate it. That's the friction.&lt;/p&gt;

&lt;p&gt;Visual work has always been visual. Figma grew because you &lt;em&gt;drag&lt;/em&gt;. You don't describe the corner of a rectangle to your cursor and wait for the machine to move it. Prompt-to-UI reversed this. Every edit goes through English. Every adjustment is a round trip through words. It feels like progress but structurally it's a regression.&lt;/p&gt;

&lt;p&gt;The market isn't pricing "Figma doesn't have agents." Figma has agents now. The market is pricing whether any of the current tools actually solve the part that eats hours. None of them do. They all generate, and none of them adjust.&lt;/p&gt;




&lt;h2&gt;
  
  
  I built the alternative today
&lt;/h2&gt;

&lt;p&gt;Not in some noble "the future is now" sense. I sat down at my desk, opened a blank Paper file I'd named "Happy castle" for some reason I don't remember, picked a tight brief of one upgrade card and one button, and walked it end to end through Paper's MCP.&lt;/p&gt;

&lt;p&gt;Paper is a design canvas that was built to be read and written as structured data from the beginning. Every node, every style, every layer is queryable from the outside. You don't need a Dev Mode seat. You don't need a plugin. You select a frame, and the agent reads the exact computed background color, every padding value, every border radius.&lt;/p&gt;

&lt;p&gt;A Paper file isn't an image or even a design. It's a queryable object. I can ask it for the JSX of a node. I can ask it for every layer's typography in one batch. When I change the fill of a button, the new value is available a few milliseconds later to anything that knows where to look.&lt;/p&gt;

&lt;p&gt;Here is what I wrote, in order.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;A Paper design.&lt;/strong&gt; Bone background, moss accent, serif price, sans-serif body. One card primitive, one button primitive, both sitting on a canvas as editable nodes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;A React + Tailwind project.&lt;/strong&gt; &lt;code&gt;src/tokens.ts&lt;/code&gt; pulling every color, font, radius, and shadow into one object. &lt;code&gt;tailwind.config.ts&lt;/code&gt; imports it. &lt;code&gt;Card.tsx&lt;/code&gt; and &lt;code&gt;Button.tsx&lt;/code&gt; consume tokens through Tailwind classes like &lt;code&gt;bg-surface&lt;/code&gt;, &lt;code&gt;rounded-card&lt;/code&gt;, &lt;code&gt;shadow-card&lt;/code&gt;. The screen composes them.&lt;/p&gt;
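&lt;p&gt;As a rough sketch, a token file in that shape might look like the following. The specific values and names are illustrative, not the actual file from this project (the accent hex is the one mentioned later in this post):&lt;/p&gt;

```typescript
// src/tokens.ts -- one object that every component reads from.
// Values are illustrative; the real ones come from the Paper file.
export const tokens = {
  colors: {
    surface: "#EDE8DC", // bone background
    accent: "#4A6B36",  // moss
  },
  fonts: {
    price: "'Source Serif 4', serif",
    body: "'Inter', sans-serif",
  },
  radius: { card: "16px", button: "10px" },
  shadow: { card: "0 2px 12px rgba(28, 27, 23, 0.08)" },
} as const;
```

&lt;p&gt;&lt;code&gt;tailwind.config.ts&lt;/code&gt; then spreads these into &lt;code&gt;theme.extend&lt;/code&gt;, which is what makes classes like &lt;code&gt;bg-surface&lt;/code&gt; and &lt;code&gt;rounded-card&lt;/code&gt; resolve to token values.&lt;/p&gt;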

&lt;p&gt;&lt;strong&gt;A &lt;code&gt;.paper-sync/snapshot.json&lt;/code&gt; file.&lt;/strong&gt; It maps every token and every primitive style field back to a specific Paper node ID and property. Each field declares whether it's bound to a token or a local one-off.&lt;/p&gt;
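&lt;p&gt;A snapshot entry in that spirit could look like this. The field names are my guess at a reasonable schema, not the skill's actual format:&lt;/p&gt;

```typescript
// Shape of one .paper-sync/snapshot.json entry, typed here for illustration.
// Field names are hypothetical; the real skill defines its own schema.
interface TrackedField {
  paperNodeId: string;        // which Paper node this value was read from
  property: string;           // e.g. "fill", "borderRadius"
  binding: "token" | "local"; // token-bound vs a local one-off
  codePath: string;           // where the mirrored value lives in the repo
  value: string;              // last synced value -- the baseline
}

const snapshot: Record<string, TrackedField> = {
  "colors.accent": {
    paperNodeId: "node-button-primary", // hypothetical node ID
    property: "fill",
    binding: "token",
    codePath: "src/tokens.ts#colors.accent",
    value: "#4A6B36",
  },
};
```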

&lt;p&gt;&lt;strong&gt;A &lt;code&gt;/paper-sync&lt;/code&gt; skill.&lt;/strong&gt; Re-reads every tracked node, diffs against the snapshot, updates the minimum amount of code needed to mirror the change, writes a new baseline.&lt;/p&gt;
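&lt;p&gt;The core of that skill is a plain diff over the snapshot. A minimal sketch, assuming the agent has already re-read the current canvas values into a flat map:&lt;/p&gt;

```typescript
// Diff freshly read canvas values against the last synced baseline.
// Only fields whose value actually moved need a code edit.
type Baseline = Record<string, { value: string; codePath: string }>;

function diffSnapshot(
  baseline: Baseline,
  current: Record<string, string>,
): Array<{ field: string; from: string; to: string; codePath: string }> {
  const changes: Array<{ field: string; from: string; to: string; codePath: string }> = [];
  for (const [field, entry] of Object.entries(baseline)) {
    const now = current[field];
    if (now !== undefined && now !== entry.value) {
      changes.push({ field, from: entry.value, to: now, codePath: entry.codePath });
    }
  }
  return changes;
}
```

&lt;p&gt;Run that with the accent moved from &lt;code&gt;#4A6B36&lt;/code&gt; to &lt;code&gt;#5D8B3F&lt;/code&gt; and you get exactly one change pointing at &lt;code&gt;src/tokens.ts&lt;/code&gt;, which is the "minimum amount of code" the skill then edits before writing the new baseline.&lt;/p&gt;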

&lt;p&gt;Total build time: one session. Total lines of new code, not counting config: under 300.&lt;/p&gt;

&lt;p&gt;The bundle is free on my Patreon if you want to drop it into your own project: &lt;a href="https://www.patreon.com/posts/new-skill-paper-156143156" rel="noopener noreferrer"&gt;https://www.patreon.com/posts/new-skill-paper-156143156&lt;/a&gt;. It's the skill file, an install walkthrough, a snapshot template, and the example project as reference.&lt;/p&gt;

&lt;p&gt;Now if I want the accent warmer, I open Paper, click the button, pick a color. I run &lt;code&gt;/paper-sync&lt;/code&gt;. Claude sees the diff, updates &lt;code&gt;colors.accent&lt;/code&gt; in &lt;code&gt;tokens.ts&lt;/code&gt;, tells me in plain English what moved ("accent from #4A6B36 to #5D8B3F, a brighter moss"), and rewrites the snapshot.&lt;/p&gt;

&lt;p&gt;I didn't describe the color. I picked it.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Watch the sync happening (60s Loom):&lt;/strong&gt; &lt;a href="https://www.loom.com/share/f19bbead049d4b52827664f4811f07a1" rel="noopener noreferrer"&gt;https://www.loom.com/share/f19bbead049d4b52827664f4811f07a1&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  Why Paper, specifically
&lt;/h2&gt;

&lt;p&gt;Three honest comparisons, since you're probably asking.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Paper versus Figma, as of April 20, 2026.&lt;/strong&gt; Figma now has agent writes through &lt;code&gt;use_figma&lt;/code&gt;, but they're bolted onto a canvas that was designed for a human designer first. The Skills framework assumes you're working inside Figma's component model. You need a Dev or Full seat plus a Claude Pro or Max plan. The MCP server runs locally and authenticates through the desktop app. It works. There's just a lot of surface area between you and the design file.&lt;/p&gt;

&lt;p&gt;Paper is flatter. The design file IS the data surface. There's no translation into proprietary component conventions. &lt;code&gt;get_computed_styles&lt;/code&gt; returns CSS-shaped values. &lt;code&gt;write_html&lt;/code&gt; takes real HTML with inline styles and turns them into design nodes. &lt;code&gt;get_jsx&lt;/code&gt; returns code-ready JSX. The semantics match how you'd talk about a web UI if you were writing one from scratch, because that's what Paper is: a visual editor whose primitives are the same primitives a web developer already uses.&lt;/p&gt;

&lt;p&gt;For a solo operator or a small team, that shape is the difference between "I hack this in an afternoon" and "I set up Dev Mode, map our components, wire Claude to the desktop app, then start."&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Paper versus Claude Design, as of April 20, 2026.&lt;/strong&gt; Claude Design launched three days ago, and it's the strongest of the prompt-first tools. It has a canvas alongside the chat, not just a conversation. For refinements, Claude generates purpose-built sliders for each element (color, spacing, layout) that you drag to adjust. You can click any part of the canvas and drop an inline comment requesting a targeted change. The code connection is a Claude Code handoff bundle, a structured export containing the design spec, extracted brand tokens, and component structure, which Claude Code turns into production React in about four minutes.&lt;/p&gt;

&lt;p&gt;Three limits. First, you can't grab elements on the canvas and drag them yet. Direct manipulation is on Anthropic's roadmap, roughly six months out. Second, the code connection is one-shot. You export the bundle, Claude Code generates, and that's it. If the design changes later, you export again. Third, Claude Design runs on a weekly allowance that sits on top of a Claude Pro, Max, Team, or Enterprise subscription. Every slider drag, every inline comment, every prompt is a Claude interaction that counts. Heavy visual iteration eats through the allowance, and extra usage is a purchase on top.&lt;/p&gt;

&lt;p&gt;Paper and &lt;code&gt;/paper-sync&lt;/code&gt; go the other way. You select elements directly on the canvas, change colors with a picker, resize with handles, drag where you want. The sync is ongoing, not a one-time export. Edit the color once in Paper's desktop artboard, run &lt;code&gt;/paper-sync&lt;/code&gt;, and the token file updates. The same run walks the mobile artboard and forces every mirror of that token to match, so desktop and mobile never drift. And the canvas work itself doesn't touch your Claude quota. Picking a color in Paper is just you moving the cursor. Only the periodic &lt;code&gt;/paper-sync&lt;/code&gt; calls are Claude interactions, and those are minutes apart, not seconds. Paper Pro is $20 a month for a million MCP calls a week, which is effectively unlimited for a solo operator.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Paper versus just using Claude Code alone.&lt;/strong&gt; You could skip the canvas entirely. Ask Claude Code to build the project, iterate in the terminal, look at the preview, describe changes in the chat. That's the default workflow most people use today.&lt;/p&gt;

&lt;p&gt;Try it on a nuanced visual. The kind where you say "the card's a touch too loud, warm the shadow, tighten the radius by a hair." You end up writing four sentences to describe one drag. Or you take a screenshot, annotate it, paste it back, and hope the agent sees what you see. The feedback loop is slow because the input is text and the output is text-describing-visuals. The canvas is missing from the middle.&lt;/p&gt;

&lt;p&gt;Paper is that missing canvas. You see the design while you edit it. The agent sees the exact computed styles of what you're pointing at. The round trip between intent and implementation gets shorter in both directions: you adjust visually, Claude syncs structurally. Neither of you is guessing what the other means.&lt;/p&gt;




&lt;h2&gt;
  
  
  The reframe
&lt;/h2&gt;

&lt;p&gt;Everyone thinks the problem with AI design tools is that they're not good enough at generating. The real problem is what happens &lt;em&gt;after&lt;/em&gt; generating. Generation is a one-shot event. Iteration is what consumes the hours. Prompts can't adjust. They can only regenerate, and regenerating is lossy.&lt;/p&gt;

&lt;p&gt;The companies priced for AI-design-tool dominance won't be the ones with the best prompt interface. They will be the ones that figured out what happens in the twenty minutes between "looks 90% right" and "ready to ship." That gap is the market.&lt;/p&gt;

&lt;p&gt;Look at the bodies on the road.&lt;/p&gt;

&lt;p&gt;CodeParrot tried to solve this from the Figma side and shut down in July 2025. Its YC pitch was converting Figma designs to frontend code with AI. The generated code wasn't reliable enough for production, and teams kept having to fix the output by hand. Builder.ai, once valued at $1.2 billion, filed for bankruptcy in May 2025 after promising anyone could build an app without writing code through its assistant Natasha. Series A-stage shutdowns jumped from 6% to 14% of all closures in 2025, more than doubling over the prior year.&lt;/p&gt;

&lt;p&gt;The pattern is the same: GPT plus a prompt plus a nice UI. No moat on the adjustment loop.&lt;/p&gt;

&lt;p&gt;The moat is the adjustment loop.&lt;/p&gt;




&lt;p&gt;I'm not going to predict who wins this race. Figma is moving, Claude Design is moving, others will. A GitHub repo called &lt;code&gt;lifesized/figma-design-sync&lt;/code&gt; already does token-bound design-code sync on Figma's side. The shape is converging from multiple directions, which tells you it's real.&lt;/p&gt;

&lt;p&gt;What I will say: the winner won't be whoever shipped the MCP first or built the best prompt. It'll be whoever treats the design file as native structured data, not as a canvas with an agent layer on top or a chat with design-system awareness. That's a different architecture question, and the incumbents carry the bigger backlog.&lt;/p&gt;

&lt;p&gt;Paper doesn't have the users Figma has. It doesn't have the enterprise contracts. It has something else: a design file that was built to talk to agents from day one. That shape is cheaper to build a sync loop on top of than retrofitting years of proprietary canvas format or wrapping the generation step in a closed product.&lt;/p&gt;

&lt;p&gt;The right shape is cheap to build once you see it.&lt;/p&gt;

&lt;p&gt;I built a version of it in an afternoon.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>design</category>
      <category>productivity</category>
      <category>webdev</category>
    </item>
    <item>
      <title>I Spent Weeks Blaming Claude Code. The Problem Was My Pipeline.</title>
      <dc:creator>Igor Gridel</dc:creator>
      <pubDate>Fri, 17 Apr 2026 13:28:58 +0000</pubDate>
      <link>https://dev.to/igorgridel/i-spent-weeks-blaming-claude-code-the-problem-was-my-pipeline-2g7a</link>
      <guid>https://dev.to/igorgridel/i-spent-weeks-blaming-claude-code-the-problem-was-my-pipeline-2g7a</guid>
      <description>&lt;p&gt;For weeks I tried to design a landing page for Scopefull using only AI. I do not enjoy designing from scratch. I know what I like when I see it, but opening Figma and building something from nothing is a different skill than spotting when something is off, and I do not have the first one. Handing it to a coding agent was the obvious move.&lt;/p&gt;

&lt;p&gt;The first attempt came back from Claude Code. A clean directory layout with the headline "The creative AI tools worth your money and what the cheapest way in actually costs." There was an Editor's Picks sidebar on the right with three cards labelled Freepik, ComfyUI, Kling AI. Category tabs underneath. A sort-by-best-value dropdown. Thirty-three tools in a table below the fold.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdata.postforme.dev%2Fstorage%2Fv1%2Fobject%2Fpublic%2Fpost-media%2Fproj_VCzfTZfRBf8haxsyL2gcG%2F97b958e3bc8d41038d7145bc" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdata.postforme.dev%2Fstorage%2Fv1%2Fobject%2Fpublic%2Fpost-media%2Fproj_VCzfTZfRBf8haxsyL2gcG%2F97b958e3bc8d41038d7145bc" alt="The first attempt. Clean. Fine. Nothing wrong with it." width="1200" height="667"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It was not bad. It was also exactly what a thousand other AI tool directories already look like. The H1 hedged with "what the cheapest way in actually costs," which is a sentence nobody has ever typed into a search bar. It was a competent directory layout and a dead landing page.&lt;/p&gt;

&lt;p&gt;I moved to Codex. Similar result. Then Google AI Studio, same pattern. I tried Claude Code with Paper Canvas MCP, which is supposed to be the more design-focused one, and got a version that looked different in the specific way a tool trying hard to look different always looks different.&lt;/p&gt;

&lt;p&gt;At that point I had four agents, four tabs open, and zero landing pages that answered the question my users were actually arriving with. My first instinct was to blame the models. Design must be the thing these agents cannot do yet. I should wait for the next release.&lt;/p&gt;

&lt;p&gt;That was the wrong story.&lt;/p&gt;

&lt;h2&gt;
  
  
  The actual problem
&lt;/h2&gt;

&lt;p&gt;The coding agents were not bad at design. I was asking them to do four jobs at once.&lt;/p&gt;

&lt;p&gt;Every one of those prompts was really asking a single tool to understand the product, produce a spec, pick a visual direction, and write the code, all in one pass. Models do not get sharper when you stack jobs onto them. They average. That is why every landing page came back looking like the statistical mean of every landing page the model had seen. It was not a design. It was a midpoint.&lt;/p&gt;

&lt;p&gt;The fix was to stop treating "build the landing page" as one request. It is three stages, and each stage wants a different tool.&lt;/p&gt;

&lt;h2&gt;
  
  
  Stage one. Specs in Kiro.
&lt;/h2&gt;

&lt;p&gt;I opened Kiro, described Scopefull, and let it generate the three documents it is good at: &lt;code&gt;requirements.md&lt;/code&gt;, &lt;code&gt;design.md&lt;/code&gt;, &lt;code&gt;tasks.md&lt;/code&gt;. What came back was the best spec I have ever had for a personal project. Structured, specific, written like a product manager actually thought about the thing before the keyboard came out.&lt;/p&gt;

&lt;p&gt;Then I tried to let Kiro do the coding too.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdata.postforme.dev%2Fstorage%2Fv1%2Fobject%2Fpublic%2Fpost-media%2Fproj_VCzfTZfRBf8haxsyL2gcG%2F4716795418aab5a8520aed89" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdata.postforme.dev%2Fstorage%2Fv1%2Fobject%2Fpublic%2Fpost-media%2Fproj_VCzfTZfRBf8haxsyL2gcG%2F4716795418aab5a8520aed89" alt="Kiro attempting to edit page.tsx. 79 edits queued. Every single one cancelled." width="1200" height="1324"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;What you are looking at is Kiro trying to edit one file, &lt;code&gt;page.tsx&lt;/code&gt;, seventy-nine times in a row. Every attempt cancelled. "Error(s) while reading file(s)" sits across the top with ten filenames it never actually read. I have no idea what it was trying to do. It did not finish any of it.&lt;/p&gt;

&lt;p&gt;I will keep using Kiro for docs and never touch its coding side again. That is using a tool for what it is actually good at, not for what the homepage promises.&lt;/p&gt;

&lt;h2&gt;
  
  
  Stage two. Mockups before code.
&lt;/h2&gt;

&lt;p&gt;I took the Kiro docs into Claude Code. Before writing a single line of code, I asked Claude for a few design directions described in prose. Not components, not code, just different visual takes on the site.&lt;/p&gt;

&lt;p&gt;For each direction I used NanoBanana Pro to generate an actual image of what the landing page would look like. Real hero layout, real spacing, real type treatment. I picked the one that felt right and only then moved to code.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdata.postforme.dev%2Fstorage%2Fv1%2Fobject%2Fpublic%2Fpost-media%2Fproj_VCzfTZfRBf8haxsyL2gcG%2Fada22212830902846e37da52" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdata.postforme.dev%2Fstorage%2Fv1%2Fobject%2Fpublic%2Fpost-media%2Fproj_VCzfTZfRBf8haxsyL2gcG%2Fada22212830902846e37da52" alt="The mockup that won. Question first, answer second." width="1200" height="746"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The mockup did one thing no directory design can do. It asked the question the visitor was already asking in their head. "What's the cheapest way to generate 500 Flux 2 Pro images at 4K this month?" Form underneath: model, quantity, resolution, access type. Giant answer in the orange BEST VALUE card. $55.00 a month, fal.ai, pay-per-use, $0.11 per 4K image times 500. Ranked alternatives below it.&lt;/p&gt;

&lt;p&gt;Still not ideal, but that's a good starting point.&lt;/p&gt;

&lt;p&gt;Until then, I had been asking one tool to both invent the design and implement it, which meant the design always came out as the statistical mean. Generating the mockup as an image first forces the decision to happen in a medium where I can actually see it and react. When I finally go to Claude Code, I am not deciding and implementing at the same time. I am implementing one specific direction I already picked.&lt;/p&gt;

&lt;h2&gt;
  
  
  Stage three. One feature at a time.
&lt;/h2&gt;

&lt;p&gt;The boring stage that matters most. Instead of asking Claude for the entire landing page in one shot, I picked one feature, built it, viewed it in the browser, fixed what was off, and committed. Then I moved to the next one.&lt;/p&gt;

&lt;p&gt;Every time I had tried the one-shot approach, the agent got seventy percent of it right and thirty percent slightly wrong in ways that compounded. By the end of the session I had a landing page I then had to argue with for two hours to unwind. One feature at a time removes the compounding. Each piece stands on its own, each piece gets attention, and if something is off I fix it now, while I still remember why it was off.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdata.postforme.dev%2Fstorage%2Fv1%2Fobject%2Fpublic%2Fpost-media%2Fproj_VCzfTZfRBf8haxsyL2gcG%2F8ab89f1a060a4233ef1fe048" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdata.postforme.dev%2Fstorage%2Fv1%2Fobject%2Fpublic%2Fpost-media%2Fproj_VCzfTZfRBf8haxsyL2gcG%2F8ab89f1a060a4233ef1fe048" alt="What shipped. Same structure as the mockup, built one feature at a time." width="1200" height="666"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The latest not final version keeps the mockup's question-first spine and swaps in a real query that fits the MVP. "What's the cheapest way to generate AI images, video, and music?" Pick a model. Enter your monthly volume. See the cheapest match, including the effective per-image rate and whether the "unlimited" plan actually holds at your quantity. The first query example baked in is 500 images of Google Nano Banana 2, and the answer is Freepik Premium+ at $39 a month, or $24.58 if billed annually, at $0.08 per image.&lt;/p&gt;
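&lt;p&gt;The calculation behind that answer is just division of plan price by volume. As a sketch (the function name is mine, not Scopefull's, and the numbers are the example above):&lt;/p&gt;

```typescript
// Effective per-image rate: monthly plan price divided by monthly volume.
function effectiveRate(monthlyPrice: number, imagesPerMonth: number): number {
  return monthlyPrice / imagesPerMonth;
}

// Example from above: Freepik Premium+ at $39/month for 500 images.
const monthly = effectiveRate(39, 500);    // 0.078, i.e. ~$0.08 per image
const annual = effectiveRate(24.58, 500);  // ~$0.049 per image billed annually
```

&lt;p&gt;The hard part isn't the division. It's getting an honest &lt;code&gt;monthlyPrice&lt;/code&gt; and a real cap on &lt;code&gt;imagesPerMonth&lt;/code&gt; out of pricing pages built to hide both.&lt;/p&gt;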

&lt;h2&gt;
  
  
  What Scopefull actually does
&lt;/h2&gt;

&lt;p&gt;Scopefull answers one question for people buying creative AI tools: what is the cheapest way to do this specific thing right now.&lt;/p&gt;

&lt;p&gt;The reason it needs to exist is that pricing pages for creative AI tools are genuinely one of the worst categories of the web in 2026. Hidden credits, fake unlimited tiers that throttle above a quantity nobody tells you, tokens that are not actually tokens, the same model priced three different ways across three resellers. I have been testing every one of them by hand and writing the math down per image, per resolution, per model, per month. That math is the actual product.&lt;/p&gt;

&lt;p&gt;The landing page was the front door. I was stuck on the front door for weeks. It shipped once I stopped treating it as one problem and started treating it as three.&lt;/p&gt;

&lt;p&gt;The models were fine the whole time. I was running them wrong.&lt;/p&gt;

&lt;p&gt;P.S. Scopefull is still a work in progress.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>productivity</category>
      <category>webdev</category>
      <category>claude</category>
    </item>
    <item>
      <title>How to Set Up a Free Coding Agent on Your Machine in 10 Minutes</title>
      <dc:creator>Igor Gridel</dc:creator>
      <pubDate>Wed, 15 Apr 2026 17:16:10 +0000</pubDate>
      <link>https://dev.to/igorgridel/how-to-set-up-a-free-coding-agent-on-your-machine-in-10-minutes-4095</link>
      <guid>https://dev.to/igorgridel/how-to-set-up-a-free-coding-agent-on-your-machine-in-10-minutes-4095</guid>
      <description>&lt;p&gt;I pay for Claude Code. I use it every day, I built skills for it, and I think it's worth the money. I'm saying this upfront because this post is going to show you how to get a coding agent for free, and I don't want you wondering whether I actually believe in the paid version. I do. But not everybody needs it, not everybody can afford it, and not everybody wants their codebase sent to a server they don't control.&lt;/p&gt;

&lt;p&gt;If that's you, this is the setup.&lt;/p&gt;

&lt;p&gt;Three pieces of software, all free, all open source. You install them, connect them, and ten minutes later you have a coding agent running in your terminal that can read your files, write code, run commands, and help you build things. Your files stay on your machine. No API key. No credit card. No trial that expires in fourteen days.&lt;/p&gt;

&lt;h2&gt;
  
  
  What you're actually installing
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Ollama&lt;/strong&gt; runs AI models locally. Think of it as a model server sitting on your laptop. You pull a model the same way you'd pull a Docker image, and it handles all the inference. Free, open source, one command to install.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Gemma 4&lt;/strong&gt; is the model. Google DeepMind released it on April 2, 2026 under Apache 2.0, which means you can use it for anything, commercially or personally, no restrictions. The 26B parameter variant uses a Mixture of Experts architecture that only activates 3.8 billion parameters per inference. That means a 26 billion parameter model runs with the memory footprint of a much smaller one. It scores 77.1% on LiveCodeBench v6 (competitive coding) and 82.3% on GPQA Diamond (graduate-level science questions). For a free local model, those numbers are absurd.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;OpenCode&lt;/strong&gt; is the agent. Open source, 140,000+ stars on GitHub, built by the anomaly.co team. It's a terminal-based coding agent that connects to whatever AI backend you point it at. Claude, GPT, Gemini, or in our case, a local Ollama server running Gemma 4. It reads your project files, suggests edits, runs commands. The full agent experience, just powered by a model running on your own hardware.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 1: Install Ollama
&lt;/h2&gt;

&lt;p&gt;Go to &lt;a href="https://ollama.com/download" rel="noopener noreferrer"&gt;ollama.com/download&lt;/a&gt; and grab the installer for your OS. Mac, Windows, Linux, all supported.&lt;/p&gt;

&lt;p&gt;On Mac or Linux, you can also run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;-fsSL&lt;/span&gt; https://ollama.com/install.sh | sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once installed, Ollama runs as a background service. You can verify it's working with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ollama &lt;span class="nt"&gt;--version&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 2: Pull Gemma 4
&lt;/h2&gt;

&lt;p&gt;This is where you choose your model size. Two realistic options:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;If you have 24GB+ RAM&lt;/strong&gt; (most desktops, some high-end laptops):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ollama pull gemma4:26b
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is the 26B MoE variant. The best balance of capability and hardware requirements. It activates only 3.8B parameters per inference, so it runs faster than you'd expect from a 26 billion parameter model.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;If you have 8GB RAM or less&lt;/strong&gt; (older laptops, budget machines):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ollama pull gemma4:e4b
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is the E4B variant, 4 billion parameters. It won't be as capable, but it'll run smoothly on almost anything and still handle basic coding tasks, file operations, and simple refactors.&lt;/p&gt;

&lt;p&gt;The download will take a few minutes depending on your connection. The 26B model is around 18GB, the E4B is around 10GB.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 3: Fix the context window (do not skip this)
&lt;/h2&gt;

&lt;p&gt;This is the gotcha that wastes people's time. Ollama defaults every model to a 4,096 token context window. Gemma 4 supports 128K tokens on the E2B and E4B variants, 256K on the 26B and 31B, but Ollama doesn't care. It gives you 4K unless you explicitly tell it otherwise.&lt;/p&gt;

&lt;p&gt;4K tokens is roughly one medium-sized file. For a coding agent that needs to read your project structure, understand multiple files, and maintain conversation context, 4K is useless. You'll get responses that cut off mid-thought, forget what you asked three messages ago, or just fail silently because the model ran out of room.&lt;/p&gt;

&lt;p&gt;Create a file called &lt;code&gt;Modelfile&lt;/code&gt; (no extension) in any directory:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; gemma4:26b&lt;/span&gt;
PARAMETER num_ctx 32768
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you pulled the E4B instead, use &lt;code&gt;FROM gemma4:e4b&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Then create the custom model:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ollama create gemma4-agent &lt;span class="nt"&gt;-f&lt;/span&gt; Modelfile
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now you have a model called &lt;code&gt;gemma4-agent&lt;/code&gt; with a 32K context window. On machines with 24GB+ RAM you can push this to 65536 or even 131072, but 32K is the sweet spot where you get enough context for real agent work without crushing your memory.&lt;/p&gt;

&lt;p&gt;Confirm the parameter took. Don't ask the model itself; a model can't reliably report its own context window. Inspect the Modelfile instead:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ollama show gemma4-agent &lt;span class="nt"&gt;--modelfile&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The output should include your &lt;code&gt;PARAMETER num_ctx 32768&lt;/code&gt; line.&lt;/p&gt;



&lt;h2&gt;
  
  
  Step 4: Install OpenCode
&lt;/h2&gt;

&lt;p&gt;Check &lt;a href="https://opencode.ai" rel="noopener noreferrer"&gt;opencode.ai&lt;/a&gt; for the latest install method. As of April 2026:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Mac/Linux (recommended):&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;-fsSL&lt;/span&gt; https://opencode.ai/install | bash
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;npm (any platform):&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npm i &lt;span class="nt"&gt;-g&lt;/span&gt; opencode-ai@latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Windows (via Scoop):&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;scoop &lt;span class="nb"&gt;install &lt;/span&gt;opencode
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Mac (via Homebrew):&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;brew &lt;span class="nb"&gt;install &lt;/span&gt;anomalyco/tap/opencode
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Verify the install:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;opencode &lt;span class="nt"&gt;--version&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 5: Point OpenCode at your local model
&lt;/h2&gt;

&lt;p&gt;OpenCode needs to know where your model lives. Create or edit the config file at &lt;code&gt;~/.config/opencode/opencode.json&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"provider"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"ollama"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"npm"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"@ai-sdk/openai-compatible"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"options"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"baseURL"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"http://localhost:11434/v1"&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"models"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"gemma4-agent"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Gemma 4 Agent (local)"&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The baseURL points to Ollama's local API. OpenCode talks to it using the OpenAI-compatible protocol, which Ollama supports out of the box.&lt;/p&gt;
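&lt;p&gt;If OpenCode can't see the model, check the endpoint directly. A minimal sketch, assuming Ollama is serving on its default port and you created &lt;code&gt;gemma4-agent&lt;/code&gt; in Step 3:&lt;/p&gt;

```shell
# Check that Ollama's OpenAI-compatible endpoint is up and that the
# custom model is registered. Falls back to an empty list instead of
# failing if the daemon isn't running.
models_json=$(curl -sf http://localhost:11434/v1/models 2>/dev/null || echo '{"data":[]}')
if echo "$models_json" | grep -q '"gemma4-agent"'; then
  echo "gemma4-agent is registered"
else
  echo "gemma4-agent not found (is ollama serve running?)"
fi
```

&lt;p&gt;The &lt;code&gt;/v1/models&lt;/code&gt; route is part of the same OpenAI compatibility layer OpenCode talks to, so if this lists your model, the config above should work.&lt;/p&gt;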

&lt;h2&gt;
  
  
  Step 6: Use it
&lt;/h2&gt;

&lt;p&gt;Navigate to any project directory and launch OpenCode:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cd &lt;/span&gt;your-project
opencode
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;On first run, select the Ollama provider and the gemma4-agent model. Then just talk to it.&lt;/p&gt;

&lt;p&gt;Try something simple first:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"Read the files in this directory and tell me what this project does."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Then try something practical:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"Find all PNG images in this project and list their file sizes."&lt;/p&gt;

&lt;p&gt;"Write a bash script that converts all PNG files to WebP format."&lt;/p&gt;

&lt;p&gt;"Look at my package.json and tell me which dependencies are outdated."&lt;/p&gt;
&lt;/blockquote&gt;
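&lt;p&gt;For reference, the second prompt above usually yields a script in roughly this shape. This is a sketch with my own defaults (&lt;code&gt;cwebp&lt;/code&gt; from the webp package, quality 85), not the agent's guaranteed output:&lt;/p&gt;

```shell
# Convert every PNG in the current directory to WebP, keeping originals.
converted=0
if command -v cwebp >/dev/null; then
  for f in *.png; do
    [ -e "$f" ] || continue              # nothing to do if no PNGs match
    if cwebp -quiet -q 85 "$f" -o "${f%.png}.webp"; then
      converted=$((converted+1))
    fi
  done
else
  echo "cwebp not installed (brew install webp / apt install webp)"
fi
echo "converted $converted file(s)"
```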

&lt;p&gt;If you're getting coherent, useful responses, it's working. The model is running entirely on your hardware, your files never leave your machine, and you paid nothing.&lt;/p&gt;

&lt;h2&gt;
  
  
  What you'll notice (honest take)
&lt;/h2&gt;

&lt;p&gt;I'm not going to pretend this is equivalent to Claude Code or Cursor with Claude 4 behind it. It isn't. A local model with 3.8 billion active parameters is not going to match a frontier model with orders of magnitude more compute. You will notice the difference on complex multi-file refactors, on subtle architectural decisions, on tasks that require holding a lot of context at once.&lt;/p&gt;

&lt;p&gt;But for a huge amount of daily coding work, it's genuinely good. File operations, simple scripts, refactoring single files, generating boilerplate, explaining code, converting formats. The stuff that takes you five minutes of tedious typing but doesn't require deep reasoning. Gemma 4 handles that well.&lt;/p&gt;

&lt;p&gt;And for anyone who cares about privacy, there is no alternative that matches this. Your code, your files, your conversations, all of it stays on your machine. No server. No logs. No terms of service that might change next quarter.&lt;/p&gt;

&lt;h2&gt;
  
  
  The skills angle
&lt;/h2&gt;

&lt;p&gt;I built a set of utility skills that work with OpenCode, Claude Code, Codex, all of them. They're on my Patreon. But honestly, with the setup above and the commands I share in my other posts, you can do most of it for free. The skills save you time. The setup in this post saves you money. Pick whichever matters more to you right now.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>opencode</category>
      <category>ollama</category>
      <category>opensource</category>
    </item>
    <item>
      <title>I Built a Utility Brain for Coding Agents</title>
      <dc:creator>Igor Gridel</dc:creator>
      <pubDate>Mon, 13 Apr 2026 05:35:39 +0000</pubDate>
      <link>https://dev.to/igorgridel/i-built-a-utility-brain-for-coding-agents-2iim</link>
      <guid>https://dev.to/igorgridel/i-built-a-utility-brain-for-coding-agents-2iim</guid>
      <description>&lt;p&gt;Last Tuesday I asked Claude Code to compress a batch of PDFs for email. It spent two minutes researching ghostscript flags, tried three wrong combinations, then produced files that were actually larger than the originals.&lt;/p&gt;

&lt;p&gt;I had solved this exact problem two weeks earlier. Same agent, same tool, same flags. But Claude Code does not remember what worked. Every session starts from zero. It researches the same ghostscript documentation, stumbles through the same wrong defaults, and arrives at the same answer it already found before, if I am lucky. Sometimes it finds a worse one.&lt;/p&gt;

&lt;p&gt;This is not a Claude Code problem. It is a structural problem with how coding agents work right now. They have enormous intelligence and zero muscle memory.&lt;/p&gt;

&lt;h2&gt;
  
  
  The gap nobody talks about
&lt;/h2&gt;

&lt;p&gt;Coding agents are genuinely good at hard problems. Architecture decisions, complex refactors, debugging race conditions, writing parsers. The stuff that requires reasoning. But ask one to convert a PNG to WebP at 85% quality without overwriting the original, and it will spend thirty seconds reading the ImageMagick docs as if it has never seen them before. It has seen them. It just cannot remember that it has.&lt;/p&gt;

&lt;p&gt;The same thing happens with ffmpeg flags for video compression, qpdf for PDF manipulation, pngquant for image optimization. Every session, the agent treats these as novel problems that require research. They are not novel. They are the same ten operations repeated hundreds of times, and the correct approach for each one was established years ago by people who actually use these tools daily.&lt;/p&gt;

&lt;p&gt;I kept watching this happen. Not on hard problems. On the trivial ones. The agent would nail a complex database migration and then choke on "make this JPEG smaller." It was like working with a brilliant colleague who could not remember where the coffee machine was.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I actually built
&lt;/h2&gt;

&lt;p&gt;I spent a week building something I started calling a utility brain. Six skill files, written in plain markdown, that give a coding agent reliable defaults for the operations it keeps researching from scratch.&lt;/p&gt;

&lt;p&gt;The six skills: PDF, Image, Video, Audio, File Ops, and Automation.&lt;/p&gt;

&lt;p&gt;Each one is a single SKILL.md file. No scripts, no binaries, no dependencies to install. Just markdown with YAML frontmatter that the agent reads and follows. The total pack is about 54KB. For context, that is smaller than most README files.&lt;/p&gt;

&lt;p&gt;Here is what surprised me while building it. The skills are not lists of commands. They have a four-layer architecture that I did not plan; it emerged from the problems I kept hitting:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Router.&lt;/strong&gt; The agent reads what you asked for and figures out which operation you need. "Make this smaller for email" routes to lossy compression with specific quality targets. "Archive these" routes to something different.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Domain logic.&lt;/strong&gt; This is where the actual knowledge lives. Quality presets, format-specific rules, the things that a developer who works with ghostscript every day just knows. The defaults are not generic. They are the specific values that produce good results for specific use cases. Ebook PDF compression uses different settings than print-ready compression, and both are different from "I need to email this."&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Execution.&lt;/strong&gt; The agent detects which tools are installed on your machine, picks the best available one, and falls back to alternatives if the first choice is missing. If ghostscript is not installed, it tells you how to install it. It does not silently fail or try a worse approach.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Output contract.&lt;/strong&gt; Safety rules. Originals are never overwritten. Lossy operations are announced before they run. Destructive operations do a dry-run first and show you what will happen. I added these after the agent helpfully deleted my source files during an early test. Twice.&lt;/p&gt;
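&lt;p&gt;To make the shape concrete, here is a toy SKILL.md in that four-layer spirit. Every name, phrase, and value below is invented for illustration; it is not one of the actual skill files:&lt;/p&gt;

```markdown
---
name: pdf-utils
description: Compress, merge, and package PDFs with safe defaults.
---

## Router
- "smaller for email" -> compress-for-email
- "archive these"     -> archive-lossless

## Domain defaults
- compress-for-email: ghostscript preset /ebook, target under 10 MB
- archive-lossless:   ghostscript preset /prepress

## Execution
- Prefer ghostscript; if it is missing, say how to install it instead of failing silently.

## Output contract
- Never overwrite the source; write a new file next to it.
- Announce lossy operations before running them.
```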

&lt;h2&gt;
  
  
  The part that changes how you think about agents
&lt;/h2&gt;

&lt;p&gt;The Automation skill has something I have not seen anyone else build. Eight named presets for common batch workflows:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;prepare-for-email&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;blog-asset-pack&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;social-video-bundle&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;photo-delivery-pack&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;screenshot-doc-pack&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;asset-cleanup&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;pdf-archive-pack&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;publish-ready-images&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You say "prepare these for email" and the agent knows that means compress PDFs under 10MB, convert images to WebP, strip metadata, and organize the output. You do not specify any of that. The preset knows.&lt;/p&gt;

&lt;p&gt;This is the reframe that took me a while to see. We talk about coding agents like they are junior developers who need better prompts. They are not. They are closer to a brilliant contractor who shows up to a new job site every morning with no memory of yesterday. The problem is not intelligence. The problem is that they have no defaults for common work. No muscle memory.&lt;/p&gt;

&lt;p&gt;Skills are muscle memory. The agent does not need to be smart about ghostscript flags. It needs to remember that &lt;code&gt;-dPDFSETTINGS=/ebook&lt;/code&gt; is the right call for email attachments and that the quality tradeoff is acceptable. That is not reasoning. That is experience, compressed into 315 lines of markdown.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why markdown
&lt;/h2&gt;

&lt;p&gt;I tried writing these as shell scripts first. It was a mistake for three reasons.&lt;/p&gt;

&lt;p&gt;Shell scripts are brittle. They assume a specific OS, specific tool versions, specific directory structure. The moment someone runs them on Windows instead of Mac, half the commands break.&lt;/p&gt;

&lt;p&gt;Shell scripts are opaque. The agent executes them but does not understand what they do. If something fails, it cannot adapt because the logic is hidden inside a bash file it is just running.&lt;/p&gt;

&lt;p&gt;Markdown skills are transparent. The agent reads the logic, understands the intent, and can adapt when something unexpected happens. If ImageMagick is not installed but ffmpeg is, the Image skill's fallback chain handles it. A shell script would just fail.&lt;/p&gt;

&lt;p&gt;The line counts tell the story. PDF: 315 lines. Image: 273. Video: 318. Audio: 374. File Ops: 374. Automation: 299. That is the entire utility brain, under 2,000 lines total. All of it human-readable, all of it editable with a text editor, no build step, no compilation, nothing to run.&lt;/p&gt;

&lt;p&gt;And because they are plain markdown, they work everywhere. Claude Code, Codex, OpenCode, Gemini CLI, Cursor. Any agent that can read a file can use them.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I actually use it for
&lt;/h2&gt;

&lt;p&gt;I run a Patreon where I sell ComfyUI workflows for visual content creation. That means I handle a lot of image conversion, PDF packaging, video compression, and file organization. Every week.&lt;/p&gt;

&lt;p&gt;Before the utility brain, each of those operations involved the agent researching the same tools it had researched last week. After, I say "prepare this batch for Patreon delivery" and the Automation skill's photo-delivery-pack preset handles the entire pipeline. Same quality every time. No research loop. No wrong flags.&lt;/p&gt;

&lt;p&gt;The time I saved is not dramatic, maybe fifteen minutes a day. But the consistency changed everything. I stopped checking the agent's work on routine operations because the output contract guarantees originals are preserved and lossy decisions are announced. I can trust the boring stuff and focus on the parts where the agent's intelligence actually matters.&lt;/p&gt;

&lt;p&gt;That is what utility skills do. They move the floor up. The agent still thinks about hard problems. It just stops pretending that "convert PNG to WebP" is a hard problem.&lt;/p&gt;

&lt;h2&gt;
  
  
  The thing I would tell someone building their own
&lt;/h2&gt;

&lt;p&gt;Start with the operations you repeat. Not the clever ones. The boring ones you keep explaining to the agent like it is hearing them for the first time, because it is.&lt;/p&gt;

&lt;p&gt;Write them as markdown, not as scripts. Let the agent read your intent, not just your commands. Include fallback chains for tools because not every machine has the same setup. And add safety rules early, before the agent teaches you why you need them by deleting something important.&lt;/p&gt;

&lt;p&gt;The four-layer pattern (router, domain logic, execution, output contract) was not something I designed upfront. It is what emerged when I kept fixing the same categories of failure. The agent would misroute a request. Fixed that with clearer intent detection. It would use wrong defaults. Fixed that with domain-specific presets. It would fail silently when a tool was missing. Fixed that with fallback chains. It would overwrite originals. Fixed that with output contracts.&lt;/p&gt;

&lt;p&gt;Each layer exists because of a specific thing that went wrong.&lt;/p&gt;

&lt;h2&gt;
  
  
  Where to get it
&lt;/h2&gt;

&lt;p&gt;The utility skills pack is on my Patreon, alongside the ComfyUI workflows and other tools I build. Or build your own. The architecture is all here, and the tools it wraps (ghostscript, ffmpeg, ImageMagick, qpdf, cwebp, pngquant) are all free and open source. The value is not in the tools. It is in knowing which flags to use and when, which is exactly the thing agents keep forgetting.&lt;/p&gt;

</description>
      <category>coding</category>
      <category>ai</category>
      <category>productivity</category>
      <category>automation</category>
    </item>
    <item>
      <title>You Don't Need a Free PDF Compressor Website Anymore</title>
      <dc:creator>Igor Gridel</dc:creator>
      <pubDate>Sun, 12 Apr 2026 10:24:52 +0000</pubDate>
      <link>https://dev.to/igorgridel/you-dont-need-a-free-pdf-compressor-website-anymore-1o8j</link>
      <guid>https://dev.to/igorgridel/you-dont-need-a-free-pdf-compressor-website-anymore-1o8j</guid>
      <description>&lt;p&gt;There is a specific feeling that comes with uploading a PDF to a free compressor website. You land on a page plastered with ads, you drag your file into a dotted rectangle, a progress bar crawls, and somewhere between "processing" and "download your file," you remember what's in that document. A client contract. A tax return. Your kid's medical records.&lt;/p&gt;

&lt;p&gt;You just sent that to a server you know nothing about.&lt;/p&gt;

&lt;p&gt;I kept doing it for years. Everybody does. You google "free PDF compressor," you pick whatever ranks first, you upload, you download, you move on with your day. The file gets smaller, the task gets done, and you try not to think about the fact that ilovepdf.com just processed your entire document on their infrastructure. They say they delete it within two hours. Maybe they do.&lt;/p&gt;

&lt;p&gt;But here's what changed for me. I use a coding agent every day. Claude Code, specifically. And at some point I realized that compressing a PDF is just a shell command. The agent already knows how to run shell commands. So instead of opening a browser and uploading my file to a stranger's server, I type one sentence:&lt;/p&gt;

&lt;p&gt;"Compress this PDF for email."&lt;/p&gt;

&lt;p&gt;The agent figures out the rest. It runs Ghostscript locally, picks the right quality preset, compresses the file, and tells me the result. The original file stays untouched. Nothing leaves my machine.&lt;/p&gt;

&lt;p&gt;That's it. That's the whole post, almost. But let me show you what's actually happening under the hood, because it's simpler than you'd expect.&lt;/p&gt;

&lt;h2&gt;
  
  
  The command
&lt;/h2&gt;

&lt;p&gt;Here is the exact Ghostscript command that does the work:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;gs &lt;span class="nt"&gt;-sDEVICE&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;pdfwrite &lt;span class="nt"&gt;-dCompatibilityLevel&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;1.4 &lt;span class="nt"&gt;-dPDFSETTINGS&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;/ebook &lt;span class="nt"&gt;-dNOPAUSE&lt;/span&gt; &lt;span class="nt"&gt;-dBATCH&lt;/span&gt; &lt;span class="nt"&gt;-dQUIET&lt;/span&gt; &lt;span class="nt"&gt;-sOutputFile&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"report-compressed.pdf"&lt;/span&gt; &lt;span class="s2"&gt;"report.pdf"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;On Windows, the command is &lt;code&gt;gswin64c&lt;/code&gt; instead of &lt;code&gt;gs&lt;/code&gt;. A coding agent handles this automatically, but if you are running the command manually, use the right one for your system.&lt;/p&gt;

&lt;p&gt;That &lt;code&gt;/ebook&lt;/code&gt; preset is the one I use most. It targets 150 DPI, which is plenty for anything you're emailing or sharing on screen. A 12MB PDF typically drops to around 2MB. Over 80% smaller.&lt;/p&gt;

&lt;p&gt;There are four presets, and they do what the names suggest:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;/screen&lt;/code&gt; is 72 DPI. Smallest possible file. Fine for on-screen viewing, rough for print.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;/ebook&lt;/code&gt; is 150 DPI. This is the sweet spot for most things. Email attachments, shared docs, anything that doesn't get printed.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;/printer&lt;/code&gt; is 300 DPI. High quality. Use this when someone is actually going to print the document.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;/prepress&lt;/code&gt; is 300 DPI with color preservation. Archival grade. You probably don't need this unless you're in publishing.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The original file is never overwritten. You always get a new file with whatever name you specify in the output flag. If you mess up the preset choice, just run it again with a different one.&lt;/p&gt;
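&lt;p&gt;And if you want the "shrink all the PDFs in this folder" behavior without an agent, the same command wraps into a short loop. A sketch; the &lt;code&gt;-compressed&lt;/code&gt; output naming and the &lt;code&gt;gswin64c&lt;/code&gt; fallback are my choices, not Ghostscript defaults:&lt;/p&gt;

```shell
# Compress every PDF in the current directory to a "-compressed" copy,
# leaving the originals untouched.
gs_bin=$(command -v gs || command -v gswin64c)
if [ -z "$gs_bin" ]; then
  echo "ghostscript not found; install it first"
else
  for f in *.pdf; do
    [ -e "$f" ] || continue
    case "$f" in *-compressed.pdf) continue ;; esac   # skip previous output
    "$gs_bin" -sDEVICE=pdfwrite -dCompatibilityLevel=1.4 \
      -dPDFSETTINGS=/ebook -dNOPAUSE -dBATCH -dQUIET \
      -sOutputFile="${f%.pdf}-compressed.pdf" "$f"
  done
fi
```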

&lt;h2&gt;
  
  
  You don't need to remember any of this
&lt;/h2&gt;

&lt;p&gt;This is the part that matters more than the command itself.&lt;/p&gt;

&lt;p&gt;If you're using a coding agent, whether that's Claude Code, Codex, OpenCode, or anything else that runs shell commands, you don't need to memorize flags. You don't need to bookmark this page. You just talk to the agent.&lt;/p&gt;

&lt;p&gt;"Compress report.pdf for email."&lt;/p&gt;

&lt;p&gt;"Make this PDF smaller, keep it readable."&lt;/p&gt;

&lt;p&gt;"Shrink all the PDFs in this folder, printer quality."&lt;/p&gt;

&lt;p&gt;The agent knows what Ghostscript is. It knows the presets. It picks the right one based on what you said. If Ghostscript isn't installed yet, it can tell you how to install it (or just install it for you, depending on your setup).&lt;/p&gt;

&lt;p&gt;The point is that you stop being the person who translates intent into commands. The agent handles that translation. You stay at the level of what you actually want.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why this is better than the website
&lt;/h2&gt;

&lt;p&gt;It's not just faster, though it is faster. The real difference is where your file goes.&lt;/p&gt;

&lt;p&gt;When you use ilovepdf.com or smallpdf.com or any of the dozen sites that rank for "free pdf compressor," your document travels to their servers. The compression happens on their infrastructure. They claim GDPR compliance and encrypted transfers and automatic deletion. Fine. I'm sure most of them are legitimate operations run by reasonable people.&lt;/p&gt;

&lt;p&gt;But think about what people actually upload to these sites. Tax documents. Contracts with client names and dollar amounts. Medical records. NDAs. Insurance paperwork. The kind of stuff you'd never email to a stranger, but you'll casually upload to a website you found thirty seconds ago because it had a friendly blue interface.&lt;/p&gt;

&lt;p&gt;With a coding agent running Ghostscript, the file never leaves your machine. The compression happens in a local process. There is no upload, no server, no two-hour deletion policy to trust. Your document stays on your disk, where it was before, just smaller.&lt;/p&gt;

&lt;p&gt;That is not a convenience improvement. It is a privacy improvement.&lt;/p&gt;

&lt;h2&gt;
  
  
  Installing Ghostscript
&lt;/h2&gt;

&lt;p&gt;Ghostscript is free, open source, and has been around for decades. It runs on everything. Installation takes one command:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;macOS:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;brew &lt;span class="nb"&gt;install &lt;/span&gt;ghostscript
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Linux (Debian/Ubuntu):&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;apt &lt;span class="nb"&gt;install &lt;/span&gt;ghostscript
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Windows:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;winget &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="nt"&gt;--id&lt;/span&gt; ArtifexSoftware.GhostScript
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you already have a coding agent set up, ask it to install Ghostscript for you. It can handle that too.&lt;/p&gt;
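&lt;p&gt;Either way, a quick version check confirms it landed on your PATH (this tries both the Unix and Windows binary names):&lt;/p&gt;

```shell
# Prints the Ghostscript version on success, or a note if it's missing.
gs_version=$( { gs --version || gswin64c --version; } 2>/dev/null || echo "not installed" )
echo "ghostscript: $gs_version"
```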

&lt;h2&gt;
  
  
  What I actually built
&lt;/h2&gt;

&lt;p&gt;I kept running into the same pattern. Not just with PDFs, but with image conversion, video cropping, file cleanup, batch renaming. These are all tasks where I'd either open a heavy application or go to some random website, when the reality is that they're all just shell commands underneath.&lt;/p&gt;

&lt;p&gt;So I built a set of skills for coding agents. Six of them: PDF operations, image conversion, video processing, audio normalization, file operations, and automation workflows. They're not code. They're structured instructions that tell the agent how to handle these tasks with good defaults, safety checks, and the right flags. The kind of thing where you say "compress this" and it just works, no googling, no flag hunting, no uploading to strangers.&lt;/p&gt;

&lt;p&gt;I packaged them into a utility skills pack on my Patreon alongside the ComfyUI workflows I sell there. Or you can just use the Ghostscript command from this post. It works fine on its own. The skill just saves you from having to remember the flags every time.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>productivity</category>
      <category>pdf</category>
      <category>privacy</category>
    </item>
    <item>
      <title>My AI Employee Org Chart (With Real Costs)</title>
      <dc:creator>Igor Gridel</dc:creator>
      <pubDate>Fri, 10 Apr 2026 10:08:34 +0000</pubDate>
      <link>https://dev.to/igorgridel/my-ai-employee-org-chart-with-real-costs-2mak</link>
      <guid>https://dev.to/igorgridel/my-ai-employee-org-chart-with-real-costs-2mak</guid>
      <description>&lt;h1&gt;
  
  
  My AI Employee Org Chart (With Real Costs)
&lt;/h1&gt;

&lt;p&gt;My monthly payroll is $180. That covers strategy, daily operations, image generation, video prototyping, content publishing across six platforms, cloud storage, and three automated agents that check my business every few hours. No employees. No contractors.&lt;/p&gt;

&lt;p&gt;Everyone posts their AI stack. Nobody posts the failure modes.&lt;/p&gt;

&lt;p&gt;So here's my actual org chart. Every tool, the real cost, and where it breaks.&lt;/p&gt;

&lt;h2&gt;
  
  
  The brain: Claude Code + Obsidian ($100/month)
&lt;/h2&gt;

&lt;p&gt;Claude is the core of everything I do. Not in a motivational sense. In a "my entire business runs through it" sense.&lt;/p&gt;

&lt;p&gt;I use Claude Code connected to my Obsidian vault, which holds everything: drafts, voice rules, posting logs, audience research, offer details, decision history. I've built twelve custom skills on top of this. One turns a raw idea into post options. Another processes voice transcripts and routes them to the right place. Another runs my weekly review.&lt;/p&gt;

&lt;p&gt;Three automated agents run on timers. One checks my desk every four hours for stale drafts and missed tasks. One gives me a morning briefing. One runs a deep strategy review every Saturday. None of them post anything automatically. They suggest, I decide.&lt;/p&gt;

&lt;p&gt;Where it breaks: honestly, not much right now. Anthropic fixed a usage issue last week and I almost never hit the limit on the $100 subscription anymore. The real risk isn't cost. It's dependency. My entire workflow lives inside one company's product.&lt;/p&gt;

&lt;h2&gt;
  
  
  The daily workhorse: Clawdbot + MiniMax ($10/month)
&lt;/h2&gt;

&lt;p&gt;For everyday task work, Claude is overkill. Clawdbot running on MiniMax M2.7 handles about 97% of routine tasks at $0.30 per million input tokens. At $10 a month, it's cheap enough that I never think about cost when I use it.&lt;/p&gt;

&lt;p&gt;Where it breaks: MiniMax is simple. It's not a reasoning model and it doesn't pretend to be. When the task gets too complex or the context fills up, it just needs a new chat with fresh context. That's it. For $10 a month, I don't expect it to think hard. I expect it to execute reliably on straightforward tasks, and it does.&lt;/p&gt;

&lt;h2&gt;
  
  
  The creative department: Freepik Premium+ ($20/month)
&lt;/h2&gt;

&lt;p&gt;This is where the math gets interesting. Freepik Premium+ gives me NanoBanana Pro, which is Google's Gemini 3 Pro Image under a different name, and Kling video models. About $20 a month for both.&lt;/p&gt;

&lt;p&gt;In my first month testing it, I generated over 2,000 images and 100 videos. Cost per image: roughly one cent. For comparison, the same volume through the NanoBanana API directly would cost about $270 at their standard rate of $0.134 per image.&lt;/p&gt;

&lt;p&gt;For video, the workflow is even better. Kling 2.5 runs unlimited at 720p on Freepik. I prototype everything there first. When a test costs nothing, you try compositions you'd never risk at full price. It works well for camera movement and animation elements. What it can't do is voice or realistic human expressions, so forget about generating scenes where people actually talk or show emotion. When a composition works, I run one final generation on Kling 3.0.&lt;/p&gt;

&lt;p&gt;Where it breaks: after about 1,000 fast generations in a month, the queue slows down noticeably. 4K isn't included in the unlimited tier. And out of the 39 image models on the platform, NanoBanana Pro is the reason to subscribe. Most of the others are filler.&lt;/p&gt;

&lt;h2&gt;
  
  
  Production: ComfyUI ($0 + your own hardware)
&lt;/h2&gt;

&lt;p&gt;ComfyUI is free and open source. I build custom image generation workflows in it for client work and my own content production.&lt;/p&gt;

&lt;p&gt;I built an MCP server with skills and a self-improving system for ComfyUI, so Claude can trigger and refine workflows through natural language instead of me opening a browser window. More on that in a future post.&lt;/p&gt;

&lt;p&gt;Where it breaks: ComfyUI has a weekend learning curve before it saves you anything. It's a node-based system, not a prompt box. Nodes get deprecated, dependencies break when models update, and every few weeks something that was working just stops.&lt;/p&gt;

&lt;p&gt;Before I built the skills and the self-improving system, researching new workflows meant asking ChatGPT and then re-verifying most of what it told me because it would hallucinate with complete confidence. Claude does the same thing sometimes, but less often. The MCP server and the feedback loop made the real difference. I went from manually checking everything to mostly trusting what the system suggests. That's what actually made ComfyUI usable as a daily production tool, not the node editor itself.&lt;/p&gt;

&lt;h2&gt;
  
  
  The second opinion: ChatGPT ($20/month)
&lt;/h2&gt;

&lt;p&gt;I pay for a ChatGPT subscription when I need a different point of view. Claude is my primary, but sometimes you want another model to push back on your thinking or catch something Claude missed.&lt;/p&gt;

&lt;h2&gt;
  
  
  The rest of the bench
&lt;/h2&gt;

&lt;p&gt;Perplexity Pro: free for a year. I found a PayPal marketing campaign last year that gave it away. Barely use it now, but it's there when I need quick research without leaving the browser.&lt;/p&gt;

&lt;p&gt;pCloud: $30 a month for cloud storage. Not glamorous, but everything I produce needs to live somewhere that isn't my local drive.&lt;/p&gt;

&lt;h2&gt;
  
  
  Distribution: Post For Me + Supabase + Vercel ($0)
&lt;/h2&gt;

&lt;p&gt;My website runs on Supabase and Vercel, both on free tiers. Post For Me handles publishing across X, Threads, Instagram, LinkedIn, TikTok, and YouTube.&lt;/p&gt;

&lt;p&gt;I built a 650-line integration skill so that publishing a post is one command instead of logging into six platforms separately.&lt;/p&gt;

&lt;p&gt;Where it breaks: Post For Me sometimes flags authentication tokens as expired when they still work. The fix is to ignore the warning and post anyway. If it goes through, the token was fine. I've spent more time debugging phantom auth errors from this one tool than from everything else combined.&lt;/p&gt;
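
&lt;p&gt;The workaround fits in a tiny guard. This is a sketch with placeholder helper functions, not the real Post For Me API: &lt;code&gt;check_token&lt;/code&gt; and &lt;code&gt;create_post&lt;/code&gt; stand in for whatever client calls you actually use. The point is the logic: trust the outcome of the publish attempt, never the token-status check.&lt;/p&gt;

```shell
# Sketch of the workaround. check_token and create_post are placeholder
# functions, not real Post For Me endpoints. The rule: the token-status
# check may claim "expired" even when the token still works, so only the
# result of an actual publish attempt is worth acting on.
publish_anyway() {
  token_state=$(check_token)   # informational only; often a false alarm
  if create_post "$1"; then
    echo "posted ok (token status said: $token_state)"
  else
    echo "publish failed: now re-authenticating is actually worth it"
  fi
}
```

&lt;p&gt;In other words, the "expired" warning is a log line, not a gate.&lt;/p&gt;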

&lt;h2&gt;
  
  
  The full payroll
&lt;/h2&gt;

&lt;p&gt;If I draw this like a company org chart:&lt;/p&gt;

&lt;p&gt;CEO and final decision-maker: me&lt;br&gt;
Head of Strategy: Claude Code + Obsidian ($100/mo)&lt;br&gt;
Operations: 3 scheduled agents (included in Claude)&lt;br&gt;
Second Opinion: ChatGPT ($20/mo)&lt;br&gt;
Daily Execution: Clawdbot + MiniMax ($10/mo)&lt;br&gt;
Creative: Freepik Premium+ ($20/mo)&lt;br&gt;
Production: ComfyUI (free)&lt;br&gt;
Storage: pCloud ($30/mo)&lt;br&gt;
Distribution: Post For Me (free)&lt;br&gt;
Infrastructure: Supabase + Vercel (free)&lt;br&gt;
Research: Perplexity Pro (free, PayPal promo)&lt;/p&gt;

&lt;p&gt;Total monthly cost: about $180.&lt;/p&gt;

&lt;h2&gt;
  
  
  What the org chart doesn't show
&lt;/h2&gt;

&lt;p&gt;Every one of these tools needs supervision. Claude needs well-written skill prompts or it produces generic garbage. Freepik still needs me inside the interface, doing the work manually. ComfyUI needs me to fix broken nodes every couple of weeks. Post For Me needs me to know when to ignore its own error messages.&lt;/p&gt;

&lt;p&gt;The pitch for building with AI is that you replace headcount with software. The reality is that you trade salary for supervision. The tools handle the labor. You handle the judgment. And the judgment part doesn't get cheaper the more tools you add.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>productivity</category>
      <category>tools</category>
      <category>automation</category>
    </item>
    <item>
      <title>Why Your Content Looks Fine and Gets Ignored</title>
      <dc:creator>Igor Gridel</dc:creator>
      <pubDate>Thu, 09 Apr 2026 05:23:16 +0000</pubDate>
      <link>https://dev.to/igorgridel/why-your-content-looks-fine-and-gets-ignored-5ddd</link>
      <guid>https://dev.to/igorgridel/why-your-content-looks-fine-and-gets-ignored-5ddd</guid>
      <description>&lt;p&gt;I posted the launch of my Patreon on March 31. The exact text:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;My first Patreon post is live and free. It's perfect for real estate agents who want to create listing videos without spending a fortune. Please share it with anyone who might be interested. Thank you!&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Thirty impressions, two likes from people who already follow me, zero replies. That was the entire result.&lt;/p&gt;

&lt;p&gt;The next day I posted about the same Patreon, but framed differently:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;I launched my Patreon yesterday and got my first paid subscriber, and it's not a friend. It's actually someone who asked me for consulting with ComfyUI. Instead of paying $50 per hour for consulting, he's paying $50 per month worth of consulting. (Crazy value for him, and a recurring revenue for me.)&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Fifty-three impressions, four likes, and one reply from someone I had never spoken to. The reply was the first sign that anything I had written that week had touched someone outside my existing circle. By every measure that mattered to me, it was the best post of the week.&lt;/p&gt;

&lt;p&gt;Same product. Same week. Same audience. One was the worst thing I posted all month. The other was the best. I needed to know why, because if I did not figure it out, I was going to keep writing posts that "looked fine" and getting nothing back for them.&lt;/p&gt;

&lt;h2&gt;
  
  
  Fine is the wrong target
&lt;/h2&gt;

&lt;p&gt;Most writing advice treats engagement as a craft problem. Tighter hooks, stronger verbs, cleaner structure, better CTAs. I had been doing all of that for months. The real estate post is, by every craft standard, a fine post. Subject and predicate, clear value proposition, polite ask. Nothing wrong with it.&lt;/p&gt;

&lt;p&gt;That was the problem. Nothing wrong with it. Nothing right with it either.&lt;/p&gt;

&lt;p&gt;The subscriber post breaks half the rules a writing guide would tell you to follow. It is a long run-on. It opens with a chronology instead of a hook. It buries the most quotable line inside a parenthetical at the end. And it works, because the parts that look like flaws are the parts doing the actual work.&lt;/p&gt;

&lt;h2&gt;
  
  
  What the brain is actually responding to
&lt;/h2&gt;

&lt;p&gt;Reading is not silent. You subvocalize as you read, hearing each sentence in your own voice, and your brain processes the rhythm using the same systems it uses for spoken prosody and music, even when nobody is making sound.&lt;/p&gt;

&lt;p&gt;When a writer establishes a pattern and breaks it at the moment of impact, the reader's brain notices below the level of conscious thought. Long flowing sentence followed by a short one. A specific number after a general claim. A throwaway aside after a confident statement. The break is what neuroscientists call a prediction error, and prediction errors fire dopamine. That is the literal mechanism behind a sentence that gives you a small chill.&lt;/p&gt;

&lt;p&gt;The real estate post never breaks any pattern, because it never establishes one. Every sentence is roughly the same length, the same register, the same level of abstraction. There is nothing for the reader's brain to entrain to and nothing to register a break against. No prediction error, no dopamine, no felt response.&lt;/p&gt;

&lt;p&gt;That is what "fine" means neurologically. Nothing happened in the reader's body.&lt;/p&gt;

&lt;h2&gt;
  
  
  The other thing missing: actual specifics
&lt;/h2&gt;

&lt;p&gt;"My first Patreon post is live and free" is true. It is also identical to a thousand other launch tweets. The reader's brain processes it once, in the verbal system, and forgets it.&lt;/p&gt;

&lt;p&gt;"He's paying $50 per month worth of consulting" activates the verbal system AND the mathematical system AND the comparison system, because the line before it said "$50 per hour." The reader does the math without realizing it. Two numbers, same dollar amount, completely different cost structure. That is a tiny prediction error happening inside a single sentence.&lt;/p&gt;

&lt;p&gt;This is what specificity actually does. It is not decoration. It recruits more brain regions per word, which means deeper processing, stronger memory traces, and a real emotional response. The dollar amounts in that post are not "good copywriting." They are five extra cognitive systems firing in the background.&lt;/p&gt;

&lt;p&gt;The fix for most fine posts is not to write more. It is to replace one abstraction with one detail only you would know. The specific number. The specific person who asked you. The specific sentence the subscriber sent you when they signed up. Half a sentence is enough. That small detail is how the reader's brain knows you were actually there.&lt;/p&gt;

&lt;h2&gt;
  
  
  The "I" that costs something
&lt;/h2&gt;

&lt;p&gt;James Pennebaker spent his career studying the linguistic fingerprint of honesty. The clearest finding from his lab: truth-tellers use more first-person singular pronouns. Liars avoid self-reference because they do not feel ownership of what they are saying. When his team trained a text classifier on real lying versus truth-telling, first-person usage was one of the strongest signals, with classification accuracy in the mid-60s on that feature alone.&lt;/p&gt;

&lt;p&gt;The real estate post has one "my" in it. After that, every sentence is about the product or the audience. "It's perfect for real estate agents." "Please share it with anyone." The voice has stepped out of the room.&lt;/p&gt;

&lt;p&gt;The subscriber post is the opposite. "I launched," "I got," "he's paying." Heavy first-person from start to finish, which Pennebaker's data reads as honest. Nothing is performed. The writer is in the room.&lt;/p&gt;

&lt;p&gt;That is not a stylistic choice. It is the difference between content the brain processes and content the brain trusts.&lt;/p&gt;

&lt;h2&gt;
  
  
  What a share actually says about the sharer
&lt;/h2&gt;

&lt;p&gt;In 2011 the New York Times Customer Insight Group ran a study with Latitude Research on more than 2,500 people, looking at why people share content online. The headline finding: 68% share to give people a better sense of who they are and what they care about. Sharing is not endorsement. It is identity signaling.&lt;/p&gt;

&lt;p&gt;This is the test most fine content quietly fails. Ask: if someone shared this post, what would it say about them?&lt;/p&gt;

&lt;p&gt;If the answer is "nothing specific," the share rate is going to be low. People share things that articulate something they vaguely believed but had never quite said out loud. Sharing makes them look thoughtful, informed, ahead of the curve. The real estate post says nothing about whoever shares it. The subscriber post, with its $50/hr to $50/mo flip, says "I think clearly about pricing and value." Different signal. Different incentive to share.&lt;/p&gt;

&lt;h2&gt;
  
  
  The check I built after that week
&lt;/h2&gt;

&lt;p&gt;The subscriber post and the real estate post were not the only data I had. I had nine posts from the first week of writing seriously, March 31 through April 5, and the gap between the best and worst was wider than anything craft alone could explain. I had also just spent two days reading research on the neuroscience of aesthetic chills, the linguistic fingerprint of honesty, and the way readers physically respond to rhythm. The patterns in the research and the patterns in my own analytics matched almost exactly.&lt;/p&gt;

&lt;p&gt;So I built a five-question check, and I run it on every post before it goes out. If it scores below three out of five, the post goes back to polish.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Does it have one sensory or specific detail only I would know?&lt;/li&gt;
&lt;li&gt;Does the sentence rhythm actually change when I read it out loud?&lt;/li&gt;
&lt;li&gt;Is there a moment where the reader's expectation gets inverted?&lt;/li&gt;
&lt;li&gt;Does it use "I" and show ownership of what is being said?&lt;/li&gt;
&lt;li&gt;If someone shared this, would it say something specific about them?&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The real estate post scores one out of five. The subscriber post scores four out of five. That is not a craft gap. Those are two different orders of writing.&lt;/p&gt;

&lt;h2&gt;
  
  
  The uncomfortable part
&lt;/h2&gt;

&lt;p&gt;Writing that actually moves people is more expensive than writing that is fine.&lt;/p&gt;

&lt;p&gt;It requires putting a real number in instead of a vague claim. It requires writing "I" when you could safely say "founders." It requires admitting the subscriber is the first one, not the hundredth. It requires letting a sentence end without adding a safety summary after it. It requires trusting the reader enough to leave a gap.&lt;/p&gt;

&lt;p&gt;Fine content protects the writer. Specific, owned content exposes them. That is not a craft gap. It is a vulnerability gap.&lt;/p&gt;

&lt;p&gt;If your content looks fine and gets ignored, the work is probably not harder editing. It is one more honest sentence at the start, one fewer sanitized sentence at the end, and at least one detail you would normally cut because it feels too small to matter. That detail is what tells the reader you were actually there.&lt;/p&gt;

</description>
      <category>writing</category>
      <category>marketing</category>
      <category>psychology</category>
      <category>content</category>
    </item>
    <item>
      <title>Why "Optimize Your Images" Is Bad Advice</title>
      <dc:creator>Igor Gridel</dc:creator>
      <pubDate>Mon, 06 Apr 2026 15:27:28 +0000</pubDate>
      <link>https://dev.to/igorgridel/why-optimize-your-images-is-bad-advice-5oj</link>
      <guid>https://dev.to/igorgridel/why-optimize-your-images-is-bad-advice-5oj</guid>
      <description>&lt;p&gt;I think a lot of founders hear "optimize your images" and still do not know what that means in practice.&lt;/p&gt;

&lt;p&gt;Usually it is some mix of wrong format, wrong dimensions, duplicate exports, hero image too heavy. Not bad code, just bad file decisions that nobody thought about at launch.&lt;/p&gt;

&lt;p&gt;I kept hitting the same cleanup on different projects. The steps were always the same: find the bloated images, figure out which ones are safe to convert, which ones need manual review, and which ones you should not touch at all.&lt;/p&gt;

&lt;p&gt;So I turned it into a Claude Code skill.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Image Audit does
&lt;/h2&gt;

&lt;p&gt;The skill runs two audits.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Repo audit.&lt;/strong&gt; It walks through your codebase looking for PNG, JPG, and other image files. For each one, it checks file size, format, and how it is referenced in your code. It flags anything over 500KB and identifies images that could safely be converted to WebP.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Supabase audit.&lt;/strong&gt; If you have a Supabase storage bucket, it scans that too. Same checks: format, size, duplicates, and whether the image is actually referenced anywhere.&lt;/p&gt;

&lt;p&gt;After both audits, you get a migration plan. Not a list of commands to run blindly. A plan that separates:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Safe WebP wins (images that can be converted without risk)&lt;/li&gt;
&lt;li&gt;Manual review items (screenshots, logos, transparent assets, UI elements)&lt;/li&gt;
&lt;li&gt;Leave-alone items (already optimized, or too risky to touch)&lt;/li&gt;
&lt;/ul&gt;
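
&lt;p&gt;The repo-audit pass can be sketched in a few lines of shell. This is a rough approximation of the idea, not the skill's actual code: flag raster images over 500KB and note whether each one is referenced anywhere, since referenced images need their imports updated on conversion while orphans may just be dead weight.&lt;/p&gt;

```shell
# Rough sketch of the repo-audit idea (not the skill's actual code):
# flag raster images over 500KB and check whether each is referenced.
audit_images() {
  root="${1:-.}"
  find "$root" -type f \( -iname '*.png' -o -iname '*.jpg' -o -iname '*.jpeg' \) -size +500k |
  while IFS= read -r img; do
    size_kb=$(du -k "$img" | cut -f1)
    name=$(basename "$img")
    # Referenced images go to manual review; unreferenced ones may be orphans.
    if grep -rqF --include='*.html' --include='*.css' --include='*.js' \
        --include='*.ts' --include='*.tsx' --exclude-dir=.git -- "$name" "$root"; then
      echo "REVIEW ${size_kb}KB $img (referenced in code)"
    else
      echo "ORPHAN ${size_kb}KB $img (no references found)"
    fi
  done
}
```

&lt;p&gt;The real skill does much more (format checks, Supabase buckets, reference-chain updates), but the audit-before-touch shape is the same.&lt;/p&gt;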

&lt;h2&gt;
  
  
  Why audit-first matters
&lt;/h2&gt;

&lt;p&gt;The obvious approach is to just convert everything to WebP and call it done. That breaks things.&lt;/p&gt;

&lt;p&gt;Screenshots with text get blurry. Logos with transparency lose quality. UI assets with precise pixel borders get artifacts. Hero images that were carefully exported at specific dimensions get resized wrong.&lt;/p&gt;

&lt;p&gt;I intentionally made this conservative. I would rather have a tool people trust than an "optimizer" that silently damages assets.&lt;/p&gt;

&lt;p&gt;The skill checks references before changing anything. If an image is imported in your code, it updates the references too. If it cannot verify the reference chain, it puts the image in manual review instead of guessing.&lt;/p&gt;

&lt;h2&gt;
  
  
  Install
&lt;/h2&gt;

&lt;p&gt;Ask your coding agent to install it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npx image-audit-skill &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;--claude&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Or grab it from the repo: &lt;a href="https://github.com/Saamael/image-audit-skill" rel="noopener noreferrer"&gt;github.com/Saamael/image-audit-skill&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  How it works
&lt;/h2&gt;

&lt;p&gt;The current version supports Supabase storage out of the box, but the concept is flexible. It can do the same for other platforms. The idea is simple: find the images that are costing you performance, convert them to an optimized format, and leave the risky ones alone.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>webdev</category>
      <category>productivity</category>
      <category>opensource</category>
    </item>
    <item>
      <title>The 80/80 Paradox: Why Nearly Everyone Has AI Tools and Nearly No One Has AI Results</title>
      <dc:creator>Igor Gridel</dc:creator>
      <pubDate>Sun, 05 Apr 2026 14:51:45 +0000</pubDate>
      <link>https://dev.to/igorgridel/the-8080-paradox-why-nearly-everyone-has-ai-tools-and-nearly-no-one-has-ai-results-53ci</link>
      <guid>https://dev.to/igorgridel/the-8080-paradox-why-nearly-everyone-has-ai-tools-and-nearly-no-one-has-ai-results-53ci</guid>
      <description>&lt;p&gt;Roughly 80% of companies now report using generative AI in some capacity. That is not a prediction. That is the current state according to McKinsey's latest data.&lt;/p&gt;

&lt;p&gt;Here is the number that should bother you more.&lt;/p&gt;

&lt;p&gt;Nearly as many of those companies report no significant bottom-line impact from that usage.&lt;/p&gt;

&lt;p&gt;The tools are everywhere. The results are not.&lt;/p&gt;

&lt;p&gt;This is the 80/80 paradox. If you have been feeling like your AI stack is more overhead than leverage, you are not imagining things. You are experiencing the documented pattern.&lt;/p&gt;

&lt;p&gt;The problem is not that the tools are bad. The problem is that adoption is not integration.&lt;/p&gt;

&lt;h2&gt;
  
  
  The adoption-integration gap
&lt;/h2&gt;

&lt;p&gt;Let me describe what AI adoption actually looks like for most founders and operators.&lt;/p&gt;

&lt;p&gt;You have ChatGPT open in a tab. Claude in another. You tried Perplexity for research. You have a Notion AI subscription you forget to use. You signed up for three different writing tools during their launch weeks. You have a folder of bookmarked agent demos you meant to explore.&lt;/p&gt;

&lt;p&gt;This is adoption. This is not integration.&lt;/p&gt;

&lt;p&gt;Integration means the AI is woven into how work actually gets done. The tool has a clear job, a clear trigger, and a clear output that connects to the next step. You do not have to remember to use it because it is part of the workflow, not a side quest.&lt;/p&gt;

&lt;p&gt;McKinsey found that roughly 90% of vertical AI use cases are stuck in pilot mode. They work in demos. They impress in presentations. They never make it to production.&lt;/p&gt;

&lt;p&gt;Asana has a name for this: pilot purgatory. The companies stuck there are what they call nonscalers. They bolt AI onto broken workflows and wonder why nothing changes.&lt;/p&gt;

&lt;p&gt;The scalers do something different. They redesign work around AI instead of adding AI to existing work.&lt;/p&gt;

&lt;p&gt;This is the gap. Not access. Not capability. Workflow redesign.&lt;/p&gt;

&lt;h2&gt;
  
  
  Tool sprawl is the new technical debt
&lt;/h2&gt;

&lt;p&gt;Google Cloud has started using a term I find clarifying: AI Sprawl.&lt;/p&gt;

&lt;p&gt;It describes what happens when organizations accumulate AI tools without governance, without integration standards, and without clear ownership. The result is fragmentation, redundancy, and friction.&lt;/p&gt;

&lt;p&gt;This is not just an enterprise problem. It is a founder problem. It is an operator problem. It is a "why do I have eleven AI subscriptions and still feel like I am not getting leverage" problem.&lt;/p&gt;

&lt;p&gt;Microsoft's 2025 Work Trend Index found that employees are interrupted 275 times a day. That is not a typo.&lt;/p&gt;

&lt;p&gt;People do not need more AI tabs. They need fewer handoffs. They need tools that reduce context switching, not tools that add another place to check.&lt;/p&gt;

&lt;p&gt;More AI tabs do not equal more leverage. Often they equal more friction dressed up as productivity.&lt;/p&gt;

&lt;p&gt;There is a difference between stack envy and stack fit. Stack envy is wanting the tools you see other people using. Stack fit is having the tools that actually work for how you work.&lt;/p&gt;

&lt;p&gt;The best AI stack is not the biggest stack. It is the stack that survives contact with real work.&lt;/p&gt;

&lt;h2&gt;
  
  
  What the serious operators are doing differently
&lt;/h2&gt;

&lt;p&gt;Anthropic published guidance on building effective agents. The core recommendation is almost comically simple: do the simplest thing that works.&lt;/p&gt;

&lt;p&gt;Start with simple, composable patterns. Add complexity only when you have evidence that the simpler approach is failing. Do not build multi-agent orchestration systems because they sound impressive. Build them because you have a genuine coordination problem that simpler approaches cannot solve.&lt;/p&gt;

&lt;p&gt;This is the simple first discipline. It is the opposite of the "let me show you my agent swarm" energy that dominates AI Twitter.&lt;/p&gt;

&lt;p&gt;The serious operators are also paying attention to interoperability. Anthropic's Model Context Protocol is now supported by Google and OpenAI. This is not a minor technical detail. It means the question is shifting from "which tools do I use" to "how do my tools connect."&lt;/p&gt;

&lt;p&gt;The stack is becoming less about fixed apps and more about a connected layer of context, tools, and actions. If your current setup cannot talk to itself, you are building on sand.&lt;/p&gt;

&lt;p&gt;And once a workflow matters, observability matters more than capability. OpenAI and McKinsey both emphasize tracing, evaluations, and compliance controls for scalable agent systems. The production question is not "can it do this?" It is "can I trust, debug, and maintain it?"&lt;/p&gt;

&lt;p&gt;If you cannot see what your AI is doing, you cannot improve it. If you cannot improve it, you do not have a system. You have a demo.&lt;/p&gt;

&lt;h2&gt;
  
  
  The permission structure
&lt;/h2&gt;

&lt;p&gt;Here is what I want to give you: permission.&lt;/p&gt;

&lt;p&gt;Permission to stop accumulating. You do not need to try every new AI tool. You do not need to have an opinion on every launch. You do not need to feel behind because someone on Twitter is using something you have not heard of.&lt;/p&gt;

&lt;p&gt;Permission to audit ruthlessly. Look at your subscriptions. Look at your tabs. Ask: what actually survives contact with real work? What do I reach for without thinking? What have I not opened in three weeks?&lt;/p&gt;

&lt;p&gt;Permission to let go. Some tools were right for the exploration phase. They are not right for the integration phase. Letting them go is not failure. It is maturity.&lt;/p&gt;

&lt;p&gt;Deloitte put it clearly: most organizations move at the speed of organizational change, not the speed of technology.&lt;/p&gt;

&lt;p&gt;The bottleneck is not the tools. The bottleneck is your capacity to actually integrate them into how work gets done.&lt;/p&gt;

&lt;h2&gt;
  
  
  The stack that survives contact with real work
&lt;/h2&gt;

&lt;p&gt;The 80/80 paradox is not a mystery. It is a documented account of what happens when adoption outpaces integration.&lt;/p&gt;

&lt;p&gt;The fix is not more tools. The fix is fewer tools with clearer jobs, connected to real workflows, with enough observability that you can trust and improve them.&lt;/p&gt;

&lt;p&gt;Start simple. Add complexity only when justified. Audit ruthlessly. Let go of what does not survive real use.&lt;/p&gt;

&lt;p&gt;The best AI stack is not the one that looks impressive in a screenshot. It is the one that actually produces results.&lt;/p&gt;

&lt;p&gt;If this framing is useful, I write about AI workflows, product building, and operator systems at &lt;a href="https://igorgridel.com" rel="noopener noreferrer"&gt;igorgridel.com&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>productivity</category>
      <category>workflow</category>
      <category>machinelearning</category>
    </item>
    <item>
      <title>Consistency Is a Multiplier, Not a Strategy</title>
      <dc:creator>Igor Gridel</dc:creator>
      <pubDate>Sat, 04 Apr 2026 09:53:16 +0000</pubDate>
      <link>https://dev.to/igorgridel/consistency-is-a-multiplier-not-a-strategy-6bp</link>
      <guid>https://dev.to/igorgridel/consistency-is-a-multiplier-not-a-strategy-6bp</guid>
      <description>&lt;p&gt;I posted consistently for two years. It felt productive. It produced almost nothing.&lt;/p&gt;

&lt;p&gt;I showed up. I published. I followed the advice that saturates every founder content thread: just be consistent. Post daily. The algorithm rewards activity. Build the habit and the results will follow.&lt;/p&gt;

&lt;p&gt;The results did not follow.&lt;/p&gt;

&lt;p&gt;I got impressions. I got some engagement. I did not get the thing I actually needed: people who understood what I do, trusted my judgment, and wanted to work with me or support what I was building.&lt;/p&gt;

&lt;p&gt;It took me too long to understand why.&lt;/p&gt;

&lt;p&gt;Consistency is a multiplier. The problem is, it multiplies whatever is already there. If your positioning is clear, consistency compounds recognition. If your positioning is vague, consistency compounds noise.&lt;/p&gt;

&lt;p&gt;I was compounding noise.&lt;/p&gt;

&lt;h2&gt;
  
  
  What platforms actually reward now
&lt;/h2&gt;

&lt;p&gt;The old model was simple. Post more, get more reach. The algorithm rewarded activity. Fill the calendar, win the game.&lt;/p&gt;

&lt;p&gt;That model is dead.&lt;/p&gt;

&lt;p&gt;LinkedIn now uses larger recommender models and LLM support to understand what posts are actually about and how professional interests evolve. The feed is not just counting posts. It is trying to predict what specific people will find relevant.&lt;/p&gt;

&lt;p&gt;YouTube recommendations use satisfaction signals, including survey data, not just raw watch time. The platform is asking: did this actually help the viewer? Not just: did they watch?&lt;/p&gt;

&lt;p&gt;Instagram ranking is individualized across Feed, Stories, Explore, and Reels. The system is trying to predict what specific people care about, not just what is popular.&lt;/p&gt;

&lt;p&gt;Google Search is becoming multimodal, conversational, and AI-assisted. Discovery spans multiple surfaces. The question is not just whether you published, but whether what you published is useful, findable, and trustworthy.&lt;/p&gt;

&lt;p&gt;Volume is no longer the bottleneck. Clarity is.&lt;/p&gt;

&lt;p&gt;When everyone can publish more, the advantage moves to clarity, credibility, and system design.&lt;/p&gt;

&lt;h2&gt;
  
  
  The multiplier problem
&lt;/h2&gt;

&lt;p&gt;Here is the trap I fell into.&lt;/p&gt;

&lt;p&gt;I was posting regularly, but I had not made the hard choices about positioning. I had not decided what category I wanted to own. I had not clarified who I was trying to reach. I had not built a system that converted attention into anything durable.&lt;/p&gt;

&lt;p&gt;I was just showing up.&lt;/p&gt;

&lt;p&gt;Output can camouflage strategic vagueness. Daily activity feels productive. It can substitute for the hard work of deciding what you actually want to be known for.&lt;/p&gt;

&lt;p&gt;If your content does not teach people what category to place you in, frequency only scales confusion.&lt;/p&gt;

&lt;p&gt;The internet does not reward posting. It rewards useful signal.&lt;/p&gt;

&lt;h2&gt;
  
  
  The four-filter diagnostic
&lt;/h2&gt;

&lt;p&gt;I now run every piece of content through four filters before I publish. If it fails any of them, I either fix it or I do not post it.&lt;/p&gt;

&lt;h3&gt;
  
  
  Filter 1: Positioning
&lt;/h3&gt;

&lt;p&gt;Does this content teach people what category to place me in?&lt;/p&gt;

&lt;p&gt;If someone reads this, will they understand what I do and who I help? Or will they just think I am smart and interesting without knowing what to do with that?&lt;/p&gt;

&lt;p&gt;Content without positioning makes it harder for both audiences and algorithms to understand what lane you own. You become a generalist in a world that rewards specialists.&lt;/p&gt;

&lt;h3&gt;
  
  
  Filter 2: Distribution
&lt;/h3&gt;

&lt;p&gt;Does this content reach the right people, not just any people?&lt;/p&gt;

&lt;p&gt;Impressions are not the goal. Reaching people who might actually care is the goal. A thousand views from the wrong audience is worth less than fifty views from the right one.&lt;/p&gt;

&lt;p&gt;Distribution is not just about posting. It is about understanding where your people are, what they are searching for, and how they discover new voices.&lt;/p&gt;

&lt;h3&gt;
  
  
  Filter 3: Authority
&lt;/h3&gt;

&lt;p&gt;Does this content make me more believable, not just more visible?&lt;/p&gt;

&lt;p&gt;Visibility without credibility is noise. The question is not whether people saw you. The question is whether they trust you more after seeing you.&lt;/p&gt;

&lt;p&gt;The 2025 Edelman/LinkedIn B2B Thought Leadership report found that 95% of hidden decision-makers say strong thought leadership makes them more receptive to sales or marketing outreach. The stat is striking, but the logic is simple: people buy from people they trust. Content that builds trust is more valuable than content that just builds reach.&lt;/p&gt;

&lt;p&gt;Founders do not need more content. They need more believable content.&lt;/p&gt;

&lt;h3&gt;
  
  
  Filter 4: Capture
&lt;/h3&gt;

&lt;p&gt;Does this content build an owned relationship, or is it a temporary performance?&lt;/p&gt;

&lt;p&gt;A post that gets attention but builds no owned relationship is a temporary performance, not a strategic asset. The platform owns the reach. You own nothing.&lt;/p&gt;

&lt;p&gt;Email is the owned asset. Subscribers are the durable relationship. Content that does not route toward capture is content that evaporates.&lt;/p&gt;

&lt;p&gt;A founder content system is only as strong as its weakest filter.&lt;/p&gt;

&lt;h2&gt;
  
  
  The real job of founder content
&lt;/h2&gt;

&lt;p&gt;I used to think the job of content was to stay visible. Keep showing up. Stay top of mind. Be present.&lt;/p&gt;

&lt;p&gt;That is not the job.&lt;/p&gt;

&lt;p&gt;The real job of founder content is to reduce uncertainty.&lt;/p&gt;

&lt;p&gt;Reduce uncertainty about what you do. Reduce uncertainty about who you help. Reduce uncertainty about whether you are credible.&lt;/p&gt;

&lt;p&gt;When someone encounters your content, they should leave with less confusion, not more. They should understand your category better. They should trust your judgment more. They should know what to do next if they want to go deeper.&lt;/p&gt;

&lt;p&gt;The post is not the asset. The idea system behind the post is the asset.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I do differently now
&lt;/h2&gt;

&lt;p&gt;I start with positioning clarity before I touch a content calendar. I ask: what do I want to be known for? What category do I want to own? Who am I trying to reach?&lt;/p&gt;

&lt;p&gt;Before I publish anything, I ask: does this piece make my category clearer or fuzzier?&lt;/p&gt;

&lt;p&gt;I build for capture, not just reach. Every piece of content should have a path toward an owned relationship. If it does not, I ask why I am publishing it.&lt;/p&gt;

&lt;p&gt;I treat consistency as the accelerant, not the strategy. I post regularly, but only after the hard choices are made. Consistency compounds the right thing when the right thing is already in place.&lt;/p&gt;

&lt;h2&gt;
  
  
  The reframe
&lt;/h2&gt;

&lt;p&gt;The advice to be consistent is not wrong. It is incomplete.&lt;/p&gt;

&lt;p&gt;Consistency matters. But it matters after the hard choices are made. It matters after you know what you are trying to be known for. It matters after you have a system that converts attention into trust and trust into something durable.&lt;/p&gt;

&lt;p&gt;When everyone can publish more, the advantage moves to clarity, credibility, and system design.&lt;/p&gt;

&lt;p&gt;The real job of founder content is to reduce uncertainty.&lt;/p&gt;

&lt;p&gt;If you are posting consistently and it is not working, the problem is probably not frequency. The problem is probably one of the four filters.&lt;/p&gt;

&lt;p&gt;Fix the filter. Then let consistency do its job.&lt;/p&gt;

&lt;p&gt;If this was useful, you can subscribe to get more writing like this. I write about AI workflows, product building, and operational lessons from building multiple businesses as a solo founder. You can also find more at igorgridel.com.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Originally published on &lt;a href="https://igorgridel.com/blog/consistency-is-a-multiplier-not-a-strategy" rel="noopener noreferrer"&gt;Igor Gridel&lt;/a&gt;. Follow me for more on AI workflows and automation.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>content</category>
      <category>founders</category>
      <category>productivity</category>
      <category>beginners</category>
    </item>
    <item>
      <title>I Automated My Content Pipeline with Claude Code. Here's Everything.</title>
      <dc:creator>Igor Gridel</dc:creator>
      <pubDate>Sat, 04 Apr 2026 09:30:29 +0000</pubDate>
      <link>https://dev.to/igorgridel/i-automated-my-content-pipeline-with-claude-code-heres-everything-1a58</link>
      <guid>https://dev.to/igorgridel/i-automated-my-content-pipeline-with-claude-code-heres-everything-1a58</guid>
      <description>&lt;p&gt;Claude keeps getting confused by Post For Me.&lt;/p&gt;

&lt;p&gt;Every time I ask it to schedule something, it forgets the flow. Mixes up steps. Makes the same mistakes on repeat. The API itself is good, the docs are clear, but Claude cannot hold the whole workflow in its head across conversations.&lt;/p&gt;

&lt;p&gt;This is the part of AI automation nobody warns you about. The AI works. The tool works. The connection between them doesn't exist, and you have to build it yourself.&lt;/p&gt;

&lt;p&gt;So I did.&lt;/p&gt;

&lt;p&gt;I wrote a 659-line skill file that teaches Claude the full Post For Me API. How to list accounts, check for duplicates before posting, handle platform differences between X, Threads, and Instagram, verify that posts actually went through, and deal with errors when they don't.&lt;/p&gt;

&lt;p&gt;Post For Me is a social media posting API by Matt Roth and Caleb Panza. One SDK that publishes to X, Threads, Instagram, TikTok, LinkedIn, YouTube, and more. The tool is genuinely good and inexpensive. The problem was never the API. It was that Claude had no reliable reference for how to use it correctly.&lt;/p&gt;

&lt;p&gt;The skill solved that. Claude reads it before every posting interaction and stopped making mistakes.&lt;/p&gt;
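&lt;p&gt;To make that concrete, here is a small sketch of two of the guard steps the skill encodes: duplicate detection and per-platform caption limits. The limit values are the platforms' public caption limits, and the field names are assumptions for illustration, not Post For Me's actual schema.&lt;/p&gt;

```python
# Sketch of two guard steps from the skill: refuse exact duplicates and
# trim captions to each platform's public limit. Field names ("caption")
# are illustrative, not Post For Me's actual response schema.

LIMITS = {"x": 280, "threads": 500, "instagram": 2200}

def is_duplicate(caption, recent_posts):
    """True if an identical caption was already published."""
    return any(p.get("caption") == caption for p in recent_posts)

def fit_caption(caption, platform):
    """Trim a caption to the platform limit, breaking on a word boundary."""
    limit = LIMITS.get(platform, len(caption))
    overflow = max(len(caption) - limit, 0)  # characters beyond the limit
    if not overflow:
        return caption
    return caption[:limit].rsplit(" ", 1)[0]

def plan_post(caption, platforms, recent_posts):
    """Build per-platform captions, refusing to repost a duplicate."""
    if is_duplicate(caption, recent_posts):
        raise ValueError("duplicate caption, refusing to post again")
    return {p: fit_caption(caption, p) for p in platforms}
```

&lt;p&gt;Writing these rules down once, in a form Claude re-reads before every posting interaction, is the whole trick.&lt;/p&gt;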

&lt;p&gt;But the skill turned out to be just one piece of something bigger.&lt;/p&gt;

&lt;h2&gt;
  The automation stack
&lt;/h2&gt;

&lt;p&gt;I now have three agents running on a schedule through Claude Code. None of them auto-post. They suggest, I decide.&lt;/p&gt;

&lt;p&gt;An &lt;strong&gt;ops sweep&lt;/strong&gt; runs every four hours. It checks my Obsidian inbox for unprocessed items, flags stale drafts, and spots posting gaps. One suggestion, max 100 words. Quick desk check, nothing more.&lt;/p&gt;

&lt;p&gt;A &lt;strong&gt;morning briefing&lt;/strong&gt; runs at 9am. Full daily review: yesterday's posting activity, engagement analytics pulled from Post For Me, inbox status, draft pipeline, platform balance, Patreon status, and one priority for today.&lt;/p&gt;

&lt;p&gt;A &lt;strong&gt;weekly strategist&lt;/strong&gt; runs Saturday mornings. Deep review: best and worst posts with pattern analysis, content gaps, video ideas from top performers, project recaps, three priorities for next week.&lt;/p&gt;

&lt;p&gt;The pattern is the same for all three. Look at the system, surface what matters, suggest one thing. No auto-posting, no autonomous decisions. I spent time thinking about full automation and realized what I actually wanted wasn't "post for me automatically." It was "review everything and tell me what matters right now."&lt;/p&gt;

&lt;p&gt;That turned out to be far more useful.&lt;/p&gt;
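&lt;p&gt;A schedule like this can be wired up with plain cron calling Claude Code in headless mode: &lt;code&gt;claude -p&lt;/code&gt; runs a single prompt non-interactively and exits. The entries below are a sketch, with shortened prompts and an assumed vault path, not my exact setup.&lt;/p&gt;

```shell
# Illustrative crontab entries. Prompts are shortened and the vault path
# is assumed; "claude -p" runs Claude Code headless with one prompt.

# Ops sweep: every four hours
0 */4 * * * claude -p "Run the ops sweep skill" --add-dir /home/igor/vault

# Morning briefing: 9am daily
0 9 * * * claude -p "Run the morning briefing skill" --add-dir /home/igor/vault

# Weekly strategist: Saturday mornings
0 8 * * 6 claude -p "Run the weekly strategist skill" --add-dir /home/igor/vault
```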

&lt;h2&gt;
  How it all connects
&lt;/h2&gt;

&lt;p&gt;Everything runs through an Obsidian vault. Ideas get captured in the inbox from voice notes, text dumps, screenshots, conversations. They get processed and routed. Developed into post options. Polished. Humanized if they still sound flat. Run through a quality check. Then Post For Me handles the publishing.&lt;/p&gt;

&lt;p&gt;Each stage has its own Claude Code skill. One turns raw ideas into post options. One polishes drafts. One fixes robotic-sounding text. The Post For Me skill handles the final step.&lt;/p&gt;

&lt;p&gt;Published posts get tracked. Performance gets reviewed by the agents. Top performers get analyzed and fed back into the system. It loops.&lt;/p&gt;

&lt;p&gt;The Obsidian vault is the operating system. The skills are the tools. The agents are the assistants that keep checking the system without being asked.&lt;/p&gt;
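&lt;p&gt;Reduced to its bones, the flow is a folder-per-stage pipeline: each skill advances a note one stage until publishing takes over. This sketch uses made-up folder names to show the shape; the actual vault layout is more involved.&lt;/p&gt;

```python
# Folder-per-stage sketch of the vault pipeline. Each Claude Code skill
# advances a note one stage; the stage names here are illustrative.
from pathlib import Path

STAGES = ["inbox", "processed", "drafts", "polished", "ready"]

def advance(note, vault):
    """Move a note into the next stage folder and return its new path."""
    i = STAGES.index(note.parent.name)
    if i == len(STAGES) - 1:
        return note  # publish-ready: Post For Me takes it from here
    dest_dir = vault / STAGES[i + 1]
    dest_dir.mkdir(parents=True, exist_ok=True)
    dest = dest_dir / note.name
    note.rename(dest)
    return dest
```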

&lt;h2&gt;
  What I learned building this
&lt;/h2&gt;

&lt;p&gt;Build the skill first. Before the agents, before the pipeline. The skill is what makes the posting step reliable, and that's the step where everything used to break.&lt;/p&gt;

&lt;p&gt;A skill isn't code. It's a markdown document that Claude reads before it acts. I was surprised how well that works. You write down the exact steps, the edge cases, the mistakes to avoid, and Claude follows them. No plugin, no extension, just a text file.&lt;/p&gt;
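&lt;p&gt;If you want to build one, the shape looks roughly like this. The frontmatter fields follow Claude Code's skill convention; everything below the frontmatter is an illustrative structure, not the actual 659-line file.&lt;/p&gt;

```markdown
---
name: post-for-me
description: Publish posts via the Post For Me API. Use when asked to schedule or publish social posts.
---

# Post For Me

## Before posting
1. List connected accounts and confirm the target platforms.
2. Check recent posts for duplicate captions. Never post the same caption twice.

## Platform differences
- X, Threads, and Instagram have different caption limits. Trim before submitting.

## After posting
- Verify the post actually went through.
- If the API returns an error, report it. Do not retry silently.
```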

&lt;p&gt;The ops sweep is the agent I use most. Every four hours, a quick check on the whole system. That rhythm is what keeps things moving without me having to remember anything.&lt;/p&gt;

&lt;h2&gt;
  The full Post For Me skill
&lt;/h2&gt;

&lt;p&gt;The complete 659-line skill file is available for download. If you're using Claude Code, save it as SKILL.md in your .claude/skills/post-for-me/ folder and it works immediately.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://stqvyvrxhdtkvivegeut.supabase.co/storage/v1/object/public/downloads/post-for-me-skill.md" rel="noopener noreferrer"&gt;Download the Post For Me skill file&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you build something similar or want to talk through the setup, I'm on &lt;a href="https://www.patreon.com/igorgridel" rel="noopener noreferrer"&gt;Patreon&lt;/a&gt;. There's also a Discord, but getting access is part of the game. It's not as simple as clicking a link.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Originally published on &lt;a href="https://igorgridel.com/blog/automated-content-pipeline-claude-code-post-for-me" rel="noopener noreferrer"&gt;Igor Gridel&lt;/a&gt;. Follow me for more on AI workflows and automation.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>automation</category>
      <category>productivity</category>
      <category>tutorial</category>
    </item>
  </channel>
</rss>
