<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Christopher Groß</title>
    <description>The latest articles on DEV Community by Christopher Groß (@grossbyte).</description>
    <link>https://dev.to/grossbyte</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3840073%2Fc9699cdf-354b-46ac-ac83-8ff16853963e.png</url>
      <title>DEV Community: Christopher Groß</title>
      <link>https://dev.to/grossbyte</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/grossbyte"/>
    <language>en</language>
    <item>
      <title>Context Engineering: Why Your Prompt Is the Smallest Problem</title>
      <dc:creator>Christopher Groß</dc:creator>
      <pubDate>Wed, 08 Apr 2026 09:33:27 +0000</pubDate>
      <link>https://dev.to/grossbyte/context-engineering-why-your-prompt-is-the-smallest-problem-3li</link>
      <guid>https://dev.to/grossbyte/context-engineering-why-your-prompt-is-the-smallest-problem-3li</guid>
      <description>&lt;p&gt;I was at a client project recently, facing a situation I know well by now. New codebase, no AI tooling set up, and a developer on the team asks me: &lt;em&gt;"How do you get the AI to produce such good results?"&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;I looked at his screen. He had Claude open - in the browser, no project context, a fresh chat for every single task.&lt;/p&gt;

&lt;p&gt;That's the problem. Not the tool, not the model, not the prompt - the missing structure.&lt;/p&gt;

&lt;p&gt;After a year of using AI daily in real projects, I'm convinced: the difference between "AI is pretty decent" and "AI has fundamentally changed how I work" isn't the perfect prompt. It's the context you give the AI - &lt;strong&gt;before it answers the first question&lt;/strong&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  CLAUDE.md Is Where Everything Starts
&lt;/h2&gt;

&lt;p&gt;When I start working on a new project with Claude Code, the first thing I create is the &lt;code&gt;CLAUDE.md&lt;/code&gt;. Not the first component, not the first feature - the context file.&lt;/p&gt;

&lt;p&gt;What goes in it? Everything a new developer would need to know on day one:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Tech stack and versions&lt;/li&gt;
&lt;li&gt;Project structure and why it's set up that way&lt;/li&gt;
&lt;li&gt;Design system - colors, typography, spacing - concrete, with actual values&lt;/li&gt;
&lt;li&gt;Coding rules: what always applies, what never does&lt;/li&gt;
&lt;li&gt;External links, API endpoints, important accounts&lt;/li&gt;
&lt;/ul&gt;
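
&lt;p&gt;To make that concrete - a minimal sketch of the shape such a file can take. The stack, values and paths here are invented purely for illustration:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# CLAUDE.md (illustrative skeleton, not a real project)

## Stack
- Nuxt 3, TypeScript (strict), Tailwind CSS

## Structure
- components/ - shared UI, one component per file
- i18n/ - de.json and en.json, all user-facing text

## Design tokens
- Primary: #1E3A5F, spacing scale: 4 / 8 / 16 / 24 px

## Rules
- No inline CSS, no `any` types
- Every new text goes through i18n (DE + EN)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;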

&lt;p&gt;That sounds like work. It is - &lt;strong&gt;once&lt;/strong&gt;. But after that I save myself from rebuilding that context in every single AI session. I don't explain "we use Tailwind, not Styled Components" or "all texts must go through i18n" anymore. It's in the file. Claude reads it, follows it.&lt;/p&gt;

&lt;p&gt;The practical difference: when I say "create a new section for the services page", I get code that uses my design system and my components, passes TypeScript strict mode, and contains no hardcoded strings. Not because Claude is magic - because it &lt;strong&gt;knows the rules&lt;/strong&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  PRODUCT_SPEC.md - The Product Vision Written Down
&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;CLAUDE.md&lt;/code&gt; describes &lt;em&gt;how&lt;/em&gt; things are built. &lt;code&gt;PRODUCT_SPEC.md&lt;/code&gt; describes &lt;em&gt;what&lt;/em&gt; and &lt;em&gt;why&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;I created it the first time when I realized: if I haven't touched a project for three weeks and then pick it back up, I need a moment to get back into the right frame of mind myself. The AI needs that even more.&lt;/p&gt;

&lt;p&gt;My &lt;code&gt;PRODUCT_SPEC.md&lt;/code&gt; covers things like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What is the core promise of the product?&lt;/li&gt;
&lt;li&gt;Who are the users and what do they want?&lt;/li&gt;
&lt;li&gt;What features exist, and what design decisions are behind them?&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;What was deliberately not built - and why?&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That last point is underrated. "What we don't build" is just as important as "what we build". If I don't tell the AI that we've deliberately decided against a certain feature, it will suggest or even implement it the next time it seems relevant.&lt;/p&gt;
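
&lt;p&gt;A sketch of the shape - all contents here are invented for illustration:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# PRODUCT_SPEC.md (illustrative skeleton)

## Core promise
One sentence the product has to keep.

## Users
Who they are and what they come for.

## Features
- Feature X - and the design decision behind it

## What we deliberately don't build
- User accounts - the product works without login;
  revisit only if users ask for shared state
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;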




&lt;h2&gt;
  
  
  Externalizing Coding Rules - Not Explaining Them in the Prompt
&lt;/h2&gt;

&lt;p&gt;I've noticed that a lot of people include their rules in the prompt every time. "Don't write inline CSS. Don't use &lt;code&gt;any&lt;/code&gt; types in TypeScript. Always create a German and English version."&lt;/p&gt;

&lt;p&gt;And then they write that again for the next task.&lt;/p&gt;

&lt;p&gt;That's exhausting - and error-prone. At some point you forget it, or you phrase it slightly differently, and the AI interprets it slightly differently.&lt;/p&gt;

&lt;p&gt;My solution: a thorough section in the &lt;code&gt;CLAUDE.md&lt;/code&gt;. Everything that always applies goes there.&lt;/p&gt;

&lt;p&gt;A few examples from real projects:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- ESLint must pass without errors before every commit
- No eslint-disable comments without explicit approval
- Every new text must appear in i18n/de.json and i18n/en.json
- Typography: en dash with non-breaking spaces, never the English em dash
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That last rule is one that anyone who has ever forgotten to specify it will recognize. I learned it by spending an hour fixing wrong dashes across an entire project.&lt;/p&gt;




&lt;h2&gt;
  
  
  Recurring Tasks as Skills
&lt;/h2&gt;

&lt;p&gt;This is the part that gets talked about the least - and the one that has helped me the most.&lt;/p&gt;

&lt;p&gt;Every project has tasks that always run the same way. After a new feature: run the linter, take a browser screenshot, comment on the ticket with the result. For a new blog post: validate frontmatter, run the typography check, create the German and English versions.&lt;/p&gt;

&lt;p&gt;I used to describe this in the prompt every time. "Don't forget to run the linter afterwards" - every single time.&lt;/p&gt;

&lt;p&gt;Now I put these workflows into skill files. A Markdown file that describes step by step what to do in a given context. I mention it briefly in the prompt: &lt;em&gt;"Use the &lt;code&gt;publish-blog-post&lt;/code&gt; skill."&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;That sounds like a small thing. In practice, it's what makes the difference between an AI I have to correct and an AI that simply does the right thing.&lt;/p&gt;
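
&lt;p&gt;A skill file needs no special format - numbered Markdown steps are enough. A made-up sketch of what a &lt;code&gt;publish-blog-post&lt;/code&gt; skill could contain:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Skill: publish-blog-post (illustrative sketch)

1. Validate the frontmatter (title, date, tags present)
2. Run the typography check (en dashes, non-breaking spaces)
3. Create the post in both language folders (DE and EN)
4. Run the linter and fix errors before continuing
5. Report the file paths and any open issues
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;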




&lt;h2&gt;
  
  
  Define Data Models Before Writing Code
&lt;/h2&gt;

&lt;p&gt;This is a lesson I learned the hard way.&lt;/p&gt;

&lt;p&gt;I had given an AI the task of implementing a new feature with several database tables. The result was technically working - but the data model was wrong. Not wrong in the sense of "syntax error", but wrong in the sense of "this will be a problem in three months". Missing constraints, an N:M relation that should have been 1:N, and a field that semantically belonged in a different table.&lt;/p&gt;

&lt;p&gt;Since then I do it differently: before I let the AI loose on database work, I write a short model spec. In prose, no special syntax:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Table: orders
- id: UUID, Primary Key
- user_id: FK → users.id, NOT NULL
- status: ENUM (pending, processing, shipped, cancelled), NOT NULL
- total_cents: INTEGER, NOT NULL (no DECIMAL - we calculate in cents)
- created_at: TIMESTAMP WITH TIME ZONE, NOT NULL, DEFAULT now()

Relations:
- One order has many OrderItems (1:N)
- One order belongs to exactly one user
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That takes 15 minutes. It saves an hour of refactoring.&lt;/p&gt;




&lt;h2&gt;
  
  
  Context Engineering Instead of Prompt Engineering
&lt;/h2&gt;

&lt;p&gt;I notice the term "prompt engineering" showing up everywhere right now. Workshops, courses, LinkedIn posts about the perfect phrasing.&lt;/p&gt;

&lt;p&gt;I think that's the wrong focus.&lt;/p&gt;

&lt;p&gt;The prompt is the &lt;strong&gt;last step&lt;/strong&gt;. The system around it - &lt;code&gt;CLAUDE.md&lt;/code&gt;, &lt;code&gt;PRODUCT_SPEC.md&lt;/code&gt;, coding rules, skills, data models - is what actually determines quality. The AI is only as good as the context it receives. And that context can be built systematically, versioned and improved over time.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;A good prompt in a poorly prepared project delivers mediocre results.&lt;br&gt;
A mediocre prompt in a well-prepared project delivers good results.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;I call it &lt;strong&gt;context engineering&lt;/strong&gt;. Anyone who understands this has a real advantage - not just with Claude, but with any AI tool.&lt;/p&gt;




&lt;h2&gt;
  
  
  TL;DR
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;What&lt;/th&gt;
&lt;th&gt;Why&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;CLAUDE.md&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Tech stack, project structure, design tokens, coding rules&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;PRODUCT_SPEC.md&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;What we build, why, and what we deliberately don't build&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Skill files&lt;/td&gt;
&lt;td&gt;Recurring workflows defined once, referenced forever&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Data model specs&lt;/td&gt;
&lt;td&gt;Written before code, not discovered through refactoring&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The better the structure around the AI, the better the results. That's not theory - that's the experience from a year of daily use in real projects.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;I'm Christopher, a freelance fullstack developer from Hamburg. I write about Vue/Nuxt, TypeScript and working with AI in real projects - at &lt;a href="https://grossbyte.io/en" rel="noopener noreferrer"&gt;grossbyte.io&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>productivity</category>
      <category>tooling</category>
      <category>webdev</category>
    </item>
    <item>
      <title>My Agent Workflow – From Idea to Deployment in Minutes</title>
      <dc:creator>Christopher Groß</dc:creator>
      <pubDate>Sun, 29 Mar 2026 23:21:50 +0000</pubDate>
      <link>https://dev.to/grossbyte/my-agent-workflow-from-idea-to-deployment-in-minutes-375n</link>
      <guid>https://dev.to/grossbyte/my-agent-workflow-from-idea-to-deployment-in-minutes-375n</guid>
      <description>&lt;h2&gt;
  
  
  A Bug, a Prompt, Done
&lt;/h2&gt;

&lt;p&gt;The other day I noticed: blog posts weren't showing up in my sitemap. Annoying, especially for SEO.&lt;/p&gt;

&lt;p&gt;In the past I'd have opened the sitemap module, read through the config, found the fix, tested, committed, deployed. Maybe 30 minutes, maybe an hour.&lt;/p&gt;

&lt;p&gt;Instead I typed one sentence into my terminal:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"Blog posts are not included in the sitemap."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Five minutes later, the fix was live.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Setup: Three Agents, Clear Roles
&lt;/h2&gt;

&lt;p&gt;I use a multi-agent setup in &lt;strong&gt;Claude Code&lt;/strong&gt;. Three agents, clear responsibilities:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Lead Agent&lt;/strong&gt; – Plans, creates tickets, coordinates&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Builder Agent&lt;/strong&gt; – Implements the code&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Tester Agent&lt;/strong&gt; – Runs real browser tests with screenshots&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The workflow is always the same: I give the lead agent a task – sometimes a sentence, sometimes a paragraph. It analyzes the problem, searches the codebase, creates a &lt;strong&gt;YouTrack ticket&lt;/strong&gt; with a detailed description and proposed solution. I glance at it, say "go" – and the rest happens.&lt;/p&gt;

&lt;p&gt;The lead delegates to the builder, which writes the code following my coding rules from &lt;code&gt;CLAUDE.md&lt;/code&gt;. Then the tester spins up a real browser, loads the page, checks if the problem is solved. Only when everything is green does the lead come back to me: ready for review.&lt;/p&gt;
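
&lt;p&gt;In Claude Code, such roles can be written down as subagent files - Markdown with a small frontmatter header. A rough sketch of the tester's definition; the tool list and instructions here are illustrative, not the full setup:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;---
name: tester
description: Verifies implemented changes in a real browser
tools: Bash, Read
---

After the builder reports a change:
1. Start the dev server if it is not already running
2. Open the affected page in the browser
3. Take a screenshot and compare it against the requirement
4. Report pass/fail and attach the screenshot to the ticket
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;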




&lt;h2&gt;
  
  
  A More Impressive Example
&lt;/h2&gt;

&lt;p&gt;The sitemap fix was trivial. But the workflow really shines on bigger tasks.&lt;/p&gt;

&lt;p&gt;When I wanted &lt;strong&gt;full WCAG accessibility&lt;/strong&gt; for my website, I essentially told the lead agent:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"The website needs WCAG-compliant accessibility."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;What happened:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The lead analyzed the scope – every page, every component, every form&lt;/li&gt;
&lt;li&gt;It created a ticket with a prioritized list of all necessary changes&lt;/li&gt;
&lt;li&gt;The builder systematically worked through every component – ARIA labels, focus management, contrast, skip links, focus traps&lt;/li&gt;
&lt;li&gt;The tester took screenshots after each iteration and verified accessibility&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Result: &lt;strong&gt;WCAG 2.1 Level AA in under 2 hours.&lt;/strong&gt; Estimated manual effort: 2–3 days.&lt;/p&gt;

&lt;p&gt;Not because the AI is magic, but because the workflow eliminates context switching and parallelizes the repetitive parts.&lt;/p&gt;




&lt;h2&gt;
  
  
  What I Actually Love About It
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;No context loss.&lt;/strong&gt; I describe the problem once – the context is preserved from ticket through implementation to testing.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Documentation happens automatically.&lt;/strong&gt; Every change is documented as a YouTrack ticket with description, proposed solution, and screenshots. No writing tickets after the fact.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Small fixes actually get done.&lt;/strong&gt; You know how you spot a typo or a small visual bug and think "I'll fix it later"? With this workflow, the barrier is so low I just do it immediately.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Part Nobody Wants to Hear
&lt;/h2&gt;

&lt;p&gt;This is not "AI does everything, I sit back."&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;In roughly every third or fourth task, I intervene.&lt;/strong&gt; Sometimes the AI interprets a requirement differently. Sometimes the code is technically correct but stylistically off. Sometimes it's faster to change three lines myself than explain what I want.&lt;/p&gt;

&lt;p&gt;That's fine. The workflow doesn't save 100% of my work – it saves &lt;strong&gt;70–80%&lt;/strong&gt;. The remaining 20–30% is where human judgment actually matters.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;I review every single change. Every one.&lt;/strong&gt; Not because I don't trust the AI, but because it's my code running in production. Read the diff, check the logic, manually test critical paths. That usually takes 2–5 minutes per change – and those minutes are non-negotiable.&lt;/p&gt;




&lt;h2&gt;
  
  
  Concrete Time Savings (Real Numbers)
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Task&lt;/th&gt;
&lt;th&gt;With agents&lt;/th&gt;
&lt;th&gt;Manual estimate&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Sitemap bug&lt;/td&gt;
&lt;td&gt;5 minutes&lt;/td&gt;
&lt;td&gt;30–60 minutes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Full WCAG accessibility&lt;/td&gt;
&lt;td&gt;~2 hours&lt;/td&gt;
&lt;td&gt;2–3 days&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;i18n text adjustments (DE + EN)&lt;/td&gt;
&lt;td&gt;3 minutes&lt;/td&gt;
&lt;td&gt;~20 minutes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;New blog section&lt;/td&gt;
&lt;td&gt;1 day&lt;/td&gt;
&lt;td&gt;~1 week&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The leverage is greatest for tasks that touch many files, follow clear rules, and are repetitive. It's smallest for creative work, complex business logic, and architecture – that's still on me.&lt;/p&gt;




&lt;h2&gt;
  
  
  What You Need to Get Started
&lt;/h2&gt;

&lt;p&gt;Three things:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;A good &lt;code&gt;CLAUDE.md&lt;/code&gt;&lt;/strong&gt; – Your project spec. Design system, coding rules, project structure. The better this file, the better the results.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Clear agent definitions&lt;/strong&gt; – Each agent has a role, tools, and boundaries. This prevents chaos.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;A place for tasks&lt;/strong&gt; – Doesn't have to be a ticket system. Markdown files in your repo work fine. But personally, I prefer YouTrack – better history, easy referencing, agents comment directly in tickets.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The setup takes a few hours. The time savings afterward are multiples of that.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Reality Behind the Hype
&lt;/h2&gt;

&lt;p&gt;AI agents aren't autopilot. They're more like a very fast, very patient junior developer who never gets tired and follows your specs exactly – &lt;strong&gt;as long as you state them clearly.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The workflow works because I stay in control. I decide what gets built. I review what was built. I intervene when necessary. The agents accelerate execution – but the responsibility stays with me.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;AI agents don't replace developers. They replace the parts of the work that keep developers from focusing on what actually matters.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;p&gt;&lt;em&gt;Originally published on &lt;a href="https://grossbyte.io/en/blog/agent-workflow-from-idea-to-deployment" rel="noopener noreferrer"&gt;grossbyte.io&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Curious how the agent setup looks in detail? Happy to write a follow-up on the CLAUDE.md structure and agent definitions if there's interest.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>claudecode</category>
      <category>ai</category>
      <category>webdev</category>
      <category>productivity</category>
    </item>
    <item>
      <title>Accessibility in 2 hours – with AI instead of days of manual work</title>
      <dc:creator>Christopher Groß</dc:creator>
      <pubDate>Mon, 23 Mar 2026 12:18:27 +0000</pubDate>
      <link>https://dev.to/grossbyte/accessibility-in-2-hours-with-ai-instead-of-days-of-manual-work-28hm</link>
      <guid>https://dev.to/grossbyte/accessibility-in-2-hours-with-ai-instead-of-days-of-manual-work-28hm</guid>
      <description>&lt;p&gt;&lt;em&gt;This article was originally published on &lt;a href="https://grossbyte.io/en/blog/accessibility-in-2-hours-with-ai" rel="noopener noreferrer"&gt;grossbyte.io&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;Accessibility was on my list for months. I knew what needed to be done. I kept putting it off anyway - because it always felt like days of tedious work.&lt;/p&gt;

&lt;p&gt;Last week I just did it. With Claude Code CLI in under 2 hours.&lt;/p&gt;

&lt;h2&gt;
  
  
  What was built
&lt;/h2&gt;

&lt;p&gt;Not just the basics – the result exceeds WCAG 2.1 Level AA in the areas covered, including support for system preferences that go beyond the standard:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Semantic HTML &amp;amp; ARIA on every interactive element&lt;/li&gt;
&lt;li&gt;Skip-to-content link, visible focus styles, focus trap in the mobile menu&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;prefers-contrast: more/less&lt;/code&gt;, &lt;code&gt;prefers-reduced-motion&lt;/code&gt;, &lt;code&gt;prefers-reduced-transparency&lt;/code&gt;, &lt;code&gt;forced-colors: active&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Manual &lt;code&gt;.a11y-mode&lt;/code&gt; toggle in the header&lt;/li&gt;
&lt;li&gt;Fully accessible contact form with &lt;code&gt;aria-live&lt;/code&gt;, &lt;code&gt;aria-busy&lt;/code&gt;, &lt;code&gt;aria-describedby&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;iOS zoom fix for inputs below 16px font size&lt;/li&gt;
&lt;/ul&gt;
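
&lt;p&gt;The &lt;code&gt;prefers-*&lt;/code&gt; support comes down to media queries. A condensed sketch - the selectors and values are illustrative, not the actual stylesheet:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;/* Illustrative values - adapt to your own design tokens */
@media (prefers-reduced-motion: reduce) {
  *, *::before, *::after {
    animation-duration: 0.01ms !important;
    transition-duration: 0.01ms !important;
  }
}

@media (prefers-contrast: more) {
  :root { --color-text: #000; --color-bg: #fff; }
}

@media (forced-colors: active) {
  /* System colors keep controls visible in forced-colors mode */
  .button { border: 1px solid ButtonText; }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;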

&lt;h2&gt;
  
  
  What surprised me
&lt;/h2&gt;

&lt;p&gt;The quality. I expected to do a lot of manual follow-up. Instead, Claude Code implemented the ARIA patterns correctly – disclosure widgets, live regions, focus traps. These aren't trivial patterns.&lt;/p&gt;

&lt;p&gt;Also: &lt;code&gt;prefers-contrast: less&lt;/code&gt; and &lt;code&gt;prefers-reduced-transparency&lt;/code&gt; are CSS features I had never actively implemented before. Claude Code not only included them but chose sensible color values too.&lt;/p&gt;

&lt;h2&gt;
  
  
  The honest take
&lt;/h2&gt;

&lt;p&gt;AI doesn't replace understanding accessibility. You need to know what &lt;code&gt;aria-live&lt;/code&gt; does, why a focus trap is necessary, and whether the contrast values make sense. But the implementation – writing the labels, adjusting CSS variables, adding role attributes across dozens of files – that's exactly what AI does well.&lt;/p&gt;

&lt;p&gt;When it takes 2 hours instead of 2 days, there are no more excuses.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Full writeup with all implementation details: &lt;a href="https://grossbyte.io/en/blog/accessibility-in-2-hours-with-ai" rel="noopener noreferrer"&gt;grossbyte.io&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>a11y</category>
      <category>ai</category>
      <category>claudeai</category>
    </item>
  </channel>
</rss>
