Pahud Hsieh

From Manual to Intent: 7 Years of CDK Contribution

Where It All Began: 2019 re:Invent

AWS CDK had just gone GA that year with TypeScript and Python support. At re:Invent 2019, I saw AWS present how to contribute to CDK for the first time. There was no AI back then — everything was manual. Clone the entire monorepo, figure out the Lerna project structure, manually build dependent packages, write L2 constructs, write tests, submit a PR. Every step was something you had to figure out on your own.

I was blown away.

A construct I write, a bug I fix — once it's merged, it ships with the next version of aws-cdk to the entire world. Developers everywhere use this thing every day, and I can directly change it. I thought that was incredibly cool.

I later wrote a post on community.aws called Contributing to AWS CDK, documenting the entire process so others wouldn't have to figure it all out from scratch like I did.

2019–2024: The Live Contribution Walkthrough Era

Over the next few years, I made a lot of live contribution walkthrough videos. I'd stream the whole thing — pick an issue, analyze it, implement the fix, write tests, submit the PR. All live.

In 2021, Werner Vogels called CDK a game changer.

That validated what many of us in the community already felt — CDK was becoming the way to build on AWS.

But honestly, the barrier was still there.

During that time I was completely passionate about writing CDK PRs. In 2020, I was at Penghu's MZG airport waiting for a flight, writing a Lambda filesystem support PR while my daughter played Animal Crossing next to me. All manual, no AI. That's just how it was — you did everything yourself.

A full CDK contribution involves quite a lot:

  • Understanding what the issue is about and how far the impact reaches
  • Finding the right module and files in a massive monorepo
  • Following CDK's own coding patterns (L1/L2 constructs, props interface design, etc.)
  • Writing unit tests and integration tests that meet the standards
  • Passing lint, build, and snapshot checks
  • Writing a PR description in the required format

Even experienced developers need hours or days. For newcomers, it's even worse.

The Pain of a First-Time Contributor

Imagine this scenario.
You're using CDK and you hit a bug, or you need a feature that doesn't exist yet. You want to help, so you open the aws-cdk GitHub repo, find the issue, and think: "I'll fix this!"
Then you open CONTRIBUTING.md.

It's incredibly long. It tells you how to set up your dev environment, install build tools, build the entire monorepo, run tests, handle snapshots, write PR descriptions… but all you wanted was to change a few lines of code.

This is painful. You just wanted to make a small fix, but setting up the environment alone takes half a day, and it might not even build. A lot of people give up right here. My first time, I was up until midnight just getting the environment working. Honestly, I never wanted to do it manually again.

July 2025: Kiro and the Multi-Roles Pattern

After Kiro launched in July 2025, we started trying a new approach: composing a workflow pipeline from multiple agent roles. We called it the multi-roles pattern.

The idea is straightforward — analyzing an issue and writing code require different skills. Designing a solution and running tests are different too. Instead of having one agent do everything end to end (which usually doesn't work well), we split each phase out, assign it to a specialized agent role, and connect them with an orchestrator.

In practice, the results were much better than a single agent. Each role focuses on one thing, and the output quality clearly improves.
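The shape of the multi-roles pattern can be sketched in a few lines of TypeScript. Everything below is hypothetical and simplified — none of these names come from Kiro — but it shows the key property: each role sees only the previous role's output, not the whole conversation.

```typescript
// Hypothetical sketch of the multi-roles pattern: each agent role is a
// narrow function with its own contract, and an orchestrator chains them.
// In the real system each role is an LLM agent, not a pure function.

interface Issue { id: number; title: string; body: string; }
interface Analysis { issueId: number; affectedModule: string; }
interface Plan { analysis: Analysis; approach: string; }
interface Patch { plan: Plan; diff: string; }

// Role 1: understand the issue (illustrative classification logic only).
const analyst = (issue: Issue): Analysis => ({
  issueId: issue.id,
  affectedModule: issue.title.includes("lambda") ? "aws-lambda" : "core",
});

// Role 2: design the fix from the analysis alone.
const architect = (a: Analysis): Plan => ({
  analysis: a,
  approach: `patch ${a.affectedModule}`,
});

// Role 3: write the code from the plan alone.
const coder = (p: Plan): Patch => ({
  plan: p,
  diff: `--- a/packages/${p.analysis.affectedModule}`,
});

// Orchestrator: wires the specialized roles into one pipeline.
function orchestrate(issue: Issue): Patch {
  return coder(architect(analyst(issue)));
}
```

The narrow interfaces are the point: because each role only consumes a small, typed handoff, it can focus on one skill instead of juggling the entire task end to end.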

2025 re:Invent: From Multi-Roles to Power

At re:Invent that same year, Kiro released the Power feature. This let us formalize the multi-roles pattern — define explicit phases, set approval gates, control subagent execution order and parallelism.

The biggest change for me was that this was no longer just "letting AI write code." It became a structured engineering process with human approval checkpoints. AI handles the repetitive work it's good at, humans make the decisions.

We started designing CDK Contribution Power with this architecture.

My manager Joey Wang gave a talk at the Open Source Developers Lounge during 2025 re:Invent: "Automating Open Source Contributions with AI Agents — How We Use Multi-Agent Workflows to Maintain AWS CDK," showing how this multi-agent workflow works in practice.

February 2026: Agent SKILL

In February 2026, Kiro started supporting Agent SKILL.

This was a big turning point. SKILL is a standardized format that isn't tied to any specific agent tool. That means we can package the entire CDK Contribution workflow as a single skill, and it works across Kiro, Claude Code, Codex, Gemini, Copilot, OpenCode, and other compatible agent tools.

Before this, what we built with Power only ran inside Kiro. With SKILL, the same workflow works cross-platform. We'd always wanted this — not locking the workflow into a single tool.

The Design of CDK Contribution Skill

cdk-contribution-skill was officially released at the end of March 2026.

The core is a 6-phase orchestrated workflow, with clear inputs, outputs, and deliverables for each phase:

+------------------------------------------+
|           MAIN ORCHESTRATOR              |
|        (coordinates all phases)          |
+--------------------+---------------------+
                     |
     +---------------+--------------+
     |       PHASE 1: ANALYSIS      |
     |   Analyze issue, classify,   |
     |   explore affected code      |
     +---------------+--------------+
                     |
                     v
     +---------------+--------------+
     |       PHASE 2: PLANNING      |
     |   Propose solution, plan     |
     |   tests & impl approach      |
     +---------------+--------------+
                     |
                     v
     +-------------------------------+
     |      HUMAN APPROVAL GATE      |
     |   Review analysis + plan      |
     |   Continue? Yes | No          |
     +-------+---------------+-------+
       [NO]  |               |  [YES]
       back  |               |
       to    |               v
     PHASE 2 |   +---------------+--------------+
             |   |    PHASE 3: BUILD & IMPL     |
             |   |   Branch, env setup, code    |
             |   +---------------+--------------+
             |                   |
             |                   v
             |   +---------------+--------------+
             |   |   PHASE 4: PARALLEL VALID    |
             |   +-------+-------+-------+------+
             |           |       |       |
             |           v       v       v
             |       +------+ +-----+ +------+
             |       | TEST | |  QA | | DOCS |
             |       +--+---+ +--+--+ +--+---+
             |          +-------+-------+
             |                  |
             |                  v
             |   +-----------------------------+
             |   |    VALIDATION AGGREGATE     |
             |   | Any blocker? -> human gate  |
             |   +-------------+---------------+
             |                 |
             |                 v
     +---------------+--------------+
     |     PHASE 5: SELF REVIEW     |
     +----------+----------+--------+
                |          |
                v          v
        +------------+ +------------+
        |  SECURITY  | | REGRESSION |
        |   REVIEW   | |   REVIEW   |
        +------+-----+ +-----+------+
               +-------+------+
                       |
                       v
              +------------------+
              | SYNTHESIZE REPORT|
              +--------+---------+
                       |
                       v
     +-------------------------------+
     |      HUMAN APPROVAL GATE      |
      |         Go or No-Go?          |
     +-------+---------------+-------+
     [NO-GO] |               | [GO]
     fix and |               |
     re-run  |               v
     PHASE 4 |   +-------------------------------+
             |   |        PHASE 6: PR            |
             |   |   Commit, create PR           |
             |   +-------------------------------+

Here are the key design decisions.

Human Approval Gates

After Phase 2 and Phase 5, the workflow stops and waits for human approval. This is mandatory — you can't skip it.

Why? Because CDK is an infrastructure tool, and a single breaking change can affect a huge number of users. AI can help you analyze, write code, and run tests, but "should we go in this direction" and "should we submit this PR" — those decisions have to be made by a human. I didn't want a fully automated system sending PRs to aws-cdk without anyone looking at them.

It comes down to trust but verify. AI hallucinates, writes code that looks correct but has security risks, and misses edge cases that cause regressions. Human review isn't a formality — it's the last line of defense against these things.

If a proposal is rejected at an approval gate, the workflow doesn't force forward. It goes back to planning, readjusts, and enters the next round of review. This isn't a one-way pipeline — it can loop back.
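The gate-and-loop behavior boils down to a small control loop. Here's a sketch with hypothetical names (the real skill mediates this through deliverable files and chat, not callbacks):

```typescript
// Hypothetical sketch of a human approval gate with loop-back.
// `propose` regenerates a plan each round (the planning agent);
// `approve` stands in for the human reviewer at the gate.

type Verdict = "approved" | "rejected";

function runGatedPlanning(
  propose: (round: number) => string,
  approve: (plan: string) => Verdict,
  maxRounds = 5,
): string {
  for (let round = 1; round <= maxRounds; round++) {
    const plan = propose(round);          // PHASE 2: PLANNING
    if (approve(plan) === "approved") {   // HUMAN APPROVAL GATE
      return plan;                        // proceed to PHASE 3
    }
    // Rejected: loop back to planning for another round.
  }
  throw new Error("no plan approved; escalate to a human");
}
```

The cap on rounds matters: a rejected proposal loops back, but the loop can't spin forever without a human stepping in.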
Structured Deliverables and Artifact Lifecycle

Each phase writes its output to markdown files under .kiro/contributions//. These files serve two purposes:

  1. Handoff interface between phases — the next phase's agent reads the previous phase's deliverable as input. This is the communication channel between agents.
  2. Review evidence — humans can read these files at approval gates to understand what the agent did and why.

These intermediate artifacts stay in the local working directory after PR submission — they don't go into the PR itself. They're records of the work process, not final deliverables.
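The file-based handoff is simple to picture. This sketch is illustrative only — the directory layout and file names here are invented, not the skill's actual layout:

```typescript
// Hypothetical sketch of markdown deliverables as the handoff interface
// between phases. Paths and names are illustrative, not the real layout.
import * as fs from "fs";
import * as path from "path";

const workDir = path.join(".kiro-demo", "contribution-1234");

// Phase N writes its deliverable to disk...
function writeDeliverable(phase: string, body: string): string {
  fs.mkdirSync(workDir, { recursive: true });
  const file = path.join(workDir, `${phase}.md`);
  fs.writeFileSync(file, body, "utf8");
  return file;
}

// ...and phase N+1 reads it as its only input. A human can open the
// same file at an approval gate to see what the agent did and why.
function readDeliverable(phase: string): string {
  return fs.readFileSync(path.join(workDir, `${phase}.md`), "utf8");
}
```

Because the handoff is plain markdown on disk, the same artifact doubles as the agent-to-agent interface and the human-readable audit trail — no special tooling needed to review it.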

Sequential + Parallel Hybrid Execution

Phase 1 through Phase 3 are strictly sequential — you can't start writing code before you've finished analyzing the issue. But Phase 4's three tasks (testing, QA, documentation) and Phase 5's two tasks (security review, regression review) run in parallel, because there are no dependencies between them.

The coordination works like this: each subtask independently produces its own deliverable, then the orchestrator collects all results and aggregates them into a summary. If any subtask reports a blocking issue, the entire phase is marked as needing human intervention. No subtask can override another's conclusions.
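That fan-out/fan-in coordination can be sketched as follows (hypothetical names, not the skill's internals): independent subtasks run concurrently, and the aggregate only ORs their blocker flags together.

```typescript
// Hypothetical sketch of Phase 4's parallel validation: three independent
// validators run concurrently, then the orchestrator aggregates verdicts.
// A blocker from any single subtask flags the phase for human review;
// no subtask can override another's conclusion.

interface SubtaskResult { name: string; blocker: boolean; summary: string; }

async function aggregateValidation(
  subtasks: Array<() => Promise<SubtaskResult>>,
): Promise<{ needsHuman: boolean; results: SubtaskResult[] }> {
  const results = await Promise.all(subtasks.map((t) => t()));
  return { needsHuman: results.some((r) => r.blocker), results };
}
```

Parallelism is safe here precisely because the subtasks share no state: each writes its own deliverable, and only the aggregation step reads across them.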

ASCII Diagrams

Every deliverable must include at least one ASCII diagram. This isn't for aesthetics — in a terminal environment, ASCII diagrams are the most reliable form of visualization, rendering correctly in any agent tool.

For example, Phase 1's analysis deliverable maps out affected file relationships:

  +----------------+       +-------------------+
  |  affected.ts   | ----> | test/foo.test.ts  |
  +----------------+       +-------------------+
         |
         v
  +----------------+
  |  features.ts   |  (new feature flag)
  +----------------+

Phase 4's validation deliverable shows test coverage status:

  +---------------------+   +------------------+   +--------+
  |        TEST         |   |        QA        |   |  DOCS  |
  +---------------------+   +------------------+   +--------+
  | unit:  12/12 PASS   |   | lint:    PASS    |   | README |
  | integ: 3/3  PASS    |   | build:   PASS    |   | OK     |
  +---------------------+   +------------------+   +--------+

These diagrams aren't decoration. They're how you quickly grasp status in a terminal.

Agent Team

The essence of this skill isn't "one powerful AI" — it's an entire agent team, each with its own role:

  1. Issue Analyst analyzes the issue, reads all comments, explores the CDK codebase to understand the current design, cross-references how other modules are implemented
  2. Solution Architect takes over, proposes solutions, analyzes pros and cons
  3. You review the options and pick a direction
  4. Build Engineer gets the repo into a development-ready state
  5. Coder writes the code
  6. Tester runs the tests
  7. QA Specialist checks code style and quality
  8. Documentation Specialist fills in documentation
  9. Two Reviewers do self-review from security and regression perspectives
  10. All results are aggregated into a report for you
  11. You say go or no-go

Between approval gates, the agent team automatically builds a PR from scratch. You don't need to read all of CONTRIBUTING.md, set up the environment from zero, or memorize CDK coding patterns. But you're still responsible for the final PR — you need to check whether the approach is right, the code makes sense, and the tests are sufficient.

This isn't a PR vending machine. It's a team that handles the repetitive work for you. The decision-making stays in your hands.

Intention-Driven Development

The developer experience we're going for is intention-driven.

You express your intent, the agent analyzes the problem, explores the codebase, and proposes an approach. You say LGTM, and the code gets written, tests pass, PR gets submitted.

Intent in, PR out, CI green.

That's the goal. But I'll be honest — we're not 100% there yet.

For complex issues — changes spanning multiple modules, or cases requiring deep understanding of CDK internals — the agent still goes off track sometimes and needs human correction. For simple to medium-complexity issues, this workflow already runs smoothly. Complex cases are a work in progress.

Looking further ahead, I'm excited about developers being able to kick off this process without sitting at a computer. Send an instruction from your phone during your commute, and a cloud agent runs the entire workflow automatically. By the time you get to the office, there's a report and a PR waiting for your review. Technically feasible today, but the experience still needs polish.

From Manual to Automated, From Closed to Open

Looking back at how things have changed:

| Period | Approach | Barrier |
|--------|----------|---------|
| 2019 | Fully manual, re:Invent workshop | Very high |
| 2019–2024 | Live walkthrough videos | High |
| 2025 H2 | Kiro multi-roles + Power | Medium |
| 2026 Q1 | Agent SKILL (cross-platform) | Low |

From being blown away at re:Invent 2019 by the fact that anyone can contribute to CDK, to building a cross-platform contribution skill in 2026 — it took seven years.

The barrier has definitely come down a lot. But I won't say it's gone — you still need a basic dev environment (Node.js, Yarn, gh CLI), you still need to understand what you're doing, and you're still responsible for the PR you submit.

What's changed is that the most painful parts — setting up the environment, finding files, memorizing patterns, writing boilerplate tests — the agent can handle those now. You get to spend your time where a human brain is actually needed: understanding the problem, making design decisions, reviewing the final result.

Give It a Try

If you want to try it, I'd suggest starting with a small, well-defined aws-cdk issue. Don't jump straight into a massive change spanning ten modules.

Installation is simple — type this in your coding agent:

"Install https://github.com/cdklabs/cdk-contribution-skill to my skills"

Then point it at an issue:

"contribute this aws/aws-cdk#12345"

The agent will start Phase 1 analysis. A few minutes later you'll see the first analysis report, telling you what the issue is about, how far the impact reaches, and how it recommends fixing it.

Looking Forward

Boris Cherny from Anthropic said something that stuck with me: "There's a good chance by end of year people aren't using IDEs anymore."

The trend is clear: coding agents are moving from desktop IDEs to the cloud, into sandboxed environments with strong isolation. The future we see is one where developers walk away from their desktop IDE, speak out their intention, and a remote coding agent in the cloud turns that intention into reality.

You might be driving, waiting for an Uber, going through airport security, on a plane with terrible WiFi, playing Nintendo Switch, or even at a BBQ. The only dev tools you need are a phone and Discord — or whatever IM you already use. You no longer need to sit in front of a desktop IDE. You are building anywhere, anytime, with the coding agent in the cloud.

SKILL is just the start of that journey.

It gives us a way to standardize and package an engineering workflow so it can be reused across tools and platforms. Today it's CDK contribution. Over time, the same pattern could be applied to other open source workflows.

This road is just beginning.

And I'd love to share everything I've learned along the way in this series of blog posts. So stay tuned — this is going to be an amazing journey!

Questions or feedback? Poke me on X @pahudnet.

cdk-contribution-skill is open source: github.com/cdklabs/cdk-contribution-skill

Opinions expressed here are my own.

This blog post was made by the intent from Pahud Hsieh and co-authored by Kiro, Claude Code, Codex, and Gemini.
