I had a Python Lambda project sitting in a repo with zero CI/CD. No pipeline, no automation, just sam deploy from my laptop like it's 2019.
So I used Kiro to generate a complete GitHub Actions pipeline — build, test, deploy — and watched it go green on the first push. Okay fine, the third push. But who's counting.
Here's exactly how I did it.
The Setup
The project is simple on purpose. A Python Lambda function behind API Gateway, deployed with AWS SAM:
├── src/
│   └── app.py                  # Lambda handler
├── tests/
│   └── test_app.py             # pytest unit tests
├── template.yaml               # SAM template
├── samconfig.toml              # SAM deploy config
└── .kiro/skills/
    └── github-actions-cicd/    # Kiro skill
The Lambda handler uses Powertools for AWS Lambda (Python) — a developer toolkit that gives you structured JSON logging, tracing, and metrics out of the box. I'm only using the Logger here, which gives me structured logs with correlation IDs and cold start tracking with a single decorator:
from aws_lambda_powertools import Logger

logger = Logger()

@logger.inject_lambda_context
def lambda_handler(event, context):
    logger.info("Root endpoint called")
    # ... route handling
Three pytest tests cover the routes. Minimal, but the focus here is the pipeline, not the app.
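To give a sense of their shape, here is a rough sketch of what tests like that could look like. The handler body and route names below are my own illustration (a plain dict-based handler, without Powertools), not the repo's actual code:

```python
import json

# Hypothetical handler mirroring the shape of src/app.py (routes are illustrative)
def lambda_handler(event, context):
    path = event.get("rawPath", "/")
    if path == "/":
        return {"statusCode": 200, "body": json.dumps({"message": "hello"})}
    return {"statusCode": 404, "body": json.dumps({"error": "not found"})}

def test_root_returns_200():
    response = lambda_handler({"rawPath": "/"}, None)
    assert response["statusCode"] == 200

def test_unknown_route_returns_404():
    response = lambda_handler({"rawPath": "/missing"}, None)
    assert response["statusCode"] == 404
```

Unit tests like these run against the handler directly, with a fake event dict, so they need no AWS access at all.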
Teaching Kiro How I Want My Pipelines Built
Before generating the workflow, I asked Kiro to create a Skill — a reusable instruction package that tells Kiro how to do something, not just what to do. Skills follow the open Agent Skills standard, so they're shareable across tools and teams.
Kiro uses progressive disclosure with skills — it loads only the name and description at startup, activates the full instructions when your request matches, and pulls in reference files only when needed. No wasted tokens in the context window.
"Create a Kiro skill for generating GitHub Actions CI/CD workflows for Python Lambda projects using SAM. It should use three jobs: test, build, deploy. Use OIDC for AWS auth, pip caching, and GitHub Environments."
Kiro generated .kiro/skills/github-actions-cicd/SKILL.md:
---
name: github-actions-cicd
description: Generate GitHub Actions CI/CD workflows for Python Lambda
projects using SAM. Use when creating pipelines, adding CI/CD,
generating workflow YAML, or deploying Lambda functions with
GitHub Actions.
---
## Workflow Structure
Generate GitHub Actions workflows with three jobs:
1. **test** — Install dependencies and run `pytest`
2. **build** — Run `sam build` to compile the SAM application
3. **deploy** — Run `sam deploy` to deploy to AWS (only on push to `main`)
## AWS Authentication
Always use OIDC for AWS credentials via
`aws-actions/configure-aws-credentials`. Never use static access keys.
## Caching
Use `actions/cache` for pip dependencies.
## Environment Strategy
- `production` environment: deploys on push to `main` branch
- Use GitHub Environments for secrets and protection rules
I also added reference files under references/ with specific patterns for SAM deploys, matrix builds, and security hardening. Kiro loads these only when the instructions point to them — keeping things lean.
The skill is committed to the repo, so anyone who clones it gets the same behavior. That's the power of skills — they're portable and version-controlled.
Full docs on skills: kiro.dev/docs/cli/skills
The Generated Workflow
With the skill in place, I asked Kiro to generate a GitHub Actions workflow. Here are some of the prompts I used:
"Generate a GitHub Actions CI/CD workflow for this project"
That gave me the base pipeline. Then I iterated:
"Add pip caching to speed up the test job"
"Add a matrix build to test on Python 3.11 and 3.12"
"Split the deploy into its own job that only runs on push to main, using OIDC for AWS auth"
The skill activated automatically each time — no special command needed — and Kiro followed the conventions I defined. Here's the structure of the final workflow (full YAML in the repo):
name: CI/CD Pipeline

on:
  push:
    branches: [main]
  pull_request:
    branches: [main]

jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        python-version: ["3.11", "3.12"]
    steps:
      # checkout, setup python, pip cache, install deps, pytest

  build:
    needs: test
    steps:
      # checkout, setup SAM, sam build, upload artifact

  deploy:
    needs: build
    if: github.ref == 'refs/heads/main' && github.event_name == 'push'
    environment: production
    permissions:
      id-token: write
      contents: read
    steps:
      # checkout, download artifact, setup SAM, configure AWS creds (OIDC), sam deploy
Let me break down what's happening.
Walking Through the Pipeline
Job 1: Test
test:
  runs-on: ubuntu-latest
  strategy:
    matrix:
      python-version: ["3.11", "3.12"]
Runs on every push and PR. The matrix build tests against Python 3.11 and 3.12 in parallel — so you catch version-specific issues early.
Pip caching uses actions/cache keyed on the OS, Python version, and a hash of requirements.txt. Second runs are noticeably faster.
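A cache step along those lines might look like this (the action version and key format here are my assumptions, so check the repo for the real YAML):

```yaml
- name: Cache pip dependencies
  uses: actions/cache@v4
  with:
    path: ~/.cache/pip
    key: ${{ runner.os }}-pip-${{ matrix.python-version }}-${{ hashFiles('**/requirements.txt') }}
    restore-keys: |
      ${{ runner.os }}-pip-${{ matrix.python-version }}-
```

The `hashFiles` call means any change to `requirements.txt` invalidates the cache, while `restore-keys` lets an unchanged matrix leg fall back to the most recent cache for that OS and Python version.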
Job 2: Build
build:
  needs: test
Only runs after tests pass. Runs sam build and uploads the .aws-sam/ output as a GitHub artifact. The deploy job downloads this artifact instead of rebuilding — keeping the deploy fast and ensuring you deploy exactly what you built.
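The artifact hand-off between the two jobs can be sketched like this (the artifact name and action versions are illustrative, not taken from the repo):

```yaml
# in the build job
- name: Upload build output
  uses: actions/upload-artifact@v4
  with:
    name: sam-build
    path: .aws-sam/

# in the deploy job
- name: Download build output
  uses: actions/download-artifact@v4
  with:
    name: sam-build
    path: .aws-sam/
```

Because the deploy job downloads the exact `.aws-sam/` directory the build job produced, there is no chance of a rebuild drifting from what was tested.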
Job 3: Deploy
deploy:
  needs: build
  if: github.ref == 'refs/heads/main' && github.event_name == 'push'
  environment: production
  permissions:
    id-token: write
    contents: read
This is where it gets interesting. The deploy job:
- Only runs on push to `main` — PRs trigger test + build but never deploy
- Uses a GitHub Environment (`production`) — you can add protection rules like required reviewers
- Authenticates via OIDC — no static AWS keys anywhere
Setting Up OIDC Authentication (No Static Keys)
This is the part most people skip or get wrong. Instead of storing AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY as GitHub secrets (please don't), we use OpenID Connect to get short-lived credentials.
Here's the setup:
1. Create the OIDC identity provider in IAM:
aws iam create-open-id-connect-provider \
--url https://token.actions.githubusercontent.com \
--client-id-list sts.amazonaws.com \
--thumbprint-list 6938fd4d98bab03faadb97b34396831e3780aea1
2. Create an IAM role with a trust policy scoped to your repo + environment:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::<ACCOUNT_ID>:oidc-provider/token.actions.githubusercontent.com"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "token.actions.githubusercontent.com:aud": "sts.amazonaws.com",
          "token.actions.githubusercontent.com:sub": "repo:<OWNER>/<REPO>:environment:production"
        }
      }
    }
  ]
}
The sub condition is critical — it locks the role down to your repo's production environment. Nothing else can assume it.
3. Add the secret and variable in GitHub:
gh secret set AWS_ROLE_TO_ASSUME --body "arn:aws:iam::<ACCOUNT_ID>:role/GitHubActions-KiroDemo"
gh variable set AWS_REGION --body "us-east-1"
Notice AWS_ROLE_TO_ASSUME is a secret (sensitive) while AWS_REGION is a variable (not sensitive).
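In the workflow, the deploy job consumes both via the credentials step, roughly like this (the action version is my assumption):

```yaml
- name: Configure AWS credentials
  uses: aws-actions/configure-aws-credentials@v4
  with:
    role-to-assume: ${{ secrets.AWS_ROLE_TO_ASSUME }}
    aws-region: ${{ vars.AWS_REGION }}
```

With `permissions: id-token: write` set on the job, the action exchanges GitHub's OIDC token for short-lived credentials via `sts:AssumeRoleWithWebIdentity`, so nothing long-lived is ever stored.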
Full guide: Configuring OpenID Connect in AWS
The Result
Push to main, and the pipeline runs:
✅ test (3.11) — 8s
✅ test (3.12) — 11s
✅ build — 18s
✅ deploy — deploys to AWS via SAM
Three jobs, zero static credentials, matrix testing, pip caching, artifact passing between jobs. All generated by Kiro with a skill that encodes how I want my pipelines built.
What I'd Add Next
The pipeline is intentionally simple, but here's what you could ask Kiro to add:
- Multi-environment deploys — staging on `develop`, production on `main`, each with its own GitHub Environment and IAM role
- Linting — add `ruff` or `flake8` as a step in the test job
- SAM validate — run `sam validate` in the build job before `sam build`
- Notifications — Slack or Teams notification on deploy success/failure
Each of these is a single prompt to Kiro — and because the skill is in place, it'll follow the same conventions every time.
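For example, a ruff step in the test job might come out looking something like this (a sketch of plausible output, not what Kiro actually generates):

```yaml
- name: Lint with ruff
  run: |
    pip install ruff
    ruff check src/ tests/
```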
Try It Yourself
The full repo is here: github.com/habibmasri/kiro-cli-github-actions
Clone it, install Kiro CLI, and try asking it to modify the pipeline. The skill is included — it'll just work.