DEV Community

Kiro Powers: Give Your AI Agent Superpowers — Not Context Overload

How Kiro Powers bring on-demand expertise to the AI-Driven Development Lifecycle — demonstrated through four independent real-world use cases.

Table of Contents

  1. Introduction: The Context Problem in AI-Assisted Development
  2. What Is the AI-Driven Development Lifecycle (AI-DLC)?
  3. Enter Kiro Powers: On-Demand Expertise for AI Agents
  4. Kiro Powers Across the AI-DLC Phases
  5. Four Powers, Four Real-World Use Cases
  6. How Powers Fit Into Your Developer Workflow
  7. Building and Sharing Your Own Powers
  8. Conclusion: The Future of Agent Capabilities

1. Introduction: The Context Problem in AI-Assisted Development

You're building a checkout flow. You've used Stripe before, but you're still hunting through docs for the right pattern. Should you use idempotency keys here? What's the best way to handle webhooks? Your AI coding assistant should give you instant access to that framework expertise so you can ship faster. But here's the reality: without built-in knowledge, today's AI agents guess and iterate — just like you do.

This is the context problem in AI-assisted development, and it manifests in two ways:

Without framework context, agents guess. Your agent can call APIs, but does it know the right patterns and best practices? Does it understand connection pooling for serverless? Without built-in expertise, both you and your agent are manually reading documentation and refining approaches until the output is right. This trial-and-error repeats for every tool, every framework, every domain outside your core expertise.

With too much context, agents slow down. MCP (Model Context Protocol) servers were meant to solve this. But connect five MCP servers and your agent loads 100+ tool definitions before writing a single line of code. Five servers might consume 50,000+ tokens — 40% of your context window — before your first prompt. More tools should mean better results, but unstructured context overwhelms the agent, leading to slower responses and lower quality output. This is what the community calls context rot.

What if there was a way to give your AI agent exactly the expertise it needs, exactly when it needs it — and nothing more?


2. What Is the AI-Driven Development Lifecycle (AI-DLC)?

Before we dive into the solution, let's set the stage. AWS has introduced the AI-Driven Development Lifecycle (AI-DLC), a methodology that positions AI as a central collaborator — not just an assistant — throughout the entire software development process.

AI-DLC operates on two dimensions:

  • AI-Powered Execution with Human Oversight: AI creates detailed work plans, seeks clarification, and defers critical decisions to humans. Only humans possess the contextual understanding and knowledge of business requirements needed to make informed choices.
  • Dynamic Team Collaboration: As AI handles routine tasks, teams unite in collaborative spaces for real-time problem solving, creative thinking, and rapid decision-making.

The lifecycle flows through three phases:

| Phase | What Happens | AI's Role |
| --- | --- | --- |
| Inception | Business intent → detailed requirements, stories, and units | AI transforms intent into specs; team validates through "Mob Elaboration" |
| Construction | Validated context → architecture, code, and tests | AI proposes solutions; team clarifies technical decisions through "Mob Construction" |
| Operations | Accumulated context → infrastructure and deployments | AI manages IaC and deployments with team oversight |

Each phase provides richer context for the next, enabling AI to deliver increasingly informed suggestions. Traditional sprints are replaced by "bolts" — shorter, more intense work cycles measured in hours or days rather than weeks.

The key insight: AI-DLC requires AI agents that can dynamically access specialized knowledge across design, development, deployment, and observability. That's exactly what Kiro Powers delivers.


3. Enter Kiro Powers: On-Demand Expertise for AI Agents

Kiro Powers package MCP tools and framework expertise together and load them dynamically — a unified approach that spans a broad range of development and deployment use cases.

Think of it like Neo downloading martial arts in The Matrix. Powers give the Kiro agent instant access to specialized knowledge for any technology. The key difference from traditional MCP? Dynamic context loading.

3.1 The Problem Powers Solve

Here's a visual comparison of the traditional approach vs. Powers:

Traditional MCP (Everything Loaded at Once):

User starts task: "Add a database on Supabase"

Agent Context:
  ├── Figma MCP ........... 10+ tools loaded
  ├── Supabase MCP ........ 50+ tools loaded
  ├── Netlify MCP ......... 10+ tools loaded
  ├── Postman MCP ......... 80+ tools loaded
  └── Datadog MCP ......... 20+ tools loaded

  ⚠️ Context Overload: 170+ tools total
  → Slow responses, lower quality, irrelevant suggestions

With Kiro Powers (Dynamic Loading):

User starts task: "Add a database on Supabase"

Kiro analyzes task → Which powers are relevant?

  ✅ Supabase power → ACTIVATED (tools + best practices loaded)
  ⬜ Figma power → not loaded
  ⬜ Netlify power → not loaded
  ⬜ Postman power → not loaded
  ⬜ Datadog power → not loaded

  Agent Context: Only relevant tools
  → Fast responses, high quality, focused suggestions

Install five powers and your baseline context usage is near zero. Mention "design" and the Figma power activates. Switch to database work and Supabase activates while Figma deactivates. Your agent only loads tools relevant to the current task.

3.2 How Powers Work Under the Hood

When you start a task, Kiro:

  1. Reads the task description from your prompt or conversation
  2. Evaluates installed powers against the task using keyword matching from each power's frontmatter
  3. Loads only relevant powers into context — their MCP tools, steering files, and best practices
  4. Deactivates irrelevant powers as you switch contexts

This means you can have dozens of powers installed without any performance penalty. They activate only when the conversation touches their domain.
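In pseudocode, that activation step can be sketched roughly like this — the matching logic and data shapes are illustrative guesses for the sake of the sketch, not Kiro's actual implementation:

```javascript
// Illustrative sketch of keyword-based power activation — not Kiro's real code.
function selectActivePowers(prompt, installedPowers) {
  const text = prompt.toLowerCase();
  return installedPowers.filter((power) =>
    power.keywords.some((kw) => text.includes(kw))
  );
}

// Hypothetical installed powers with frontmatter-style keywords
const installed = [
  { name: 'supabase', keywords: ['database', 'supabase', 'postgres'] },
  { name: 'figma', keywords: ['design', 'figma', 'mockup'] },
  { name: 'datadog', keywords: ['monitoring', 'observability', 'datadog'] },
];

console.log(selectActivePowers('Add a database on Supabase', installed).map((p) => p.name));
// → [ 'supabase' ]
```

Only the Supabase power matches the task, so only its tools enter the agent's context — the other powers stay dormant.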

3.3 Anatomy of a Power

A power is a unified bundle that includes three components:

my-power/
├── POWER.md              # Entry point — onboarding manual for the agent
├── mcp.json              # MCP server configuration (tools + connection details)
└── steering/             # Workflow-specific guides (optional)
    ├── getting-started.md
    ├── best-practices.md
    └── advanced-patterns.md
| Component | Purpose |
| --- | --- |
| POWER.md | The steering file that tells the agent what MCP tools it has, when to use them, best practices, common workflows, and troubleshooting guidance |
| mcp.json | Connection details for the MCP server — can be local (STDIO) or remote (HTTP/SSE) |
| Steering files | Workflow-specific guides that load on-demand. Working on RLS policies? The agent loads rls-policies.md. Writing Edge Functions? It loads edge-functions.md. |

The POWER.md frontmatter defines activation keywords. For example, the Stripe power activates when you mention "payment," "checkout," "subscription," or "billing."
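As a rough illustration, extracting those keywords from the frontmatter might look like this — the frontmatter schema and the parsing helper are assumptions for the sketch, not Kiro's documented format:

```javascript
// Hypothetical POWER.md content — the keyword list mirrors the Stripe example above.
const powerMd = `---
name: stripe
keywords: [payment, checkout, subscription, billing]
---
# Stripe Power
...`;

// Illustrative parser: pull the keywords array out of the frontmatter block.
function activationKeywords(markdown) {
  const fm = markdown.match(/^---\n([\s\S]*?)\n---/);
  if (!fm) return [];
  const line = fm[1].split('\n').find((l) => l.startsWith('keywords:'));
  if (!line) return [];
  return line
    .replace(/^keywords:\s*\[|\]\s*$/g, '')
    .split(',')
    .map((s) => s.trim());
}

console.log(activationKeywords(powerMd));
// → [ 'payment', 'checkout', 'subscription', 'billing' ]
```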

3.4 What Makes Powers Different

| Feature | Traditional MCP | Kiro Powers |
| --- | --- | --- |
| Tool Loading | All tools loaded upfront | On-demand, keyword-activated |
| Context Usage | 50,000+ tokens for 5 servers | Near-zero baseline |
| Best Practices | Not included | Packaged in POWER.md and steering files |
| Installation | Manual JSON configuration | One-click from IDE or kiro.dev |
| Ecosystem | Find and configure individually | Curated partners + community + build your own |
| Cross-tool | Per-client configuration | Building toward cross-compatibility (Cursor, Claude Code, Kiro CLI) |

Launch partners include Datadog, Dynatrace, Figma, Neon, Netlify, Postman, Supabase, Stripe, and Strands Agents — with more coming from both software vendors and the open-source community.


4. Kiro Powers Across the AI-DLC Phases

Powers aren't just a developer convenience — they map directly to the AI-DLC methodology. Here's how powers serve each phase:

| AI-DLC Phase | Activity | Relevant Powers |
| --- | --- | --- |
| Inception | Architecture diagrams, system design | AWS Draw.io, Figma |
| Inception | Requirements validation, API contracts | Postman (spec generation) |
| Construction | Payment integration | Stripe |
| Construction | Database setup | Supabase, Neon, Aurora |
| Construction | AI agent development | Amazon Bedrock AgentCore, Strands |
| Construction | API testing | Postman |
| Operations | Deployment | Netlify |
| Operations | Observability | Datadog, Dynatrace |

Each power brings domain-specific expertise that would otherwise require hours of documentation reading. The agent doesn't just have tools — it has the knowledge of how to use them correctly.


5. Four Powers, Four Real-World Use Cases

Each power below is demonstrated through an independent, real-world scenario — the kind of task you'd actually face on a production team. For each, we walk through installation, activation, the step-by-step workflow, the best practices the power enforces, and what would go wrong without it.


5.1 Use Case 1 — AWS Draw.io: Designing a Multi-Region Disaster Recovery Architecture

AI-DLC Phase: Inception | Real-world scenario: Your team has been asked to design a multi-region active-passive disaster recovery (DR) strategy for a healthcare platform running on AWS. The CTO needs a professional architecture diagram for the board presentation by end of day. Normally this takes a solutions architect half a day with Lucidchart or draw.io — hunting for the right AWS icons, aligning subnets, color-coding regions.

What the power provides:

  • Draw.io native XML format generation (.drawio files)
  • Complete AWS, Azure, and GCP cloud icon libraries with correct mxgraph shape names
  • Architecture pattern templates: three-zone DR layouts, VPC/subnet nesting, hub-and-spoke topologies
  • Four steering files: architecture-patterns.md, cloud-icons.md, style-guide.md, branding.md

Step-by-step walkthrough:

Step 1 — Install (one-click):
Open the Kiro Powers panel → search "AWS Draw.io" → click Install. No API keys, no JSON config. The power is purely documentation-driven — it teaches the agent how to produce valid draw.io XML.

Step 2 — Describe your architecture in natural language:

"Design a multi-region active-passive DR architecture diagram. Primary region is us-east-1 with an ALB, ECS Fargate cluster, Aurora PostgreSQL, and S3. DR region is eu-west-1 with Aurora read replica, S3 cross-region replication, and a standby ECS cluster. Show Route 53 health-check failover at the top, a replication arrow between Aurora instances, and a failover section at the bottom."

The power activates on keywords like "architecture," "diagram," "flowchart," and "topology."

Step 3 — The agent generates a production-quality .drawio file:

The agent leverages the power's architecture-patterns.md steering to use the Three-Zone DR Layout pattern:

[Generated diagram: multi-region active-passive DR architecture]

Every component uses the correct AWS icon from the cloud-icons.md reference:

<!-- Aurora Primary — uses the official AWS database icon -->
<mxCell id="aurora-primary" value="Aurora PostgreSQL"
  style="sketch=0;outlineConnect=0;fontColor=#232F3E;fillColor=#C925D1;
  strokeColor=#ffffff;verticalLabelPosition=bottom;verticalAlign=top;
  align=center;html=1;fontSize=12;aspect=fixed;
  shape=mxgraph.aws4.resourceIcon;resIcon=mxgraph.aws4.aurora;"
  vertex="1" parent="1">
  <mxGeometry x="250" y="300" width="78" height="78" as="geometry" />
</mxCell>

<!-- VPC container with proper subnet nesting -->
<mxCell id="vpc-primary" value="VPC 10.0.0.0/16"
  style="sketch=0;outlineConnect=0;gradientColor=none;html=1;whiteSpace=wrap;
  fontSize=12;fontStyle=0;shape=mxgraph.aws4.group;
  grIcon=mxgraph.aws4.group_vpc;strokeColor=#8C4FFF;fillColor=none;
  verticalAlign=top;align=left;spacingLeft=30;dashed=0;"
  vertex="1" parent="1">
  <mxGeometry x="40" y="100" width="450" height="350" as="geometry" />
</mxCell>

The diagram follows AWS's official color conventions: Compute in #ED7100, Database in #C925D1, Networking in #8C4FFF, Storage in #7AA116, Security in #DD344C.
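Those conventions can be captured in a small helper. The palette values come from the article's style guide; the helper itself and its name are hypothetical, a sketch of how a generator might assemble mxCell style strings:

```javascript
// AWS architecture icon colors by service category (per the power's style guide).
const AWS_CATEGORY_COLORS = {
  compute: '#ED7100',
  database: '#C925D1',
  networking: '#8C4FFF',
  storage: '#7AA116',
  security: '#DD344C',
};

// Hypothetical helper: build a draw.io resource-icon style string for a service.
function awsIconStyle(resIcon, category) {
  return [
    'sketch=0', 'outlineConnect=0', 'html=1', 'aspect=fixed',
    `fillColor=${AWS_CATEGORY_COLORS[category]}`,
    'strokeColor=#ffffff',
    'shape=mxgraph.aws4.resourceIcon',
    `resIcon=mxgraph.aws4.${resIcon}`,
  ].join(';') + ';';
}

console.log(awsIconStyle('aurora', 'database'));
```

The output matches the Aurora style attribute in the XML above: database purple, white stroke, and the `mxgraph.aws4.aurora` shape.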

Step 4 — Open, edit, and present:
Open the .drawio file in VS Code (Draw.io extension) or at app.diagrams.net. Drag components to fine-tune layout, export to PNG/SVG/PDF for the board deck.

Best practices the power enforces:

  • Unique IDs on every mxCell (prevents rendering bugs)
  • Consistent 10-20px grid alignment for professional appearance
  • Parent hierarchy for nested elements (subnets inside VPCs)
  • Color-coded legend explaining service categories
  • Dashed arrows for failover paths, solid flex arrows for replication
  • Status indicators (green circle = active, grey = standby)

What goes wrong without the power: The agent produces generic rectangles with text labels. No AWS icons, no VPC grouping, no color conventions, no DR-specific layout patterns. You'd spend more time fixing the diagram than it would take to draw it manually.

Customization tip: Edit steering/branding.md to set your company's brand colors. Every diagram the agent generates will use your palette instead of the defaults.


5.2 Use Case 2 — Stripe Payments: Building a SaaS Subscription Platform with Usage-Based Billing

AI-DLC Phase: Construction | Real-world scenario: You're building a developer tools platform — think a hosted CI/CD service like a smaller-scale Vercel or Railway. You need tiered subscription plans (Free, Pro, Enterprise) plus usage-based billing for compute minutes that exceed the plan's included quota. This is one of the most complex Stripe integration patterns, combining Subscriptions, metered billing, Checkout, and webhooks.

What the power provides:

  • Stripe MCP server connection (https://mcp.stripe.com) for live API operations
  • stripe-best-practices.md steering file with Stripe's official integration guidance
  • Knowledge of Checkout Sessions, Payment Intents, Billing APIs, Connect, and deprecated API warnings
  • Workflow templates for one-time payments, subscriptions, refunds, and saved payment methods

Step-by-step walkthrough:

Step 1 — Install and authenticate:
Powers panel → search "Stripe" → Install. Enter your Stripe secret key (sk_test_...) when prompted. The power configures the MCP connection automatically:

{
  "mcpServers": {
    "stripe": {
      "url": "https://mcp.stripe.com"
    }
  }
}

Step 2 — Describe the billing model:

"Build a subscription billing system with three tiers: Free ($0/mo, 100 compute minutes), Pro ($49/mo, 2,000 compute minutes), and Enterprise ($199/mo, 10,000 compute minutes). Any usage beyond the included minutes should be billed at $0.02 per minute using metered billing. Use Stripe Checkout for the signup flow."

The power activates on "subscription," "billing," "payment," or "checkout."

Step 3 — The agent builds the product catalog:

The agent creates Products and Prices in Stripe following the power's guidance to use the Billing APIs for recurring revenue:

// Create the Pro plan with a base subscription price
const proProduct = await stripe.products.create({
  name: 'Pro Plan',
  description: '2,000 compute minutes included',
});

// Fixed monthly price
const proBasePrice = await stripe.prices.create({
  product: proProduct.id,
  unit_amount: 4900, // $49.00
  currency: 'usd',
  recurring: { interval: 'month' },
});

// Metered overage price — usage-based billing
const proOveragePrice = await stripe.prices.create({
  product: proProduct.id,
  currency: 'usd',
  recurring: {
    interval: 'month',
    usage_type: 'metered', // Key: enables usage-based billing
  },
  unit_amount: 2, // $0.02 per minute
  billing_scheme: 'per_unit',
});

Step 4 — Checkout Session with multiple line items:

const session = await stripe.checkout.sessions.create({
  mode: 'subscription',
  customer: customerId,
  line_items: [
    { price: proBasePrice.id, quantity: 1 },
    { price: proOveragePrice.id }, // No quantity — metered usage reported later
  ],
  success_url: 'https://app.example.com/welcome?session_id={CHECKOUT_SESSION_ID}',
  cancel_url: 'https://app.example.com/pricing',
});

Step 5 — Report usage for metered billing:

// At the end of each billing period (or in real-time), report compute usage
const usageRecord = await stripe.subscriptionItems.createUsageRecord(
  subscriptionItemId, // The metered line item's subscription item ID
  {
    quantity: 347, // 347 overage minutes this period
    timestamp: Math.floor(Date.now() / 1000),
    action: 'increment',
  }
);
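The `quantity: 347` above comes from the plan's quota math. A minimal sketch of that calculation — quotas are from the scenario; the helper itself is illustrative:

```javascript
// Included compute minutes per tier (from the billing model above).
const PLAN_QUOTAS = { free: 100, pro: 2000, enterprise: 10000 };

// Billable overage = minutes used beyond the plan's included quota, floored at zero.
function overageMinutes(plan, usedMinutes) {
  return Math.max(0, usedMinutes - PLAN_QUOTAS[plan]);
}

console.log(overageMinutes('pro', 2347)); // → 347 (reported as the usage record quantity)
console.log(overageMinutes('pro', 1500)); // → 0 (within quota, nothing to report)
```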

Step 6 — Comprehensive webhook handler:

app.post('/webhook', express.raw({ type: 'application/json' }), (req, res) => {
  const sig = req.headers['stripe-signature'];
  let event;

  try {
    event = stripe.webhooks.constructEvent(req.body, sig, webhookSecret);
  } catch (err) {
    return res.status(400).send(`Webhook Error: ${err.message}`);
  }

  switch (event.type) {
    case 'customer.subscription.created':
      // Provision the customer's plan tier and compute quota
      break;
    case 'customer.subscription.updated':
      // Handle plan upgrades/downgrades — adjust quotas
      break;
    case 'customer.subscription.deleted':
      // Revoke access, downgrade to Free tier
      break;
    case 'invoice.payment_failed':
      // Notify customer, implement grace period logic
      break;
    case 'invoice.finalized':
      // Usage-based invoice ready — log for internal billing dashboard
      break;
  }

  res.json({ received: true });
});

Best practices the power enforces (from stripe-best-practices.md):

| Practice | What the power does | What agents do without it |
| --- | --- | --- |
| Use Checkout Sessions | Always generates Checkout-based flows | May use deprecated Charges API |
| Dynamic payment methods | Omits payment_method_types, lets Stripe optimize | Hardcodes ['card'] |
| Idempotency keys | Adds idempotency keys on create operations | Skips them, risking duplicate charges |
| Webhook signature verification | Always includes constructEvent with secret | Parses raw JSON without verification |
| No API version in code | Follows Stripe's versioning guidance | Pins to an old API version |
| Sandbox-first development | Uses sk_test_ keys | May accidentally use live keys |
| Never expose secret keys | Server-side only, environment variables | May leak keys in client-side code |
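One of those practices is worth spelling out: an idempotency key must stay stable across retries of the same logical operation, so a retried request replays the original result instead of creating a duplicate. A sketch of one possible key scheme — the scheme and helper name are illustrative, not official Stripe guidance:

```javascript
// Sketch: deterministic idempotency keys for Stripe create calls.
// Same logical operation → same key → safe to retry.
function idempotencyKeyFor(operation, customerId, period) {
  return `${operation}:${customerId}:${period}`;
}

// With the Stripe Node SDK, the key is passed via the request-options
// (second) argument of a create call:
//
//   await stripe.checkout.sessions.create(
//     { mode: 'subscription', line_items },
//     { idempotencyKey: idempotencyKeyFor('checkout', customerId, '2024-06') }
//   );

console.log(idempotencyKeyFor('checkout', 'cus_123', '2024-06'));
// → checkout:cus_123:2024-06
```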

What the power prevents: The steering file explicitly blocks the agent from recommending the deprecated Charges API, the legacy Card Element, the Sources API for saving cards, or mixing Connect charge types. These are real-world mistakes that cost teams weeks of refactoring.


5.3 Use Case 3 — Postman: Contract-First API Development for a Microservices Migration

AI-DLC Phase: Construction + Operations | Real-world scenario: Your team is migrating a monolithic e-commerce backend into microservices. You have an existing OpenAPI 3.0 spec for the monolith, and you need to: (1) generate Postman collections from the spec, (2) create environment configurations for local, staging, and production, (3) add automated tests to every endpoint, and (4) set up continuous testing that runs whenever API code changes. This is a contract-first approach — the API spec is the source of truth.

What the power provides:

  • Postman MCP server (https://mcp.postman.com/minimal) with 40 tools in minimal mode (112 in full mode)
  • Tools spanning workspace management, collection CRUD, environment management, mock servers, API spec management, code generation, and test execution
  • steering.md with workflow patterns for collection generation, workspace creation, and test execution
  • Automatic hook setup for continuous testing on file changes

Step-by-step walkthrough:

Step 1 — Install and authenticate:
Powers panel → search "Postman" → Install. Set your Postman API key (generate at postman.com → Settings → API Keys). The power connects via SSE to Postman's hosted MCP server:

{
  "mcpServers": {
    "postman": {
      "url": "https://mcp.postman.com/minimal",
      "headers": {
        "Authorization": "Bearer ${POSTMAN_API_KEY}"
      }
    }
  }
}

Step 2 — Import the OpenAPI spec and generate collections:

"I have an OpenAPI spec at openapi.yaml for our e-commerce API. Create a Postman workspace called 'E-Commerce Microservices', import the spec, generate a collection from it, and set up local, staging, and production environments."

The power activates on "postman," "api," "testing," "collections," or "http."

Step 3 — The agent orchestrates the full setup:

// 1. Create a dedicated workspace
const { workspace } = await createWorkspace({
  workspace: { name: "E-Commerce Microservices", type: "team" }
});

// 2. Import the OpenAPI spec
const { spec } = await createSpec({
  workspaceId: workspace.id,
  name: "E-Commerce API v2",
  type: "OPENAPI:3.0",
  files: [{ path: "openapi.yaml", content: openApiContent }]
});

// 3. Generate a collection from the spec — contract-first!
const { collection } = await generateCollection({
  specId: spec.id,
  elementType: "collection",
  name: "E-Commerce API Tests"
});

// 4. Create environment configurations
const environments = {};
for (const [name, baseUrl] of [
  ["Local", "http://localhost:3000"],
  ["Staging", "https://staging-api.example.com"],
  ["Production", "https://api.example.com"]
]) {
  const { environment } = await createEnvironment({
    workspace: workspace.id,
    environment: {
      name,
      values: [
        { key: "base_url", value: baseUrl, enabled: true },
        { key: "api_key", value: `{{${name.toLowerCase()}_api_key}}`, enabled: true },
        { key: "auth_token", value: "", enabled: true }
      ]
    }
  });
  environments[name] = environment;
}

// 5. Save all IDs for future reference
// The power's steering mandates saving to .postman.json

Step 4 — Add test scripts to every request:

The agent adds post-request test scripts that validate:

  • HTTP status codes (200, 201, 400, 404, etc.)
  • Response schema matches the OpenAPI contract
  • Response time is under acceptable thresholds
  • Required headers are present (Content-Type, correlation IDs)
  • Business logic assertions (e.g., created resource has an ID)
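For illustration, here's the shape such a test script can take. `pm.test`, `pm.expect`, and `pm.response` are Postman's real sandbox APIs; the mock object below is ours, exists only so the snippet runs outside Postman, and should be dropped when pasting into a collection:

```javascript
// Stand-in for Postman's sandbox so this snippet runs under plain Node.
// Inside Postman, `pm` is provided by the runtime — keep only the tests below.
const mock = { status: 201, responseTime: 120, json: { id: 'prod_123' } };
const pm = {
  passed: [],
  test(name, fn) { fn(); this.passed.push(name); },
  response: {
    responseTime: mock.responseTime,
    json: () => mock.json,
    to: { have: { status(code) { if (mock.status !== code) throw new Error('status'); } } },
  },
  expect(value) {
    return {
      to: {
        have: { property(k) { if (!(k in value)) throw new Error('property'); } },
        be: { below(n) { if (!(value < n)) throw new Error('below'); } },
      },
    };
  },
};

// — Post-request test script the agent attaches to POST /products —
pm.test('Status code is 201 Created', () => pm.response.to.have.status(201));
pm.test('Response time under 500ms', () => pm.expect(pm.response.responseTime).to.be.below(500));
pm.test('Created resource has an id', () => pm.expect(pm.response.json()).to.have.property('id'));

console.log(`${pm.passed.length}/3 tests passed`);
// → 3/3 tests passed
```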

Step 5 — Run the collection and get a detailed report:

const results = await runCollection({
  collectionId: collection.uid,
  environmentId: environments["Local"].id
});

// The agent displays:
// ✅ GET /products — 200 OK (45ms)
// ✅ POST /products — 201 Created (120ms)
// ❌ GET /products/{id} — 404 Not Found (32ms) — Product ID not seeded
// ✅ POST /orders — 201 Created (89ms)
// ❌ PUT /orders/{id}/status — 500 Internal Server Error (210ms)
//
// 3/5 passed | 2 failures detected
// → Agent offers to investigate and fix the failing endpoints

Step 6 — Set up continuous testing with a Kiro hook:

The power's onboarding creates a hook that triggers Postman test runs whenever API source code changes:

{
  "name": "API Postman Testing",
  "version": "1",
  "when": {
    "type": "fileEdited",
    "patterns": [
      "*.ts", "*.js", "*.py", "*.go",
      "openapi.yaml", "openapi.yml", "swagger.yaml"
    ]
  },
  "then": {
    "type": "askAgent",
    "prompt": "API source code or configuration has been modified. Retrieve the .postman.json file. If it exists, get the collection ID and run the collection, showing results and proposing fixes for any errors found."
  }
}

Now every time you save a file, Postman tests run automatically — a continuous contract validation loop.

Step 7 — Keep spec and collection in sync:

As the API evolves, use the sync tools to maintain the contract:

// Spec changed? Sync the collection to match
await syncCollectionWithSpec({
  collectionId: collection.uid,
  specId: spec.id
});

// Collection changed (new tests added)? Sync back to spec
await syncSpecWithCollection({
  specId: spec.id,
  collectionId: collection.uid
});

Best practices the power enforces:

| Practice | How the power enforces it |
| --- | --- |
| Store IDs in .postman.json | Steering mandates saving workspace, collection, and environment IDs to a local file for reproducibility |
| Use environments, not hardcoded URLs | Agent always creates environment variables for base_url, api_key, etc. |
| No curl or external clients | Steering explicitly states: "You are not allowed to use curl or any other API clients except Postman" |
| Server must be running before tests | Agent verifies the API server is up before executing collection runs |
| Organized folder structure | Requests are grouped by resource (Products, Orders, Users) inside the collection |
| Post-request test scripts on every request | Agent adds validation scripts, not just request definitions |

What goes wrong without the power: The agent might use curl for testing (no persistence, no environments), skip test scripts, forget to save IDs, or create a flat collection without folder organization. The power brings a complete, repeatable testing workflow that survives across sessions.


5.4 Use Case 4 — Amazon Bedrock AgentCore: Building an Internal Knowledge Assistant for Enterprise

AI-DLC Phase: Construction + Operations | Real-world scenario: Your enterprise wants to build an internal knowledge assistant — an AI agent that employees can ask about company policies, HR procedures, technical documentation, and onboarding guides. The agent needs to: (1) use a foundation model via Amazon Bedrock, (2) maintain conversation memory so follow-up questions work naturally, (3) expose the agent through a managed gateway with authentication, and (4) deploy to production with proper infrastructure. This is a common enterprise pattern that AgentCore was built for.

What the power provides:

  • MCP tools: search_agentcore_docs, fetch_agentcore_doc, manage_agentcore_runtime, manage_agentcore_memory, manage_agentcore_gateway
  • Three steering files: getting-started.md (full create→dev→test→deploy workflow), agentcore-memory-integration.md, agentcore-gateway-integration.md
  • Support for multiple agent SDKs (Strands, Claude, OpenAI) and model providers (Bedrock, OpenAI)
  • Infrastructure deployment guidance via CDK or Terraform

Step-by-step walkthrough:

Step 1 — Install the power:
Powers panel → search "Bedrock AgentCore" → Install. No API keys needed at install time — the power uses your existing AWS credentials.

Step 2 — Describe the agent:

"Create a new AI agent using Bedrock AgentCore with the Strands framework. It should be an internal knowledge assistant that can answer questions about company documentation. Use Bedrock as the model provider and include conversation memory."

The power activates on keywords like "agent," "bedrock," "agentcore," or "strands."

Step 3 — Scaffold the project:

The agent follows the getting-started.md steering file, which specifies using --non-interactive mode for AI-driven workflows:

# Install the AgentCore toolkit
pip install bedrock-agentcore-starter-toolkit

# Create the agent project with defaults (Strands + Bedrock)
agentcore create --non-interactive \
  --project-name KnowledgeAssistant \
  --template basic \
  --agent-framework Strands \
  --model-provider Bedrock

This generates:

KnowledgeAssistant/
├── src/
│   └── main.py               # Agent entrypoint with @app.entrypoint decorator
├── .bedrock_agentcore.yaml   # Runtime configuration
├── pyproject.toml            # Dependencies
└── .venv/                    # Virtual environment (auto-created)

Step 4 — Customize the agent logic:

The agent modifies src/main.py to add knowledge retrieval capabilities:

from bedrock_agentcore import BedrockAgentCoreApp

app = BedrockAgentCoreApp()

@app.entrypoint
async def handle_request(prompt: str, session_id: str = None):
    """
    Internal knowledge assistant — answers questions about
    company policies, HR procedures, and technical documentation.
    """
    # The Strands framework handles model invocation,
    # tool orchestration, and response generation
    response = await agent.invoke(
        prompt=prompt,
        session_id=session_id,
        system_prompt="""You are an internal knowledge assistant for Acme Corp.
        Answer questions about company policies, HR procedures, benefits,
        technical documentation, and onboarding guides.
        Always cite the source document when answering.
        If you don't know the answer, say so clearly."""
    )
    return {"response": response}

Step 5 — Start the dev server and test locally:

# Terminal 1: Start dev server with hot reloading
agentcore dev

# Terminal 2: Test the agent
agentcore invoke --dev '{"prompt": "What is our PTO policy for new employees?"}'

# Expected response:
# ✓ Response from dev server:
# {
#   "response": "According to the Employee Handbook (Section 4.2),
#    new employees receive 15 days of PTO in their first year..."
# }

The power's steering emphasizes a critical development loop: make a change → save → dev server auto-reloads → test with agentcore invoke --dev → verify → repeat.

Step 6 — Add conversation memory:

The agent uses the manage_agentcore_memory MCP tool to understand memory configuration:

# .bedrock_agentcore.yaml — updated configuration
memory:
  mode: STM_ONLY  # Short-term memory for conversation continuity

The power's steering includes a critical deployment ordering rule:

Deploy with NO_MEMORY first. Memory integration should be added after the initial successful deployment. The agent code can include memory session manager logic — it simply won't persist when NO_MEMORY is configured. Once the agent is running, update to STM_ONLY or STM_AND_LTM and redeploy.

This prevents a common failure mode where developers try to deploy with memory enabled before the memory resources exist.

Step 7 — Configure the gateway for authenticated access:

The agent uses the manage_agentcore_gateway MCP tool to set up a managed API endpoint:

# The MCP tool provides the exact CLI commands needed
agentcore gateway create \
  --name knowledge-assistant-gateway \
  --auth-type IAM

# Add the agent as a target
agentcore gateway add-target \
  --gateway-name knowledge-assistant-gateway \
  --agent-name KnowledgeAssistant

This creates a managed HTTPS endpoint with IAM authentication — employees access the agent through your company's SSO.

Step 8 — Deploy to production:

# Configure the deployment entrypoint
agentcore configure --entrypoint src/main.py --non-interactive

# Deploy to AgentCore runtime (initially without memory)
agentcore launch

# Verify deployment
agentcore status

# Test the deployed agent
agentcore invoke '{"prompt": "How do I request a hardware upgrade?"}'

# Once confirmed working, enable memory and redeploy
# Update .bedrock_agentcore.yaml: memory.mode → STM_ONLY
agentcore launch

# When done testing, clean up
agentcore stop-session    # Free active session resources
agentcore destroy --dry-run  # Preview what will be deleted
agentcore destroy            # Remove all resources

Best practices the power enforces:

| Practice | How the power enforces it |
| --- | --- |
| Non-interactive mode for AI workflows | Steering specifies --non-interactive flag — prevents the agent from hanging on interactive prompts |
| Memory deployment ordering | Steering mandates NO_MEMORY first, then upgrade after successful deployment |
| Test after every change | Steering marks this as "CRITICAL" — always run agentcore invoke --dev after code changes |
| Use manage_agentcore_* MCP tools first | Steering says "DO NOT attempt to manually configure memory/gateway without first consulting this tool" |
| Existing agent protection | Steering warns: "agentcore create is ONLY for new projects — using it on an existing agent will overwrite your code" |
| Entrypoint auto-detection | Dev server reads .bedrock_agentcore.yaml for the entrypoint, falls back to src.main:app |

What goes wrong without the power: Developers commonly try to deploy with memory enabled before resources exist (deployment fails), use agentcore create on an existing project (code overwritten), skip local testing (bugs discovered only in production), or manually configure gateways without understanding the required CLI sequence. The power's three steering files (getting-started.md, agentcore-memory-integration.md, agentcore-gateway-integration.md) prevent all of these pitfalls.


6. How Powers Fit Into Your Developer Workflow

Here's how to think about powers in your daily workflow — each power slots into the moment you need it, regardless of the project:

Architecture review → Inception phase:

"Design a multi-region DR architecture for our healthcare platform"
→ Draw.io power activates, produces a .drawio file with proper AWS icons and DR patterns

Feature development → Construction phase:

"Add usage-based subscription billing with Stripe"
→ Stripe power activates with best practices — Checkout Sessions, metered billing, webhook verification

"Build an internal knowledge assistant with Bedrock AgentCore"
→ AgentCore power activates with the full create→dev→test→deploy workflow

Quality assurance → Construction + Operations phase:

"Import our OpenAPI spec into Postman and generate test collections"
→ Postman power activates, creates workspace, generates collections, sets up environments

"Run all API tests before we merge"
→ Postman power runs collections, reports pass/fail by endpoint, offers to fix failures

Deployment → Operations phase:

"Deploy the knowledge assistant to AWS"
→ AgentCore power guides through agentcore launch with memory ordering best practices

The beauty is that you never manage context manually. You just describe what you need, and the right power activates with the right tools and the right knowledge. When you switch from payments to testing, Stripe deactivates and Postman activates. Your context window stays clean and focused.

Pro tips for power users:

  • Install powers proactively — even if you don't need them today. They cost zero context until activated.
  • Let keywords do the work — you don't need to explicitly say "use the Stripe power." Just mention "payment" or "checkout" and it activates.
  • Leverage steering files — powers often include multiple steering files for different workflows. The agent loads the right one based on your current task.
  • Use hooks — powers like Postman can set up hooks that automatically run tests when you edit code, creating a continuous testing loop.
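The hooks idea in the last bullet can be sketched as a file-edit trigger. The exact schema below is an assumption for illustration; check the hook files Kiro generates under `.kiro/hooks/` for the authoritative shape:

```json
{
  "name": "Run API tests on edit",
  "when": { "type": "fileEdited", "patterns": ["src/**/*.ts"] },
  "then": {
    "type": "askAgent",
    "prompt": "Run the Postman collections for the affected endpoints and report pass/fail by endpoint."
  }
}
```

With a hook like this in place, every save of a matching file re-runs the relevant tests, which is the continuous testing loop described above.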

7. Building and Sharing Your Own Powers

Powers aren't just for consuming — you can build your own. This is especially powerful for teams with internal tools, custom frameworks, or domain-specific knowledge.

A power is structured as:

my-custom-power/
├── POWER.md              # Agent instructions, workflows, best practices
├── mcp.json              # MCP server config (optional)
└── steering/             # Additional workflow guides (optional)
    └── my-workflow.md

The POWER.md is the heart of the power. It tells the agent:

  • What tools are available and when to use them
  • Best practices and anti-patterns to avoid
  • Common workflows with step-by-step instructions
  • Troubleshooting guidance for common errors

Sharing options:

  • GitHub — Import powers from any GitHub URL
  • Local directories — Point to a local folder for private team powers
  • Kiro marketplace — Publish for the community

Example use cases for custom powers:

  • Your company's design system and component library
  • Internal API standards and code review guidelines
  • Infrastructure-as-code patterns specific to your organization
  • Domain-specific knowledge (healthcare compliance, financial regulations, etc.)

8. Conclusion: The Future of Agent Capabilities

Kiro Powers represent a fundamental shift in how AI agents acquire and use knowledge. Instead of loading everything upfront and hoping for the best, powers enable continual, on-demand learning — the agent downloads exactly the expertise it needs, when it needs it.

This matters for three reasons:

  1. For developers: You work faster in unfamiliar domains. Need to integrate Stripe? The agent already knows the best practices. Need to build an AI agent? The agent already knows the deployment workflow. You focus on business logic; the power handles the framework expertise.

  2. For tool providers: You package your expertise once and it works everywhere. No more maintaining separate documentation for every AI tool. One POWER.md, one set of steering files, and your users get guided experiences across any compatible IDE.

  3. For the AI-DLC methodology: Powers are the mechanism that makes AI a true collaborator across all phases — from inception (DR architecture diagrams with Draw.io) through construction (SaaS billing with Stripe, contract-first testing with Postman, enterprise agents with AgentCore) to operations (deployment, observability). Each power brings domain expertise that would otherwise require hours of documentation reading.

The vision is clear: AI agents that don't just have tools, but have the wisdom to use them correctly. Not by knowing everything upfront, but by learning what they need, when they need it, and continuously expanding their expertise as the tools around them evolve.

Get started today:


Kiro Powers are available today in Kiro IDE, with cross-compatibility for other AI development tools coming soon. Launch partners include Datadog, Dynatrace, Figma, Neon, Netlify, Postman, Supabase, Stripe, and Strands Agents.

