A few weeks ago, an opportunity came up for my team to collaborate with our data science team on a small prototype: building and deploying an MCP server. Nothing massive. No big roadmap commitment. Just a chance to explore something new and see if it could be useful.
I took it. Not only because the topic was interesting, but because it gave me an excuse to do something I don’t get to do often anymore: get my hands dirty again.
I wanted to learn how MCP servers actually work, what capabilities AWS AgentCore offers (and what alternatives exist), and whether this whole AI-native development hype holds up outside inspirational talks and LinkedIn influencer posts.
It also felt like a good opportunity to experiment with an AI-native SDLC in practice, not just as a concept, but as a way of working.
I used AI to structure my learning, bootstrap experiments, and iterate on ideas quickly. Partly for myself, and partly to understand how this could help drive AI adoption more broadly across our organisation.
This experience sparked two ideas:
- This would make a good conference talk
- I should write about it while it’s still fresh, messy, and honest, and create a series of posts where I learn in public.
The Talk (Rough, Temporary Abstract)
Vibecoding in Between Meetings: Building & Deploying MCP Servers with Kiro and AgentCore Runtime
Engineering Managers and Staff+ engineers often share the same fate: back-to-back meetings while the builder inside silently screams. This talk explores how adopting an AI-Native SDLC helped me reclaim my building time—allowing me to learn a new topic, structure my exploration, and build a proof of concept in the gaps between meetings.
We’ll go from zero to deployed: starting with a simple stdio MCP server, evolving it into an HTTP MCP server, validating everything with the MCP Inspector, and finally deploying it on AWS using AgentCore Runtime. Along the way, we’ll lean on Kiro agents, personas, and prompts, as well as MCP servers, to accelerate learning loops, automate documentation, and capture architectural decisions.
I’m hoping to bring this talk to a conference sometime next year. This series is, in many ways, the raw material for it.
Where We Are Right Now
We’ve just wrapped up phase one of the proof of concept:
- A first stdio MCP server
- An HTTP version of it, run locally in Docker and tested with MCP Inspector
- A deployment to our AWS infrastructure with AgentCore Runtime
That’s a solid foundation—but there’s a lot we haven’t explored yet. AgentCore has more components we need to understand. Costs and operational pitfalls need to be evaluated. And most importantly, we need to figure out how (or if) this fits into our actual product.
Which is why this series isn’t a fixed plan.
What’s Coming (Subject to Change)
Here’s the rough structure I’m starting with. The content and even the number of posts may change as development continues over the next month or two.
- From “No Time to Build” to an AI-Native SDLC
- Getting Started with Kiro: Setup, Powers, and Steering the AI
- MCP Servers Explained: The Missing Layer Between AI and Your Systems
- Vibecoding a First MCP Server: Building a stdio MCP for Internal Use
- From stdio to HTTP: Evolving an MCP Server with FastMCP
- From Local to the Cloud: Running MCP Servers on AgentCore Runtime
- Beyond the Runtime: Gateway, Identity, and Memory in AgentCore
Some posts may split. Others may merge. New ones might appear as we uncover things we didn’t expect, especially around integration, security, and cost.
If you’re an Engineering Manager, a Staff+ engineer, or anyone trying to stay technically sharp despite spending your days trapped in meetings, I hope you find this series interesting and useful. My goal isn’t to present perfect solutions: it’s to show how (or if) AI can really become a multiplier, helping us learn faster, validate earlier, and build again, even when time is scarce.
Let’s see where this goes.