DEV Community

Bram Verhagen

How I accidentally built a production tool with Amazon Q and stubbornness

I am not a developer. I want to be upfront about that.

During my bachelor's degree I took a few courses in C, C++ and Java. I understood enough to pass. Later in my career I did some self-study in Python and Terraform, enough to read code, follow what it does and occasionally hack something together. But writing software has never been my job, and I have never pretended otherwise.

I am an infrastructure architect and cloud consultant. I think in systems, not in syntax. I know what needs to happen; I just rarely write the thing that makes it happen.

Which made what I did next slightly out of character.

The problem: syncing Docker images from Artifactory to AWS ECR

The situation was straightforward enough, at least on paper.

A supplier was delivering a third-party application we needed to host in AWS. They provided their Docker images through a JFrog Artifactory instance. Our workloads ran in AWS, which meant we needed those images in Amazon ECR, Amazon's own container registry.

There were two concrete reasons for this. First, good CI/CD practice: pulling images directly from a supplier's Artifactory during a deployment pipeline introduces an external dependency you do not control. If their registry is unavailable, your pipeline fails. Second, we needed the images in ECR to run Amazon Inspector and Shield Advanced scanning with our own company configuration. You cannot point those tools at an external registry.

So the requirement was clear: when a new image appears in Artifactory, it should automatically show up in ECR. Sounds simple.

It is not. Behind that single sentence sits a surprisingly awkward set of problems. You need to authenticate to two different systems with entirely different credential models: Artifactory tokens on one side, AWS IAM and Secrets Manager on the other. You need to compare what exists in each registry without pulling every image to find out. You need to handle image tags correctly, filter out what you do not need, avoid re-uploading layers that already exist in ECR and do all of this reliably on a schedule, without managing any infrastructure.

I looked for an existing tool that handled this combination. Nothing quite fitted. So I decided to build one.

Enter vibe coding

So I opened Amazon Q Developer and started describing my problem.

Vibe coding, if you have not come across the term, is roughly what it sounds like. Instead of writing code yourself, you describe what you want to an AI coding assistant. It writes the code, explains what it has done and you iterate from there. Back and forth, refining and correcting, until you have something that works. You are the architect; the AI is the developer.

In practice, it felt surprisingly natural for someone with my background. I could describe the problem in infrastructure terms: registries, authentication, layers, manifests, schedules. Amazon Q translated that into working Terraform and Python. When I did not understand something it had written, I asked. It explained. When the output was not quite right, I pushed back and we tried again.

I want to be honest, though: it is not magic. Amazon Q is a capable assistant, but it has opinions, blind spots and a habit of steering you towards certain solutions whether or not they are the right ones for your situation. It also, occasionally, makes things up. More on both of those shortly.

Working with Amazon Q: the good and the frustrating

Working with Amazon Q felt productive from the start. It understood the problem quickly, asked sensible clarifying questions and got me to a first working prototype faster than I had any right to expect. For someone who does not write code for a living, that early momentum matters.

But it did not take long to run into the limitations.

The most instructive example was the question of how to actually move images between registries. There are two fundamentally different approaches to this.

The first is to use the Docker CLI. You authenticate to both registries, run docker pull to download the image, docker tag to relabel it and docker push to upload it to ECR. It works. Most developers know it. It is the obvious answer if you think about the problem in familiar terms.

The second approach is to bypass the Docker CLI entirely and use the Docker Registry V2 API together with the AWS SDK. Instead of pulling the full image to disk, you fetch the image manifest, transfer the individual layers directly between registries via HTTP and push the manifest to ECR using boto3. No Docker installation required. No large temporary files on disk. Faster, leaner and far better suited to running inside a Lambda function.

The second approach is clearly the right one for this use case. A Lambda function has no Docker daemon. Installing and running the Docker CLI inside Lambda is technically possible, but it is the wrong tool for the job.
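To make the difference concrete, here is a minimal sketch of the API-based flow, not the actual code from the tool. The endpoint path and `Accept` header follow the Docker Registry HTTP API V2 spec; `fetch_manifest` and `layers_to_transfer` are hypothetical helper names, and the ECR client is passed in (it would be a `boto3.client("ecr")` in practice) so the delta logic stays testable.

```python
import json
import urllib.request

MANIFEST_V2 = "application/vnd.docker.distribution.manifest.v2+json"

def fetch_manifest(registry_url: str, repo: str, tag: str, token: str) -> dict:
    """Fetch an image manifest over the Registry V2 API; no docker pull, no disk."""
    req = urllib.request.Request(
        f"{registry_url}/v2/{repo}/manifests/{tag}",
        headers={"Authorization": f"Bearer {token}", "Accept": MANIFEST_V2},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

def layers_to_transfer(manifest: dict, ecr_client, repo: str) -> list[str]:
    """Return only the layer digests that ECR does not already have."""
    digests = [layer["digest"] for layer in manifest["layers"]]
    resp = ecr_client.batch_check_layer_availability(
        repositoryName=repo, layerDigests=digests
    )
    available = {
        layer["layerDigest"]
        for layer in resp.get("layers", [])
        if layer.get("layerAvailability") == "AVAILABLE"
    }
    return [d for d in digests if d not in available]
```

Missing layers would then be streamed between registries with the ECR `initiate_layer_upload` / `upload_layer_part` / `complete_layer_upload` calls, and the manifest registered with `put_image`. Nothing touches local disk and nothing requires a Docker daemon.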

Amazon Q understood this in principle. When I explained the constraints (serverless, no Docker daemon, Lambda environment), it would agree and produce code using the API approach. Good. But then, a few exchanges later, it would quietly drift back. A new function would appear that shelled out to docker pull. A suggestion would sneak in to add Docker as a dependency. Without any fanfare, the Docker CLI approach would be back on the table.

I lost count of how many times I had to steer it back. Not because Amazon Q was wrong about Docker (the CLI approach does work), but because it kept defaulting to the familiar pattern rather than holding the constraint in mind.

This taught me something useful: vibe coding requires genuine engagement. You cannot simply accept what the AI produces. You need enough understanding of the problem to recognise when the output is technically correct but contextually wrong. The AI brings the syntax; you still have to bring the judgement.

The solution: what the tool actually does

So what did I actually build?

The tool is a Python Lambda function, deployed with Terraform, that runs on a schedule via Amazon EventBridge. It authenticates to Artifactory using credentials stored in AWS Secrets Manager and accesses ECR through an IAM role, with no static AWS credentials anywhere near the code.
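A hedged sketch of what that credential wiring might look like inside the handler. The secret name and its JSON shape are my assumptions, not the tool's actual configuration, and the Secrets Manager client is passed in as a parameter so the logic can be exercised without AWS.

```python
import json

def artifactory_credentials(sm_client, secret_id: str) -> tuple[str, str]:
    """Read the Artifactory username/token from Secrets Manager.

    sm_client is a boto3 Secrets Manager client. Because the Lambda's IAM
    role grants access to both the secret and ECR, no static AWS
    credentials appear anywhere in the code.
    """
    raw = sm_client.get_secret_value(SecretId=secret_id)["SecretString"]
    secret = json.loads(raw)  # assumed shape: {"username": ..., "token": ...}
    return secret["username"], secret["token"]

def handler(event, context):
    # EventBridge invokes this on a schedule. boto3 is imported here only
    # to keep this sketch importable without the AWS SDK installed.
    import boto3
    user, token = artifactory_credentials(
        boto3.client("secretsmanager"),
        "artifactory/sync",  # hypothetical secret id
    )
    # ... query Artifactory, diff against ECR, transfer only the delta ...
```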

On each run it does something conceptually simple but practically fiddly: it queries Artifactory for the images and tags that are available, then checks ECR to see what is already there. The function transfers only the delta. If a layer already exists in ECR, it skips it. If a tag has not changed, it skips that too. The goal was to make it cheap to run frequently, not just correct when it runs.
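At the tag level, "transfer only the delta" is conceptually just a set difference. A sketch under the same caveat (the function names are mine, not the tool's; the paginator call is the standard boto3 way to list ECR image tags):

```python
def existing_ecr_tags(ecr_client, repo: str) -> list[str]:
    """Collect every tag already present in an ECR repository."""
    tags: list[str] = []
    paginator = ecr_client.get_paginator("describe_images")
    for page in paginator.paginate(repositoryName=repo):
        for image in page["imageDetails"]:
            tags.extend(image.get("imageTags", []))  # untagged images have none
    return tags

def tags_to_sync(artifactory_tags: list[str], ecr_tags: list[str]) -> list[str]:
    """Tags present in Artifactory but not yet in ECR, in source order."""
    existing = set(ecr_tags)
    return [t for t in artifactory_tags if t not in existing]
```

Unchanged tags fall out of the diff entirely, which is what makes frequent scheduled runs cheap: most runs transfer nothing.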

Filtering is available but opt-in. If you want to narrow down which images and tags get synced, you can provide substring filters. Anything containing one of those strings gets included; everything else is skipped. If you do not configure any filters, everything in the repository is synced. It is simple and it works, though it is worth knowing that it uses plain substring matching rather than anything more sophisticated, such as globs or regular expressions.

The whole thing is open-sourced and available on GitHub:
artifactory_ecr_sync

I want to be clear about what it is and what it is not.

It is not a polished, enterprise-grade product. The code is functional, but a non-developer wrote it with AI assistance and the code quality and conventions reflect that. Do not expect clean abstractions, rigorous error handling throughout or code that would sail through a professional review.

What it is: a tool that solves a specific problem, offered as-is to anyone who faces the same one. If it needs adapting for your environment, the code is there to read and modify. I provide no support and make no promises.

From experiment to production

It works. Probably.

The tool has been running in a real production environment since I finished building it. As far as I can tell, it has been reliable. When new images show up in Artifactory, they appear in ECR. The Lambda runs on schedule without complaint. I have not hit any show-stopping issues since the initial round of testing and fixing.

That said, I want to be direct about what this actually is: a personal experiment to learn more about vibe coding. Not a supported product, not a hardened solution and not something I am actively developing or maintaining.

I provide no guarantees, warranties or assurances of any kind. If you use it, you do so entirely at your own risk. Please read the code before deploying it anywhere. It is not long and it is not complicated. It works for me, in my environment. Your mileage may vary.

What I learned – and what this means for (non-)developers

So, should you try it?

Vibe coding genuinely lowers the barrier to building software. That is not a small thing. Problems that would previously have required finding a developer, writing a brief, waiting for a sprint slot and iterating over weeks can now be explored in an afternoon. For someone like me, technical enough to understand the problem but not a developer, that is a meaningful change.

But it does not eliminate the need for judgement. If anything, it makes judgement more important. The Docker CLI example is instructive: Amazon Q never produced code that was outright broken, but it repeatedly produced code that was wrong for the context. Catching that required understanding the problem deeply, not just accepting output that looked reasonable.

The hallucinations matter too. Amazon Q occasionally produced code with confident references to SDK methods that did not behave as described, or subtle logic errors that only surfaced during testing. The fix is straightforward: test everything and read the actual documentation when something does not work. But it does mean you cannot be passive.

This has implications beyond non-developers like me. For professional developers, AI increasingly handles the heavy lifting. Boilerplate, scaffolding, routine implementation: Amazon Q is genuinely good at all of it. What that leaves is the work that always required a skilled developer: understanding the real constraints, making architectural decisions, recognising when a technically correct solution is contextually wrong and knowing which questions to ask in the first place.

The creativity, the critical thinking and the judgement are still entirely human. What changes is where developers spend their time, and that means the skills worth developing are shifting too. Less syntax, more systems thinking. Less implementation, more direction.

If you have a specific, well-defined problem and no developer available, vibe coding is worth trying. The tools are capable. The main constraint is your own clarity about what you actually want.

Go build something.
