
Rak

AI sped up development, not shipping

AI can generate Terraform, but generating it was never the hard part — trusting and deploying it is.

The way I build software has changed more in the last year than in the decade before it. Take a decision like whether to process a new order with a background job or a webhook. Before AI tooling, the workflow looked something like this: identify the problem, spend time researching the tradeoffs, check how others had solved it in similar systems, form a plan, write the code, test it locally, then do everything needed to actually ship it. The whole thing, from identifying the problem to having it running in production, might take a couple of days, or a couple of weeks, depending on the feature.

Now I give an agent the context it needs: the codebase, the outcome, the constraints. Each option comes back as working code (usually) in minutes. The research, the planning, and the implementation are compressed. We're now looking at hours, not days, and that includes the time it takes to review the result and wrestle with the AI to reach the end state I had in mind.

That's not unique to me. I see it both in my organization and amongst my dev friends: AI is fundamentally changing the first half of the journey from problem to production. What hasn't changed much is everything that happens after the code is written.

Every application has a shadow project

Behind every application in production there's a second project that nobody outside the immediate team ever sees. It's the infrastructure-as-code that provisions your databases and caches, the Dockerfile, and the CI/CD pipeline config that builds and ships the containers and sets the environment variables across staging and prod.

The two projects depend on each other but move independently. Add a service and the deployment project needs matching infrastructure. Add a queue and the pipeline config needs updating. Even something as trivial as changing a port can require an update to the infrastructure configuration.
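To sketch that coupling, here's roughly what even a port change can touch on the Terraform side. This is an illustrative fragment, assuming an AWS setup with a load balancer in front of the service; the variable and resource names are hypothetical, not from any real project:

```hcl
# The app's listen port, duplicated into the shadow project.
# Change it in the service and everything below must change too.
variable "app_port" {
  default = 8080
}

# The load balancer needs to forward traffic to the new port.
resource "aws_lb_target_group" "app" {
  port     = var.app_port
  protocol = "HTTP"
  vpc_id   = var.vpc_id
}

# The security group needs to allow ingress on the new port.
resource "aws_security_group_rule" "app_ingress" {
  type              = "ingress"
  from_port         = var.app_port
  to_port           = var.app_port
  protocol          = "tcp"
  security_group_id = var.app_sg_id
  cidr_blocks       = ["10.0.0.0/16"]
}
```

And that's just the Terraform side; the Dockerfile's `EXPOSE` line and any health-check config have their own copies of the same number.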

During my review of the background job I found the performance wasn't good enough, fed that back to the agent, and it came back with a revised approach using a Redis cache. Once I had Redis running locally via Docker it worked perfectly, but from where the agent sits, the application code is the whole picture. It has no idea that Redis now needs to be provisioned, that the pipeline config needs an environment variable, or that staging and prod need to stay in sync.
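For a rough idea of what that one Redis suggestion implies on the other side, here's a hypothetical sketch of the shadow-project change, assuming AWS ElastiCache. Every name, version, and size here is an illustrative assumption, not something the agent produced:

```hcl
# Provision the Redis the agent silently started depending on.
resource "aws_elasticache_cluster" "orders_cache" {
  cluster_id         = "orders-cache"
  engine             = "redis"
  node_type          = "cache.t4g.micro"   # sizing is a guess until reviewed
  num_cache_nodes    = 1
  subnet_group_name  = aws_elasticache_subnet_group.main.name
  security_group_ids = [aws_security_group.redis.id]
}

# The connection string the pipeline now has to inject into the app,
# in staging and prod separately -- none of which the agent knows about.
output "redis_url" {
  value = "redis://${aws_elasticache_cluster.orders_cache.cache_nodes[0].address}:6379"
}
```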

Keeping the two projects in sync was already one of the harder parts of backend development. Teams hired platform engineers to manage it, and it was still a common source of production incidents. AI didn't create the problem; it just made it untenable. Now an agent can introduce new infrastructure requirements every time I prompt it, and it has no idea the shadow project exists.

Just generate the Terraform at the same time

The obvious response is to have the agent generate the deployment project alongside the application code. When the agent adds Redis, have it also write the Terraform module, the pipeline config, and the environment variables at the same time so both projects stay in sync.

The thing is, generating Terraform isn't the hard part. It's not that different from adapting an old project or selecting from a library of templates. The hard part is trusting it.

When an agent writes application code, there's a wealth of open source references, community patterns, and established conventions to validate against. IaC is different. It's privately held, idiosyncratic, and often tailored specifically for the application it's coupled to. There's no easy way to know whether what you're looking at reflects best practice or just how the last person happened to solve it. A misconfigured IAM policy or an incorrectly sized database doesn't throw an exception or print a stack trace. It either silently costs money or takes down production in a way that's genuinely difficult to debug.
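To make that concrete, here's an illustrative pair of IAM policies in Terraform. Both apply cleanly and both let a hypothetical uploads feature work; nothing at runtime distinguishes them, and only a careful review catches that the first grants far more than the app needs. The names and bucket ARN are made up:

```hcl
# Too broad: every S3 action on every bucket in the account.
# Terraform won't complain, and the app will run fine.
resource "aws_iam_policy" "uploads_broad" {
  name = "uploads-access-broad"
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect   = "Allow"
      Action   = "s3:*"
      Resource = "*"
    }]
  })
}

# Scoped: only the actions the app actually performs, only on its bucket.
resource "aws_iam_policy" "uploads_scoped" {
  name = "uploads-access-scoped"
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect   = "Allow"
      Action   = ["s3:GetObject", "s3:PutObject"]
      Resource = "arn:aws:s3:::my-app-uploads/*"
    }]
  })
}
```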

Whether you write it yourself or generate it, someone has to review it before it touches production. That means verifying IAM policies aren't overly permissive, checking security groups are scoped correctly, confirming instance sizing makes sense for the workload, and running static analysis on the output before applying through staging and production. The work shifts from writing Terraform to reviewing AI-generated Terraform, and because you didn't write it, you have to go through everything with a fine-tooth comb.

Terraform wasn't built for this

I want to be clear that this isn't unique to Terraform. The same arguments apply to pretty much all IaC tools. The workflow Terraform enforces exists for good reason: plan, review, approve, apply, verify, repeat across environments. Infrastructure mistakes are expensive and sometimes irreversible, and when you're exposing your entire cloud provider's API surface including every IAM policy, every network rule, and every storage configuration, that level of scrutiny is genuinely warranted. The problem is that workflow was designed for infrastructure that changes infrequently and deliberately. It assumes the person writing the Terraform understands the full context of what they're configuring.

AI-assisted development breaks both of those assumptions at once. Infrastructure changes aren't just new resources, they're configuration updates, scaling adjustments, environment variables, and things being removed. During early development and every time a feature is added or removed, the rate of change spikes. Those are exactly the moments when you're moving fastest and leaning on an agent the most.

Think about what just happened with Redis: one performance issue raised during an iteration with the AI, and the codebase had a new infrastructure dependency. That wasn't a deliberate decision made in a planning meeting; it was a side effect of a five-minute conversation. The person reviewing is no longer reading decisions they made themselves; they're auditing a stream of generated output that can change the infrastructure in ways that are easy to miss and expensive to get wrong. The review process exists to catch exactly those mistakes, but it wasn't designed to run at this pace. So I'm either keeping up thorough reviews and becoming the bottleneck, or skipping them and accepting the risk.

I just want to show off my app

Frontend teams went through this transition years ago and it wasn't just that Vercel was more convenient. It's that the pace of frontend development made the old model impractical. Frameworks were proliferating, build tooling was changing constantly, and the effort required to keep Nginx configs, SSL certs, and CI/CD pipelines current with all of it had quietly become a job in itself.

When I first used Vercel for a frontend project, what struck me most was being able to show someone what I'd built straight away, without configuring a million things and hoping everything was right. Even when there were a few kinks to work out, like bot protection or firewall rules, they could be resolved in minutes.

Teams that moved to Vercel didn't do it because they couldn't manage their own infrastructure. Many of them were very good at it. They did it because the return on that investment had collapsed.

Push to production

There are a few ways to close the gap between writing code and shipping it. Some teams I've spoken to are adapting their review processes, with a lighter touch for low-risk changes and stricter gates before production. Others are building out, or further adapting, internal platforms that encode guardrails and reduce per-change overhead. Both are reasonable, but neither gets you out of the review cycle; they just make it more manageable.

There's a specific feeling most backend developers rarely get to experience: finishing something and being able to share it immediately.

That's what we've been working towards with Suga, and it could be an interesting option if you don't want to maintain a shadow project. The review, the auditability, and the environment parity still matter; the question is whether they have to come bundled with a separate repo, a pile of YAML, and a review workflow that wasn't designed for the pace we're now working at.

Suga Canvas

Connect your repository, define your services on a canvas, and a push to your branch handles the rest.
