Anthony Barbieri

Dispatch From the Other Side: Don't Let the Gap Grow

This is the fourth and final post of the Dispatch From the Other Side series.
Links to previous posts:

This week, one of the teams I work with released an internal API that provides an inventory of provisioned tenants for one of our platforms. I handed the OpenAPI spec to a coding agent. Within 10 minutes, I had a CLI and an agent skill, so in an operational or security response scenario I can simply ask an LLM "Who owns that resource?" Development teams are shipping code at that same pace. Can your security tooling keep up?

As I explored in my It Depends post, security practitioners can intervene earlier than ever before. Security expertise can be directly provided to coding agents, addressing issues before they're even committed to a codebase. This benefits both security teams and development teams by driving the cost to fix a security issue to near zero.

This is the promise shift left was always chasing. It lets security teams scale their impact. Rather than chasing an issue already in production, security guidance delivered directly to coding agents is high leverage by design.

In the Designing for Leverage post, we explored the example of using a minimal base image for containers instead of one with unnecessary packages that create extra work for development teams. A simple instruction to an LLM can enforce that standard at the point of code generation:

When a base image is needed for Python, use the one available at company.registry.com/minimal-python:version
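The same standard can also be verified mechanically, so the instruction to the agent is backed by a check. A minimal sketch, assuming the registry path from the instruction above and a simple Dockerfile scan (a real gate would live in CI):

```python
import re

# Approved base image prefix, taken from the instruction above
APPROVED_PREFIX = "company.registry.com/minimal-python:"


def uses_approved_base(dockerfile_text: str) -> bool:
    """Return True only if every FROM line pulls the approved minimal image."""
    froms = re.findall(r"^FROM\s+(\S+)", dockerfile_text,
                       flags=re.MULTILINE | re.IGNORECASE)
    return bool(froms) and all(img.startswith(APPROVED_PREFIX) for img in froms)


print(uses_approved_base(
    "FROM company.registry.com/minimal-python:3.12\nRUN pip install flask\n"
))  # True
print(uses_approved_base("FROM python:3.12\n"))  # False
```

The instruction fixes the issue at generation time; a check like this keeps the control in place even when code arrives from somewhere else.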

Security teams already analyze vulnerability and misconfiguration data to identify patterns. Those patterns often turn into documentation, tickets, or training. Now they can become agent instructions instead.

A couple of years ago, most conversations about LLMs focused on hallucinations. Today development teams are already shipping code with them. I saw something similar during the transition to cloud computing. Teams that weren't curious enough about the shift were left behind. I don't want to see the same thing happen with Generative AI.

I've found that it's best to approach the space with curiosity. I'm building my understanding of the new surface area that capabilities like Model Context Protocol and code execution introduce. I'm also exploring how to defend against attacks like prompt injection. Navigating this change alongside developers builds the kind of trust we discussed in the Aligned Incentives post.

The growth in my career has come from following a simple playbook: learn how a system works, find its points of leverage, and use them to scale my impact. Security teams will always be outnumbered by development teams, and any process that relies on a smaller team manually reviewing a larger team's work won't scale. Figure out how to remove yourself from the equation while ensuring the control stays in place. The technologies will keep changing. The playbook won't.
