Anthony Barbieri
Dispatch From the Other Side: Designing for Leverage

This is Part 2 of the series. You can read part 1 here!

When I first started in the industry, security reviews were still mostly manual. Security was the department of "no" and could hold up a quarterly release until it was satisfied that all required controls were in place. This was before the DevOps movement had reached large enterprises, and cycle times for any production change could span months.

As I saw the adoption of code pipelines grow within my company, I also saw the security industry try to modernize alongside it. DevSecOps and "shifting left" became all the rage. While this allowed security practitioners to catch issues earlier and reduce the cost to fix them, breaking the build too often created friction.

In a CI/CD world, every push triggers a pipeline. If that pipeline takes 10–15 minutes, that delay scales across the organization. Multiplied by hundreds or thousands of engineers, those minutes become real drag on delivery.

Legacy code scanning tools were usually limited to ad-hoc testing rather than being part of the pipelines because of how long they took to run. While tools have improved since then, having to context-switch to patch a newly flagged vulnerability in the middle of other work doesn't create fans of security tooling.

Code pipelines can also be the means of responding to operational incidents. I have watched a security scan block the deployment of an incident fix, breaking the build on its way to production and lengthening recovery time.

While the "shift left" concept promised a lot, the industry stopped short of how far it could go. High-quality, fast scanners can be added to pipelines, but it's even better to address issues before they are ever introduced into a codebase in the first place.

Rather than detecting and adding to an ever-growing pile of misconfigurations and vulnerabilities, security teams can partner with internal shared-service teams to reduce the surface area altogether. Encryption at rest can be on by default, rather than a parameter a development team has to remember to set. There is no new detection to triage in the pipeline, and all future instances of the problem disappear. This is where "paved roads" start to demonstrate their value.
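As a minimal sketch of the idea, imagine a hypothetical internal provisioning helper (the function and setting names here are illustrative, not any specific platform's API). Team code never touches the encryption setting, so there is nothing to forget:

```python
# Paved-road defaults that always win. Teams can add their own settings,
# but these keys are not team-facing knobs.
SECURE_DEFAULTS = {
    "encryption_at_rest": True,
    "block_public_access": True,
}

def build_bucket_config(team_settings: dict) -> dict:
    """Combine team-provided settings with non-negotiable secure defaults."""
    config = dict(team_settings)
    # Applied last, so a team-supplied value can never silently weaken them.
    config.update(SECURE_DEFAULTS)
    return config
```

The point is the direction of the merge: the shared service owns the secure values, and a team that needs an exception has to go through an explicit, reviewable path rather than flipping a flag.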

Similarly, with the growing adoption of containers, a well-maintained minimal base image eliminates vulnerabilities that have nothing to do with the application. Instead of patching OS packages the application never uses, shared base images provide only what's needed to run it. Addressing the problem systematically drops vulnerability counts significantly, rather than merely automating the response to individual detections.
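One common shape for this is a multi-stage build that ships a minimal runtime image (the images below are public examples of the pattern, not a stand-in for any particular internal registry):

```dockerfile
# Build stage: full toolchain, never shipped to production.
FROM golang:1.22 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /app ./cmd/server

# Runtime stage: a minimal base with no shell or package manager,
# so CVEs in unused OS packages never show up in the scan at all.
FROM gcr.io/distroless/static-debian12
COPY --from=build /app /app
ENTRYPOINT ["/app"]
```

A platform team that maintains and rebuilds these base images centrally fixes an OS-level vulnerability once, instead of asking every application team to patch it.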

When some flexibility beyond a silent, secure default is needed, I've found that registering the opt-out explicitly helps with auditability. When encryption is on by default, forcing a deliberate setting like `encryption_disabled: true` is easier to detect than a missing configuration line.
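To make the asymmetry concrete, here is a rough audit sketch (the key name and pattern are assumptions for illustration): an explicit opt-out line is a grep-able signal, whereas noticing a *missing* encryption key would require knowing every resource's schema.

```python
import re

# Matches a deliberate, recorded opt-out line in a YAML-style config.
OPT_OUT = re.compile(r"^\s*encryption_disabled:\s*true\b", re.MULTILINE)

def find_opt_outs(config_text: str) -> list[int]:
    """Return the 1-based line numbers where an explicit opt-out appears."""
    return [
        config_text[: m.start()].count("\n") + 1
        for m in OPT_OUT.finditer(config_text)
    ]
```

A check like this can run in the pipeline or as a periodic inventory job, turning exceptions into a reviewable list instead of invisible gaps.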

For these controls and systematic approaches to be effective, security practitioners must strive to understand the platforms in use and how developers consume them. Not everyone needs to be a subject-matter expert, but key partnerships and systems thinking will get you further than chasing individual findings.

Identifying the source of a problem and solving it there, rather than treating the symptoms, is what separates reactive security from effective security. With code generation only getting faster with generative AI, security teams need high-leverage approaches to reduce risk in the environment. In our next post we'll explore ways of working, incentive structures, and earning credibility.
