
Patience Mpofu

What Building a SAST Tool Taught Me About AppSec That 13 Years of Software Engineering Didn't

I've been writing software professionally since 2011.

Java, C#, Kotlin, Node.js. Enterprise backends, microservices, APIs, data pipelines. I've shipped production code that millions of people have used without knowing it. I've led teams, reviewed architectures, mentored junior engineers, and done all the things that accumulate into what people call "senior software engineer."

And yet, when I decided to transition into application security, I realised I had significant blind spots — not about how software works, but about how software fails. Specifically, how it fails in ways that attackers can exploit.

This is the final article in a series about building a SAST scanner from scratch, embedding it in CI/CD pipelines, writing custom detection rules, and managing false positives. But it's really about what that whole process taught me about application security as a discipline — and what I wish I'd understood earlier.


I Knew How to Write Secure Code. I Didn't Know Why It Was Secure.

Here's an embarrassing admission: I've been using parameterised queries for SQL for at least a decade. I knew you were supposed to use them. I used them every time. I would have told you confidently that they prevent SQL injection.

But if you'd asked me, before I started studying AppSec seriously, to explain why they prevent SQL injection — the actual mechanism — I would have given you a hand-wavy answer about "the database handling it separately."

Building the SQL injection detection rule forced me to get precise. I had to understand exactly what makes "SELECT * FROM users WHERE id = " + userId dangerous, what makes "SELECT * FROM users WHERE id = ?" with a bound parameter safe, and why the difference matters at the level of how the database parses and executes the statement.

The answer — that parameterised queries send the query structure and the data in separate messages, so the database never attempts to parse the data as SQL syntax — is not complicated. But I didn't actually know it at that level of precision until I had to write a rule that distinguishes between the two patterns.
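To make that mechanism concrete, here's a minimal Python sketch using the standard-library sqlite3 module (the article's point is language-agnostic; sqlite3 just stands in for any database driver):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id TEXT, name TEXT)")
conn.execute("INSERT INTO users VALUES ('1', 'alice'), ('2', 'bob')")

user_id = "1' OR '1'='1"  # attacker-controlled input

# Dangerous: the input is concatenated into the SQL text, so the
# database parses the attacker's payload as query syntax.
unsafe_rows = conn.execute(
    "SELECT name FROM users WHERE id = '" + user_id + "'"
).fetchall()
print(unsafe_rows)  # both users come back: the OR clause ran as SQL

# Safe: the query structure is sent with a placeholder; the value is
# bound separately and is never parsed as SQL syntax.
safe_rows = conn.execute(
    "SELECT name FROM users WHERE id = ?", (user_id,)
).fetchall()
print(safe_rows)  # [] -- no user has that literal string as an id
```

The same payload that rewrites the concatenated query's logic is just an inert string when it arrives as a bound parameter.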

This was a theme throughout the project. I knew the what of secure coding from years of following conventions and best practices. Building detection rules forced me to learn the why — the actual attack mechanics that the conventions are defending against.

The lesson: Knowing the secure pattern is not the same as understanding the vulnerability. For a software engineer, the secure pattern is enough to write safe code. For an AppSec engineer, you need to understand the attack, because your job is to find it when someone else didn't write the safe pattern.


Security Is an Adversarial Discipline

Software engineering is largely a collaborative discipline. You're building something. The goal is for it to work. Your mental model of the system is oriented around the happy path — the flow where inputs are valid, networks are reliable, and users do what you expect.

AppSec is adversarial. The mental shift required is genuinely disorienting at first.

When I was building the JWT algorithm none rule, I had to think like someone who wants to forge authentication tokens. Not because I want to do that, but because unless I understand exactly how the attack works — what the attacker controls, what assumptions the vulnerable code makes, what the exploit chain looks like — I can't write a rule that reliably detects it.
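Here's a stdlib-only Python sketch of that exploit chain. The verifier functions and the SECRET are illustrative, not code from the series' scanner; the point is what the attacker controls (the token's declared algorithm) and the assumption the vulnerable code makes (that the declared algorithm can be trusted):

```python
import base64
import hashlib
import hmac
import json

SECRET = b"server-signing-key"  # hypothetical server-side HMAC key

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def b64url_decode(s: str) -> bytes:
    return base64.urlsafe_b64decode(s + "=" * (-len(s) % 4))

# The attacker forges a token claiming admin rights and declares
# alg "none" -- no signature at all.
header = b64url(json.dumps({"alg": "none", "typ": "JWT"}).encode())
payload = b64url(json.dumps({"sub": "attacker", "admin": True}).encode())
forged = f"{header}.{payload}."

def verify_naive(token: str) -> dict:
    """Vulnerable: trusts the algorithm declared inside the token."""
    h, p, sig = token.split(".")
    alg = json.loads(b64url_decode(h))["alg"]
    if alg == "HS256":
        expected = hmac.new(SECRET, f"{h}.{p}".encode(), hashlib.sha256).digest()
        if not hmac.compare_digest(b64url(expected), sig):
            raise ValueError("bad signature")
    # alg == "none" falls through with no signature check at all
    return json.loads(b64url_decode(p))

def verify_pinned(token: str) -> dict:
    """Safe: the server pins HS256 and never reads alg from the token."""
    h, p, sig = token.split(".")
    expected = hmac.new(SECRET, f"{h}.{p}".encode(), hashlib.sha256).digest()
    if not hmac.compare_digest(b64url(expected), sig):
        raise ValueError("bad signature")
    return json.loads(b64url_decode(p))

print(verify_naive(forged))  # forged admin claims accepted
```

The fix is the same in every JWT library: the server, not the token, decides which algorithms are acceptable.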

This is the skill that 13 years of software engineering didn't develop: adversarial thinking. The question isn't "does this code do what it's supposed to do?" It's "how could someone make this code do something it's not supposed to do?"

The OWASP Top 10 is, at its core, a catalogue of the assumptions developers make that attackers exploit. A03 — Injection assumes that input is data, not instructions. A07 — Identification and Authentication Failures assumes that the code correctly validates identity. A02 — Cryptographic Failures assumes that encryption means the data is protected.

Every category is a place where the developer's mental model of the system diverges from what an attacker can actually do to it. Understanding OWASP deeply means understanding those divergences — not as a checklist, but as a way of thinking.

The lesson: You can't find vulnerabilities you can't imagine. Developing adversarial thinking — the habit of asking "how could this go wrong for someone who wants it to go wrong" — is the most important cognitive shift in the AppSec transition.


Tools Are Amplifiers, Not Answers

Before I built my own SAST tool, I used SAST tools. And I treated them roughly like a compiler warning: something fires, I look at it, I decide whether to fix it or ignore it.

Building one changed how I think about what a SAST tool actually is.

A SAST tool is a codified set of heuristics about what vulnerable code looks like. Those heuristics are written by humans, based on human understanding of vulnerability patterns, with human decisions about confidence levels and severity ratings. The tool doesn't know your codebase. It doesn't know your threat model. It doesn't know whether the finding it just generated is actually exploitable in your specific deployment context.

This sounds like a criticism. It isn't. It's a description of a tool's appropriate role.

When I run Snyk or Semgrep now, I engage with the results differently than I did before. I ask: what pattern is this rule trying to catch? Is that pattern present in my code for the reason the rule assumes? Does the vulnerability the rule targets actually apply in my context? What would an attacker need to control to exploit this?

Those are AppSec questions, not DevOps questions. A DevOps mindset treats SAST output as a compliance gate. An AppSec mindset treats it as a starting point for analysis.

The lesson: A SAST scanner is a signal generator, not an oracle. The value it provides is proportional to the quality of thinking applied to its output — not to the number of findings it generates or suppresses.


False Positives Taught Me About Risk Tolerance

Every time I suppressed a finding in my own scanner, I had to make a decision: is this actually safe, and how confident am I?

That turns out to be the central skill of AppSec: structured risk assessment under uncertainty.

You almost never have complete information. You can't always trace every data flow through a complex system. You can't always know whether a finding is exploitable without building a proof of concept. You have to make a judgment call about whether the risk is acceptable given what you know.

What I learned from managing false positives is that risk tolerance is not a feeling — it's a position that needs to be documented and defensible. "I suppressed this because it looked fine" is not a risk assessment. "I suppressed this because the data being processed is always from our internal configuration system and never from user input, as confirmed by tracing the call stack in lines 42–67" is a risk assessment.
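One way to force that discipline is to make the suppression record itself structured. This is a hypothetical format of my own, not a standard SAST suppression schema, but it captures the fields a defensible decision needs:

```python
from dataclasses import dataclass
from datetime import date

# Illustrative structure for a defensible suppression record --
# the field names are my own, not a standard format.
@dataclass
class Suppression:
    rule_id: str        # which rule fired
    location: str       # file and line of the finding
    justification: str  # why the finding is not exploitable here
    evidence: str       # what was actually checked to support that claim
    reviewed_by: str
    review_date: date
    expires: date       # forces re-review instead of rotting forever

record = Suppression(
    rule_id="SQLI-001",
    location="config_loader.py:42",
    justification="Query input comes only from the internal configuration "
                  "system, never from user input.",
    evidence="Traced every call site of the loader; none accept request data.",
    reviewed_by="pmpofu",
    review_date=date(2025, 1, 15),
    expires=date(2025, 7, 15),
)
```

The expiry field is the part I'd argue for hardest: a suppression that was safe when written is a claim about the codebase at a point in time, and the codebase changes.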

The difference matters when something goes wrong. And in security, things go wrong.

The lesson: Risk assessment is a core AppSec competency, not a soft skill. Developing a structured, documented approach to risk decisions — even informal ones — is more valuable than any specific technical knowledge.


The Gap Between Writing Secure Code and Finding Insecure Code

These are related skills. They are not the same skill.

Writing secure code is a constructive activity. You know what you're building. You apply secure patterns. You follow established conventions. The feedback loop is relatively tight — if you use parameterised queries, you know you're not vulnerable to SQL injection there.

Finding insecure code is a forensic activity. You're examining code you didn't write, often without full context, looking for patterns that indicate vulnerability. The feedback loop is loose — you might flag something, triage it, determine it's a false positive, and never know whether your triage was correct.

The cognitive skills are different. Construction requires knowing the secure pattern. Detection requires knowing the vulnerable pattern and all its variations. It requires understanding which variations are genuinely dangerous and which are contextually safe. It requires maintaining a mental model of an attacker's perspective while reading code that was written from a developer's perspective.

I've spent 13 years getting good at construction. Building this scanner was the first systematic exercise I did in detection. It was harder than I expected — not technically, but cognitively. Shifting from "I'm building this thing to work" to "I'm looking for ways this thing could be exploited" is a genuine gear change.

The lesson: AppSec is not "software engineering plus security knowledge." It's a different cognitive discipline that happens to use the same raw material. Senior software engineers making this transition should expect a genuine learning curve, not just a knowledge gap.


What I'd Tell Someone Starting This Transition

If you're a software engineer moving into AppSec — or considering it — here's what I'd tell you based on this project and the broader transition.

Build something. Reading about OWASP is useful. Reading CVE writeups is useful. Neither teaches you what building a detection rule teaches you. The act of translating "this is a vulnerability" into "this is what the vulnerable code looks like in text" forces a precision of understanding that passive learning doesn't produce.
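To show how small that first step can be, here's a deliberately minimal sketch of a detection rule: flag string concatenation feeding a SQL query. Real rules (including the ones in this series) need far more context — data flow, sanitisers, confidence scoring — so treat this as a starting exercise, not a working scanner:

```python
import re

# Naive heuristic: a quoted SQL statement immediately followed by
# string concatenation with a variable. Expect false positives.
SQL_CONCAT = re.compile(
    r"""["'](?:SELECT|INSERT|UPDATE|DELETE)\b[^"']*["']\s*\+\s*\w+""",
    re.IGNORECASE,
)

def scan(source: str) -> list[tuple[int, str]]:
    """Return (line_number, line) pairs matching the vulnerable pattern."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        if SQL_CONCAT.search(line):
            findings.append((lineno, line.strip()))
    return findings

code = '''
safe = db.execute("SELECT * FROM users WHERE id = ?", (user_id,))
risky = db.execute("SELECT * FROM users WHERE id = " + user_id)
'''
for lineno, line in scan(code):
    print(f"line {lineno}: possible SQL injection: {line}")
```

Even a rule this crude forces the precision the paragraph above describes: you can't write the regex until you can state, in text, exactly what the vulnerable code looks like and how it differs from the safe variant.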

Study the attacks, not just the defences. Most of your software engineering career was spent learning defences — secure patterns, safe APIs, frameworks that handle the dangerous parts for you. AppSec requires understanding the attacks those defences are designed against. Read exploit writeups. Understand how CVEs actually work. Build your own vulnerable applications and attack them.

Get comfortable with ambiguity. Software engineering has right answers. Does this code compile? Does this test pass? Does this function return the correct value? AppSec often doesn't. Is this finding exploitable? Is this suppression justified? Is this risk acceptable? These questions frequently don't have clean answers, and developing comfort with that ambiguity is part of the transition.

Use your engineering background as a superpower, not a crutch. The thing that makes engineers valuable in AppSec is the ability to read code at scale, understand system architecture, and reason about data flows — skills most pure security professionals develop slowly. Use that. But don't assume that understanding how the code is supposed to work means you understand how it can be broken.

Write about what you're learning. This series started as a way to document my own thinking. Every article forced me to be more precise about something I thought I understood. The act of explaining something to someone else reveals the gaps in your own understanding faster than almost anything else.


Where This Goes Next

Building this scanner and writing this series was one project. The transition is ongoing.

The next project is taking an old Java service and doing something I haven't done yet in this series: running Snyk against a real dependency tree on real legacy code, remediating real CVEs, and measuring the before-and-after security posture with actual metrics.

That's a different kind of AppSec work — Software Composition Analysis rather than static analysis, dependency vulnerabilities rather than code vulnerabilities, Snyk's recommendations rather than my own rules. But the underlying skills are the same: understand the attack, assess the risk, make a defensible decision, measure the outcome.

The transition from software engineer to AppSec engineer is not a destination. It's an ongoing process of developing adversarial thinking, structured risk assessment, and the forensic discipline of finding what's broken rather than building what works.

Thirteen years in, I'm still learning. That's the right state to be in.


The full SAST tool that this series was built around is at github.com/pgmpofu/sast-tool.

If this series was useful to you — or if you're making a similar transition and want to compare notes — I'd genuinely like to hear from you. Find me here on dev.to or connect on LinkedIn.
