We take a look at 3 important AppSec tools and 8 metrics you should track over time.
What isn’t measured can’t be managed. Application security today is an increasingly data-driven practice that benchmarks success on measurable improvements in code quality and code security. But which metrics are the right ones, and what do they mean? This post covers the metrics you should be monitoring for application security and why they matter.
Application security is the practice of making sure that software applications are secure from attack. Application security teams apply security measures at the application level with the goal of preventing attackers or unauthorized systems from accessing data or code within the app. Application security is responsible for securing applications against a broad range of potential attacks, including data exfiltration, data poisoning, malware installation, and account takeover, to name a few. Application security not only covers application-level security controls but is also responsible for making code as secure as possible during the software development process in order to pre-empt later attacks. This includes teaching secure coding practices. More recently, application security has “shifted left,” empowering developers to perform application security screens and checks earlier in the development process and more frequently, often at the pull-request level, before a commit hits the main trunk of a codebase.
In today’s world of ubiquitous connectivity, application security is increasingly critical because today’s applications have more attack surface than ever. As applications decompose into microservices and APIs, the concept of a self-contained application morphs into that of a discrete bundle of connected services packaged for a specific purpose. These services are still networked to the Internet and inextricably linked to the outside world. In addition, today’s applications are increasingly composed of third-party code such as libraries, frameworks, and SDKs. It is the duty of the application security (AppSec) team to make sense of this rapidly changing and expanding application universe by constantly testing application code for weaknesses and checking whether known Common Vulnerabilities and Exposures (CVEs) remain unpatched. The modern DevOps movement, with its high-velocity deployment of new code, is forcing AppSec to move faster, test earlier, and automate more of its testing processes in the CI/CD pipeline. Beyond speed and automation challenges, there are so many CVEs today that the real challenge facing AppSec teams is effectively triaging alerts to zero in on the most “attackable” vulnerabilities.
There is a common set of useful application security tools that every AppSec team should consider. These tools should be validated against the OWASP Benchmark, the gold standard for accuracy and sensitivity of application security testing. Unlike more manual techniques such as penetration testing, these tools generate metrics.
SAST scans an application’s source, binary, or byte code and looks for vulnerabilities in the code. Because it is looking at the actual source code, SAST is a “white-box” testing tool. SAST can be focused on the specific languages in use in an application, rather than on general behaviors. Because SAST looks at code from the inside, it typically does not have a view into compounded behaviors or interactions with the third-party code and libraries that are increasingly included in modern applications. SAST can provide immediate feedback to developers, pointing them to specific vulnerabilities. Because traditional SAST has a limited view of application behavior, it tends to generate a high percentage of false positives. To get the most out of SAST, organizations should run it frequently so results stay relevant in environments where code changes rapidly.
- Scans actual source code
- Can provide rapid feedback to developers
- Focuses on languages in use
- Can’t understand application behaviors
- High risk of reporting false positives
- Slow; traditional scans get outdated quickly
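The pattern-matching core of a static scan can be sketched in a few lines of Python. This is a deliberately naive illustration, not how production SAST engines work (they track data flow across the whole program); the rules and the sample snippet below are invented for the example:

```python
import re

# Hypothetical, minimal rule set: each regex maps a source-code pattern to a
# finding description. Real SAST engines analyze data flow, not just text.
RULES = {
    r"\beval\s*\(": "Use of eval() - possible code injection",
    r"\bpickle\.loads\s*\(": "Deserialization of untrusted data",
    r"password\s*=\s*[\"'][^\"']+[\"']": "Hard-coded credential",
}

def scan_source(source: str):
    """Return (line_number, finding) pairs for every rule match."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for pattern, message in RULES.items():
            if re.search(pattern, line):
                findings.append((lineno, message))
    return findings

sample = 'user_input = "2+2"\nresult = eval(user_input)\npassword = "hunter2"\n'
for lineno, message in scan_source(sample):
    print(f"line {lineno}: {message}")
```

Because a textual rule like this cannot see how data actually flows through the program, it flags every match regardless of reachability, which is exactly why naive static scanning produces so many false positives.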
SCA tools are used for managing the open source components included in complex application architectures. Like SAST, SCA is a white-box tool, analyzing applications from the inside out. SCA allows development and AppSec teams to track and trace open source components in their applications, identifying critical dependencies. This is increasingly important in the realm of supply chain security. Because open source libraries are often nested inside other libraries, SCA can identify indirect dependencies that are invisible to developers unless they closely examine the source code and architecture of those libraries. SCA tools are also used for software license detection and analysis and for automatically upgrading open source packages and libraries to the latest versions. SCA scans generate a software Bill of Materials (BOM), a useful inventory of all open source code included in an application. More advanced versions can even provide insights into how an attack might proceed down a specific data path. Some vendors in this space also supply secured versions of libraries that have not been secured by their maintainers. That said, SCA can introduce problems when automatic library upgrades break the application or disrupt software repositories.
- Provides critical visibility into open source dependencies
- Relatively easy to run on a continuous basis
- Dynamically upgrades outdated libraries
- Cannot provide insights into non-open source code and how it interacts with open source code
- Dynamic upgrading of libraries can break applications
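A minimal sketch of the SCA idea: compare a dependency manifest against a vulnerability database and emit a simple bill of materials. The package names, versions, and CVE identifiers below are all made up for illustration; real tools resolve transitive dependencies and query live advisory feeds such as the NVD:

```python
# Toy vulnerability "database" keyed by (package, version). Entirely invented.
KNOWN_VULNS = {
    ("examplelib", "1.0.0"): ["CVE-0000-0001"],
    ("otherlib", "2.3.1"): ["CVE-0000-0002"],
}

def build_bom(manifest: dict) -> list:
    """Produce a simple bill of materials with any matching advisories."""
    bom = []
    for package, version in sorted(manifest.items()):
        advisories = KNOWN_VULNS.get((package, version), [])
        bom.append({"package": package, "version": version,
                    "advisories": advisories})
    return bom

manifest = {"examplelib": "1.0.0", "safelib": "4.2.0"}
for entry in build_bom(manifest):
    status = ", ".join(entry["advisories"]) or "no known advisories"
    print(f'{entry["package"]}=={entry["version"]}: {status}')
```

The hard part a real SCA tool adds on top of this lookup is resolving the full transitive dependency tree, since most vulnerable packages enter an application indirectly.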
DAST is a tool that analyzes the security of applications by attempting simulated attacks against exposed surfaces such as the front-end or APIs. DAST has no knowledge of the code; it is “black box” testing. A DAST scanner simulates known attacks from the MITRE ATT&CK framework and identifies anomalous results that could indicate security vulnerabilities.
- Application-independent; does not require much tuning
- Finds vulnerabilities that are likely to be exploited
- Does not require access to the source code
- Cannot be run early in the development process or against portions of an application
- Cannot pinpoint the location of a vulnerability in code
- Results can be complicated to interpret
- Testing can take a lot of time
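The black-box probing idea behind DAST can be illustrated with a toy endpoint. The “application” here is just a hypothetical Python function that unsafely reflects input; a real DAST scanner sends requests over the network and inspects HTTP responses rather than calling a function:

```python
# Invented attack payloads in the style of a DAST probe.
ATTACK_PAYLOADS = [
    "' OR '1'='1",                 # SQL-injection style probe
    "<script>alert(1)</script>",   # reflected-XSS style probe
]

def vulnerable_search_endpoint(query: str) -> str:
    # Stand-in "application": echoes user input into the response unescaped.
    return f"<html>Results for {query}</html>"

def probe(endpoint) -> list:
    """Flag attack payloads that come back reflected verbatim in the
    response - a simple anomaly signal that input is not being sanitized."""
    return [p for p in ATTACK_PAYLOADS if p in endpoint(p)]

print(probe(vulnerable_search_endpoint))
```

Note that the probe knows nothing about the endpoint’s implementation; it only observes behavior, which is what makes DAST black-box and also why it cannot point to the offending line of code.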
You can easily bury yourself in metrics; OWASP, for example, outlines over 200 ways to think about web application metrics. A good approach is to stay focused on a small group of the most obvious and critically important application security metrics. Here is a shortlist of some of the best application security metrics to measure and track.
Any automated AppSec testing technology will generate a significant percentage of false positives. For this reason, the raw number of vulnerabilities found can be a misleading metric. More important is to determine whether identified risks and vulnerabilities are actually attackable, meaning that an attacker can use them to exploit and compromise an application; those are the findings likely to pose real risk.
The fix rate is the percentage of attackable vulnerabilities that you actually fix. This should be very close to 100% for vulnerabilities verified as attackable. In most cases, this is less daunting than it sounds: in our research, less than 10% of CVEs identified by SCA tools are actually attackable. That said, you need to ensure that the AppSec team communicates clearly with the development team on the severity and urgency of each vulnerability. If the fix rate trends downward, you need to consider how to address attackable vulnerabilities more quickly and thoroughly.
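As a concrete illustration, the fix rate is a simple ratio; the counts below are hypothetical:

```python
def fix_rate(attackable_found: int, attackable_fixed: int) -> float:
    """Percentage of verified-attackable vulnerabilities that were fixed."""
    if attackable_found == 0:
        return 100.0  # nothing attackable means nothing left to fix
    return 100.0 * attackable_fixed / attackable_found

# Hypothetical quarter: 250 raw findings, 20 verified attackable, 18 fixed.
print(f"{fix_rate(20, 18):.1f}%")  # -> 90.0%
```

The denominator deliberately counts only verified-attackable findings, matching the point above: measuring against raw, unfiltered findings would make the rate look artificially poor.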
This is the average amount of time that passes between a vulnerability being reported and it being fixed by development teams. Directionally, the period between report and fix should shrink over time. To compare your performance, AppSec teams should consider industry benchmarks and surveys for time-to-fix. Keep in mind that these benchmarks may be less relevant for attackable vulnerabilities: many AppSec and development teams recognize that the majority of unfiltered vulnerabilities are false positives rather than real risks, and intentionally leave them unfixed.
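Computed over report/fix date pairs, mean time to fix might look like the sketch below; the dates are invented:

```python
from datetime import date

# Hypothetical (reported, fixed) date pairs for three remediated findings.
remediations = [
    (date(2022, 1, 3), date(2022, 1, 10)),
    (date(2022, 1, 5), date(2022, 1, 26)),
    (date(2022, 2, 1), date(2022, 2, 15)),
]

def mean_time_to_fix(pairs) -> float:
    """Average number of days between report and fix."""
    total_days = sum((fixed - reported).days for reported, fixed in pairs)
    return total_days / len(pairs)

print(mean_time_to_fix(remediations))  # -> 14.0
```

Tracking this number per sprint or per quarter is usually enough to see whether the report-to-fix window is shrinking as it should.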
You can get a basic idea of vulnerability severity using CVE databases, the MITRE ATT&CK framework, or OWASP ratings. Severity tells you the level of risk posed by each vulnerability. Compiling this as an average gives AppSec teams a good idea of the overall level of risk their attackable vulnerabilities present. This is useful because it relates to code quality and developer awareness of vulnerabilities. If the majority of vulnerabilities coming back are rated “high,” the development team likely needs improvement in secure coding practices or in selecting more reliable open source components.
OWASP breaks down in great detail the various types of application vulnerabilities that can be found in custom code. By monitoring which types of vulnerabilities are most common and the trends in detecting and fixing each, an AppSec team can better tune its testing infrastructure and focus its secure coding efforts. Because the majority of vulnerabilities tend to fall into a few buckets, this will likely be a less dynamic indicator and is probably best examined on a semi-annual or yearly basis to look for larger trends.
Recommended by the SANS Institute, attack density is the percentage of attackable vulnerabilities concentrated in a specific area. Security Operations teams collect detailed data on known attack attempts. The area of focus can be defined by language, function, part of the application stack, type of attack, or environment. This is useful information for proactive security measures and can guide prioritization of secure coding efforts or testing of security controls.
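As a sketch with invented numbers, attack density is just the share of attackable findings concentrated in one area:

```python
def attack_density(findings_in_area: int, total_findings: int) -> float:
    """Share of attackable vulnerabilities in one area, as a percentage."""
    return 100.0 * findings_in_area / total_findings

# Hypothetical breakdown of 40 attackable findings by application area.
areas = {"auth service": 18, "payments API": 12, "frontend": 10}
total = sum(areas.values())
for area, count in areas.items():
    print(f"{area}: {attack_density(count, total):.0f}%")
```

In this invented breakdown the auth service carries 45% of the attackable findings, which is the kind of skew that would justify focusing audits and secure coding training there first.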
This metric examines the same parameters as Actual Attack Density but does so through the lens of your application’s architecture. Understanding the density of attackable vulnerabilities through this lens provides useful guidance on how to teach more secure coding and which teams or team members may be most in need of support. Alternatively, this data can inform decisions about which parts of larger applications should receive more frequent or more intensive code audits and manual penetration testing.
Above all, AppSec teams should measure all of the metrics they follow over time to understand improvement or degradation in their application security efforts. More advanced teams often roll their metrics up into a single compound security metric to help management better understand directional trends in security. AppSec teams can do this easily by assigning a weight to each of the metrics they follow and aggregating the weighted metrics into a single indicator.
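A weighted roll-up like this can be sketched as follows; the metric names, normalized scores, and weights are illustrative, not a recommendation:

```python
# Each metric is normalized to a 0-100 score (higher is better) and weighted
# by its importance to the organization. All numbers below are made up.
weights = {"fix_rate": 0.4, "time_to_fix": 0.3, "attack_density": 0.3}
scores  = {"fix_rate": 90.0, "time_to_fix": 70.0, "attack_density": 80.0}

def compound_score(weights: dict, scores: dict) -> float:
    """Single weighted indicator aggregating several normalized metrics."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(weights[name] * scores[name] for name in weights)

print(compound_score(weights, scores))
```

The single number hides detail by design; its value is that management can watch one trend line, while the AppSec team can always decompose it back into the underlying metrics.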
Metrics on attackable vulnerabilities are the first step towards establishing the real accountability required to foster change. While the concept of improving AppSec and shifting security left is appealing, the reality is that without metrics to measure progress, security efforts are doomed to be “Feel Good Security” rather than “Real Good Security.” Fortunately, most security testing tools have APIs that allow data to be exported into other systems, and dashboards are easy to create for almost any backend using open source tools like Grafana and Kibana. A robust metrics program also becomes the first step towards a more holistic view of application security, one that takes into account how all the different pieces of an application interact and leverages all the testing modalities to create a more accurate and intelligent view of security. This is key for AppSec teams that want to shift security left and empower developers to learn to secure code better and fix their code as early as possible in the development process.
To get started finding attackable threats in custom and open source code, create a free ShiftLeft CORE account at https://www.shiftleft.io/register.