DevSecOps involves setting up many different automated security tools to cover all bases. It's common for organisations to run a dozen or more security tools, because many types of AppSec scan are needed to find all the types of vulnerabilities. This means a mix of commercial tools from different vendors, various open-source tools and many homegrown scripts and checks.
This 'tool sprawl' fragments DevSecOps automation and holds it back. As the number of security tools grows and DevSecOps processes become more complex, companies are realising the greater challenge is making sense of all the test results within a short window, such as during the pipeline run.
Automating the manual tasks that surround these security tools is critical for DevSecOps in CI/CD. The quality and effectiveness of the data coming out of your DevSecOps processes and tooling will determine whether people use it and you see continuous improvement, or teams simply bypass it and the effort is wasted.
Let's go through the challenges of these manual tasks and examine the issues that arise when automating this DevSecOps process:
- Choosing which tools to run in CI/CD
- Running the various tooling in or out of CI/CD
- Collecting the test results from all the security tools into one place
- Working out what's changed in this software release
- Triaging all your security issues
- Prioritising your security issues
- Communicating with various teams
- Gathering security metrics
We already know you'll likely need at least ten integrated security tools to get enough security coverage for a single project. Chances are, your company will have different teams running different tech stacks, with different CI/CD pipelines designed for them. Trying to shoehorn in a set of security tools as 'one size fits all' will not work for anyone. For example, running Python security tools against a C# codebase is a waste of time and offers a false sense of security.
Different types of tools are better at finding different types of vulnerabilities. Companies are combining DAST, SAST, SCA, IAST, Container, Cloud, and many more types of tools to find issues.
That's why you'll need to introduce more tooling to cater for the range of security checks that different teams will need. Your DevSecOps tech will not only need to handle those tools but also let you easily configure which are used for a given team or project. Trying to configure this within the CI/CD pipeline logic, however, is complicated and rigid.
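As a minimal sketch of the idea above, a per-stack toolkit mapping keeps the tool selection out of pipeline logic. The mapping and tool names here are illustrative examples of common open-source scanners, not a definitive list.

```python
# Hypothetical sketch: selecting a security toolkit per project tech stack,
# so CI/CD pipelines don't hard-code a one-size-fits-all tool list.
# Tool names are illustrative examples, not recommendations.

TOOLKITS = {
    "python": ["bandit", "pip-audit", "semgrep"],
    "csharp": ["security-code-scan", "dependency-check"],
    "javascript": ["eslint-plugin-security", "npm-audit", "retire.js"],
}

def select_toolkit(tech_stack: str) -> list:
    """Return the security tools relevant to a project's stack."""
    try:
        return TOOLKITS[tech_stack.lower()]
    except KeyError:
        # Fail loudly: running the wrong stack's tools gives false assurance
        raise ValueError("No toolkit configured for stack: " + tech_stack)
```

Keeping this mapping outside the pipeline means adding a new stack or tool is a one-line config change rather than a CI/CD rewrite.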
The Uleska Platform allows you to pre-configure security ‘Toolkits’ that combine relevant security tools according to your projects’ tech stacks and dependencies. This is easily configurable without modifying the CI/CD logic and gives you more flexibility and control over the testing to be done, by simply adding two lines into your CI/CD workflows.
Security tools come in all shapes and sizes and you’ll likely need logic for each and every one to get them to run. So where do you put this logic? Many teams initially try to place it in CI/CD, but as it grows in complexity, it tends to fit better outside the CI/CD platform so it can be updated and extended easily.
Kubernetes and Docker containers tend to suit the architecture needed to run these security tools, but creating that setup becomes a project in itself. This is why we have built the Uleska Platform architecture on top of Kubernetes and containers. Users benefit from the sizing, queuing and monitoring of security tests running during CI/CD builds in real time, without having to get their hands dirty.
If you were to use three SAST tools in your pipeline, it’s likely that each of them is going to report issues in different ways. As there’s no industry standard on how to report security issues, they may do the following:
- Report different fields
- Report severity in different ways
- Barely report much at all
The results from those three tools need to be consolidated into one list, and a call made on which issues to fix and what is required to fix them. This is where flexibility is vital, as you'll need to add more tools over time.
Essentially, this means implementing DevSecOps vulnerability management. Once your tooling extends to dynamic infrastructure, containers and other targets, there are likely no code-level fields in the results; instead you're handling port numbers, vulnerable HTTP requests and more.
For some companies, the task of bringing together the results from all of these tools is a full-time job, but one ideal for automation. That’s why the Uleska Platform does exactly that, with a single taxonomy for describing security issues that all results are mapped into.
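The mapping described above can be sketched as follows. This is a simplified illustration of normalising two hypothetical tools into a common record, not the Uleska taxonomy itself; the field names and severity scale are assumptions.

```python
# Sketch: mapping results from tools that report different fields and
# severity scales into one common record. Schema is illustrative only.

COMMON_SEVERITY = {"critical": 4, "high": 3, "medium": 2, "low": 1, "info": 0}

def normalise_tool_a(finding):
    # Hypothetical tool A reports: {"rule": ..., "file": ..., "level": "HIGH"}
    return {
        "title": finding["rule"],
        "location": finding["file"],
        "severity": COMMON_SEVERITY[finding["level"].lower()],
        "source": "tool_a",
    }

def normalise_tool_b(finding):
    # Hypothetical tool B reports severity as a CVSS-like float, 0-10
    score = finding["cvss"]
    sev = 4 if score >= 9 else 3 if score >= 7 else 2 if score >= 4 else 1
    return {
        "title": finding["name"],
        "location": finding.get("path", "unknown"),  # may have no code field
        "severity": sev,
        "source": "tool_b",
    }

consolidated = (
    [normalise_tool_a({"rule": "SQLi", "file": "db.py", "level": "HIGH"})]
    + [normalise_tool_b({"name": "XSS", "cvss": 6.1})]
)
```

Once every tool's output is mapped into the same shape, de-duplication, triage and reporting can all operate on a single consolidated list.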
For consumers of this information, including development and security teams, having a single and consistent pane of glass to flag security issues in near real-time during the CI/CD pipeline is invaluable.
Engineering and operations teams don’t have time to manage hundreds of issues across lots of security tools in every CI/CD run. DevSecOps for software security focuses on the ‘what changed?’ question. Consolidated sets of issues from the last change don’t need to be a War and Peace of every problem in the project. Remember, you’re building an efficient process to flag security issues.
Security tools are built to find every issue they can and report back. Yes, we want a baseline of all issues at the start of the project, but once we do that first triage, remove the false positives, duplicates and non-issues, we then want to move those known issues to our backlog and focus on anything new.
That means there are two things we want to know when security tools are applied to this run:
- Have any NEW security issues come up?
New issues can come from code or config changes introducing new bugs, or from a security tool (or its vulnerability database) being updated and finding a new flaw in our existing containers and libraries.
- Have any issues been fixed?
It sounds simple, but if a security tool found an issue before and someone applied a fix, the tool no longer finds that issue, so we can consider the fix verified and remove it from our lists.
With the right software, such as the Uleska Platform, historical security toolkit runs are stored and can be compared against the current run, so you can know instantly if new issues have been introduced.
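The 'what changed?' comparison above boils down to a set difference on issue fingerprints. A minimal sketch, assuming each issue can be identified by its tool, title and location:

```python
# Sketch: diff the current run against a stored baseline to answer
# "have any NEW issues come up?" and "have any issues been fixed?"

def fingerprint(issue):
    # Assumption: tool + title + location uniquely identifies an issue
    return (issue["tool"], issue["title"], issue["location"])

def diff_runs(baseline, current):
    base = {fingerprint(i) for i in baseline}
    curr = {fingerprint(i) for i in current}
    new_issues = curr - base     # flag these in the pipeline
    fixed_issues = base - curr   # tool no longer finds them: fix verified
    return new_issues, fixed_issues

baseline = [{"tool": "sast", "title": "SQLi", "location": "db.py:42"}]
current = [{"tool": "sast", "title": "XSS", "location": "views.py:10"}]
new, fixed = diff_runs(baseline, current)
```

Reporting only `new` (and celebrating `fixed`) keeps each CI/CD run's output focused on the change, rather than a War and Peace of every known problem.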
How and where do you record false positives, duplicates and nonsense issues? Some commercial tools allow you to do this, while many don’t. The vast majority of open source and custom security tools can’t track this either.
Yet non-issues are the biggest problem in DevSecOps. Development teams don't appreciate being handed hundreds of non-issues from security runs.
Market-leading platforms allow you to mark false positives and non-issues centrally, across all the security tools and results they run. These markings are sticky: the next time the tools run, all the non-issues are remembered and don't show up in any reports, shielding development teams from them in communications.
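A central, sticky suppression list can be sketched as a set of issue fingerprints applied to every tool's results before they reach development teams. The entries and fingerprint scheme below are illustrative assumptions:

```python
# Sketch: sticky suppression of triaged non-issues, applied centrally
# regardless of whether the underlying tool can track them itself.

SUPPRESSED = {
    ("sca", "CVE-2021-0001", "lib/old-dep"),    # triaged: false positive
    ("sast", "Weak hash", "legacy/crypto.py"),  # accepted: in backlog
}

def filter_non_issues(results):
    """Drop any result whose fingerprint has already been triaged away."""
    return [
        r for r in results
        if (r["tool"], r["title"], r["location"]) not in SUPPRESSED
    ]

results = [
    {"tool": "sca", "title": "CVE-2021-0001", "location": "lib/old-dep"},
    {"tool": "dast", "title": "Open redirect", "location": "/login"},
]
filtered = filter_non_issues(results)  # only the new DAST issue survives
```

Because the suppression list lives outside any single tool, it works equally for open-source and custom scanners that have no triage features of their own.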
Here at Uleska, we’re working on a number of new ways to make false-positive handling much more efficient, so security and development teams don’t need to waste time on this.
If you’ve ever tried running many tools against the same target, you’ll find a few interesting things:
- Not all tools will find all the same issues (you can use this to determine the effectiveness of the tools).
- Multiple tools that do find the same issue will likely prioritise it differently.
This difference in prioritisation is not only frustrating; it's typically based solely on the technical bug and completely ignores the risk aspects of the project. It neglects the number of users, the sensitivity of the data processed, the criticality of the project to the business and so on.
Maybe low-risk issues can be passed through without intervention, but the highest top 10% of risks should go back to development for fixes before release. Software like the Uleska Platform provides this automatically within the CI/CD pipeline and further metrics - speeding up the decision making process.
We've already examined how you can reduce the number of issues flagged in each CI/CD run; however, as DevSecOps runs at increasing speed and scale, there will still be an increase in the number of real issues flagged.
Requests to explain an issue, or to communicate best practices and suggested fixes already in place, can consume a lot of time, taking security teams away from other important tasks and slowing down development.
The Uleska Platform has built-in advisory functions that allow categories of custom remediations to be automatically incorporated into communications to development teams. This gives teams quick access to the advice, fixes, and education they need and allows security teams to scale their communication without consuming their time.
Even from a metrics point of view, the stakeholders in your company don't want to go to lots of different security tools and compile the statistics themselves. It's inefficient and, again, becomes someone's job, preventing them from doing something more productive. It can take days to bring all the information together, by which time it's already out of date.
Automating this during the CI/CD pipeline not only gives near real-time metrics on security, but since the data is updated with every release, you get far more granular measurements. Seeing the difference between month one and month three, tracked daily, is a lot better than measuring it once a quarter.
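As a small sketch of the point above, capturing a metrics record on every pipeline run makes trends trivial to compute. The fields and sample figures here are invented for illustration:

```python
# Sketch: per-release metrics captured on every pipeline run give daily
# granularity instead of a once-a-quarter snapshot. Data is made up.

from datetime import date

history = [
    {"date": date(2022, 1, 3), "open_issues": 120, "fixed": 0},
    {"date": date(2022, 2, 1), "open_issues": 95, "fixed": 25},
    {"date": date(2022, 3, 1), "open_issues": 60, "fixed": 60},
]

def trend(history):
    """Net change in open issues between the first and latest run."""
    return history[-1]["open_issues"] - history[0]["open_issues"]
```

With a record per run rather than per quarter, stakeholders can see whether open issues are trending down at any point, not just at quarter's end.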
To discover more about the challenges of automating DevSecOps and how to overcome them, check out our playbook.
The problem with DevSecOps is incorporating many layers of security tasks into the fast-paced software development cycle. Thankfully, there are a variety of things you can do to overcome the challenges faced. In our playbook, we cover the top 10 challenges of automating DevSecOps, while also delivering actionable advice on how to overcome them.