In the second part of the series, we address the causes of security errors by combining testing approaches across the entire application landscape. Here you will find tools and practices battle-tested in the lifecycle of Cossack Labs’ security software products 🔐
Examining the code itself can save a lot of time and effort.
Modern high-level languages and platforms mostly shield developers from buffer overflows and remote code execution, which are among the most dangerous and damaging security issues possible. However, libraries often stand on the shoulders of giants and reuse code written in unsafe, low-level languages.
Yet, with static application security testing (SAST) you can automate the detection of memory problems, undefined behaviour, and other flaws that lead to attacks on execution flow. Most of the warnings to look at here concern memory leaks, buffer overflows, and the like. Brace yourself and beware of false positives.
📎In finding bugs in code, take as your starting points:
⚙️ OWASP list of some SAST tools,
⚙️ NIST list of source code security analyzers,
⚙️ this well-maintained, regularly updated list of modern source code analysis (SCA) security tools.
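To get a feel for what SAST tools do under the hood, here is a minimal sketch of one: walking a program’s syntax tree and flagging calls from a rule set. The rule set and sample code below are purely illustrative; real analysers ship thousands of rules and track data flow across functions.

```python
import ast

# Hypothetical, minimal rule set: calls a real SAST tool would flag.
DANGEROUS_CALLS = {"eval", "exec", "system"}

def find_dangerous_calls(source: str):
    """Walk the AST and report calls to known-dangerous functions."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call):
            # Handle both plain names (eval) and attributes (os.system).
            name = getattr(node.func, "id", getattr(node.func, "attr", None))
            if name in DANGEROUS_CALLS:
                findings.append((node.lineno, name))
    return findings

code = "import os\nos.system(user_input)\nresult = eval(expr)\n"
print(find_dangerous_calls(code))  # each finding: (line number, call name)
```

Note that, just as the text warns, such pattern matching cannot tell a safe call from an unsafe one: every finding still needs human triage, which is where the false positives come from.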
“Never trust your input” is one of the cardinal rules of computer programming.
Fuzzing is an automated software testing technique that takes advantage of this rule: it searches for exploitable bugs by feeding random, invalid, and unexpected inputs to the software under test.
Fuzzing is especially useful for long-lived and frequently used code modules—libraries, reusable modules, system apps, and the security controls in your apps.
Let’s say you wrote an authentication module and used it across all your backend apps: it makes sense to fuzz it and fix the issues you find.
The rule of thumb is:
the more frequently you use a module, or the riskier the assets it protects, the better a target it becomes for fuzzing.
The data provided by the fuzzer is just “ok” enough for the parser not to reject it, but is otherwise rubbish and can take the software far away from the usual code paths exercised by your test suites and users. This helps surface vulnerabilities that would otherwise remain undetected.
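As a sketch of the idea, here is a toy mutation fuzzer: it takes one valid seed input, randomly corrupts it, and feeds the result to a deliberately buggy parser. Both the parser and its planted bug are hypothetical; real fuzzers like AFL or libFuzzer add coverage feedback and smarter mutation strategies.

```python
import random

def parse_record(data: bytes):
    """Toy parser under test (hypothetical): expects b'LEN:payload'."""
    head, sep, payload = data.partition(b":")
    if not sep:
        raise ValueError("no separator")   # clean rejection
    length = int(head)                     # ValueError on junk: clean rejection
    # Planted bug: trusts the declared length blindly.
    return payload[length - 1]             # IndexError when length > len(payload)

def mutate(seed: bytes) -> bytes:
    """Randomly flip, insert, or delete bytes in a valid seed input."""
    data = bytearray(seed)
    for _ in range(random.randint(1, 3)):
        op = random.choice(("flip", "insert", "delete"))
        pos = random.randrange(len(data)) if data else 0
        if op == "flip" and data:
            data[pos] ^= random.randrange(1, 256)
        elif op == "insert":
            data.insert(pos, random.randrange(256))
        elif data:
            del data[pos]
    return bytes(data)

def fuzz(seed: bytes, rounds: int = 1000):
    """Feed mutated inputs to the parser; collect inputs that crash it."""
    crashes = []
    for _ in range(rounds):
        sample = mutate(seed)
        try:
            parse_record(sample)
        except ValueError:
            pass                       # rejected input: expected behaviour
        except Exception:
            crashes.append(sample)     # unexpected crash: a finding
    return crashes

random.seed(0)                         # deterministic demo run
found = fuzz(b"5:hello")
print(len(found), "crashing inputs, e.g.", found[:1])
```

Notice the split in the `try` block: inputs the parser rejects cleanly are fine, while inputs that crash it in an unexpected way are findings—exactly the “pseudo-valid but rubbish” dynamic described above.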
Another advantage of fuzzing is that this kind of testing executes actual code, so it is almost entirely free of false positives (which are quite common with static analysers).
Security-wise, fuzzing is about testing both the security controls and the memory behaviour: bugs like the famous Heartbleed (which combined poor security controls with unexpected memory behaviour) could have easily been caught by fuzzing.
For security testing, fuzzing is especially useful for feeding the app pseudo-valid inputs that cross the trust boundary. A trust boundary violation occurs when the tested app is made to trust the unvalidated data fed into it. The approach mimics an adversary trying to feed malicious content into the app in the hope of achieving privilege escalation, or simply a malfunction or crash (for example, check this Google repo with fuzzing dictionaries for popular tools).
Fuzzing tests need to be well-thought-out, well-planned, and well-written. Fuzzing results are also affected by the execution environment (the ripple effect), the configuration, and the capabilities of the test suite.
It is not a magic bullet for all your automated security testing, but it will take you far—as far as you’re willing to invest time and effort into preparation.
To get started with fuzzing, we recommend exploring this curated list of fuzzing resources. Also, in the last part of the series, you can find the approaches and tools we use for fuzzing our cryptographic library Themis.
Secrets detection—which has caught the attention of mainstream developers only recently—means cleaning secrets out of code before it goes public, and it is essential to avoid sensitive data leaks. In essence, the goal is to make sure that no IP address, token, password, or key leaks when you push code to a public repository.
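A minimal sketch of how such scanners work: run a set of regular expressions over each line before the code is pushed. The patterns and sample snippet below are illustrative only; production scanners use much larger rule sets plus entropy checks to catch high-randomness strings.

```python
import re

# Hypothetical patterns: real secret scanners ship far larger rule sets.
SECRET_PATTERNS = {
    "ipv4 address": re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
    "aws access key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private key header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "hardcoded password": re.compile(r"password\s*=\s*['\"][^'\"]+['\"]", re.I),
}

def scan_for_secrets(text: str):
    """Return (line number, rule name) pairs for every suspicious match."""
    findings = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for name, pattern in SECRET_PATTERNS.items():
            if pattern.search(line):
                findings.append((lineno, name))
    return findings

snippet = 'db_host = "10.0.0.12"\nPASSWORD = "hunter2"\n'
print(scan_for_secrets(snippet))
```

Hooking a check like this into a pre-commit hook or CI gate is what turns detection into prevention: the push fails before the secret ever reaches the public repository.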
With the growing trend of not really checking which dependencies you use, which dependencies your dependencies pull in, and what exactly hides under five layers of dependency relationships… making sure that you don’t have a vulnerable dependency somewhere down the line is essential.
While these tools are fairly novel and have yet to reveal their full power, they already bring peace of mind to constantly shifting ecosystems like iOS, Node.js, Python, and Ruby.
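To illustrate the core of a dependency audit, here is a minimal sketch: pinned versions are parsed from a requirements-style file and compared against an advisory feed. The package names and advisory data below are made up; real tools query databases such as OSV or the GitHub Advisory Database.

```python
# Illustrative advisory data — NOT real vulnerabilities.
KNOWN_VULNERABLE = {
    "leftpadlib": {"1.0.0", "1.0.1"},
    "fastcrypto": {"2.3.0"},
}

def parse_requirements(text: str):
    """Parse 'name==version' lines, skipping comments and blanks."""
    pins = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "==" not in line:
            continue
        name, version = line.split("==", 1)
        pins[name.strip().lower()] = version.strip()
    return pins

def audit(requirements: str):
    """Report every pinned dependency that matches a known advisory."""
    pins = parse_requirements(requirements)
    return [(name, ver) for name, ver in pins.items()
            if ver in KNOWN_VULNERABLE.get(name, set())]

reqs = "# backend deps\nleftpadlib==1.0.1\nfastcrypto==2.4.0\nrequests==2.31.0\n"
print(audit(reqs))
```

The hard part real tools solve is not this lookup but keeping the advisory feed fresh and resolving the transitive dependency tree—the “five layers down” problem described above.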
Apart from testing the code you write, it can be fruitful to test larger entities: whole services and infrastructural components. This is crucial when the product you’re developing consists of numerous high-level entities and microservices and relies on varied autonomous dependencies.
Vulnerability scanners can help automate security testing by scanning your websites and/or network for a huge number of known risks. As a result of such testing, you get a list of vulnerabilities detected in your infrastructure and recommendations on how to patch or otherwise secure them.
Another approach to patching is to delegate the process to automated vulnerability scanners. This is especially relevant for software that is composed rather than written top-down, and consists of many services, libraries, and chunks of code.
Some of the popular free network vulnerability scanners for your consideration include Open Vulnerability Assessment System (OpenVAS), Microsoft Baseline Security Analyzer, Nexpose Community Edition, Retina CS Community, SecureCheq.
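At the core of every network vulnerability scanner sits a simple probe: try to connect and see what answers. The sketch below implements only that basic TCP connect scan—real scanners like the ones listed above add service fingerprinting and vulnerability matching on top—and the local listener exists purely to keep the demo self-contained.

```python
import socket

def port_open(host: str, port: int, timeout: float = 0.5) -> bool:
    """TCP connect probe for a single port: the primitive every
    network scanner builds on."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:           # refused, unreachable, or timed out
        return False

def scan(host: str, ports):
    """Return the subset of ports that accept TCP connections."""
    return [p for p in ports if port_open(host, p)]

# Demo against a local listener so the example runs anywhere.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))        # OS picks a free port
listener.listen(1)
open_port = listener.getsockname()[1]
print(scan("127.0.0.1", [open_port]))  # the listening port is reported open
listener.close()
```

A word of caution: even a probe this simple generates real network traffic, so—as with the full-blown scanners—run it only against infrastructure you own or are authorised to test.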
It is worth noting that infrastructures should be checked when they are complete and functional (live or near-live) for the maximum impact and usefulness of the check-up.
☛ Find out more on automated software security testing in a wider context of performance and reliability in the next part of the series.