<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: C.J. May</title>
    <description>The latest articles on DEV Community by C.J. May (@lawndoc).</description>
    <link>https://dev.to/lawndoc</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F759980%2Fffcb4eff-f976-4b62-9a3c-7aa3b9a10463.png</url>
      <title>DEV Community: C.J. May</title>
      <link>https://dev.to/lawndoc</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/lawndoc"/>
    <language>en</language>
    <item>
      <title>Making Sense of Open-Source Vulnerability Databases: NVD, OSV, etc.</title>
      <dc:creator>C.J. May</dc:creator>
      <pubDate>Mon, 07 Oct 2024 15:07:58 +0000</pubDate>
      <link>https://dev.to/gitguardian/making-sense-of-open-source-vulnerability-databases-nvd-osv-etc-4g9f</link>
      <guid>https://dev.to/gitguardian/making-sense-of-open-source-vulnerability-databases-nvd-osv-etc-4g9f</guid>
      <description>&lt;p&gt;Open-source vulnerabilities are arguably the most ubiquitous part of application security. Software developers are constantly plagued by an endless stream of vulnerabilities in the packages their applications are built upon. It would be impossible to keep track of every vulnerability that needs to be addressed if we didn’t have some sort of standard cataloging system. &lt;strong&gt;That’s where vulnerability databases come in.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Over time, multiple vulnerability databases have been introduced that do seemingly similar things. Their intents and purposes are obscured by acronyms like &lt;strong&gt;CVE&lt;/strong&gt;, &lt;strong&gt;NVD&lt;/strong&gt;, &lt;strong&gt;OSS&lt;/strong&gt;, and &lt;strong&gt;OSV&lt;/strong&gt;. Collectively, they provide a wealth of information about software vulnerabilities. But the slow sprawl of the vulnerability database ecosystem has started to make things a bit unclear. &lt;/p&gt;

&lt;p&gt;In this blog post, we will cover many vulnerability databases and supplementary systems to help you cut through the noise. We will focus on databases that have high relevance to vulnerabilities in open-source software. At the end, you will find a Venn diagram that roughly summarizes the overall coverage of each database as well as some recommendations on how we can improve vulnerability tracking in the future.&lt;/p&gt;

&lt;h2&gt;
  
  
  Foundations of vulnerability management
&lt;/h2&gt;

&lt;p&gt;Before comparing different databases, we need to understand a bit of history about the tracking of vulnerabilities. MITRE and NIST were the first to implement widely adopted standards for vulnerability enumeration and tracking.&lt;/p&gt;

&lt;h3&gt;
  
  
  CVE: a vulnerability identification standard
&lt;/h3&gt;

&lt;p&gt;In 1999, MITRE introduced the &lt;a href="https://cve.mitre.org/?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;&lt;strong&gt;&lt;u&gt;CVE&lt;/u&gt;&lt;/strong&gt;&lt;/a&gt; (Common Vulnerabilities and Exposures) standard, which, at the time, was like the Rosetta Stone for security issues in software. It enabled software vendors and consumers to clearly reference specific vulnerabilities and their patches. CVE is still the most widely recognized standard for vulnerability identification.&lt;/p&gt;

&lt;p&gt;The key thing to understand about CVE is that &lt;strong&gt;it is solely an identification system&lt;/strong&gt;. MITRE maintains a list of CVE IDs, but each entry only contains information that will identify the vulnerability.&lt;/p&gt;

&lt;h3&gt;
  
  
  NVD: a comprehensive vulnerability database
&lt;/h3&gt;

&lt;p&gt;The CVE list maintained by MITRE is continuously synchronized to the &lt;a href="https://nvd.nist.gov/?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;&lt;strong&gt;&lt;u&gt;NVD&lt;/u&gt;&lt;/strong&gt;&lt;/a&gt; (National Vulnerability Database), which is run by the US government organization NIST. The NVD is the first true “vulnerability database” that we have covered so far. It was created to add additional context to each CVE. This context includes vulnerability categories in the form of &lt;strong&gt;CWE&lt;/strong&gt; IDs, &lt;strong&gt;CVSS&lt;/strong&gt; severity scores, a &lt;strong&gt;CPE&lt;/strong&gt; ID to identify the vulnerable software, and details on whether the software vendor has released a fix for the vulnerability.&lt;/p&gt;

&lt;p&gt;The NVD is a long-established vulnerability database with a large ecosystem of standards built around it, &lt;strong&gt;but using the NVD alone for open-source vulnerability tracking isn’t ideal for multiple reasons.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;First and foremost, the NVD has been experiencing issues over the last six months (possibly due to its reliance on human analysis for enrichment data). The issues began with missing details in some CVEs, and the situation has since devolved into spurts of missing CVEs and incomplete analysis of those that do come through. If you want to learn more about the recent NVD issues, you can read more &lt;a href="https://www.infosecurity-magazine.com/news/nvd-exploited-vulnerabilities/?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;&lt;u&gt;here&lt;/u&gt;&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Another area where the NVD is &lt;strong&gt;lacking coverage is for malicious packages&lt;/strong&gt;. Technically, a malicious open-source package isn’t a vulnerability – it’s intentionally backdoored. Since these packages aren’t assigned CVEs, the NVD provides no way to track or detect them.&lt;/p&gt;

&lt;p&gt;One final distinction that we have glossed over so far is that CVEs are assigned to both open-source and commercial software. Commercial software vulnerabilities are important to corporate security (CorpSec), but they aren’t as relevant to software developers who are mainly concerned about the open-source dependencies that they use in their code. Open-source vulnerabilities have far-reaching consequences due to their incorporation into other software.&lt;/p&gt;

&lt;p&gt;Open-source vulnerability databases have emerged to help developers overcome the challenges of monitoring software dependencies with the NVD. These databases aggregate vulnerability information from multiple sources, including the NVD. This combination of sources makes them more comprehensive and up-to-date than the NVD when it comes to tracking open-source libraries.&lt;/p&gt;

&lt;h3&gt;
  
  
  OSV: an open schema and vulnerability database
&lt;/h3&gt;

&lt;p&gt;The Open Source Vulnerability (OSV) project was launched in 2021 with the release of the &lt;a href="https://ossf.github.io/osv-schema/?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;&lt;strong&gt;&lt;u&gt;OSV data format&lt;/u&gt;&lt;/strong&gt;&lt;/a&gt;. The OSV format was created with the goal of providing vulnerability information that was as &lt;strong&gt;actionable as possible&lt;/strong&gt; in a &lt;strong&gt;machine-readable format&lt;/strong&gt;. Providing additional context in a structured format allows for more automated triage and fixes. The OSV data format was donated to the &lt;a href="https://openssf.org/?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;&lt;u&gt;OpenSSF&lt;/u&gt;&lt;/a&gt;, which works with the open-source software community on adoption of the OSV format.&lt;/p&gt;
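&lt;p&gt;To make the format concrete, here is a minimal, illustrative OSV-style record sketched in Python. The IDs, package name, and versions below are made up for illustration; the authoritative field definitions live in the OSV schema linked above.&lt;/p&gt;

```python
# A minimal, illustrative OSV-format record. All values are made up;
# the real field definitions are in the OSV schema specification.
osv_record = {
    "schema_version": "1.6.0",
    "id": "GHSA-xxxx-xxxx-xxxx",    # database-specific identifier
    "aliases": ["CVE-2024-00000"],  # the same vulnerability in other databases
    "summary": "Example vulnerability in an example package",
    "affected": [
        {
            "package": {"ecosystem": "PyPI", "name": "example-package"},
            "ranges": [
                {
                    "type": "SEMVER",
                    "events": [{"introduced": "0"}, {"fixed": "1.2.3"}],
                }
            ],
        }
    ],
    "references": [{"type": "FIX", "url": "https://example.com/patch"}],
}

def fixed_versions(record):
    """Collect every 'fixed' version event from a record's affected ranges."""
    fixes = []
    for affected in record.get("affected", []):
        for version_range in affected.get("ranges", []):
            for event in version_range.get("events", []):
                if "fixed" in event:
                    fixes.append(event["fixed"])
    return fixes
```

&lt;p&gt;Because every field is structured, tooling can answer questions like “which version fixes this?” mechanically instead of parsing prose advisories.&lt;/p&gt;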

&lt;p&gt;The OSV project also maintains an open-source database called &lt;a href="https://osv.dev/list?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;&lt;strong&gt;&lt;u&gt;OSV.dev&lt;/u&gt;&lt;/strong&gt;&lt;/a&gt;. This database is sponsored by Google, which pays the engineers on its open-source security team and hosts the infrastructure for the database. Despite being maintained by a private company, the project and its API are &lt;strong&gt;completely free&lt;/strong&gt; and open-sourced under the Apache 2.0 license.&lt;/p&gt;

&lt;p&gt;OSV.dev automatically aggregates vulnerability information and alerts about malicious packages from &lt;a href="https://github.com/google/osv.dev/blob/master/source.yaml?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;&lt;u&gt;24 data sources&lt;/u&gt;&lt;/a&gt;. The data sources include the NVD as well as upstream sources like the &lt;a href="https://github.com/advisories?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;&lt;u&gt;GitHub Advisory Database&lt;/u&gt;&lt;/a&gt;, which support the OSV format and contain more timely information on vulnerabilities. When a vulnerability is referenced in multiple sources, it is automatically associated with its aliases.&lt;/p&gt;

&lt;p&gt;The automated nature of OSV.dev makes it reliant on its data sources for some context, but it does &lt;a href="https://github.com/google/osv.dev/blob/master/docs/faq.md?ref=blog.gitguardian.com#what-does-osvdev-do-to-the-records-it-imports" rel="noopener noreferrer"&gt;&lt;u&gt;automatically enrich&lt;/u&gt;&lt;/a&gt; vulnerabilities with some information. For example, they will automatically expand affected version ranges to an explicit list of affected versions. By doing this heavy lifting up front, OSV.dev takes away some of the calculations that would need to otherwise be done by the consumers of the database.&lt;/p&gt;
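&lt;p&gt;For illustration, here is a small Python sketch of how a client might talk to OSV.dev’s documented query endpoint. The helper names and the sample response are ours; only the endpoint URL and the request shape come from the OSV.dev API documentation.&lt;/p&gt;

```python
import json

# Documented OSV.dev query endpoint (POST a JSON body to look up one package).
OSV_API = "https://api.osv.dev/v1/query"

def build_query(ecosystem, name, version):
    """Build the JSON body for a POST to the OSV.dev query endpoint."""
    return json.dumps({
        "version": version,
        "package": {"ecosystem": ecosystem, "name": name},
    })

def vulnerability_ids(response):
    """Pull the IDs out of a parsed query response ('vulns' is absent
    when no vulnerabilities match)."""
    return [vuln["id"] for vuln in response.get("vulns", [])]

# A fabricated response in the shape the API returns: a "vulns" list
# of OSV records, each carrying its database-specific "id".
sample_response = {"vulns": [{"id": "GHSA-xxxx-xxxx-xxxx"}, {"id": "PYSEC-0000-00"}]}
```

&lt;p&gt;Posting the body from &lt;code&gt;build_query&lt;/code&gt; to the endpoint and feeding the parsed JSON to &lt;code&gt;vulnerability_ids&lt;/code&gt; yields the list of matching OSV identifiers, aliases included.&lt;/p&gt;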

&lt;h3&gt;
  
  
  Commercially backed vulnerability databases
&lt;/h3&gt;

&lt;p&gt;There are also commercial vulnerability databases that are focused on open-source packages. These databases are each backed by a private company, and they have varying levels of openness.&lt;/p&gt;

&lt;p&gt;The most open commercial vulnerability database is &lt;strong&gt;Sonatype OSS Index&lt;/strong&gt;. You can query the OSS Index API for free without an account, and signing up for a free account gets you higher rate limits. Like OSV.dev, Sonatype OSS Index is aggregated from public sources and does not do additional human analysis. It is neither open source nor interoperable with the open OSV data format, but OSS Index is a great alternative to OSV.dev with similar levels of coverage.&lt;/p&gt;
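&lt;p&gt;OSS Index identifies packages by package-url (“purl”) coordinates. As a hedged sketch (the helper names and endpoint constant are ours, based on the public OSS Index REST API), building a batched lookup request might look like this:&lt;/p&gt;

```python
import json

# Public OSS Index component-report endpoint (batched POST lookups).
OSS_INDEX_API = "https://ossindex.sonatype.org/api/v3/component-report"

def purl(ecosystem, name, version):
    """Build a package-url (purl) coordinate, the identifier OSS Index expects."""
    return "pkg:{}/{}@{}".format(ecosystem, name, version)

def build_report_request(packages):
    """JSON body for a batched component-report lookup.

    packages: iterable of (ecosystem, name, version) tuples.
    """
    return json.dumps({"coordinates": [purl(*pkg) for pkg in packages]})
```

&lt;p&gt;Batching many coordinates into one request is how clients stay within the anonymous rate limits.&lt;/p&gt;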

&lt;p&gt;&lt;strong&gt;Snyk&lt;/strong&gt; is a commercial application security company that also maintains its own &lt;a href="https://security.snyk.io/vuln?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;&lt;u&gt;vulnerability database&lt;/u&gt;&lt;/a&gt;. Snyk aggregates vulnerability and malicious package info from public sources, and it is also backed by a team of security analysts that review some entries and add additional context. Features that are unique to Snyk’s database are potentially &lt;strong&gt;unpublished vulnerabilities&lt;/strong&gt; (sourced from forums and commit history), &lt;strong&gt;container image analysis&lt;/strong&gt;, and &lt;strong&gt;cloud misconfiguration information&lt;/strong&gt;. Snyk’s database is &lt;em&gt;not&lt;/em&gt; fully open, however. It can only be accessed through Snyk’s own tools (free for individuals) or its Enterprise-only API.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://vulncheck.com/?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;&lt;strong&gt;&lt;u&gt;Vulncheck&lt;/u&gt;&lt;/strong&gt;&lt;/a&gt; is a commercial database that, like Snyk, is backed by human analysts that provide additional context to vulnerabilities. However, in terms of software coverage it is more akin to the NVD. Vulncheck is focused on both open-source and commercial software tracked by CVE IDs. It claims to provide early access to unpublished vulnerabilities as well.&lt;/p&gt;

&lt;p&gt;While Vulncheck’s vulnerability database is a closed and paid offering, the company does provide some unique free resources: the &lt;a href="https://vulncheck.com/kev?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;&lt;u&gt;Known Exploited Vulnerabilities&lt;/u&gt;&lt;/a&gt; catalog, &lt;a href="https://vulncheck.com/nvd2?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;&lt;u&gt;NVD++&lt;/u&gt;&lt;/a&gt;, and &lt;a href="https://vulncheck.com/xdb?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;&lt;u&gt;XDB&lt;/u&gt;&lt;/a&gt; (a list of exploits in git repositories). NVD++ is notable for being an enhanced mirror of the NVD that contains some of the data the NVD has been missing in recent months.&lt;/p&gt;

&lt;h3&gt;
  
  
  Other vulnerability databases
&lt;/h3&gt;

&lt;p&gt;Some vulnerability databases were left out of the discussion in this blog post because they were more relevant to CorpSec than to open-source libraries and software development. There are two that are close enough to be worth mentioning though:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;a href="https://www.cloudvulndb.org/?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;&lt;u&gt;Cloud Vulnerability Database&lt;/u&gt;&lt;/a&gt;: a database maintained by Wiz for tracking vulnerabilities in cloud hosting providers&lt;/li&gt;
&lt;li&gt;  &lt;a href="https://notcve.org/about.html?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;&lt;u&gt;!CVE Project&lt;/u&gt;&lt;/a&gt;: a database of security issues that were denied or not acknowledged by vendors&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Future improvements
&lt;/h2&gt;

&lt;p&gt;While writing this blog post, I spoke to &lt;a href="https://www.linkedin.com/in/apollock/?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;&lt;u&gt;Andrew Pollock&lt;/u&gt;&lt;/a&gt;, an open-source maintainer and software engineer at Google who works on the OSV project. He answered some questions I had about the OSV project and the challenges of maintaining a vulnerability database. When discussing the challenges, he called out three things as opportunities for improvement in the future.&lt;/p&gt;

&lt;h3&gt;
  
  
  Vulnerable symbol disclosure
&lt;/h3&gt;

&lt;p&gt;A common problem with open-source vulnerabilities is that they are prone to false positives. Let’s say you call FunctionA from a package in your code, but a critical vulnerability was found in FunctionB. If you are just basing your vulnerability triage on the version of the package you are using, then it looks like your project is vulnerable.&lt;/p&gt;

&lt;p&gt;If there was a way to identify which functions or “symbols” are related to a vulnerability in a package, we could &lt;strong&gt;know for sure if our code is calling the vulnerable section&lt;/strong&gt;. That’s exactly what vulnerable symbol disclosure is for. &lt;strong&gt;Goodbye false positives!&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Vulnerable symbol disclosure is undergoing initial adoption and lacks a dedicated field in the current OSV schema. Right now, projects like the Go standard library include it in the “ecosystem_specific” field when publishing OSV disclosures (here’s &lt;a href="https://osv.dev/vulnerability/GO-2024-2963?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;&lt;u&gt;an example&lt;/u&gt;&lt;/a&gt;).&lt;/p&gt;
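&lt;p&gt;The payoff of symbol-level data is easy to sketch: if a database tells us which symbols a vulnerability lives in, triage becomes a set intersection against the symbols our code actually calls. A toy Python illustration (the advisory IDs and symbol names are made up):&lt;/p&gt;

```python
def reachable_vulns(called_symbols, advisories):
    """Keep only advisories whose vulnerable symbols are actually called.

    called_symbols: symbols your code invokes from a package.
    advisories: list of (advisory_id, vulnerable_symbols) pairs, where an
    empty symbol set means "assume the whole package is affected" -- the
    fallback that version-only matching forces on us today.
    """
    hits = []
    called = set(called_symbols)
    for advisory_id, symbols in advisories:
        if not symbols or called.intersection(symbols):
            hits.append(advisory_id)
    return hits
```

&lt;p&gt;With symbol data, an advisory for &lt;code&gt;FunctionB&lt;/code&gt; simply drops out of the results for a project that only calls &lt;code&gt;FunctionA&lt;/code&gt;.&lt;/p&gt;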

&lt;h3&gt;
  
  
  Standardized release practices
&lt;/h3&gt;

&lt;p&gt;A couple of challenges that vulnerability database maintainers face are discovering untracked vulnerabilities and providing enrichment to a vulnerability based on release notes or commit messages. If &lt;strong&gt;releases or commits that contain security fixes were clearly tagged&lt;/strong&gt;, it could enhance the speed and quality of vulnerability database entries.&lt;/p&gt;

&lt;p&gt;A counter-argument to this practice might be that this would also help bad actors discover and develop exploits for open-source vulnerabilities. However, a tagged security fix means that the vulnerability can be remediated. &lt;strong&gt;The sooner databases can track a vulnerability, the sooner software developers will fix the issue in their code.&lt;/strong&gt; An open-source vulnerability will exist and be visible for the same amount of time whether the fix is tagged or not.&lt;/p&gt;

&lt;h3&gt;
  
  
  Detailed vulnerability disclosures
&lt;/h3&gt;

&lt;p&gt;Lastly, the level of detail and actionability in vulnerability disclosures varies greatly between vendors. Some software publishers do a fantastic job of providing actionable detail and adhering to the OSV schema for automated ingestion. Others seem to be releasing vulnerability disclosures somewhat begrudgingly to make the security researcher who reported the vulnerability go away.&lt;/p&gt;

&lt;p&gt;As a producer of software, &lt;strong&gt;it’s important to think about the consumers of your software&lt;/strong&gt; and the implications of a security issue for them. Vulnerability management scales terribly if you are on the consuming side of software dependencies. But if you are an open-source maintainer, you can have an &lt;strong&gt;outsized impact&lt;/strong&gt; by taking the extra time to write a thoughtful, structured, and actionable disclosure.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Vulnerability management would be impossible without vulnerability databases to provide a standard taxonomy and inventory of individual security issues. We took a comprehensive look at databases that are relevant to open-source software, and now we have enough information to make some rough comparisons.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://blog.gitguardian.com/content/images/2024/09/b6d0ee51b57f47c29d74686ab786593e-2.png" rel="noopener noreferrer"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.gitguardian.com%2Fcontent%2Fimages%2F2024%2F09%2Fb6d0ee51b57f47c29d74686ab786593e-2.png" alt="A diagram of software components&amp;lt;br&amp;gt;
Description automatically generated" width="800" height="452"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Venn diagram of vulnerability database coverage&lt;/p&gt;

&lt;p&gt;In terms of raw coverage, paid vulnerability databases have an edge due to the human analysis that happens behind the scenes. However, free projects like OSV and OSS Index still provide up-to-date and quality information on vulnerabilities that is at least as detailed as the original vulnerability disclosures.&lt;/p&gt;

&lt;p&gt;In the long run, adoption and enhancement of open standards like the OSV format will strengthen &lt;em&gt;all&lt;/em&gt; vulnerability databases, which will benefit the software developers and security professionals that consume the information. If you are an open-source software maintainer, please consider how you can update your release and disclosure processes to help your downstream consumers manage their vulnerable dependencies.&lt;/p&gt;

</description>
      <category>opensource</category>
      <category>vulnerabilities</category>
      <category>security</category>
    </item>
    <item>
      <title>Best Practices for Securing Infrastructure as Code (IaC) in the DevOps SDLC (cheat sheet included)</title>
      <dc:creator>C.J. May</dc:creator>
      <pubDate>Wed, 26 Apr 2023 16:43:28 +0000</pubDate>
      <link>https://dev.to/gitguardian/best-practices-for-securing-infrastructure-as-code-iac-in-the-devops-sdlc-cheat-sheet-included-1dpk</link>
      <guid>https://dev.to/gitguardian/best-practices-for-securing-infrastructure-as-code-iac-in-the-devops-sdlc-cheat-sheet-included-1dpk</guid>
      <description>&lt;p&gt;Infrastructure as code (IaC) is the practice of managing and provisioning computing resources using configuration files or scripts rather than manual deployment and configuration processes. This enables developers and operations teams to collaborate more effectively, automate deployments, and improve consistency and reliability.&lt;/p&gt;

&lt;p&gt;However, IaC also introduces new security challenges and risks that need to be comprehensively addressed at every stage of the DevOps software development lifecycle (SDLC).&lt;/p&gt;

&lt;p&gt;In this blog post, we will break down every step of the DevOps lifecycle, from planning to post-deployment, and highlight the potential security risks associated with each stage. We will also provide best practices and recommendations for mitigating these risks and ensuring the security of your IaC infrastructure.&lt;/p&gt;

&lt;p&gt;By following these guidelines, you can confidently adopt IaC in your DevOps processes without compromising the security of your applications and data.&lt;/p&gt;

&lt;p&gt;Let's dive in and explore the security challenges and solutions of IaC in the DevOps SDLC!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.gitguardian.com/files/infrastructure-as-code-cheat-sheet"&gt;Download the cheat sheet!&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Plan
&lt;/h2&gt;

&lt;p&gt;The planning stage involves defining the requirements and design of the infrastructure, as well as identifying the potential threats and vulnerabilities that may affect it. At the planning stage, there are two main things you should be doing to secure your IaC:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Threat modeling&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Establishing privileges&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Threat modeling
&lt;/h3&gt;

&lt;p&gt;For threat modeling, it is common to use a standard framework or methodology, such as STRIDE or DREAD, to identify and prioritize the most critical risks in the design of your infrastructure. You can also use tools like &lt;a href="https://learn.microsoft.com/en-us/azure/security/develop/threat-modeling-tool?ref=blog.gitguardian.com"&gt;Microsoft’s Threat Modeling Tool&lt;/a&gt; or &lt;a href="https://owasp.org/www-project-threat-dragon/?ref=blog.gitguardian.com"&gt;OWASP Threat Dragon&lt;/a&gt; to assist you in threat modeling. Consider the use of encryption, hashing, and key management techniques to protect sensitive data and credentials both in transit and at rest. You should also have a plan for handling untrusted input. Additionally, consider how network controls like a WAF can improve your application’s security posture.&lt;/p&gt;

&lt;h3&gt;
  
  
  Establishing privileges
&lt;/h3&gt;

&lt;p&gt;Always follow the principle of least privilege, which means granting only the minimum permissions and access levels required for each resource and account. For user accounts, implement segregation of duties by separating the responsibilities of different team members. Minimizing the power of individual credentials reduces the damage that can be done if a cybercriminal hijacks an account or credential.&lt;/p&gt;

&lt;h2&gt;
  
  
  Develop
&lt;/h2&gt;

&lt;p&gt;The development or coding stage involves writing and updating the code or scripts that define the infrastructure. Some of the security best practices for this stage are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Security-based IDE plugins&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Pre-commit hooks&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Static analysis&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Secrets&lt;/strong&gt; &lt;strong&gt;management&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  IDE plugins
&lt;/h3&gt;

&lt;p&gt;In DevOps, the culture is all about “shifting left,” which means it’s better to catch bugs and security issues sooner rather than later. As a developer, the quickest feedback you can get is right in your IDE while you are writing your IaC. There are various IDE plugins that are capable of identifying vulnerabilities in your code as you write it. A few examples are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;a href="https://github.com/terraform-linters/tflint?ref=blog.gitguardian.com"&gt;TFLint&lt;/a&gt; – TerraForm linter with some security best-practice rules&lt;/li&gt;
&lt;li&gt;  &lt;a href="https://www.checkov.io/?ref=blog.gitguardian.com"&gt;Checkov&lt;/a&gt; – misconfiguration scanner for multiple types of IaC&lt;/li&gt;
&lt;li&gt;  &lt;a href="https://snyk.io/product/infrastructure-as-code-security/?ref=blog.gitguardian.com"&gt;Snyk&lt;/a&gt; – code, container, and IaC scanner that offers an IDE plugin&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Pre-commit hooks
&lt;/h3&gt;

&lt;p&gt;Pre-commit hooks automate the execution of static code analysis tools &lt;em&gt;before&lt;/em&gt; the code is committed to your version control system. For example, remediating exposed secrets can get messy when the secret is already in the git history of your repository. If you set up a secret scanner as a pre-commit hook, it will catch secrets before they get committed and save you from some extra cleanup work later.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/GitGuardian/ggshield?ref=blog.gitguardian.com"&gt;ggshield&lt;/a&gt; is a CLI application that runs in a local environment or in a CI environment to help detect more than 300 types of secrets, as well as other potential security vulnerabilities or policy breaks.&lt;/p&gt;

&lt;h3&gt;
  
  
  Static analysis
&lt;/h3&gt;

&lt;p&gt;Once code has been committed to your version-controlled repository, you can scan the code with static code analysis tools. There are various scanning tools, depending on what you are trying to scan. Some popular IaC static analysis tools are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://github.com/GitGuardian/ggshield?ref=blog.gitguardian.com"&gt;ggshield&lt;/a&gt; – yes, the GitGuardian CLI can also be used to scan for infrastructure as code vulnerabilities by running the command:&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://github.com/aquasecurity/kube-bench?ref=blog.gitguardian.com"&gt;Kube Bench&lt;/a&gt; – Kubernetes configuration scanner based on &lt;a href="https://www.cisecurity.org/benchmark/kubernetes/?ref=blog.gitguardian.com"&gt;CIS Kubernetes Benchmark&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://www.synopsys.com/software-integrity/security-testing/static-analysis-sast.html?ref=blog.gitguardian.com"&gt;Coverity&lt;/a&gt; – static analysis platform similar to Snyk&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Secrets management 
&lt;/h3&gt;

&lt;p&gt;Secrets management is a complex topic in and of itself, but it’s all about making sure your secrets are accessible in a secure way. If you want to learn more about how to be good at secret management, check out our &lt;a href="https://blog.gitguardian.com/a-maturity-model-for-secrets-management/"&gt;Secrets Management Maturity Model&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Build and Test
&lt;/h2&gt;

&lt;p&gt;In the building and testing phases, you have the opportunity to see what the infrastructure will look like and how it will behave. These are the key security practices you should be following in this phase of the DevSecOps pipeline:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Separation of environments&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Dynamic testing&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Vulnerability scanning&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Container image scanning&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Artifact signing&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Separation of environments
&lt;/h3&gt;

&lt;p&gt;Use a dedicated testing environment that mimics the production environment as closely as possible but with isolated resources and data. Sharing things like databases between environments can lead to production data being put at risk when a vulnerability is introduced to a test environment.&lt;/p&gt;

&lt;h3&gt;
  
  
  Dynamic testing
&lt;/h3&gt;

&lt;p&gt;Dynamic testing tools perform automated tests on the deployed infrastructure to check its configuration and behavior against the expected security policies and standards. A couple of popular IaC dynamic testing tools are &lt;a href="https://community.chef.io/tools/chef-inspec?ref=blog.gitguardian.com"&gt;InSpec&lt;/a&gt; and &lt;a href="https://terratest.gruntwork.io/?ref=blog.gitguardian.com"&gt;Terratest&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Container image scanning
&lt;/h3&gt;

&lt;p&gt;When your applications use container images, it’s important to take inventory of the software that is baked into each image and look for vulnerable, outdated versions. You can scan a newly built image in your CI pipeline with a tool like &lt;a href="https://www.aquasec.com/products/container-vulnerability-scanning/?ref=blog.gitguardian.com"&gt;Aqua&lt;/a&gt; or &lt;a href="https://snyk.io/product/container-vulnerability-management/?ref=blog.gitguardian.com"&gt;Snyk&lt;/a&gt;, but it’s also a good idea to scan your entire container registry on a regular basis so that new vulnerabilities are noticed even when an image isn’t receiving updates. And don't forget about leaked secrets in image layers!&lt;/p&gt;

&lt;h3&gt;
  
  
  Artifact signing 
&lt;/h3&gt;

&lt;p&gt;When you sign build artifacts such as binaries and container images, you are ensuring the integrity of your services between the time they are built to the time they are deployed. To learn more about why supply chain security is important and how you can implement it, check out our blog on &lt;a href="https://blog.gitguardian.com/supply-chain-security-sigstore-and-cosign-part-ii/"&gt;Supply Chain Security&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Deploy
&lt;/h2&gt;

&lt;p&gt;Deploying IaC happens automatically, so there isn’t much involvement from operations at this stage. However, there are still policies you’ll need to follow in your deployment pipeline to ensure that you are meeting best practices when it comes to securely deploying your assets:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Immutability&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Inventory management&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Immutability
&lt;/h3&gt;

&lt;p&gt;Once your infrastructure is deployed, you don’t want it to deviate from what is defined in your code. Post-deployment changes can introduce unintended bugs or vulnerabilities. Whenever a change is needed, you should first update your code and then follow the CI/CD process to redeploy the entire infrastructure. If possible, use policies or controls to prevent the modification of your infrastructure after it has been deployed.&lt;/p&gt;
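&lt;p&gt;Detecting deviation (“drift”) boils down to diffing the state your code declares against the state that actually exists. A simplified Python sketch of that comparison (real tools such as Terraform do this against live provider APIs; the resource attributes below are made up):&lt;/p&gt;

```python
def detect_drift(desired, actual):
    """Compare IaC-defined attributes with observed attributes.

    Returns a per-key report of differences; an empty dict means the
    deployed resource still matches its definition.
    """
    drift = {}
    for key in sorted(set(desired).union(actual)):
        want = desired.get(key)   # None means the key is absent in code
        have = actual.get(key)    # None means the key is absent in reality
        if want != have:
            drift[key] = {"desired": want, "actual": have}
    return drift
```

&lt;p&gt;Running a check like this on a schedule, and redeploying from code when it reports anything, is what keeps “immutable” infrastructure actually immutable.&lt;/p&gt;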

&lt;h3&gt;
  
  
  Inventory management
&lt;/h3&gt;

&lt;p&gt;Inventory management is a foundational part of most cybersecurity frameworks. When you commission and decommission assets, your IaC tools should be automatically updating your overall asset inventory so you have an accurate picture of your attack surface. Applying tags to assets is another practice that can help you organize and maintain your inventory. Tags improve your ability to identify configuration drift and deprecated systems that have not been decommissioned properly.&lt;/p&gt;

&lt;h2&gt;
  
  
  Monitor
&lt;/h2&gt;

&lt;p&gt;Post-deployment monitoring has historically been the bread-and-butter of security programs, but as deployment environments have changed and shifted to the cloud, there are some new approaches to securing IaC. Nonetheless, the two keys of security monitoring remain the same:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Logging&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Threat detection&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Logging
&lt;/h3&gt;

&lt;p&gt;When provisioning and configuring IaC resources, you should have audit and security logging in place to keep a record of the creation of and access to your infrastructure. Forwarding logs to a SIEM or analysis engine can help you identify anomalies like resources being spun up outside of the normal deployment cycle or configuration changes outside of provisioning (tying back to the importance of immutability).&lt;/p&gt;

&lt;h3&gt;
  
  
  Threat detection
&lt;/h3&gt;

&lt;p&gt;Building runtime threat detection into your IaC is the best way to ensure that you are made aware when the infrastructure you have created is under attack. There are countless security tools to choose from depending on the type of infrastructure you are deploying: tools like &lt;a href="https://falco.org/?ref=blog.gitguardian.com"&gt;Falco&lt;/a&gt; detect anomalies in Kubernetes pods, while EDR tools cover traditional virtual machine infrastructure. You can also forward additional logs to a SIEM depending on what is needed to enable your detection strategy.&lt;/p&gt;

&lt;h3&gt;
  
  
  Plan
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Threat modeling:&lt;/strong&gt; Use a framework to identify and prioritize risks in the infrastructure design. Consider encryption, hashing, key management techniques, and network controls.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Establishing privileges:&lt;/strong&gt; Follow the principle of least privilege and implement segregation of duties to minimize the power of individual credentials.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Develop
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Security-based IDE plugins:&lt;/strong&gt; Use IDE plugins to catch bugs and security issues sooner rather than later, such as &lt;a href="https://github.com/terraform-linters/tflint?ref=blog.gitguardian.com"&gt;TFLint&lt;/a&gt;, &lt;a href="https://www.checkov.io/?ref=blog.gitguardian.com"&gt;Checkov&lt;/a&gt;, and &lt;a href="https://snyk.io/product/infrastructure-as-code-security/?ref=blog.gitguardian.com"&gt;Snyk&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Pre-commit hooks:&lt;/strong&gt; Automate the execution of static code analysis tools before code is committed to the version control system. Use &lt;a href="https://github.com/GitGuardian/ggshield?ref=blog.gitguardian.com"&gt;ggshield&lt;/a&gt; to detect more than 350 types of secrets.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Static analysis:&lt;/strong&gt; Scan code with static analysis tools like &lt;a href="https://github.com/GitGuardian/ggshield?ref=blog.gitguardian.com"&gt;ggshield&lt;/a&gt;, &lt;a href="https://github.com/aquasecurity/kube-bench?ref=blog.gitguardian.com"&gt;Kube Bench&lt;/a&gt;, and &lt;a href="https://www.synopsys.com/software-integrity/security-testing/static-analysis-sast.html?ref=blog.gitguardian.com"&gt;Coverity&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Secrets management:&lt;/strong&gt; Securely manage secrets with appropriate tools. Use GitGuardian's &lt;a href="https://www.gitguardian.com/files/secrets-management-maturity-model?ref=blog.gitguardian.com"&gt;Secrets Management Maturity Model&lt;/a&gt; if needed.&lt;/li&gt;
&lt;/ul&gt;
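&lt;p&gt;For reference, wiring ggshield into the pre-commit framework typically looks like the snippet below. Treat it as a sketch: pin &lt;code&gt;rev&lt;/code&gt; to a current release tag and consult the ggshield documentation for the authoritative configuration.&lt;/p&gt;

```yaml
# .pre-commit-config.yaml
repos:
  - repo: https://github.com/gitguardian/ggshield
    rev: v1.14.2  # pin to a current release tag
    hooks:
      - id: ggshield
        language_version: python3
        stages: [commit]
```

&lt;p&gt;With this in place, every &lt;code&gt;git commit&lt;/code&gt; is scanned for secrets before it ever reaches the version control system.&lt;/p&gt;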

&lt;h3&gt;
  
  
  Build and Test
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Separation of environments:&lt;/strong&gt; Use a dedicated testing environment that mimics the production environment as closely as possible but with isolated resources and data.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Dynamic testing:&lt;/strong&gt; Use automated tests to check infrastructure configuration and behavior against security policies and standards, such as &lt;a href="https://community.chef.io/tools/chef-inspec?ref=blog.gitguardian.com"&gt;InSpec&lt;/a&gt; and &lt;a href="https://terratest.gruntwork.io/?ref=blog.gitguardian.com"&gt;Terratest&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Container image scanning:&lt;/strong&gt; Take inventory of software that is baked into each image and look for vulnerable, outdated versions. Scan a newly built image in your CI pipeline with tools like Aqua and Snyk.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Artifact signing:&lt;/strong&gt; Sign build artifacts like binaries and container images to ensure their integrity.&lt;/li&gt;
&lt;/ul&gt;
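&lt;p&gt;To make the integrity half of artifact signing concrete, here is a minimal digest-verification sketch in Python. Real signing (for example with Sigstore Cosign) additionally binds the digest to a private key; the artifact bytes here are hypothetical:&lt;/p&gt;

```python
import hashlib

# Hypothetical build artifact; in practice this would be a binary or image layer.
artifact = b"example build output"

# The build step publishes a digest alongside the artifact...
digest = hashlib.sha256(artifact).hexdigest()

# ...and the deploy step verifies the digest before rollout.
def verify(data, expected_digest):
    return hashlib.sha256(data).hexdigest() == expected_digest
```

&lt;p&gt;Any modification to the artifact changes its digest, so verification fails for tampered builds; a signature over the digest then proves &lt;em&gt;who&lt;/em&gt; produced it.&lt;/p&gt;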

&lt;h3&gt;
  
  
  Deploy
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Immutability:&lt;/strong&gt; Do not allow post-deployment changes that deviate from what is defined in the code. Use policies or controls to prevent modification of the infrastructure after it has been deployed.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Inventory management:&lt;/strong&gt; As assets are commissioned and decommissioned, automatically update the asset inventory, and apply tags to assets to organize and maintain the inventory.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Monitor
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Logging:&lt;/strong&gt; Provision and configure IaC resources with audit and security logging to keep a record of creation and access to the infrastructure. Forward logs to a SIEM or analysis engine to identify anomalies.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Threat detection:&lt;/strong&gt; Build runtime threat detection into IaC using tools like &lt;a href="https://falco.org/?ref=blog.gitguardian.com"&gt;Falco&lt;/a&gt; or traditional EDR tools.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In this blog post, we have discussed some of the best practices and tools for securing IaC at each stage of the DevSecOps software development lifecycle. By following these steps and referencing the cheat sheet, you can improve the security, reliability, and consistency of your IaC throughout your DevOps pipeline.&lt;/p&gt;

&lt;p&gt;If you're interested in diving deeper into infrastructure security with Terraform, be sure to check out our previous blog post. It offers a detailed exploration of Terraform security practices and techniques that you can use to enhance your IaC security further. Don't miss out on this valuable resource!&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Securing Containers with Seccomp: Part 2</title>
      <dc:creator>C.J. May</dc:creator>
      <pubDate>Fri, 08 Apr 2022 16:33:45 +0000</pubDate>
      <link>https://dev.to/gitguardian/securing-containers-with-seccomp-a-primer-421h</link>
      <guid>https://dev.to/gitguardian/securing-containers-with-seccomp-a-primer-421h</guid>
      <description>&lt;p&gt;This blog post is the second half of my research on making it easier to create Seccomp-BPF filters and secure your containers with them. If you haven’t read the first half of this blog post yet, I recommend doing so before continuing here.&lt;/p&gt;

&lt;p&gt;For returning readers, I’ll briefly refresh you on my last post. I introduced Seccomp-BPF, which is a Linux kernel feature that allows you to create filters that restrict which syscalls a process can make. This lets you prevent a process from doing anything it was not intended to do, as long as that unintended behavior requires a syscall outside the filter. Seccomp-BPF filters can also be applied to container runtimes and orchestration tools to limit what your containers can do while they’re running. That gives you an opportunity to add an additional layer of protection against the potential exploitation of your applications and services.&lt;/p&gt;

&lt;p&gt;The biggest roadblock with learning about and popularizing the use of this awesome feature has been the lack of tools and processes that bridge a low-level kernel feature like Seccomp-BPF to modern software development practices. In my last article, I explored a tool created by Red Hat called &lt;a href="https://github.com/containers/oci-seccomp-bpf-hook"&gt;oci-seccomp-bpf-hook&lt;/a&gt;. It helps simplify the creation of Seccomp filters by recording syscalls made by a container during runtime and pumping them into a filter that you can use with Seccomp. The OCI hook dramatically reduces the knowledge about syscalls that you’d need to create a Seccomp filter, but there’s still quite a bit of overhead involved in updating your Seccomp filter with the tool every time you update your code.&lt;/p&gt;

&lt;p&gt;To solve that problem, &lt;strong&gt;I’m going to show you how to automate the creation of a Seccomp filter for your app every time your code gets updated&lt;/strong&gt;. We’re going to create a Continuous Integration (CI) workflow via GitHub Actions that utilizes Red Hat’s oci-seccomp-bpf-hook. Let’s dig in!&lt;/p&gt;

&lt;h2&gt;
  
  
  The Example Application
&lt;/h2&gt;

&lt;p&gt;To paint a picture of what this automation looks like in the context of modern software development, I created an example application. The application is a simple Python flask API service. You can find all the example and workflow code in my &lt;a href="https://github.com/lawndoc/seccomp-ci-demo"&gt;GitHub repo&lt;/a&gt;, and you can fork the project if you want to test it yourself.&lt;/p&gt;

&lt;p&gt;The REST API has two endpoints. One is at /dummy and is just a dummy function that responds successfully if you send it “valid input :)”.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="o"&gt;@&lt;/span&gt;&lt;span class="n"&gt;app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;post&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"/dummy"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;dummy&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;
    &lt;span class="s"&gt;""" Dummy function for testing """&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;missingParams&lt;/span&gt; &lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;missing&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;request&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s"&gt;"input"&lt;/span&gt;&lt;span class="p"&gt;]):&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="s"&gt;"error"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;missingParams&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt; &lt;span class="mi"&gt;400&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;request&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;json&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s"&gt;"input"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="s"&gt;"valid input :)"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="s"&gt;"success"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s"&gt;"nice input"&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt; &lt;span class="mi"&gt;200&lt;/span&gt;
    &lt;span class="k"&gt;else&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="s"&gt;"error"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s"&gt;"bad input"&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt; &lt;span class="mi"&gt;400&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The API’s second endpoint was made to be intentionally vulnerable for the purpose of this demonstration. The vulnerable API call is at /command and will run any command sent to it in a shell subprocess.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="o"&gt;@&lt;/span&gt;&lt;span class="n"&gt;app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;post&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"/command"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;command&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;
    &lt;span class="s"&gt;""" Execute the command and return the output """&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;missingParams&lt;/span&gt; &lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;missing&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;request&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s"&gt;"command"&lt;/span&gt;&lt;span class="p"&gt;]):&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="s"&gt;"error"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;missingParams&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt; &lt;span class="mi"&gt;400&lt;/span&gt;
    &lt;span class="n"&gt;command&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;request&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;json&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s"&gt;"command"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
    &lt;span class="n"&gt;process&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;subprocess&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Popen&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;command&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;shell&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;stdout&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;subprocess&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;PIPE&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;stderr&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;subprocess&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;PIPE&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;stdout&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;stderr&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;communicate&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="s"&gt;"stdout"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;stdout&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;decode&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"utf-8"&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="s"&gt;"stderr"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;stderr&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;decode&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"utf-8"&lt;/span&gt;&lt;span class="p"&gt;)}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;A Dockerfile was also created for the application so it can be run in a container and used with oci-seccomp-bpf-hook. Now that we have a containerized application to generate a Seccomp filter for, let’s work on automating the creation of the filter.&lt;/p&gt;
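&lt;p&gt;The actual Dockerfile lives in the example repo; as a rough sketch, containerizing a Flask app like this one might look like the following. The file names are illustrative (the workflow later references &lt;code&gt;/opt/demo&lt;/code&gt;, so that path is used here):&lt;/p&gt;

```dockerfile
FROM python:3.9-slim
WORKDIR /opt/demo
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
CMD ["python", "app.py"]
```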

&lt;h2&gt;
  
  
  Setting up the Actions Runner
&lt;/h2&gt;

&lt;p&gt;The first thing we need to do is set up our self-hosted Actions Runner on a Fedora server. If you remember from the first blog post, oci-seccomp-bpf-hook was made to run with Podman, Red Hat’s container runtime. GitHub’s default automation runners are Ubuntu-based, and even though we could use them and install Podman and the OCI hook from source, it’s a lot easier in the long run to use our own Fedora server with the tools pre-installed. This shouldn’t be a problem to set up in a professional software development environment.&lt;/p&gt;

&lt;p&gt;If you want to quickly stand up a build server to test this tutorial, you can use a virtual machine, an old desktop/laptop, or even a Raspberry Pi to host your Fedora server. You just need to make sure it’s connected to the internet when your automation kicks off.&lt;/p&gt;

&lt;p&gt;Once you have a Fedora server, you’ll need to create a secure account on it for your GitHub Actions to use when running. First, let’s become the root user to simplify the rest of the setup:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo su
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, we’ll create an unprivileged account with a home directory:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;useradd -m github
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And create a secure password for the account:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;passwd github
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now we need to make sure that the ‘github’ user has the proper permissions to run the programs it needs. To do that, we’ll use ‘visudo’ to edit the /etc/sudoers file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;visudo
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Add the following line to the bottom of the sudoers file to allow the user ‘github’ to execute dnf and podman as root without needing a password:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;github ALL=(ALL) NOPASSWD: /usr/bin/podman, /usr/bin/dnf
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We must not require a password because GitHub Actions can’t enter one while automating our steps. Once we have the sudoers entry saved, we are done with our Fedora setup for now.&lt;/p&gt;

&lt;p&gt;Next, it’s time to set up the Actions runner. For this part, you’ll need to make sure that your repository is set to Private rather than Public. For security reasons, GitHub doesn’t allow you to use self-hosted runners on public repositories unless you are an organization. To begin setting up our self-hosted runner, we’ll need to go to our GitHub repository and click “Settings.”&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--DN3_tCId--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/djfmtq8rqv13qjbqruv8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--DN3_tCId--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/djfmtq8rqv13qjbqruv8.png" alt="settings-screenshot" width="868" height="57"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the repository settings, go to Actions &amp;gt; Runners. From there, click the green “New self-hosted runner” button in the top right.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--MiC_U1Od--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/t79zlguz27tzuucq02oh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--MiC_U1Od--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/t79zlguz27tzuucq02oh.png" alt="runners screenshot" width="880" height="461"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Select Linux as the operating system, and choose the CPU architecture that your Fedora server is running on. If you are using a Raspberry Pi, you’ll need to select ARM64. Otherwise, you are probably using x64. After that, we need to follow GitHub’s instructions on our Fedora server as the ‘github’ user that we just set up. To switch accounts, you just need to run the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;su github
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Rh_0vzGg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hwgcq4bpku3aejqlvt0d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Rh_0vzGg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hwgcq4bpku3aejqlvt0d.png" alt="runner instructions screenshot" width="816" height="1000"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When configuring the GitHub runner, press Enter to accept both the default runner group and the default runner name. When it asks if you want to add additional labels, type ‘fedora’ and then hit Enter. Lastly, press Enter to accept the default work folder.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--OgtKdVfZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5qov1uk24z3ky4h1to08.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--OgtKdVfZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5qov1uk24z3ky4h1to08.png" alt="terminal screenshot" width="847" height="672"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After configuring the runner, it should be registered with our GitHub repo. We need to make sure to finish the instructions from GitHub and launch the runner with the provided script. The ‘run.sh’ script will make the runner available for Actions to be deployed to it. Once the runner is all set up, we’re ready to set up our automation!&lt;/p&gt;

&lt;h2&gt;
  
  
  Creating the Seccomp Filter Generation Workflow
&lt;/h2&gt;

&lt;p&gt;GitHub Actions workflows are kept in a specific folder of your repository at {root}/.github/workflows. You define each workflow in the YAML file format. In our example repo, I created a file called “seccomp.yml” and put it in that folder. This is how the file starts:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Generate&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Seccomp&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Filter"&lt;/span&gt;

&lt;span class="na"&gt;on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;push&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;paths-ignore&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;seccomp.json&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;At the top of the file, we name our workflow “Generate Seccomp Filter” and specify when our automation should kick off. We are going to run the workflow every time we push code to the GitHub repo, except when the only change is the “seccomp.json” file. The exception prevents GitHub Actions from running the workflow again after generating the Seccomp filter.&lt;/p&gt;

&lt;p&gt;Next, we start to define the actions that need to happen in our workflow:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;jobs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;generate-seccomp-filter&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;runs-on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;fedora&lt;/span&gt;  &lt;span class="c1"&gt;# self-hosted&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;“generate-seccomp-filter” is the name of the job we are about to create, and we specify that it runs on the self-hosted Fedora server we set up earlier. The last thing to do is define the steps we want to take during our automation. First, we want to check out our code, make sure we have the tools we need installed, and build the container image that holds our Flask API:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Check out repository&lt;/span&gt;
    &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/checkout@v2&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Install tools&lt;/span&gt;
    &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;sudo dnf install podman oci-seccomp-bpf-hook&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Build Container&lt;/span&gt;
    &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;sudo podman build --tag demo -f ./Dockerfile&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, we need to run the container and record the syscalls it makes with the OCI hook… But wait, we’re missing something… How do we make sure that our application makes all the syscalls that it needs while we are recording? If we just launch our API server, that doesn’t necessarily record the syscalls that are made when an API endpoint is called. Fortunately, there’s a solution to this problem. We’ll run our unit and functional tests while recording syscalls with the OCI hook!&lt;/p&gt;

&lt;p&gt;A mature software development lifecycle includes unit tests written for the application to ensure code quality. We can piggyback off that effort and use the automated tests to make sure we are hitting all the functionality of our application while we record syscalls. For our example app, I wrote unit tests using pytest and have a script that kicks them off. In our Action workflow, we can run the tests in our container, capture the syscalls in a Seccomp filter, and commit the filter back to our GitHub repository:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Record syscalls&lt;/span&gt;
  &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;sudo&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;podman&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;run&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;--annotation&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;io.containers.trace-syscall=of:/tmp/seccomp.json&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;demo&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;/opt/demo/test.sh"&lt;/span&gt;

&lt;span class="c1"&gt;# Commits updated seccomp filter back to the same branch (commit || echo to not fail on no changes)&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Commit seccomp filter to repo&lt;/span&gt;
  &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
    &lt;span class="s"&gt;cp /tmp/seccomp.json .&lt;/span&gt;
    &lt;span class="s"&gt;git config --global user.name "github-runner[bot]"&lt;/span&gt;
    &lt;span class="s"&gt;git config --global user.email "github-    runner[bot]@users.noreply.github.com"&lt;/span&gt;
    &lt;span class="s"&gt;git add .&lt;/span&gt;
    &lt;span class="s"&gt;git commit -m "generated new seccomp filter" || echo&lt;/span&gt;
    &lt;span class="s"&gt;git push origin ${GITHUB_REF##*/} -f&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That’s the end of our seccomp.yml workflow file. Now every time we push new code to our GitHub repository, it will generate a Seccomp filter for our app! It required some setup to get to this point, but now that it’s done there’s no more effort required to keep an up-to-date Seccomp profile for our application. Below you can see a screenshot from my example repo of the workflow running successfully after I updated my repository’s README.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--h4gZQFOP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/g6c0icizkppmcrojdles.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--h4gZQFOP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/g6c0icizkppmcrojdles.png" alt="github actions screenshot" width="880" height="427"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;You now have everything you need to automatically create your own application-specific syscall whitelist filters. Running your applications with the generated Seccomp-BPF filters will dramatically reduce the risk of them being successfully exploited. The best part of this demonstration is that the GitHub Actions workflow that we created isn’t specific to my example app – it’s mostly generic and can be easily modified for any other project. You only need to make sure that your ‘podman build’ and ‘podman run’ steps reflect the way you build your container image and run your tests. Other than that, you just need to keep in mind one requirement.&lt;/p&gt;

&lt;p&gt;The important thing to remember when generating Seccomp filters for your application is that you NEED to have a very high percentage of your code getting executed in your automated unit and functional tests. If your application has some functionality that isn’t being executed when you are generating the Seccomp filter, there’s a chance that you could be missing some syscalls in your filter. This means they will get blocked when your code tries to run with the Seccomp filter applied.&lt;/p&gt;

&lt;p&gt;Luckily, there’s a tip-off that indicates your application isn’t working because of Seccomp. If Seccomp is preventing your app from making a needed syscall, the error that your application throws will typically include a message about not having permission to do something, since blocked syscalls return a permission error by default. In that case, you just need to figure out which part of your code you missed in your automated testing. Then you add a test for the missed case, and a new Seccomp filter will be generated as soon as you push the new test to your repo. There are tools that help you monitor your test coverage, and I recommend using them if you are generating Seccomp filters for your app.&lt;/p&gt;

&lt;p&gt;That concludes the demonstration on how to automate the generation of Seccomp filters. Because there aren’t many resources out there for Seccomp yet, this is a new concept for most developers and security professionals. However, the security gained from this practice could be a game-changer for businesses trying to protect their mission-critical services. The technology already exists to make Seccomp a standard. I hope that my explanations and examples have made it easier to dive into this new way of securing your applications! &lt;/p&gt;

</description>
      <category>tutorial</category>
      <category>devops</category>
      <category>security</category>
      <category>cybersecurity</category>
    </item>
    <item>
      <title>Securing Containers with Seccomp: Part 1</title>
      <dc:creator>C.J. May</dc:creator>
      <pubDate>Thu, 31 Mar 2022 16:55:54 +0000</pubDate>
      <link>https://dev.to/gitguardian/securing-containers-with-seccomp-part-1-98c</link>
      <guid>https://dev.to/gitguardian/securing-containers-with-seccomp-part-1-98c</guid>
      <description>&lt;p&gt;In this article we present a novel way to protect your container applications post-exploitation. This additional protection is called Seccomp-BPF.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.gitguardian.com%2Fcontent%2Fimages%2Fsize%2Fw2000%2F2022%2F03%2F22W10-blog-secure-containers-with-seccomp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.gitguardian.com%2Fcontent%2Fimages%2Fsize%2Fw2000%2F2022%2F03%2F22W10-blog-secure-containers-with-seccomp.png" alt="intro-image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Lots of businesses are adopting containers as a foundational technology used to manage and run their applications. If you’ve worked much with containers, it’s easy to see why: they enable entirely new levels of portability and scalability. But the adoption of containers, like any other new technology, also means new ways to exploit applications.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Depending on the container’s configuration, an exploited application can eventually lead to the compromise of the host that the container is running on. There are also other implications to consider, such as secrets potentially stored as environment variables in the container and what they grant access to. If you want to know more about Docker container security best practices specifically, GitGuardian offers a useful cheat sheet:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://blog.gitguardian.com/how-to-improve-your-docker-containers-security-cheat-sheet/" rel="noopener noreferrer"&gt;Download the cheat sheet&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A mature software development lifecycle already includes security processes such as vulnerability scanning and software composition analysis, &lt;strong&gt;but there is a need for more&lt;/strong&gt;. Most available application security technology exists to prevent an application from being vulnerable, but not much of it contains the damage that can be done when an application is successfully exploited. To help with that, I’ve been researching a novel way to protect your container applications &lt;strong&gt;post-exploitation&lt;/strong&gt;. In this post, I’ll share what it is and how it can be seamlessly integrated into your established software development processes. The additional protection I’m referring to is called Seccomp-BPF, and I need to explain a little about what it is before diving into how to use it.&lt;/p&gt;

&lt;h2&gt;
  
  
  Background
&lt;/h2&gt;

&lt;p&gt;The programs that we run on computers rely heavily on the underlying operating system to do anything. Tasks like opening files and spawning new processes are abstracted in modern programming languages, but under the hood, the code is &lt;strong&gt;making kernel requests called system calls&lt;/strong&gt; (or syscalls). How important are syscalls for a program to function? Well, there are around 400 syscalls available in the Linux kernel, and even a basic “Hello World” program written in C makes two of them: write and exit.&lt;br&gt;
Code running in so-called “user space” can’t do anything without going through the kernel. Eventually, some smart Linux kernel developers decided to use that fact to create a powerful security feature. In July 2012, Linux 3.5 was released, adding support for something called &lt;a href="https://www.kernel.org/doc/html/v4.16/userspace-api/seccomp_filter.html" rel="noopener noreferrer"&gt;Seccomp-BPF&lt;/a&gt;.&lt;/p&gt;
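&lt;p&gt;You can watch this abstraction at work from a high-level language, too. Here’s a small Python sketch (my own illustration, not from the original demo) in which ordinary standard-library calls are thin wrappers over the kernel’s pipe, write, and read syscalls:&lt;/p&gt;

```python
import os

# Each of these os functions is a thin wrapper over a Linux syscall;
# run this script under `strace` to watch the underlying calls happen.
r, w = os.pipe()                  # pipe/pipe2 syscall
os.write(w, b"Hello World\n")     # write syscall
data = os.read(r, 64)             # read syscall
os.close(r)                       # close syscall
os.close(w)
print(data)  # b'Hello World\n'
```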

&lt;blockquote&gt;
&lt;p&gt;Seccomp-BPF is a Linux kernel feature that allows you to restrict the syscalls that a process can make by creating a special filter.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;In theory, you can create a Seccomp-BPF filter that only allows a process to make the exact syscalls it needs to function and nothing more&lt;/strong&gt;. This is useful in cases where an app is exploitable in a way that allows an adversary to spawn additional processes. If Seccomp doesn’t allow the process to make any new syscalls, there’s a good chance it will thwart the attacker.&lt;/p&gt;

&lt;p&gt;Seccomp is super cool, and &lt;strong&gt;it’s even integrated into container runtime and orchestration tools like Docker and Kubernetes&lt;/strong&gt;. This raises the question: “Why isn’t Seccomp widely used?” I think the answer is that there aren’t enough resources out there that bridge the gap between a low-level kernel feature like Seccomp and modern software development processes. Not every organization has a low-level developer who knows a ton about syscalls. There’s also the overhead of figuring out which syscalls your program needs and updating that list with every new feature you implement.&lt;/p&gt;

&lt;p&gt;I was thinking about how to solve that problem, and I thought of an idea: “What if we record the syscalls that a program makes while it’s running?” I was telling one of my co-workers about my idea, and the next day he sent me a link to a tool he found on GitHub. It turned out that some folks at Red Hat had already made a tool called &lt;a href="https://github.com/containers/oci-seccomp-bpf-hook" rel="noopener noreferrer"&gt;oci-seccomp-bpf-hook&lt;/a&gt; that does exactly what I wanted!&lt;/p&gt;
&lt;h2&gt;
  
  
  Creating a Seccomp-BPF Filter
&lt;/h2&gt;

&lt;p&gt;The tool oci-seccomp-bpf-hook was made to work with Linux containers. OCI stands for “Open Container Initiative,” a set of standards that defines the interfaces container runtimes should provide. OCI-compliant container runtimes (like Docker) provide a mechanism called “hooks” that lets you run code before a container is spun up and after it is torn down. Rather than explain how Red Hat’s tool uses these hooks, I think a demonstration will be clearer.&lt;/p&gt;

&lt;p&gt;Red Hat developed oci-seccomp-bpf-hook for use with their container runtime, &lt;a href="https://podman.io/" rel="noopener noreferrer"&gt;podman&lt;/a&gt;. Podman is backward-compatible with Docker, for the most part, so the syntax in my examples will look mostly familiar if you’ve used Docker. Additionally, the OCI hook is currently only available in Red-Hat-related dnf repositories unless you install it from source. To make things less complicated for this demo, I’m using a Fedora server (if you don’t have a Fedora environment, I recommend running a Fedora virtual machine in something like VirtualBox or VMware to follow along).&lt;/p&gt;

&lt;p&gt;The first thing you’ll need to do to start using oci-seccomp-bpf-hook is to make sure you have it installed along with podman. To do that, we can run the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo dnf install podman oci-seccomp-bpf-hook
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now that we have podman and the oci hook, we can finally dive into how to generate a Seccomp-BPF filter. From the &lt;a href="https://github.com/containers/oci-seccomp-bpf-hook" rel="noopener noreferrer"&gt;readme&lt;/a&gt;, the syntax is:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo podman run --annotation io.containers.trace-syscall="if:[absolute path to the input file];of:[absolute path to the output file]" IMAGE COMMAND
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let’s run the ls command in a basic container and redirect the output to /dev/null. While we’re doing that, we’ll record the syscalls that the ls command makes and save them to a file at /tmp/ls.json.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo podman run --annotation io.containers.trace-syscall=of:/tmp/ls.json fedora:35 ls / &amp;gt; /dev/null
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Since we are redirecting the output of the ls command to /dev/null, there should be no output in the terminal. But after the command finishes, we can look at the file that we saved the syscalls to. There we see that the command did work, and the syscalls were captured:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cat /tmp/ls.json
{"defaultAction":"SCMP_ACT_ERRNO","architectures":["SCMP_ARCH_X86_64"],"syscalls":[{"names":["access","arch_prctl","brk","capset","chdir","close","close_range","dup2","execve","exit_group","fchdir","fchown","fstatfs","getdents64","getegid","geteuid","getgid","getrandom","getuid","ioctl","lseek","mmap","mount","mprotect","munmap","newfstatat","openat","openat2","pivot_root","prctl","pread64","prlimit64","pselect6","read","rt_sigaction","rt_sigprocmask","seccomp","set_robust_list","set_tid_address","sethostname","setresgid","setresuid","setsid","statfs","statx","umask","umount2","write"],"action":"SCMP_ACT_ALLOW","args":[],"comment":"","includes":{},"excludes":{}}]}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This file is our Seccomp filter, and we can now use it with any container runtime that supports it. Let’s try using the filter with the same containerized ls command that we just ran:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo podman run --security-opt seccomp=/tmp/ls.json fedora ls / &amp;gt; /dev/null
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;There’s no output or errors, indicating that the command ran successfully with the Seccomp filter applied. Now comes the fun part. We will exercise some functionality that wasn’t present when we recorded the syscalls for our Seccomp filter. All we’re going to do is add the “-l” flag to our ls command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo podman run --security-opt seccomp=/tmp/ls.json fedora ls -l / &amp;gt; /dev/null
ls: /: Operation not permitted
ls: /proc: Operation not permitted
ls: /root: Operation not permitted
…
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As you can see, we now get a bunch of errors telling us that we can’t perform some operation our command was trying to do. Adding the “-l” flag to our ls command introduced a few new syscalls that weren’t in our Seccomp filter’s allow list. If we generate a new Seccomp filter with the ls -l command, we can see that the new filter works because it now includes all the required syscalls.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo podman run --annotation io.containers.trace-syscall=of:/tmp/lsl.json fedora ls -l / &amp;gt; /dev/null

sudo podman run --security-opt seccomp=/tmp/lsl.json fedora ls -l / &amp;gt; /dev/null
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As you can see, applying Seccomp filters to your containers greatly restricts their capabilities. In a scenario where an attacker can exploit your application, it may stop them from doing damage or even prevent exploitation altogether.&lt;/p&gt;

&lt;p&gt;By using Red Hat’s OCI hook, you no longer need to have a deep knowledge of the Linux kernel’s syscalls to create a Seccomp filter. You can easily create an application-specific filter that &lt;strong&gt;doesn’t allow your container to do anything more than what it needs to be able to do&lt;/strong&gt;. This is a huge step in bridging the gap between the kernel feature and high-level software development.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;As great as oci-seccomp-bpf-hook is, the tool alone doesn’t fully live up to my expectations for integrating Seccomp into a mature software engineering workflow. There is still overhead involved in running the tool, and as a software developer, you don’t want to spend time manually updating your Seccomp filter for every update of your application. To bridge that final gap and make it as easy as possible to use Seccomp in enterprise applications, &lt;strong&gt;we need to find a way to automate the generation of Seccomp-BPF filters&lt;/strong&gt;. Fortunately, when we look at how modern software development happens, there is already a perfect place for this automation to happen: &lt;strong&gt;during Continuous Integration (CI)&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;CI workflows are already a well-established part of a mature software development lifecycle. For those who aren’t familiar with CI, it enables you to do things like automated unit testing and code security scanning every time you commit code to your git repository. There are lots of CI tools out there, and CI is the perfect place to automate the generation of a Seccomp filter for your containerized application.&lt;/p&gt;
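&lt;p&gt;As a rough sketch of what such a CI step could do (the helper and profiles below are hypothetical illustrations, not part of any existing tool), you could merge the syscalls observed during a traced test run into the filter committed in your repo:&lt;/p&gt;

```python
def merge_filters(committed: dict, traced: dict) -> dict:
    """Union the allowed syscall names of two seccomp profiles (hypothetical CI step)."""
    names = set(committed["syscalls"][0]["names"]) | set(traced["syscalls"][0]["names"])
    merged = dict(committed)
    merged["syscalls"] = [dict(committed["syscalls"][0], names=sorted(names))]
    return merged

# Made-up example profiles in the shape produced by oci-seccomp-bpf-hook
committed = {"defaultAction": "SCMP_ACT_ERRNO",
             "syscalls": [{"action": "SCMP_ACT_ALLOW", "names": ["read", "write"]}]}
traced = {"defaultAction": "SCMP_ACT_ERRNO",
          "syscalls": [{"action": "SCMP_ACT_ALLOW", "names": ["openat", "write"]}]}

print(merge_filters(committed, traced)["syscalls"][0]["names"])
# ['openat', 'read', 'write']
```

&lt;p&gt;The merged profile can then be committed back to the repo (or flagged for review) so the filter keeps pace with your code.&lt;/p&gt;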

&lt;p&gt;We are running out of time for this post, so I’ll be back in another post with a demonstration of how to create a CI workflow that generates a Seccomp filter every time you update your code. Then you will finally be equipped to take advantage of Seccomp’s syscall restriction and secure your applications!&lt;/p&gt;

&lt;p&gt;To be continued…&lt;/p&gt;

</description>
      <category>security</category>
      <category>cicd</category>
      <category>devops</category>
      <category>cybersecurity</category>
    </item>
    <item>
      <title>Best practices: 5 Risks to Assess for a Secure CI Pipeline</title>
      <dc:creator>C.J. May</dc:creator>
      <pubDate>Wed, 26 Jan 2022 04:09:11 +0000</pubDate>
      <link>https://dev.to/gitguardian/best-practices-5-risks-to-assess-for-a-secure-ci-pipeline-5781</link>
      <guid>https://dev.to/gitguardian/best-practices-5-risks-to-assess-for-a-secure-ci-pipeline-5781</guid>
      <description>&lt;p&gt;&lt;em&gt;This article was originally posted on &lt;a href="https://blog.gitguardian.com/ci-pipelines-5-risks-to-assess/"&gt;GitGuardian's blog&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;C.J. May&lt;/em&gt;&lt;br&gt;
Information security professional and passionate programmer&lt;br&gt;
with broad interests encompassing many areas of IT.&lt;br&gt;
&lt;a href="https://twitter.com/lawndoc"&gt;&lt;em&gt;Twitter&lt;/em&gt;&lt;/a&gt; | &lt;a href="https://github.com/lawndoc"&gt;&lt;em&gt;GitHub&lt;/em&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--vBia38by--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://blog.gitguardian.com/content/images/size/w2000/2022/01/21W41-Blog-Banner-5-risks-CI-pipelines-1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--vBia38by--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://blog.gitguardian.com/content/images/size/w2000/2022/01/21W41-Blog-Banner-5-risks-CI-pipelines-1.png" alt="gear icon" width="880" height="515"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As the computing world continues to develop new processes for creating software, criminals continue to evolve their own techniques that exploit the flaws in those processes. DevOps is the latest trend in software development, and it is characterized by high levels of automation. More and more parts of the software development process can occur without human intervention, which speeds up development. However, this is not without its drawbacks.&lt;/p&gt;

&lt;p&gt;Less human involvement means less oversight from start to finish, and it also means more technologies to potentially exploit or abuse. Most of the risk involved is related to using sensitive information within automation, allowing for several ways to steal secrets. There are also things like code tampering to worry about. To keep your code and secrets safe, you should add the following security practices to your CI pipeline.&lt;/p&gt;

&lt;h3&gt;
  
  
  Third-party workflows
&lt;/h3&gt;

&lt;p&gt;From the Codecov attack, we can see the impact that a single workflow can have on the security of our overall CI pipeline. Supply chain attacks are on the rise, and supply chain risk is a whole topic in and of itself (learn more about it here). Right now, it’s more important than ever to give extra attention to the things you are exposing your code and infrastructure to.&lt;/p&gt;

&lt;p&gt;It’s common to use third-party workflows in a CI pipeline, but you need to understand that when you do so, you are trusting that third party with your code and potentially your secrets. If possible, review the source code of the images and workflows you are considering using in your CI pipeline. If the workflow is tracked with version control, you could also periodically review changes to ensure that nothing suspicious has slipped in.&lt;/p&gt;

&lt;p&gt;Due to time constraints or the availability of the source code, this may not always be possible. However, you need to weigh that against the risk of blindly trusting a third party’s workflow. At the very least, look for workflows that are either published by a trusted party or are popular and widely trusted by others. There will always be some risk involved in trusting a third party, which reiterates the importance of taking measures to limit the damage that can be done.&lt;/p&gt;

&lt;h3&gt;
  
  
  Access control
&lt;/h3&gt;

&lt;p&gt;The next thing you should consider when securing your CI pipeline is access control. Key management systems offer features like role-based access control to determine who can use which secrets, and you should always follow the principle of least privilege when granting that access. Fine-tuning access at the key management system level is very important; however, it’s not the only point of risk in a secret’s lifecycle.&lt;/p&gt;

&lt;p&gt;Consider this scenario: you maintain a popular open-source repository on GitHub, and you get pull requests somewhat routinely. To help you save time in evaluating the code that is submitted, you kick off some automated tests when someone submits a new pull request.&lt;/p&gt;

&lt;p&gt;One day, someone submits a pull request with malicious code that grabs environment variables and ships them out to the author’s server. They also add a test to make sure their code runs when your CI pipeline does. As soon as they submit the pull request, the pipeline kicks off and all the environment variables in your CI environment are stolen. Oops.&lt;/p&gt;

&lt;p&gt;The above example isn’t even the worst-case scenario. You can read about similar scenarios in this blog post. In that instance, the attacker was able to pivot out of the testing environment and steal even more valuable information.&lt;/p&gt;

&lt;p&gt;The last thing you want is an untrusted party being able to run arbitrary code in your build environment. It’s not enough that your secrets are protected in a key management system. You also need to make sure that you trust the code that will be using those secrets. Especially if you are an open-source maintainer, don’t run any workflows on pull requests until you have read the code yourself.&lt;/p&gt;

&lt;h3&gt;
  
  
  Hashing/signing builds
&lt;/h3&gt;

&lt;p&gt;Hashing builds is more important for the security of your users than it is for your own code, but taking the concept a bit further can benefit you in some ways as well. As an example, let’s examine one of the biggest cyberattacks from 2020: the SolarWinds supply-chain attack.&lt;/p&gt;

&lt;p&gt;SolarWinds is a leading provider of network monitoring and management tools that are used by countless organizations. In a long-term campaign, an adversary infected SolarWinds’ build servers with a custom piece of malware called Sunspot. Sunspot was a very advanced piece of malware that monitored running processes on the build servers, looking specifically for processes involved in the compilation of SolarWinds’ Orion product.&lt;/p&gt;

&lt;p&gt;When Sunspot would see Orion being compiled on the build server, it would inject extra code, later dubbed “Sunburst,” into the compiled software. Sunburst was a backdoor that the attackers used to get access to every organization that ran Orion. You can read more about the SolarWinds attack on CrowdStrike’s blog.&lt;/p&gt;

&lt;p&gt;So, what is our takeaway from this? Because the build itself was infected, it was signed by SolarWinds and shipped as part of the product they released. To detect such an attack in the future, SolarWinds has stated that they are investigating ways to validate concurrent builds against each other. This could be a challenge, given that a hash changes completely when even a single byte of a build differs (embedded timestamps, for example), but it is still something to explore to be able to detect build injections.&lt;/p&gt;
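&lt;p&gt;The core of that idea is simple to sketch (the artifact bytes below are stand-ins; in practice, the builds must be reproducible, with no embedded timestamps, for independent builds to hash identically):&lt;/p&gt;

```python
import hashlib

def digest(artifact: bytes) -> str:
    """SHA-256 digest of a build artifact."""
    return hashlib.sha256(artifact).hexdigest()

# Stand-ins for the same source compiled on two independent build servers
artifact_a = b"\x7fELF...compiled bytes..."
artifact_b = b"\x7fELF...compiled bytes..."

if digest(artifact_a) != digest(artifact_b):
    raise SystemExit("Build mismatch: one build environment may be compromised")
print("builds match:", digest(artifact_a)[:16])
```

&lt;p&gt;A single malware-injected build server would produce a different digest than its clean peers, which is exactly the signal this comparison is after.&lt;/p&gt;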

&lt;h3&gt;
  
  
  Protecting your build environment
&lt;/h3&gt;

&lt;p&gt;From the SolarWinds attack, we can clearly see that the build environment isn’t just a developer sandbox — it’s an extremely sensitive asset that requires protection. Sunspot was able to stay on SolarWinds’ build servers for a long time, but there were definitely IOCs (indicators of compromise) that could have given away the attackers’ presence.&lt;/p&gt;

&lt;p&gt;Without getting too deep into computer and network defense, there are some general best practices for securing your build servers. First, make sure that you have proper logging enabled on your build servers and forward the logs to a SIEM or log aggregator. Second, consider deploying an EDR agent to your build servers to get added telemetry and alerts when something suspicious happens. Third, utilize network prevention and detection tools, and treat your build servers with scrutiny.&lt;/p&gt;

&lt;p&gt;There are too many potential layers of computer protection to list all of them, so we will stop there. If you don’t have much experience on the computer and network defense side of security, reach out to experts who can provide additional support or guidance. You can also look at a framework like NIST’s Cybersecurity Framework to get pointed in the right direction.&lt;/p&gt;

&lt;h3&gt;
  
  
  Secrets management
&lt;/h3&gt;

&lt;p&gt;If you’re an experienced software engineer or security professional, you’ve probably heard of API keys leaking from public code repositories. Maybe you’ve even experienced your own secrets getting leaked after accidentally committing them to an open-source project. Depending on the type of secret that was leaked, it could end up being a costly mistake.&lt;/p&gt;

&lt;p&gt;The best way to protect your secrets is to practice good secrets management. A good start is to use secret management tools like Azure Key Vault or AWS KMS that provide secure storage and identity-based access (learn more here). Using GitHub’s built-in repository secrets manager also works well depending on your use case, but it isn’t as feature-rich as a true key management service.&lt;/p&gt;

&lt;p&gt;Another must-have for secrets management is a tool that can tell you right away if you accidentally commit a secret to your codebase. There are some different options out there, but secret detection is GitGuardian’s specialty. It has hundreds of built-in secret detectors and is free for open-source projects. Knowing right when you accidentally expose your secrets is crucial in protecting yourself and your code.&lt;/p&gt;
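&lt;p&gt;As a toy illustration of how pattern-based secret detection works (two illustrative regexes of my own, nowhere near the hundreds of detectors a real product ships), consider:&lt;/p&gt;

```python
import re

# Illustrative patterns only; real detectors also weigh entropy and context.
SECRET_PATTERNS = {
    "AWS access key ID": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "Generic API key assignment": re.compile(
        r"(?i)\b(api[_-]?key|secret)\s*=\s*['\"][^'\"]{16,}['\"]"),
}

def scan(text: str) -> list:
    """Return (pattern name, match) pairs for anything that looks like a secret."""
    return [(name, m.group(0)) for name, rx in SECRET_PATTERNS.items()
            for m in rx.finditer(text)]

# AWS's well-known documentation example key ID
hits = scan('aws_key = "AKIAIOSFODNN7EXAMPLE"')
print(hits)  # [('AWS access key ID', 'AKIAIOSFODNN7EXAMPLE')]
```

&lt;p&gt;Even this toy version shows why catching a secret at commit time, before it ever reaches a public repo, is feasible to automate.&lt;/p&gt;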

&lt;p&gt;Even with the above practices in place, there’s still no guarantee that your secrets are safe. A while back, Codecov’s Docker image was quietly modified to leak the secrets of anyone who used it in their CI pipeline for testing. You can read about it here, but this attack impacted tens of thousands of their customers and users.&lt;/p&gt;

&lt;p&gt;Even though something like the Codecov attack is impossible to prevent by yourself, you can still limit the impact it would have on your organization. If possible, use different secrets in production than you use in your CI pipeline. That way if something like Codecov’s compromised workflow steals your keys, it won’t affect your production environment.&lt;/p&gt;

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;Unfortunately, cyber adversaries are always evolving their techniques, and the challenge of protecting against novel attacks will always be there. The CI pipeline is one of their newest targets, and it presents a lot of opportunity because of the lack of human involvement. Following these security practices will greatly improve the security of your CI pipeline, but make sure you also keep up to date on new trends in attackers’ techniques.&lt;/p&gt;

</description>
      <category>ci</category>
      <category>devops</category>
      <category>security</category>
    </item>
    <item>
      <title>CISO Roadmap: The First 90 Days</title>
      <dc:creator>C.J. May</dc:creator>
      <pubDate>Mon, 22 Nov 2021 20:28:14 +0000</pubDate>
      <link>https://dev.to/gitguardian/ciso-roadmap-the-first-90-days-efl</link>
      <guid>https://dev.to/gitguardian/ciso-roadmap-the-first-90-days-efl</guid>
      <description>&lt;p&gt;&lt;em&gt;This article was originally published on &lt;a href="https://blog.gitguardian.com/ciso-roadmap-first-90-days/"&gt;GitGuardian's blog&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Introduction
&lt;/h3&gt;

&lt;p&gt;You may have found this article for several reasons. Maybe you just landed a big promotion to CISO/CSO or you’re looking to become one soon. You could have been a CISO or security lead for a while now and are looking for new ideas. Whatever brought you here, you will come away with a game plan for strengthening your information security program.&lt;/p&gt;

&lt;p&gt;First and foremost, a CISO is an advocate for good information security practices within the business. Stating it like that makes it sound simple, but there are so many responsibilities and considerations that go into a complete information security program. Security considerations go far beyond just technical controls. The many domains of cyber security could be listed along with key components of each, but it’s much easier to provide a graphic published by a current CISO that organizes them into a map.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--bOHjpbat--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://miro.medium.com/max/1750/0%2ACz8duSloy6vDEiL9.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--bOHjpbat--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://miro.medium.com/max/1750/0%2ACz8duSloy6vDEiL9.jpg" alt="domain map image" width="880" height="506"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://www.linkedin.com/pulse/cybersecurity-domain-map-ver-30-henry-jiang"&gt;Cybersecurity Domain Map&lt;br&gt;
Henry Jiang, Cybersecurity Domain Map ver 3.1&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;There’s a lot going on in this graphic, so let’s break it down. It shows that most areas of cyber security can be grouped into the following broader domains: &lt;strong&gt;physical security, governance, risk assessment, enterprise risk management, threat intelligence, security architecture, user education, and security operations.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Each broader domain has its own practices and considerations that get more granular. From the graphic above, it’s clear there are many things to take into consideration for an information security program. As a CISO/CSO you need to understand the purpose of each responsibility or control so you can effectively weigh the cost of implementing a program against the risk it is designed to mitigate.&lt;/p&gt;

&lt;p&gt;This leads to the most overlooked side of a CISO’s role: their responsibility to the business and its profit. As an executive-level role, a CISO has a seat at the table where the most critical business decisions are made. In that position, you need to be the mediator between profit and risk. At some point, additional security controls or practices may become cost- or productivity-prohibitive, which can adversely affect the business. You need to be able to recognize when your business needs growth in an area of information security, and you also need to be able to communicate your decisions effectively to the other executives and the board.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--koBGwait--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://miro.medium.com/max/1750/0%2AzlxdoEDl9JaR_0Mm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--koBGwait--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://miro.medium.com/max/1750/0%2AzlxdoEDl9JaR_0Mm.png" alt="quote image" width="880" height="263"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;With all that responsibility, it might feel overwhelming to try to figure out where to begin. To help narrow things down and guide future action, &lt;strong&gt;you should focus on three main things in your first 90 days.&lt;/strong&gt; First, you should understand the other leaders in your company and their awareness of and appetite for security. Second, you should plan to drive a security-minded culture within your organization. And lastly, you, of course, need to evaluate the technical side of your business’s current information security program. With all of those covered, you should be ready to tackle some big positive changes for your company.&lt;/p&gt;

&lt;h3&gt;
  
  
  Leadership’s relationship with information security
&lt;/h3&gt;

&lt;p&gt;Information security solutions and programs need to be properly staffed and implemented, and to that end, you need to be able to effectively communicate the business’s security needs to leadership. People who don’t have a background in information security may have widely varying thoughts about it. &lt;strong&gt;Business executives are no exception.&lt;/strong&gt; One of the first things you should do as CISO is gauge your board of directors’ initial concerns and priorities.&lt;/p&gt;

&lt;p&gt;During the interview process to become a CISO, you will probably already get a pretty good idea of your business leaders’ awareness of information security risks. Odds are they are aware of big threats like ransomware, but are they concerned about other topics like &lt;strong&gt;developing secure software&lt;/strong&gt; or &lt;strong&gt;third-party risk&lt;/strong&gt;? Ransomware clearly poses an immediate financial risk to a company, but things like brand perception, laws/regulations, and intellectual property loss are also likely to be impacted by various types of cyber threats. Understanding your business’s concerns can help indicate how they would currently prioritize investment in information security.&lt;/p&gt;

&lt;p&gt;Knowing your leaders’ biggest priorities in terms of data protection is important for a couple of reasons. First, it can add to your perspective about your company’s &lt;strong&gt;most valuable information&lt;/strong&gt; and the current comfort level in how well it is protected. If you are new to your business, you may not have a clear view yet of the data residing in your organization and the current processes to protect that data.&lt;/p&gt;

&lt;p&gt;The other reason it’s important to know your leaders’ priorities is to be able to take that into account when pushing for investment in information security. For example, you may find that your risk in an area like application security doesn’t match up with other leaders’ thoughts about how important it is to prioritize improving it. That’s when you know you need to devote more time to changing perspective.&lt;/p&gt;

&lt;p&gt;With all of this in mind, you can start charting out some action items to take care of in the coming months. To know what risks your executives are thinking about, you should periodically ask questions like, “What are your biggest fears right now regarding cyber security?” or, “Based on what I have been sharing, how do you feel about how we are addressing our cyber risk in X area?” Hopefully, these questions lead into broader discussions where &lt;strong&gt;you get to weigh in with your knowledge of the current threat landscape&lt;/strong&gt; and additional risks that may not currently be considered. Once you’ve recognized the areas where you need to push more, you need to acquire the means to do so by setting up pipelines for all sorts of metrics. This will take some time, but insightful metrics are the most effective tool you have for communicating risk and needs to leaders of other business areas. As the highest advocate for security, you need to be able to clearly demonstrate when there is a need that must be met.&lt;/p&gt;
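&lt;p&gt;As a minimal sketch of what such a metrics pipeline can feed into, the following compares a few measurements against agreed-upon targets and produces a plain-language status for each. The metric names, values, and targets here are hypothetical placeholders, not figures from any real program.&lt;/p&gt;

```python
from dataclasses import dataclass

@dataclass
class Metric:
    name: str
    value: float          # current measurement from your pipeline
    target: float         # goal agreed upon with leadership
    higher_is_better: bool

def on_track(m: Metric) -> bool:
    # A metric is on track when it meets or beats its target.
    return m.value >= m.target if m.higher_is_better else m.value <= m.target

def board_summary(metrics: list[Metric]) -> str:
    # Render each metric as a one-line, plain-language status for executives.
    lines = []
    for m in metrics:
        status = "on track" if on_track(m) else "needs investment"
        lines.append(f"{m.name}: {m.value:g} (target {m.target:g}) - {status}")
    return "\n".join(lines)

# Hypothetical measurements for illustration only.
metrics = [
    Metric("Phishing simulation failure rate (%)", 6.0, 5.0, higher_is_better=False),
    Metric("MFA coverage (%)", 98.0, 95.0, higher_is_better=True),
    Metric("Median days to patch critical vulns", 12.0, 14.0, higher_is_better=False),
]
print(board_summary(metrics))
```

&lt;p&gt;The point of the summary is the “needs investment” flag: it turns raw telemetry into a statement a non-technical leader can act on.&lt;/p&gt;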

&lt;h3&gt;
  
  
  The business’s relationship with information security
&lt;/h3&gt;

&lt;p&gt;An often-overlooked part of information security is the general relationship with security that employees have. Every security professional understands the important role individual employees have in protecting the company from breaches and mistakes that can lead to information leakage. An employee’s relationship with security can really be broken down into two parts: &lt;strong&gt;awareness&lt;/strong&gt; and &lt;strong&gt;unity&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;There are various ways to improve general awareness about security issues as they pertain to different roles. Most companies have some sort of phishing training to build awareness about security because that is something that can affect anyone no matter what their role is. Many companies also have security policies in place that pertain to proper usage of computers; however, it’s equally important to have a way to inform employees about these policies. Automatic labels such as external email address headers and data classification labels are great examples of technical controls that help employees think more critically about the information involved.&lt;/p&gt;
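&lt;p&gt;To illustrate the kind of control mentioned above, here is a minimal sketch of external email subject tagging. The domain name and tagging logic are hypothetical; in practice this is usually configured as a rule in your mail gateway rather than written by hand.&lt;/p&gt;

```python
# Hypothetical sketch: tag mail from outside the organization so employees
# pause before trusting it. "example.com" stands in for your real domain.
INTERNAL_DOMAIN = "example.com"

def tag_external(sender: str, subject: str) -> str:
    # Prepend an [EXTERNAL] banner when the sender's domain is not ours.
    domain = sender.rsplit("@", 1)[-1].lower()
    if domain != INTERNAL_DOMAIN:
        return f"[EXTERNAL] {subject}"
    return subject

print(tag_external("partner@vendor.io", "Invoice attached"))
print(tag_external("alice@example.com", "Team lunch"))
```

&lt;p&gt;The control is deliberately simple: the banner itself is the awareness mechanism, nudging the reader to scrutinize the message before clicking anything.&lt;/p&gt;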

&lt;p&gt;Building awareness is important, but successful business security culture also comes from a foundation of support from all groups involved. If you’ve worked in information security for a long time, you probably have your share of stories about individuals or even whole departments with a very bad perception of the security team. In many such situations, it comes from past instances where security controls have severely impacted productivity, or where security personnel have interacted with employees in a demeaning way, causing them to become defensive. These inconsiderate ways of delivering a security program can undermine its effectiveness.&lt;/p&gt;

&lt;p&gt;As another example of security program effectiveness, consider phishing training. It can be very challenging in practice to improve these skills over time. Part of the reason is a lack of engagement from the employees receiving the training. Even outside of security personnel, there are phish-identifying rockstars out there who rarely, if ever, fail a phishing test. What sets them apart? The answer is that they are probably more enthusiastic about security or take phishing training more seriously. Not everyone is inherently interested in information security, so finding creative ways to make security exciting, or at least top of mind, in other departments is paramount in building a successful security culture.&lt;/p&gt;

&lt;p&gt;At the end of the day, the information security department must implement controls and programs that are for the good of the company, whether they are completely supported or not. However, a little consideration goes a long way in improving employees’ relationships with security. All of this raises some takeaway questions you should be investigating within your organization:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What security awareness programs do you currently have in place?&lt;/li&gt;
&lt;li&gt;How effective are those programs in driving reduced risk in those areas?&lt;/li&gt;
&lt;li&gt;How aware are your employees of security policies?&lt;/li&gt;
&lt;li&gt;What technical controls can you implement that would increase employees’ consideration of potentially sensitive information?&lt;/li&gt;
&lt;li&gt;What is the business’s general perception of the security team and the current controls you have in place?&lt;/li&gt;
&lt;li&gt;Do you have any controls that severely inhibit productivity?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;By answering these questions, you should have identified some things you can start working to improve.&lt;/p&gt;

&lt;h3&gt;
  
  
  The information security program’s technical maturity
&lt;/h3&gt;

&lt;p&gt;Improving technical maturity is where things can get the busiest. As you could see from the earlier “Map of Cyber Security Domains,” there are many areas of information security that address different types of risk. No one domain is necessarily more important than another. Nevertheless, understanding your organization’s maturity in addressing risk in each of these domains is an important first step as a CISO.&lt;/p&gt;

&lt;p&gt;It can be difficult to determine where to begin improving your company’s technical information security, but there are frameworks such as the NIST Cybersecurity Framework or CIS Controls that can help guide you. Cyber security frameworks have their own pros and cons, so you should be mindful about how they are driving your changes.&lt;/p&gt;

&lt;p&gt;The biggest benefit of cyber security frameworks is that they help ensure that nothing is overlooked. They provide a holistic guide that, if followed from beginning to end, should produce a very effective information security program. Prioritization is usually built into these frameworks, which makes it easier to decide what to focus on. However, there are some pitfalls when you follow a cyber security framework too diligently.&lt;/p&gt;

&lt;p&gt;The first pitfall of cyber security frameworks is the “checking boxes” mindset. There are innumerable ways to implement a security control, and they are NOT all equal in effectiveness. Many security software vendors prey on this mindset of plugging a security hole as quickly as possible for the least amount of money. &lt;strong&gt;Implementing controls without consideration for residual risk, business culture, and operational efficiency is a recipe for disaster.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The second pitfall of cyber security frameworks is treating security like a destination rather than an evolving entity. As time goes on, your business and cyber security will both change. Frameworks prioritize controls in a relatively effective manner, but they can’t account for every organization’s data protection needs, which change or regress over time for various reasons. Without taking this additional context into account, you may find yourself dedicating time and resources to one control while much more cyber risk resides elsewhere. Therefore, it’s important to periodically take inventory of your business’s individual cyber risk so that you always know where your priorities need to be.&lt;/p&gt;
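&lt;p&gt;One lightweight way to take that periodic inventory is a simple risk register that scores each risk as likelihood times impact and ranks the results. The risk names and 1–5 scores below are hypothetical placeholders; a real assessment would use your organization’s own data and rating scale.&lt;/p&gt;

```python
# Hypothetical risk register: each entry maps a risk to (likelihood, impact),
# both rated on a 1-5 scale. Score = likelihood x impact.
risks = {
    "Ransomware via phishing": (4, 5),
    "Third-party vendor breach": (3, 4),
    "Insecure application dependencies": (4, 3),
    "Insider data leakage": (2, 4),
}

def prioritize(register: dict[str, tuple[int, int]]) -> list[tuple[str, int]]:
    # Rank risks by likelihood x impact, highest score first.
    scored = [(name, likelihood * impact) for name, (likelihood, impact) in register.items()]
    return sorted(scored, key=lambda item: item[1], reverse=True)

for name, score in prioritize(risks):
    print(f"{score:2d}  {name}")
```

&lt;p&gt;Re-running a ranking like this every quarter is one way to catch the drift the paragraph above describes: a control you invested in last year may no longer sit at the top of the list.&lt;/p&gt;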

&lt;p&gt;Advancing your information security program’s technical maturity is where you will probably dedicate the most time as a CISO. This is because of the high volume of projects and action items and the time it takes to implement each one. The first thing you should do is develop your awareness of the data you are trying to protect and the current controls you have in place to protect it. Once you are equipped with that knowledge, you can follow a framework to help guide your well-informed game plan.&lt;/p&gt;

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;Hopefully, you have come away from this piece with a stronger sense of direction and some great new ideas. The biggest takeaway you should have is that information security isn’t just about technical controls. It takes a unified effort across the business to keep information safe. As a CISO, it’s important to be able to understand and empathize with the business side of your role while still advocating for the projects that are needed the most. Your success drives the business’s success, and vice-versa. With a strong security culture as your foundation, you can bring about a lot of positive change for your company’s information security program.&lt;/p&gt;

</description>
      <category>security</category>
      <category>career</category>
      <category>architecture</category>
      <category>leadership</category>
    </item>
  </channel>
</rss>
