<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Jit - Minimum Viable Security for Developers</title>
    <description>The latest articles on DEV Community by Jit - Minimum Viable Security for Developers (@jit).</description>
    <link>https://dev.to/jit</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Forganization%2Fprofile_image%2F5208%2F8f567bd1-3d17-4a5f-9f28-d3d001e4e460.png</url>
      <title>DEV Community: Jit - Minimum Viable Security for Developers</title>
      <link>https://dev.to/jit</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/jit"/>
    <language>en</language>
    <item>
      <title>Playing Around with AWS-Vault for Fun &amp; Profit</title>
      <dc:creator>Ohav Almog</dc:creator>
      <pubDate>Mon, 12 Jun 2023 19:49:52 +0000</pubDate>
      <link>https://dev.to/jit/playing-around-with-aws-vault-for-fun-profit-bg7</link>
      <guid>https://dev.to/jit/playing-around-with-aws-vault-for-fun-profit-bg7</guid>
      <description>&lt;h1&gt;
  
  
  Introduction
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://github.com/99designs/aws-vault"&gt;AWS-Vault&lt;/a&gt; is an excellent open-source tool by 99Designs that enables developers to store AWS credentials in their machine keystore securely. After using it for a while at &lt;a href="https://jit.io"&gt;Jit&lt;/a&gt;, I decided to dig deeper into how it works and learned a lot along the way. In this article, I will summarize and simplify the information I learned to help others with their AWS-Vault adoption and lower the barrier to usage.&lt;/p&gt;

&lt;p&gt;I will start with a basic explanation of AWS access keys, the Security Token Service (STS), and its most useful API calls. Then I will show how aws-vault uses them to provide a secure way to access AWS resources, reducing the risk of exposing AWS credentials. In addition, I will present a typical AWS account pattern, and again we'll see how aws-vault works perfectly with it.&lt;/p&gt;

&lt;h2&gt;
  
  
  AWS Access Keys
&lt;/h2&gt;

&lt;p&gt;AWS makes it possible to create a set of access keys for a specific user (called &lt;code&gt;aws_access_key_id&lt;/code&gt; and &lt;code&gt;aws_secret_access_key&lt;/code&gt;). With these keys, it's possible to authenticate to AWS and perform actions on behalf of the user.&lt;/p&gt;

&lt;p&gt;When using &lt;strong&gt;Python&lt;/strong&gt;, for example, it's possible to use the &lt;a href="https://github.com/boto/boto3"&gt;boto3&lt;/a&gt; library, which is the AWS Software Development Kit for Python, to authenticate to AWS:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="nn"&gt;boto3&lt;/span&gt;

&lt;span class="n"&gt;session&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;boto3&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Session&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;region_name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s"&gt;'us-east-1'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;aws_access_key_id&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;AWS_ACCESS_KEY_ID&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;aws_secret_access_key&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;AWS_SECRET_ACCESS_KEY&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;session&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;'s3'&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="n"&gt;list_buckets&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With the &lt;a href="https://aws.amazon.com/cli/"&gt;AWS CLI&lt;/a&gt;, a configuration file can be used to store the access keys, which can then be used in the future to authenticate to AWS as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;cat&lt;/span&gt; ~/.aws/config

&lt;span class="o"&gt;[&lt;/span&gt;default]
&lt;span class="nv"&gt;region&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;us-east-1
&lt;span class="nv"&gt;aws_access_key_id&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="k"&gt;***&lt;/span&gt;
&lt;span class="nv"&gt;aws_secret_access_key&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="k"&gt;***&lt;/span&gt;

&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; aws s3 &lt;span class="nb"&gt;ls&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With such a configuration file, long-lived credentials are stored in plain text on the user's machine. That's a &lt;strong&gt;security risk&lt;/strong&gt;, as anyone with access to the device or the file can steal the credentials to perform actions on behalf of the user.&lt;/p&gt;

&lt;p&gt;That is why developers and teams have moved on to more secure authentication practices, such as using &lt;strong&gt;session tokens&lt;/strong&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Session Tokens
&lt;/h3&gt;

&lt;p&gt;Sessions are a more secure approach, allowing the user to create &lt;strong&gt;temporary credentials&lt;/strong&gt; that are valid only for a specific period. Thus, if stolen, the keys might already be expired, or their limited lifetime will restrict how long an attacker can leverage them.&lt;/p&gt;

&lt;p&gt;Below is a code example for how to generate temporary credentials using the AWS CLI:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; aws sts get-session-token
&lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="s2"&gt;"Credentials"&lt;/span&gt;: &lt;span class="o"&gt;{&lt;/span&gt;
        &lt;span class="s2"&gt;"AccessKeyId"&lt;/span&gt;: &lt;span class="s2"&gt;"***"&lt;/span&gt;,
        &lt;span class="s2"&gt;"SecretAccessKey"&lt;/span&gt;: &lt;span class="s2"&gt;"***"&lt;/span&gt;,
        &lt;span class="s2"&gt;"SessionToken"&lt;/span&gt;: &lt;span class="s2"&gt;"***"&lt;/span&gt;,
        ...
        &lt;span class="o"&gt;}&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;But &lt;strong&gt;hang on a minute&lt;/strong&gt;, as this example uses the long-lived credentials to authenticate to STS and get the temporary credentials. So, how could this possibly solve the problem of having unencrypted long-lived credentials on a user's machine?&lt;/p&gt;

&lt;p&gt;That is where aws-vault comes in. But just before diving into aws-vault, let's first look at popular STS API calls.&lt;/p&gt;

&lt;h2&gt;
  
  
  AWS STS
&lt;/h2&gt;

&lt;p&gt;STS, which stands for AWS Security Token Service, is a web service that enables you to request temporary, limited-privilege credentials for AWS IAM users or IAM roles. All temporary credentials consist of an access key ID, a secret access key, and a session token. These three credentials are then used to sign requests to any AWS service.&lt;/p&gt;
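&lt;p&gt;As a minimal illustration (the helper name and sample values below are our own, not part of any AWS SDK), here is how those three credentials map onto the standard environment variables that AWS SDKs and the CLI read:&lt;/p&gt;

```python
# Sketch: mapping an STS "Credentials" payload to the environment variable
# names AWS tooling recognizes. The sample values are fake placeholders.
def credentials_to_env(credentials):
    """Map an STS Credentials dict to the AWS_* environment variable names."""
    return {
        "AWS_ACCESS_KEY_ID": credentials["AccessKeyId"],
        "AWS_SECRET_ACCESS_KEY": credentials["SecretAccessKey"],
        "AWS_SESSION_TOKEN": credentials["SessionToken"],
    }

sample = {
    "AccessKeyId": "ASIAEXAMPLE",
    "SecretAccessKey": "fake-secret",
    "SessionToken": "fake-token",
}
print(credentials_to_env(sample))
```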

&lt;h3&gt;
  
  
  GetSessionToken
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/STS/latest/APIReference/API_GetSessionToken.html"&gt;GetSessionToken&lt;/a&gt; is an API call that fetches temporary credentials for an &lt;strong&gt;IAM user&lt;/strong&gt;. The simplest example is when a user uses its long-lived credentials to request temporary credentials (for any reason), where a widespread use case for this API call is when a user wants to authenticate to AWS using MFA. If MFA is used, the received credentials can be used to access APIs requiring MFA authentication.&lt;/p&gt;

&lt;h3&gt;
  
  
  AssumeRole
&lt;/h3&gt;

&lt;p&gt;Another method used to get temporary credentials, this time for an &lt;strong&gt;IAM role&lt;/strong&gt;, is &lt;a href="https://docs.aws.amazon.com/STS/latest/APIReference/API_AssumeRole.html"&gt;AssumeRole&lt;/a&gt;. Like GetSessionToken, it returns an access key, a secret access key, and a session token. It's possible to authenticate with an MFA device when assuming a role, and moreover, it's possible to protect an IAM role, requiring it to be assumed with MFA.&lt;/p&gt;
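&lt;p&gt;A similar sketch for AssumeRole (the role ARN and session name are made up for illustration; the 12-hour cap matches the comparison table below):&lt;/p&gt;

```python
# Sketch: parameters for sts.assume_role(). AssumeRole sessions are capped
# at 12 hours (43200 seconds), unlike GetSessionToken's 36-hour maximum.
MAX_ASSUME_ROLE_SECONDS = 43200

def assume_role_kwargs(role_arn, session_name, duration_seconds=3600,
                       mfa_serial=None, mfa_code=None):
    if duration_seconds > MAX_ASSUME_ROLE_SECONDS:
        raise ValueError("AssumeRole sessions are limited to 12 hours")
    kwargs = {
        "RoleArn": role_arn,
        "RoleSessionName": session_name,
        "DurationSeconds": duration_seconds,
    }
    if mfa_serial and mfa_code:
        # A role can be protected by a condition requiring MFA to assume it.
        kwargs["SerialNumber"] = mfa_serial
        kwargs["TokenCode"] = mfa_code
    return kwargs

# e.g. boto3.client("sts").assume_role(
#     **assume_role_kwargs("arn:aws:iam::2000:role/Admin", "some-developer"))
print(assume_role_kwargs("arn:aws:iam::2000:role/Admin", "some-developer"))
```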

&lt;h3&gt;
  
  
  GetSessionToken and AssumeRole Comparison
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;GetSessionToken&lt;/th&gt;
&lt;th&gt;AssumeRole&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Used to receive credentials for&lt;/td&gt;
&lt;td&gt;IAM User&lt;/td&gt;
&lt;td&gt;IAM Role&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Can be called with only long-lived credentials&lt;/td&gt;
&lt;td&gt;YES&lt;/td&gt;
&lt;td&gt;NO&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Can be called with MFA&lt;/td&gt;
&lt;td&gt;YES&lt;/td&gt;
&lt;td&gt;YES&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;The received credentials can be used to issue a call to &lt;code&gt;GetSessionToken&lt;/code&gt;
&lt;/td&gt;
&lt;td&gt;NO&lt;/td&gt;
&lt;td&gt;NO&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;The received credentials can be used to issue a call to &lt;code&gt;AssumeRole&lt;/code&gt;
&lt;/td&gt;
&lt;td&gt;YES&lt;/td&gt;
&lt;td&gt;YES&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Maximum session duration&lt;/td&gt;
&lt;td&gt;36 hours&lt;/td&gt;
&lt;td&gt;12 hours&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Cross-account access&lt;/td&gt;
&lt;td&gt;NO&lt;/td&gt;
&lt;td&gt;YES&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h1&gt;
  
  
  AWS Vault
&lt;/h1&gt;

&lt;p&gt;Now let's see where AWS Vault comes in to add a layer of security, preventing the storage of long-lived credentials in regular system files.&lt;/p&gt;

&lt;p&gt;Copied from the project’s README:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“AWS Vault stores IAM credentials in your operating system's secure keystore and then generates temporary credentials from those to expose to your shell and applications.”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;What this means for the end user is that instead of storing the credentials as clear text in a configuration file, they are stored in the OS's secure keystore. That is a safer approach, as the credentials are &lt;strong&gt;encrypted&lt;/strong&gt; and can only be decrypted by the user who created them.&lt;/p&gt;

&lt;p&gt;Not only does AWS-Vault store the credentials in the OS's secure keystore, but it also generates &lt;strong&gt;temporary credentials&lt;/strong&gt; from those to expose to the shell and applications. That means that even if the credentials are stolen from the environment or an application, they are only valid for a specific period.&lt;/p&gt;

&lt;h2&gt;
  
  
  How Does It Work?
&lt;/h2&gt;

&lt;p&gt;First, aws-vault stores the long-lived credentials in the OS's secure keystore. To do it, the &lt;code&gt;aws-vault add&lt;/code&gt; command should be used:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# A configured profile can already exist or not. If not, aws-vault will create the profile in `~/.aws/config`.&lt;/span&gt;

&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; aws-vault add some-developer
Enter Access Key Id: &lt;span class="k"&gt;***&lt;/span&gt;
Enter Secret Access Key: &lt;span class="k"&gt;***&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For instance, in macOS, the credentials are stored encrypted in the Keychain Access app and can be viewed under 'Custom Keychains'.&lt;/p&gt;

&lt;p&gt;Then, when requesting to use that profile, aws-vault will generate temporary credentials from the long-lived credentials using GetSessionToken. These temporary credentials are then exposed to the shell and applications using environment variables.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; aws-vault &lt;span class="nb"&gt;exec &lt;/span&gt;some-developer &lt;span class="nt"&gt;--&lt;/span&gt; &lt;span class="nb"&gt;env&lt;/span&gt; | &lt;span class="nb"&gt;grep &lt;/span&gt;AWS
&lt;span class="nv"&gt;AWS_ACCESS_KEY_ID&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="k"&gt;***&lt;/span&gt;
&lt;span class="nv"&gt;AWS_SECRET_ACCESS_KEY&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="k"&gt;***&lt;/span&gt;
&lt;span class="nv"&gt;AWS_SESSION_TOKEN&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="k"&gt;***&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;AWS-Vault also stores the session credentials in the same keychain to be used until they expire.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.jit.io/blog/playing-around-with-aws-vault-for-fun-profit"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--qie5S1js--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ns4gknoiqrx2y7buu7u6.png" alt="Image description" width="783" height="286"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Popular AWS Account Pattern
&lt;/h2&gt;

&lt;p&gt;A typical user management pattern and recommended security practice is having a single AWS &lt;strong&gt;management account&lt;/strong&gt; with only IAM users grouped into IAM groups (like developers and admins). Then, in addition to that account, there can be multiple AWS accounts (tagged dev, staging, and prod) with IAM roles that are assumed by the users in the developers and admins groups.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.jit.io/blog/playing-around-with-aws-vault-for-fun-profit"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--mVsVRN3w--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/m2kkuaj035sgn89rwl71.png" alt="Image description" width="800" height="388"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This pattern has many advantages, and the biggest of them all is &lt;strong&gt;resource isolation&lt;/strong&gt;. Having each environment resource in a different AWS account makes it very unlikely that resources from a dev account will interact, interfere, or be included in the same list of resources in the production environment. In addition, in terms of security, this separation makes it harder to access prod resources or data (which are usually the most valuable) from a development or a staging account.&lt;/p&gt;

&lt;p&gt;Not only that, separating accounts also helps with viewing and monitoring cost and usage, making it very easy to set specific alerts and budgets for each environment.&lt;/p&gt;

&lt;p&gt;A developer in such an organization will probably have an AWS configuration file that looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[default]
region = us-east-1
mfa_serial = arn:aws:iam::1000:mfa/some-developer

[profile some-developer]
credential_process = aws-vault export --format=json some-developer

[profile dev]
role_arn = arn:aws:iam::2000:role/Admin
source_profile = some-developer

[profile staging]
role_arn = arn:aws:iam::3000:role/ReadOnly
source_profile = some-developer

[profile prod]
role_arn = arn:aws:iam::4000:role/ReadOnly
source_profile = some-developer
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is where the beauty of aws-vault, together with this pattern, comes in. At first glance, one might think that AWS-Vault would ask for a session token for each environment (as each is a different profile and AWS account). But, as the previous section showed, GetSessionToken requests credentials for an IAM user, not for an IAM role. And in this case, the developer has only one user, which exists in the management account.&lt;/p&gt;

&lt;p&gt;Thus, the developer will store only a single pair of long-lived keys, belonging to the management account, in the OS’s secure keystore, as can be seen in the following output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; aws-vault list

Profile                  Credentials              Sessions                      
&lt;span class="o"&gt;=======&lt;/span&gt;                  &lt;span class="o"&gt;===========&lt;/span&gt;              &lt;span class="o"&gt;========&lt;/span&gt;                      
default                  -                        -                             
some-developer           some-developer           sts.GetSessionToken:8h29m31s  
dev                      -                        -                             
staging                  -                        -                             
prod                     -                        -          
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;AWS-Vault identifies the &lt;code&gt;source_profile&lt;/code&gt; and &lt;code&gt;role_arn&lt;/code&gt; attributes in the config file and understands it needs to assume each environment role with the same session from the management account's developer profile.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.jit.io/blog/playing-around-with-aws-vault-for-fun-profit"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Xn7JV5yz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3s2bzrnblk5dhtyk2uer.png" alt="Image description" width="800" height="508"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;By leveraging the &lt;code&gt;source_profile&lt;/code&gt; property in the &lt;code&gt;~/.aws/config&lt;/code&gt; file, aws-vault will use the credentials from the &lt;code&gt;source_profile&lt;/code&gt; to assume the role in the &lt;code&gt;role_arn&lt;/code&gt; property. That allows aws-vault to ask for MFA only once and then use the temporary credentials to assume the roles in the other accounts.&lt;/p&gt;
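&lt;p&gt;To make the chain concrete, here is a rough Python sketch (our own illustration, not aws-vault internals) of resolving the &lt;code&gt;role_arn&lt;/code&gt; / &lt;code&gt;source_profile&lt;/code&gt; pair from a config like the one above:&lt;/p&gt;

```python
# Sketch: resolving the source_profile chain from an AWS config file.
# The config text mirrors the example profiles above; parsing details are
# our own illustration of the idea, not how aws-vault implements it.
import configparser

CONFIG = """
[profile some-developer]
credential_process = aws-vault export --format=json some-developer

[profile dev]
role_arn = arn:aws:iam::2000:role/Admin
source_profile = some-developer

[profile prod]
role_arn = arn:aws:iam::4000:role/ReadOnly
source_profile = some-developer
"""

def resolve(profile_name, config_text=CONFIG):
    """Return (role_arn, source_profile) for a profile; None when absent."""
    parser = configparser.ConfigParser()
    parser.read_string(config_text)
    section = parser["profile " + profile_name]
    return section.get("role_arn"), section.get("source_profile")

print(resolve("dev"))  # the dev role is assumed via the some-developer session
```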

&lt;p&gt;Usually, these roles will have a condition requiring MFA to be assumed. So, why isn't MFA requested each time the developer switches between environments? That's because a session token that was created using MFA can be used to assume a role that requires MFA, and aws-vault is &lt;strong&gt;smart enough&lt;/strong&gt; to know that &lt;strong&gt;both roles use the same MFA device&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;With this pattern and configuration, developers can easily switch between different environments without a hassle:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; aws-vault &lt;span class="nb"&gt;exec &lt;/span&gt;dev &lt;span class="nt"&gt;--&lt;/span&gt; aws lambda invoke &lt;span class="nt"&gt;--function-name&lt;/span&gt; some-function &lt;span class="nt"&gt;--payload&lt;/span&gt; &lt;span class="s1"&gt;'{}'&lt;/span&gt;
&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; aws-vault &lt;span class="nb"&gt;exec &lt;/span&gt;staging &lt;span class="nt"&gt;--&lt;/span&gt; aws lambda list-functions
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h1&gt;
  
  
  Wrap Up
&lt;/h1&gt;

&lt;p&gt;AWS-Vault is a powerful &lt;a href="https://www.jit.io/security-tools"&gt;open source security tool&lt;/a&gt; that adds a much-needed layer of security for AWS users and developers. It is also built on top of the classic AWS config file, avoiding extra configuration and usage pain.&lt;/p&gt;

&lt;p&gt;I hope the examples provided will help you ramp up aws-vault more quickly and unleash the security capabilities this tool offers AWS users.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>opensource</category>
      <category>security</category>
      <category>cloud</category>
    </item>
    <item>
      <title>AssumeRoleWithWebIdentity WHAT?! Solving the Github to AWS OIDC InvalidIdentityToken Failure Loop</title>
      <dc:creator>Ariel beck</dc:creator>
      <pubDate>Wed, 18 Jan 2023 10:07:16 +0000</pubDate>
      <link>https://dev.to/jit/assumerolewithwebidentity-what-solving-the-github-to-aws-oidc-invalididentitytoken-failure-loop-11aj</link>
      <guid>https://dev.to/jit/assumerolewithwebidentity-what-solving-the-github-to-aws-oidc-invalididentitytoken-failure-loop-11aj</guid>
      <description>&lt;p&gt;How many of us have encountered all kinds of &lt;code&gt;CrashLoopBackoff&lt;/code&gt; or other random error messages, and start to go down the Stack Overflow rabbit hole, only to hit a wall? &lt;/p&gt;

&lt;p&gt;In our case this occurred with the &lt;code&gt;AssumeRoleWithWebIdentity&lt;/code&gt; call, which started throwing the &lt;code&gt;InvalidIdentityToken&lt;/code&gt; error when running pipelines with an OIDC provider for AWS. We went through a whole process of researching and ultimately fixing the issue for good, and decided to give a quick run-through, in a single post, of how you can do this too.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3h0hw2zwihv0n93wgyz4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3h0hw2zwihv0n93wgyz4.png" alt="Image description" width="512" height="143"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Figure 1: Error received from Terraform that flagged the issue.&lt;/p&gt;

&lt;p&gt;So let’s take a step back and provide some context to the issue at hand, when you’re likely to encounter it, and ways to overcome it.&lt;/p&gt;

&lt;h2&gt;
  
  
  Web Applications and AWS Authentication through the OIDC Plugin
&lt;/h2&gt;

&lt;p&gt;When companies work with AWS using third-party resources, they need to create a “trust relationship” between AWS and the service to ensure the resource has the necessary permissions to access the AWS account. The OpenID Connect (&lt;a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers_create_oidc.html" rel="noopener noreferrer"&gt;OIDC&lt;/a&gt;) plugin, a simple identity layer on top of the OAuth 2.0 protocol, is a commonly chosen method for doing so.&lt;/p&gt;

&lt;p&gt;For us, the service at hand was Github, where &lt;a href="https://docs.github.com/en/actions/deployment/security-hardening-your-deployments/configuring-openid-connect-in-amazon-web-services" rel="noopener noreferrer"&gt;OIDC authentication&lt;/a&gt; was configured to give our Github repo the trust relationship required to access our AWS account with the specified permissions, through temporary credentials, so that our CI/CD process could create and deploy AWS resources. We had a primary environment variable configured (&lt;code&gt;AWS_WEB_IDENTITY_TOKEN_FILE&lt;/code&gt;), which tells tools such as boto3, the AWS CLI, or Terraform (depending on the pipeline) to perform the &lt;code&gt;AssumeRoleWithWebIdentity&lt;/code&gt; call and get the designated temporary credentials for the role to perform AWS operations.&lt;/p&gt;
&lt;h2&gt;
  
  
  Random Failures with Esoteric Error Messages
&lt;/h2&gt;

&lt;p&gt;The &lt;code&gt;AssumeRoleWithWebIdentity&lt;/code&gt; error manifests mostly around parallel access attempts, and how the various AWS interfaces authenticate, run, and deploy services. We started encountering this issue when running our deployment pipelines and attempting to authenticate our Github account to AWS via the OIDC plugin. This is a well-known (and &lt;a href="https://github.com/aws-actions/configure-aws-credentials/issues/299" rel="noopener noreferrer"&gt;widely discussed&lt;/a&gt;) limitation of authenticating to AWS from web application providers. In our case it was Github, but this is true for pretty much any web application integration.&lt;/p&gt;

&lt;p&gt;At first, this error would randomly fail builds with the &lt;code&gt;InvalidIdentityToken&lt;/code&gt; error, and reruns would only sometimes succeed. We ignored it, assuming it was a run-of-the-mill technology failure at scale. But it started happening more frequently, adding enough friction to our engineering velocity and delivery that we had to uncover what was happening.&lt;/p&gt;

&lt;p&gt;So what was happening under the hood?&lt;/p&gt;

&lt;p&gt;Our design retrieved the identity token from Github's OIDC provider and then used that token to try to connect to AWS. Where this fails is the threshold AWS enforces on multiple parallel authentication requests, combined with the very short Github timeout for a response. The tool chain we were using essentially tried to access this token simultaneously many times, and this caused the authentication to fail when the resources were throttled.&lt;/p&gt;
&lt;h2&gt;
  
  
  Fixing the InvalidIdentityToken error with the AssumeRoleWithWebIdentity Method
&lt;/h2&gt;

&lt;p&gt;Overcoming this issue was a concerted effort. It began with understanding what triggers the issue and getting to the root cause, and then solving it, particularly since authenticating to AWS usually follows a canonical order: for example, the environment variables come first, to ensure the environment runs as it should with access to the relevant keys and private information.&lt;/p&gt;

&lt;p&gt;We found that our current method, using a temporary token from the provider for direct authentication, was clunky. There is no way to use this method without building a system of retries, which is never robust enough when it's out of your control, and not all of our tools supported retries, either.&lt;/p&gt;

&lt;p&gt;Therefore, we understood that a more stable method was required: we needed a way to control the token calls for authentication. By doing so, we would be able to ensure retries until success, so that failures due to inherent limitations can be avoided, or at the very least retried until they succeed.&lt;/p&gt;

&lt;p&gt;First and foremost, let’s talk about our tool chain. Not all the tools we were using (boto3, Terraform) have the logic or capability for retries, and this is where we were encountering these failures. With multiple pipelines running simultaneously, all using the temporary access token method for authentication, we quickly reached the threshold of authentication and access.&lt;/p&gt;

&lt;p&gt;In order to enable parallel access, we realized we were missing a critical step in the process. The recommended order is to retry the &lt;code&gt;AssumeRoleWithWebIdentity&lt;/code&gt; call until it succeeds, and then set the environment variables upon successful access (based on the &lt;a href="https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html#configuring-credentials" rel="noopener noreferrer"&gt;docs&lt;/a&gt;, these are AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_SESSION_TOKEN). Another equally critical piece was to give our token a longer validity window (we set the expiration to 1 hour), and the last and most important part was performing the retry ourselves.&lt;/p&gt;
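&lt;p&gt;The retry-until-success flow just described can be sketched in Python (a generic illustration with names of our own choosing; a real implementation would perform the &lt;code&gt;AssumeRoleWithWebIdentity&lt;/code&gt; call inside &lt;code&gt;fetch_credentials&lt;/code&gt;):&lt;/p&gt;

```python
# Sketch: retry a credential-fetching call with exponential backoff, then
# hand the credentials back so the caller can export the AWS_* variables.
import time

def retry_assume_role(fetch_credentials, max_retries=5, wait_factor=2,
                      sleep=time.sleep):
    wait = 1
    for attempt in range(1, max_retries + 1):
        try:
            return fetch_credentials()
        except Exception:
            if attempt == max_retries:
                raise  # give up after max_retries, like the shell script below
            sleep(wait)
            wait *= wait_factor  # exponential backoff between attempts
```

Injecting the sleep function keeps the backoff testable; in a real pipeline the growing waits give AWS throttling room to clear before the next attempt.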

&lt;p&gt;Once we understood the process we needed to make parallel access work, we started to do some research, and happened upon a &lt;a href="https://github.com/aws-actions/configure-aws-credentials" rel="noopener noreferrer"&gt;Github Action&lt;/a&gt; that basically does this entire process end-to-end for us, all the way through configuring the environment variables under the hood for the temporary credentials. The added bonus is that this Github Action prevents Terraform from assuming the role, and simply uses the credentials that were set (that are now valid for 1 hour). Another advantage of using this Github Action is that it also addressed a known exponential backoff bug that was &lt;a href="https://github.com/aws-actions/configure-aws-credentials/pull/350" rel="noopener noreferrer"&gt;fixed in January&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;By setting these environment variables that are valid for up to an hour, this adds stability to the system, as the tokens are now AWS tokens and not unstable OIDC tokens, basically bypassing the need to assume a role directly for authentication vs. the provider.&lt;/p&gt;

&lt;p&gt;Similar logic can be applied to any application that uses the OIDC plugin and encounters such issues, even those where we couldn’t leverage that excellent gem of a Github Action. We borrowed from it and wrote our own code that replicates similar logic for non-Github applications to achieve similar stability. See the code example below:&lt;/p&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;AWS_ROLE_ARN&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;arn:aws:iam::1234567890:role/RoleToAssume
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;AWS_WEB_IDENTITY_TOKEN_FILE&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;/tmp/awscreds
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;AWS_DEFAULT_REGION&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&amp;lt;region&amp;gt;
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;DEFAULT_PARALLEL_JOBS&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;4

&lt;span class="nv"&gt;OUTPUT_TOKEN_REQUEST&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;curl &lt;span class="nt"&gt;-H&lt;/span&gt; &lt;span class="s2"&gt;"Authorization: bearer &lt;/span&gt;&lt;span class="nv"&gt;$ACTIONS_ID_TOKEN_REQUEST_TOKEN&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$ACTIONS_ID_TOKEN_REQUEST_URL&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;
&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$OUTPUT_TOKEN_REQUEST&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;  | jq &lt;span class="nt"&gt;-r&lt;/span&gt; &lt;span class="s1"&gt;'.value'&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; /tmp/awscreds

&lt;span class="nv"&gt;RET&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;1
&lt;span class="nv"&gt;MAX_RETRIES&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;5
&lt;span class="nv"&gt;COUNTER&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;1
&lt;span class="nv"&gt;WAIT_FACTOR&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;2
&lt;span class="nv"&gt;RUN_ID&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;uuidgen&lt;span class="si"&gt;)&lt;/span&gt;


&lt;span class="k"&gt;until&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt; &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;RET&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt; &lt;span class="nt"&gt;-eq&lt;/span&gt; 0 &lt;span class="o"&gt;]&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt;

    &lt;span class="c"&gt;# 5 retries are enough, then fail.&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt; &lt;span class="nv"&gt;$COUNTER&lt;/span&gt; &lt;span class="nt"&gt;-gt&lt;/span&gt; &lt;span class="nv"&gt;$MAX_RETRIES&lt;/span&gt; &lt;span class="o"&gt;]&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
      &lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$RUN_ID&lt;/span&gt;&lt;span class="s2"&gt; - Maximum retries of &lt;/span&gt;&lt;span class="nv"&gt;$MAX_RETRIES&lt;/span&gt;&lt;span class="s2"&gt; reached. Returning error."&lt;/span&gt;
      &lt;span class="nb"&gt;exit &lt;/span&gt;1
    &lt;span class="k"&gt;fi&lt;/span&gt;

    &lt;span class="c"&gt;# Try to perform the assume role with web identity&lt;/span&gt;
    &lt;span class="nv"&gt;OUTPUT_ASSUME_ROLE&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;aws sts assume-role-with-web-identity &lt;span class="nt"&gt;--duration-seconds&lt;/span&gt; 3600 &lt;span class="nt"&gt;--role-session-name&lt;/span&gt; my_role_name &lt;span class="nt"&gt;--role-arn&lt;/span&gt; &lt;span class="nv"&gt;$AWS_ROLE_ARN&lt;/span&gt; &lt;span class="nt"&gt;--web-identity-token&lt;/span&gt; &lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;cat&lt;/span&gt; /tmp/awscreds&lt;span class="si"&gt;)&lt;/span&gt; &lt;span class="nt"&gt;--region&lt;/span&gt; &lt;span class="nv"&gt;$AWS_DEFAULT_REGION&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;
    &lt;span class="nv"&gt;RET&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nv"&gt;$?&lt;/span&gt;
    &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$RUN_ID&lt;/span&gt;&lt;span class="s2"&gt; - attempt: &lt;/span&gt;&lt;span class="nv"&gt;$COUNTER&lt;/span&gt;&lt;span class="s2"&gt;, assume rule returned code: &lt;/span&gt;&lt;span class="nv"&gt;$RET&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;

    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt; &lt;span class="nv"&gt;$RET&lt;/span&gt; &lt;span class="nt"&gt;-ne&lt;/span&gt; 0 &lt;span class="o"&gt;]&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
      &lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$RUN_ID&lt;/span&gt;&lt;span class="s2"&gt; - attempt: &lt;/span&gt;&lt;span class="nv"&gt;$COUNTER&lt;/span&gt;&lt;span class="s2"&gt; - Error happened in assume role, error code -  &lt;/span&gt;&lt;span class="nv"&gt;$RET&lt;/span&gt;&lt;span class="s2"&gt;, error msg: &lt;/span&gt;&lt;span class="nv"&gt;$OUTPUT_ASSUME_ROLE&lt;/span&gt;&lt;span class="s2"&gt;. retrying..."&lt;/span&gt;

      &lt;span class="nv"&gt;WAIT_FACTOR&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="k"&gt;$((&lt;/span&gt;WAIT_FACTOR&lt;span class="o"&gt;*&lt;/span&gt;COUNTER&lt;span class="k"&gt;))&lt;/span&gt;
      &lt;span class="nb"&gt;sleep&lt;/span&gt; &lt;span class="nv"&gt;$WAIT_FACTOR&lt;/span&gt;
    &lt;span class="k"&gt;else
      &lt;/span&gt;&lt;span class="nv"&gt;access_key_id&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$OUTPUT_ASSUME_ROLE&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; | jq &lt;span class="nt"&gt;-r&lt;/span&gt; &lt;span class="s1"&gt;'.Credentials.AccessKeyId'&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
      &lt;span class="c"&gt;# Set the AWS environment variables to be used.&lt;/span&gt;
      &lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;AWS_ACCESS_KEY_ID&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nv"&gt;$access_key_id&lt;/span&gt;
      &lt;span class="nv"&gt;secret_access_key&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$OUTPUT_ASSUME_ROLE&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; | jq &lt;span class="nt"&gt;-r&lt;/span&gt; &lt;span class="s1"&gt;'.Credentials.SecretAccessKey'&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
      &lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;AWS_SECRET_ACCESS_KEY&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nv"&gt;$secret_access_key&lt;/span&gt;
      &lt;span class="nv"&gt;session_token&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$OUTPUT_ASSUME_ROLE&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; | jq &lt;span class="nt"&gt;-r&lt;/span&gt; &lt;span class="s1"&gt;'.Credentials.SessionToken'&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
      &lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;AWS_SESSION_TOKEN&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nv"&gt;$session_token&lt;/span&gt;
    &lt;span class="k"&gt;fi
    &lt;/span&gt;&lt;span class="nv"&gt;COUNTER&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="k"&gt;$((&lt;/span&gt;COUNTER+1&lt;span class="k"&gt;))&lt;/span&gt;
&lt;span class="k"&gt;done&lt;/span&gt;

&lt;span class="c"&gt;# Perform any calls to AWS now - the 3 environment variables will take precedence over AWS_WEB_IDENTITY_TOKEN_FILE&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
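The retry loop above can be factored into a reusable helper. Below is a minimal sketch in plain bash (the function and variable names are ours, not from the original script); it grows the delay on each failed attempt the same way the `WAIT_FACTOR*COUNTER` line does, and the demo command simply fails twice before succeeding.

```shell
#!/bin/bash
# Minimal retry-with-growing-backoff helper, modeled on the loop above.
# Usage: retry_with_backoff <max_retries> <initial_delay_seconds> <command...>
retry_with_backoff() {
  local max_retries=$1 delay=$2
  shift 2
  local counter=1
  until "$@"; do
    if [ "$counter" -ge "$max_retries" ]; then
      echo "Maximum retries of $max_retries reached. Returning error." >&2
      return 1
    fi
    echo "attempt $counter failed, sleeping ${delay}s before retrying..." >&2
    sleep "$delay"
    delay=$((delay * (counter + 1)))  # grow the wait, like WAIT_FACTOR*COUNTER above
    counter=$((counter + 1))
  done
}

# Demo: a command that fails twice, then succeeds on the third attempt.
attempts_file=$(mktemp)
echo 0 > "$attempts_file"
flaky() {
  local n=$(( $(cat "$attempts_file") + 1 ))
  echo "$n" > "$attempts_file"
  [ "$n" -ge 3 ]
}

retry_with_backoff 5 0 flaky && echo "succeeded after $(cat "$attempts_file") attempts"
```

Keeping the retry policy in one function makes it easy to reuse the same backoff behavior for both the token request and the assume-role call.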



&lt;p&gt;To wrap up, we quickly learned that even a small feature change from single-region to multi-region, which suddenly required parallel access, exposed the limits of a process that had worked perfectly well for sequential access. This naive design broke down at scale and forced us to unpack the complex logic happening under the hood.  &lt;/p&gt;

&lt;p&gt;When &lt;a href="https://www.jit.io/blog/assumerolewithwebidentity-what-solving-the-github-to-aws-oidc-invalididentitytoken-failure-loop" rel="noopener noreferrer"&gt;we encountered&lt;/a&gt; this issue, we searched for a tutorial just like this one to help us resolve it quickly, so we hope you found this useful and that it saves you some of the time and effort it took us to resolve the issue.&lt;/p&gt;

</description>
      <category>watercooler</category>
    </item>
    <item>
      <title>NPM Audit: 5 Ways to Use it to Protect Your Code</title>
      <dc:creator>Avichay Attlan</dc:creator>
      <pubDate>Wed, 04 Jan 2023 12:53:51 +0000</pubDate>
      <link>https://dev.to/jit/npm-audit-5-ways-to-use-it-to-protect-your-code-4nnp</link>
      <guid>https://dev.to/jit/npm-audit-5-ways-to-use-it-to-protect-your-code-4nnp</guid>
      <description>&lt;p&gt;You might already know Node and the accompanying JS package manager - NPM. NPM is the most extensive package manager in the world, with over one million packages available. Since the packages and dependency trees are updated frequently, vulnerabilities from old package versions may find their way into your project.&lt;/p&gt;

&lt;p&gt;If you have not touched your project in a while and find that you have far more vulnerabilities than expected, you’ll need a more comprehensive tool for dealing with your entire node modules folder in one fell swoop. Node’s tool for the job is NPM Audit. In this article, we’ll dive deeper into the various options in NPM Audit and how you can utilize them to protect your code.&lt;/p&gt;

&lt;h2&gt;
  
  
  NPM Advisory Database
&lt;/h2&gt;

&lt;p&gt;The npm install and npm audit commands check for vulnerabilities against known security risks reported in the public npm registry. As of late 2021, this vulnerability database has been hosted on GitHub as the &lt;a href="https://docs.github.com/en/code-security/security-advisories/global-security-advisories/browsing-security-advisories-in-the-github-advisory-database"&gt;GitHub Advisory Database&lt;/a&gt;. The same vulnerability database powers &lt;a href="https://lightrun.com/github-dependabot-tips/"&gt;GitHub’s Dependabot&lt;/a&gt; tool, which alerts developers to known &lt;a href="https://www.jit.io/blog/the-in-depth-guide-to-owasps-top-10-vulnerabilities"&gt;vulnerabilities&lt;/a&gt; in their code base hosted on GitHub.&lt;/p&gt;

&lt;p&gt;The npm audit command now includes a URL with each proposed vulnerability fix linking to the GitHub Advisory Database’s specific vulnerability report. If you’re interested, GitHub also &lt;a href="https://docs.github.com/en/code-security/security-advisories/global-security-advisories/browsing-security-advisories-in-the-github-advisory-database"&gt;provides an API&lt;/a&gt; for browsing the Advisory Database for vulnerabilities based on severity or a particular package name. You can also offer suggestions for fixing vulnerabilities or edit a specific vulnerability description to clarify it. &lt;/p&gt;

&lt;p&gt;Here’s an example of a vulnerability description on GitHub:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--u54Md8DB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/m17x5eje2me25zco9lx4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--u54Md8DB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/m17x5eje2me25zco9lx4.png" alt="NPM vulnerability" width="800" height="489"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  How to run NPM Audit
&lt;/h2&gt;

&lt;p&gt;The basic command you need to run to get Node’s suggestions on fixing your vulnerabilities is npm audit. First, ensure you have the latest Node and NPM versions installed, and open your project.&lt;/p&gt;

&lt;p&gt;You should navigate to the project’s folder where your package.json file is saved. Once you have your project open, open a new terminal where you can type any of the mentioned CLI commands and where you’ll see the results. &lt;/p&gt;

&lt;p&gt;The npm audit command will give you a list of the &lt;a href="https://spectralops.io/blog/best-npm-vulnerability-scanners/"&gt;vulnerabilities found&lt;/a&gt; and more information about them.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--tpEVp3vh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dyjurk9083twm9y0gmxx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--tpEVp3vh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dyjurk9083twm9y0gmxx.png" alt="Image description" width="572" height="228"&gt;&lt;/a&gt;&lt;br&gt;
At the end of the report, you’ll see the number of vulnerabilities that NPM Audit can fix automatically and the vulnerabilities that require a manual review.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--mC9tSTfj--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ofek6vdb9hh6hxv1zkw4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--mC9tSTfj--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ofek6vdb9hh6hxv1zkw4.png" alt="Image description" width="688" height="74"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If the report is particularly long, you can use &lt;code&gt;npm audit --json&lt;/code&gt; to get the report in a JSON format. To write the report to a file, use the &amp;gt; (redirect) operator along with the path and filename you wish to generate: &lt;code&gt;npm audit --json &amp;gt; report.json&lt;/code&gt;. &lt;/p&gt;
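Once the report is saved as JSON, even plain shell tools can give you a quick severity breakdown. The snippet below is a sketch that runs against a hypothetical, heavily trimmed stand-in for npm audit --json output (real reports are much larger, and jq gives you more robust parsing):

```shell
#!/bin/sh
# Hypothetical, trimmed stand-in for `npm audit --json > report.json` output.
cat > report.json <<'EOF'
{
  "vulnerabilities": {
    "lodash":   { "severity": "high" },
    "minimist": { "severity": "moderate" },
    "qs":       { "severity": "high" }
  }
}
EOF

# Count findings per severity level by matching the "severity" fields.
for level in critical high moderate low; do
  printf '%s: %s\n' "$level" "$(grep -c "\"severity\": \"$level\"" report.json)"
done
```

A one-line summary like this is handy for spotting at a glance whether anything critical landed since the last audit.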

&lt;p&gt;You can also use a package called &lt;a href="https://www.npmjs.com/package/npm-audit-html"&gt;npm-audit-html&lt;/a&gt; to generate the same report in an HTML format (&lt;code&gt;npm audit --json | npm-audit-html --output report.html&lt;/code&gt;). JSON and HTML formats are more straightforward to view than a simple data dump to the terminal.&lt;/p&gt;

&lt;p&gt;Jit orchestrates security for Node.js stacks. Your developers can quickly and seamlessly integrate &lt;a href="https://www.jit.io/security-tools/npm-audit"&gt;npm-audit&lt;/a&gt; into their code security layer to help run dependency checks within a centralized CI workflow. From code, pipeline, and infrastructure to runtime, Jit provides security-plan-as-code (SaC) and orchestrates all &lt;a href="https://www.jit.io/security-tools"&gt;security tools&lt;/a&gt; at every stage of the software development lifecycle. &lt;/p&gt;

&lt;h2&gt;
  
  
  5 Ways to Use NPM Audit to Protect Your Code
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. Generate security audit reports frequently
&lt;/h3&gt;

&lt;p&gt;Since the NPM package ecosystem is updated often, you may already have vulnerabilities in your code due to old package versions you may not even be aware of. Most of us don’t run a full npm install frequently in the regular course of events.&lt;/p&gt;

&lt;p&gt;You should run the npm audit report regularly, as this will help you ensure you don’t have hidden dependency vulnerabilities in your project. What you consider a 'regular basis' depends on your project's size and complexity.&lt;/p&gt;

&lt;p&gt;About once a month may be enough for a large and complex project. If the project is stable and doesn't have a lot of updates, then even once a quarter may suffice. The frequency is up to you, but you can keep those reports as testimony that you are on top of the project’s vulnerability issue.&lt;/p&gt;
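If running the report on a schedule is easy to forget, the check can also live in CI; the --audit-level flag makes npm audit exit with a non-zero code only when findings reach the chosen severity. This is a sketch of such a step (the threshold and file name are illustrative):

```shell
# Fail the build only on high or critical findings.
npm audit --audit-level=high

# Keep a dated copy of the full report as a build artifact.
npm audit --json > "audit-report-$(date +%F).json"
```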

&lt;h3&gt;
  
  
  2. Review the report and security advisory
&lt;/h3&gt;

&lt;p&gt;An NPM Audit report contains the following data in the following structure:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Severity&lt;/strong&gt; - Description&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Package&lt;/strong&gt; (title) - relevant info&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Patched In&lt;/strong&gt; (title) - relevant info&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Dependency Of&lt;/strong&gt; (title) - relevant info&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Path&lt;/strong&gt; (title) - relevant info&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;More Info&lt;/strong&gt; (title) - URL link to GitHub Advisory DB&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For example:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--1CG5tHOF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6okc2nsuxnrk4wjyyw1s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--1CG5tHOF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6okc2nsuxnrk4wjyyw1s.png" alt="Image description" width="551" height="242"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here’s a brief description of what each piece of information means:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Severity&lt;/strong&gt; - The severity of the vulnerability based on its potential impact. The severity is divided into four levels: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Critical - Address immediately&lt;/li&gt;
&lt;li&gt;High - Address as quickly as possible&lt;/li&gt;
&lt;li&gt;Moderate - Address when time allows&lt;/li&gt;
&lt;li&gt;Low - Address at your discretion&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Description&lt;/strong&gt; - A short description of what might happen if you don’t address the vulnerability. For example - ‘Vulnerable to DoS attack.’&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Package&lt;/strong&gt; - The name of the package where the vulnerability was found.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Patched in&lt;/strong&gt; - Assuming the vulnerability was addressed in a later version of the package, this part will say which versions contain the patch. For example, &lt;code&gt;&amp;gt;=2.0.1&lt;/code&gt; (all versions from 2.0.1 and higher).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Dependency of&lt;/strong&gt; - Which other package (or packages) uses this particular package? You might see a lot of packages in the report you have no recollection of ever installing or using. That’s because each of the packages you use comes with its dependencies, and sometimes those dependencies have dependencies of their own in a very long chain called the software supply chain. Knowing which package is using the problematic package is vital, since removing that original package eliminates the dependency problem without updating anything else.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Path&lt;/strong&gt; - The path to the package folder in the node modules containing the vulnerability.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;More info&lt;/strong&gt; - A link to the security report in the GitHub Advisory Database.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Verify the registry signatures of downloaded packages
&lt;/h3&gt;

&lt;p&gt;Packages published to the public npm registry are signed to make it possible to detect if the package content has been tampered with. The signing happens automatically once the developer uploads the package to NPM. &lt;/p&gt;

&lt;p&gt;Should a malicious proxy server, a compromised mirror, or a similar attack affect the users of a particular package, the signature found on the local package will not match the expected signature saved in the NPM registry for that package. &lt;/p&gt;

&lt;p&gt;By adding the signatures subcommand to the npm audit command (npm audit signatures), you’ll get a report that explicitly checks each of your packages’ signatures. This subcommand only works starting at npm v8.15.0, so make sure you have the latest NPM version if you want to use this option. Note that the output might vary from version to version. &lt;/p&gt;

&lt;p&gt;Here’s an example of the final report you might receive:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Xskts8bk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/esk7z0yi94xyr7tg4u7c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Xskts8bk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/esk7z0yi94xyr7tg4u7c.png" alt="Image description" width="700" height="86"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For each package, you’ll receive a list of its &lt;code&gt;keyid&lt;/code&gt; that has to match one of the public signing keys and the actual signature based on that key. For example:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--0NQuxrY_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6vqbu94cfh3pcbl0e0k0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--0NQuxrY_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6vqbu94cfh3pcbl0e0k0.png" alt="Image description" width="574" height="146"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To check if the &lt;code&gt;keyid&lt;/code&gt; matches one of the public keys, you can go to &lt;code&gt;registry-host.tld/-/npm/v1/keys&lt;/code&gt; and compare the keys provided there. Make sure you compare based on the same key format. In this case, for example, you’ll need to check the SHA256 key.&lt;/p&gt;

&lt;p&gt;Pay close attention to any packages with problematic signatures - it might indicate that your version has been tampered with.&lt;/p&gt;

&lt;p&gt;Since there could be thousands of packages, you can pipe the results to a JSON file and only check packages where there is an indication that the signatures don't match. If you’re worried, an easy fix would be to re-install a problematic package and check again. You can contact NPM directly and report the problem if you still get the same issue.&lt;/p&gt;
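That re-install workflow can be sketched as follows (the package name is a hypothetical placeholder for whichever package the signature check flags):

```shell
# Verify registry signatures for all installed packages.
npm audit signatures

# If a package is flagged, re-install it cleanly and verify again.
npm uninstall some-flagged-package
npm install some-flagged-package
npm audit signatures
```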

&lt;h3&gt;
  
  
  4. Check for meta-vulnerabilities and remediations
&lt;/h3&gt;

&lt;p&gt;A "meta-vulnerability" is a vulnerable dependency since the package depends on a vulnerable version of a different package. So, package ‘A’ might not have any vulnerability in itself. Still, it will be displayed as containing a vulnerability since it depends on package ‘B,’, which has a known vulnerability. &lt;/p&gt;

&lt;p&gt;Once meta-vulnerabilities for a given package are calculated, they are cached in the &lt;code&gt;~/.npm&lt;/code&gt; folder and only re-evaluated if the advisory range changes or a new version of the package is published (in which case, the latest version is checked for meta-vulnerable status as well).&lt;/p&gt;

&lt;p&gt;Suppose the chain of meta-vulnerabilities extends to the root project, and it’s impossible to update without changing its dependency ranges. In that case, npm audit fix will require the &lt;code&gt;--force&lt;/code&gt; option to apply the remediation. &lt;/p&gt;

&lt;p&gt;Remediations may not require changes to the dependency ranges. In this case, all vulnerable packages will be updated to a version that does not have an advisory or meta-vulnerability posted against it, without any need for developer intervention.&lt;/p&gt;

&lt;h3&gt;
  
  
  5. Fixing vulnerabilities automatically
&lt;/h3&gt;

&lt;p&gt;Assuming you have more than one or two vulnerabilities, this report can quickly get tedious. If you trust Node’s suggestions, you can run npm audit fix and fix whatever can be fixed automatically. Since npm audit fix essentially runs npm install under the hood, be prepared to wait a short while for it to complete. &lt;/p&gt;

&lt;p&gt;Once the fix command has concluded, you’ll get a summary of the completed changes:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--pC8muWV4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xasrtqo4y4rzabi95nre.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--pC8muWV4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xasrtqo4y4rzabi95nre.png" alt="Image description" width="827" height="91"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This report will tell you if there are any potential breaking changes you should review. You can also rerun the command with --force to make the update and deal with the broken code later.&lt;/p&gt;
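A cautious sequence for the automatic fix might look like this; --dry-run and --force are standard npm audit fix options, and whether to use --force is the judgment call described above:

```shell
# Preview what npm audit fix would change, without touching anything.
npm audit fix --dry-run

# Apply the fixes that stay within your existing semver ranges.
npm audit fix

# Last resort: also apply semver-major updates, accepting possible breakage.
npm audit fix --force
```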

&lt;h3&gt;
  
  
  6. Fixing vulnerabilities manually
&lt;/h3&gt;

&lt;p&gt;Based on the audit report you have received, you have two options for dealing with changes that cannot be fixed automatically with npm audit fix:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Force the audit fix to make the necessary changes, even if they might break your code.&lt;/li&gt;
&lt;li&gt;Go over each suggested fix and problem and decide how to deal with it yourself.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In the example we showed earlier, there are potentially 18 packages that require manual review and two packages that involve breaking changes. &lt;/p&gt;

&lt;p&gt;Once you run the npm audit fix command, rerunning the report will give you only the problematic packages. Here, you need to look at the suggested remediation (assuming there is one) and see what else might be affected if you apply that fix. &lt;/p&gt;

&lt;p&gt;In some of these cases, it might be easier to replace a package with a different one offering similar functionality rather than tracking a vulnerability down the dependency tree, especially when the vulnerability is tagged as high or critical severity.&lt;/p&gt;
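Before swapping a package out, it helps to see exactly which of your direct dependencies pulls the vulnerable one in; npm ls prints the dependency chains (the package name here is only an example):

```shell
# Show every dependency path that leads to the named package.
npm ls minimist

# On npm 7+, include matches at any depth of the tree.
npm ls minimist --all
```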

&lt;h2&gt;
  
  
  Start Auditing Now
&lt;/h2&gt;

&lt;p&gt;None of us want to face disgruntled users after our entire project fell apart due to a problem that wasn’t fixed in time. That’s why you need to audit your dependency vulnerabilities often.&lt;/p&gt;

&lt;p&gt;Since most projects involve a team of developers, it’s much more efficient to audit your code after each PR. This is where CI/CD integrated tools for dependency audit come in handy. &lt;/p&gt;

</description>
      <category>npm</category>
      <category>security</category>
      <category>code</category>
      <category>devsecops</category>
    </item>
    <item>
      <title>How to Automate OWASP ZAP</title>
      <dc:creator>Simon Bennetts</dc:creator>
      <pubDate>Wed, 14 Sep 2022 14:52:47 +0000</pubDate>
      <link>https://dev.to/jit/how-to-automate-owasp-zap-2d75</link>
      <guid>https://dev.to/jit/how-to-automate-owasp-zap-2d75</guid>
      <description>&lt;h2&gt;
  
  
  Introducing ZAP
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://www.zaproxy.org/"&gt;OWASP ZAP&lt;/a&gt; is the world’s most popular web app scanner that now sees over 4 Million “&lt;a href="https://www.zaproxy.org/docs/statistics/bar-charts/"&gt;Check for Updates&lt;/a&gt;” calls per month (up from 1 million just earlier this year). &lt;/p&gt;

&lt;p&gt;It is free, open source, and used by people with a wide range of security experience, from newcomers right up to seasoned security professionals, to get a better understanding of their web application security posture. &lt;a href="https://jit.io/zap"&gt;OWASP ZAP&lt;/a&gt; works by attacking your web apps much like a malicious hacker would: it attacks your apps while they are running and shows you what attackers will be able to find.&lt;/p&gt;

&lt;h2&gt;
  
  
  The ZAP Desktop
&lt;/h2&gt;

&lt;p&gt;ZAP was built for flexibility and adoption, so it is possible to run it in a variety of ways, aligned with how you like to work:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;From the command line&lt;/li&gt;
&lt;li&gt;As a desktop application&lt;/li&gt;
&lt;li&gt;As a background daemon&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In this blog post I’ll show you how to automate ZAP from the command line, and how to set it up using the ZAP desktop application. The desktop app allows you to see exactly what ZAP does, and to tune ZAP to handle your app as effectively as possible.&lt;/p&gt;

&lt;p&gt;ZAP requires &lt;a href="https://www.zaproxy.org/download/"&gt;Java 8+&lt;/a&gt; to run locally, but you can also run &lt;a href="https://www.zaproxy.org/docs/docker/"&gt;ZAP in Docker&lt;/a&gt; and access it via a browser using &lt;a href="https://www.zaproxy.org/docs/docker/webswing/"&gt;Webswing&lt;/a&gt;.  We won’t dive into this setup here but you are welcome to reference the docs to get started.&lt;/p&gt;

&lt;p&gt;Just note that if you use the Docker option, you will need to make sure you start Docker with the option to map a local drive, so that you can access the file you are going to generate after you stop the Docker image. On *nix systems the Docker option is &lt;code&gt;-v $(pwd):/zap/wrk/:rw&lt;/code&gt; - on Windows you will need to replace &lt;code&gt;$(pwd)&lt;/code&gt; with the full path of a suitable directory.&lt;/p&gt;
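As a concrete sketch of that Docker setup, the ZAP images bundle ready-made scan scripts; the image tag and target URL below are illustrative, and the volume mapping is the one described above:

```shell
# Run ZAP's packaged baseline scan, mapping the current directory
# so the generated report survives after the container stops.
docker run -v $(pwd):/zap/wrk/:rw -t ghcr.io/zaproxy/zaproxy:stable \
  zap-baseline.py -t https://example.com -r baseline-report.html
```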

&lt;h2&gt;
  
  
  The Automation Framework
&lt;/h2&gt;

&lt;p&gt;The &lt;a href="https://www.zaproxy.org/docs/automate/automation-framework/"&gt;Automation Framework&lt;/a&gt; (AF) allows you to control ZAP with one yaml file. There are other ways to automate ZAP, but the AF is the recommended approach for most users.&lt;br&gt;
You can create the yaml file in a text editor but it is also possible to create it using the ZAP desktop, as that allows you to test the plan as you go, that is the option we will use here.&lt;/p&gt;
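Once you have a plan file, the same yaml can drive ZAP headlessly from the command line; this is the documented -autorun invocation, with an illustrative file path:

```shell
# Run ZAP without the desktop UI and execute the Automation Framework plan.
./zap.sh -cmd -autorun /path/to/plan.yaml
```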

&lt;h2&gt;
  
  
  Exploring your App Using the Spiders
&lt;/h2&gt;

&lt;p&gt;The first thing to do is to explore your app.&lt;br&gt;
Apps designed for humans are typically best explored by humans, but that’s not a good option for automation and, eventually, scale. &lt;br&gt;
However, we will start by doing a bit of manual exploration just to make sure we can connect to the app.&lt;/p&gt;

&lt;p&gt;In the ZAP desktop click on the “Manual Explore” button.&lt;br&gt;
In the next form fill in the URL of your target app, uncheck the “Enable HUD” box and click on “Launch Browser”.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--gA-TUQCB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7k0epxkbbvlcpb7ua2u1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--gA-TUQCB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7k0epxkbbvlcpb7ua2u1.png" alt="OWASP ZAP MANUAL EXPLORE" width="880" height="316"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A browser should be launched and display your target app - for the sake of this example we will be using Firefox, but Chrome should work as well if that is the browser you have installed.&lt;/p&gt;

&lt;p&gt;Now look at the Sites tree - you should see at least one URL from your target app, probably many more. In this case I’m using &lt;a href="https://owasp.org/www-project-juice-shop/"&gt;OWASP Juice Shop&lt;/a&gt; as my target.&lt;/p&gt;

&lt;p&gt;Now we can see how well each of the two spiders handles our app.&lt;/p&gt;

&lt;p&gt;Right click on the top node of your app in the Sites tree and select “Attack -&amp;gt; Spider…” &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--uq6_VHAB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fmfjxdadtb0pt6kukxmm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--uq6_VHAB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fmfjxdadtb0pt6kukxmm.png" alt="OWASP ZAP - ATTACK" width="869" height="379"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then click “Start Scan”&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--xmOw7gJc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jjb45ey2szuwee752hty.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--xmOw7gJc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jjb45ey2szuwee752hty.png" alt="OWASP ZAP localhost" width="880" height="471"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This spider should run very quickly, and it will tell you how many URLs were found and how many were added to the Sites tree:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--41ZN6-bw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/y9gmmguy9wbna8m4f5qm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--41ZN6-bw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/y9gmmguy9wbna8m4f5qm.png" alt="Image description" width="880" height="203"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If your target app is a more traditional app, then this should be sufficient. However, if it is a more modern web app which makes heavy use of JavaScript then you will need to use the AJAX spider.&lt;/p&gt;

&lt;p&gt;Right click on the top node of your app in the Sites tree and select “Attack -&amp;gt; AJAX Spider…” &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--JBfyxHDC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hhj5t5baobkcvbzofs97.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--JBfyxHDC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hhj5t5baobkcvbzofs97.png" alt="Image description" width="869" height="379"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then click “Start Scan”.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--b9LRhlsq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yqv3ghky85fdu9e2dddu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--b9LRhlsq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yqv3ghky85fdu9e2dddu.png" alt="Image description" width="880" height="502"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This spider will take longer as it launches browsers in order to click on UI elements. It will also tell you how many URLs it found:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--yLFOPwdm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/secx1yklnbuqrs0b74jl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--yLFOPwdm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/secx1yklnbuqrs0b74jl.png" alt="Image description" width="880" height="186"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Exploring your App by Importing API Definitions
&lt;/h2&gt;

&lt;p&gt;If your app just exposes an API then you will not be able to explore it manually or with either of the ZAP spiders.&lt;/p&gt;

&lt;p&gt;If you have an API definition then you can import that via the “Import” menu item. ZAP supports importing the following definitions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;OpenAPI&lt;/li&gt;
&lt;li&gt;GraphQL&lt;/li&gt;
&lt;li&gt;WSDL&lt;/li&gt;
&lt;li&gt;HAR&lt;/li&gt;
&lt;/ul&gt;
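&lt;p&gt;If you are automating this rather than using the “Import” menu, the same import can be expressed as a job in an Automation Framework plan. The following is a minimal sketch of an &lt;code&gt;openapi&lt;/code&gt; job; the file path and target URL are placeholder values you would replace with your own:&lt;/p&gt;

```yaml
# Sketch of an OpenAPI import job in an Automation Framework plan.
# apiFile and targetUrl below are example values - substitute your own.
- type: openapi
  parameters:
    apiFile: "/zap/wrk/openapi.json"    # definition on disk (or use apiUrl instead)
    targetUrl: "http://localhost:8080"  # overrides the server URL in the definition
```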

&lt;h2&gt;
  
  
  Defining a Context
&lt;/h2&gt;

&lt;p&gt;You do not need to define a context if you are just using ZAP manually, but you do need to define one when using the Automation Framework. Luckily this is easy to do - just right-click the top node of your app in the Sites tree and select “Include in Context -&amp;gt; New Context”&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s---OOWlXix--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rupqervnwiuincry7c7d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s---OOWlXix--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rupqervnwiuincry7c7d.png" alt="OWASP ZAP - Context Menu" width="767" height="328"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;and optionally give it a meaningful name:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--M8P7VNDP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7wxkv5rgt4yd2ufy8yf5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--M8P7VNDP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7wxkv5rgt4yd2ufy8yf5.png" alt="OWASP ZAP - Context Dialog" width="869" height="474"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Creating a Plan in the Desktop
&lt;/h2&gt;

&lt;p&gt;We’re now ready to create the plan.&lt;/p&gt;

&lt;p&gt;First, you need to find the “Automation” tab. ZAP has lots of tabs so all but the most essential ones are hidden by default. Click on the green plus tab in the bottom panel and select “Automation”:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--48bYX7cD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rxgckkalf9qabug8a4zq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--48bYX7cD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rxgckkalf9qabug8a4zq.png" alt="Image description" width="576" height="252"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the new Automation panel click the “New Plan…” button:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--gF2i2Gyp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/sjvlb6eqkyct87fr3dyg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--gF2i2Gyp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/sjvlb6eqkyct87fr3dyg.png" alt="Image description" width="880" height="135"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Select the context you defined above and then one of the following Profiles:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Baseline - if you just want to passively scan your app and not attack it&lt;/li&gt;
&lt;li&gt;GraphQL - if you have a GraphQL API definition&lt;/li&gt;
&lt;li&gt;OpenAPI - if you have an OpenAPI / Swagger API definition&lt;/li&gt;
&lt;li&gt;SOAP - if you have a SOAP definition&lt;/li&gt;
&lt;li&gt;Full Scan - if you want to attack your app (and do not have an API definition)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--zqYD5aIl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bqvepr2i8jqb07kqu8i7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--zqYD5aIl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bqvepr2i8jqb07kqu8i7.png" alt="Image description" width="317" height="469"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A summary of your plan will now be displayed:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--nul4NW69--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xw6v50iqho0rq144qwjl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--nul4NW69--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xw6v50iqho0rq144qwjl.png" alt="Image description" width="880" height="203"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A plan can be much more flexible than these profiles, but they are the best option to use when you are getting started.&lt;/p&gt;

&lt;p&gt;The Baseline and Full scans will include both the &lt;a href="https://www.zaproxy.org/docs/desktop/addons/automation-framework/job-spider/"&gt;spider&lt;/a&gt; and the &lt;a href="https://www.zaproxy.org/docs/desktop/addons/ajax-spider/automation/"&gt;spiderAjax&lt;/a&gt; - you can remove one of them if you find it is not necessary. Do not remove both as then the plan will not explore your app at all!&lt;/p&gt;

&lt;p&gt;If you want to import an API definition in addition to spidering then add the relevant job.&lt;/p&gt;
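&lt;p&gt;Under the hood, the plan is a YAML file. As a rough sketch (the desktop generates the real thing for you - the context name and URL below are example values), a minimal Baseline-style plan looks like this:&lt;/p&gt;

```yaml
# Minimal Automation Framework plan - example values throughout.
env:
  contexts:
    - name: "My App"                # the context you defined in the desktop
      urls:
        - "http://localhost:8080"
jobs:
  - type: spider                    # traditional spider
    parameters:
      context: "My App"
  - type: spiderAjax                # browser-based spider for JS-heavy apps
    parameters:
      context: "My App"
  - type: passiveScan-wait          # wait for passive scanning to finish
  - type: report
    parameters:
      template: "traditional-html"
      reportDir: "/zap/wrk"
```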

&lt;p&gt;You can edit a plan in the ZAP desktop:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Double-clicking on any job will bring up a dialog which will allow you to configure it&lt;/li&gt;
&lt;li&gt;The “Add job…” button will allow you to add a job to the existing plan&lt;/li&gt;
&lt;li&gt;The “Remove Job…” button will remove the selected job&lt;/li&gt;
&lt;li&gt;The “Move Job Up” button will move the selected job up one place&lt;/li&gt;
&lt;li&gt;The “Move Job Down” button will move the selected job down one place&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Passive Scanning
&lt;/h2&gt;

&lt;p&gt;ZAP will passively scan every request initiated by ZAP or proxied through it.&lt;br&gt;
The profiles will all add two related jobs:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://www.zaproxy.org/docs/desktop/addons/automation-framework/job-pscanconf/"&gt;passiveScan-config&lt;/a&gt; - this allows you to fine tune the passive scanner&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.zaproxy.org/docs/desktop/addons/automation-framework/job-pscanwait/"&gt;passiveScan-wait&lt;/a&gt; - the waits until all of the requests have been passively scanned, it should always be run after any jobs which explore your app&lt;/li&gt;
&lt;/ul&gt;
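&lt;p&gt;As a sketch, these two jobs look roughly like this in a plan (the rule id, cap, and durations are illustrative values, not recommendations - the linked pages document the full set of parameters):&lt;/p&gt;

```yaml
# Example passive scan configuration - all values are illustrative.
- type: passiveScan-config
  parameters:
    maxAlertsPerRule: 10    # example cap on alerts raised per rule
  rules:
    - id: 10015             # illustrative rule id
      threshold: "Off"      # e.g. silence a rule that is too noisy for you
- type: passiveScan-wait
  parameters:
    maxDuration: 5          # give up waiting after 5 minutes
```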

&lt;h2&gt;
  
  
  Active Scanning
&lt;/h2&gt;

&lt;p&gt;The &lt;a href="https://www.zaproxy.org/docs/desktop/addons/automation-framework/job-ascan/"&gt;activeScan&lt;/a&gt; job runs the active scanner - this performs the actual attacks.&lt;/p&gt;

&lt;p&gt;You should always include this job unless you only want to passively scan your app.&lt;/p&gt;

&lt;p&gt;Double-clicking on the job will allow you to fine-tune the active scan rules.&lt;/p&gt;
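&lt;p&gt;The same tuning can be expressed in the plan itself under a &lt;code&gt;policyDefinition&lt;/code&gt;; the context name, rule id, and strengths below are placeholder examples:&lt;/p&gt;

```yaml
# Example activeScan job - values are placeholders, not recommendations.
- type: activeScan
  parameters:
    context: "My App"            # example context name
    maxScanDurationInMins: 60    # cap the whole active scan
  policyDefinition:
    defaultStrength: "Medium"
    defaultThreshold: "Medium"
    rules:
      - id: 40018                # illustrative rule id
        strength: "High"         # try harder for this particular rule
```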

&lt;h2&gt;
  
  
  Generating a Report
&lt;/h2&gt;

&lt;p&gt;The report job, not surprisingly, has the task of generating a report - there are a variety of templates with different options for you to choose from. See &lt;a href="https://www.zaproxy.org/docs/desktop/addons/report-generation/templates/"&gt;https://www.zaproxy.org/docs/desktop/addons/report-generation/templates/&lt;/a&gt; for the latest list, with examples of each.&lt;/p&gt;
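&lt;p&gt;A report job with a couple of common options might look like this sketch (the directory, file name, and title are example values):&lt;/p&gt;

```yaml
# Example report job - output location and title are placeholders.
- type: report
  parameters:
    template: "traditional-html"      # one of the templates listed above
    reportDir: "/zap/wrk"             # example output directory
    reportFile: "zap-report"          # example file name
    reportTitle: "My App - ZAP Scan"
  risks:                              # only include these risk levels
    - high
    - medium
```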

&lt;h2&gt;
  
  
  Authentication
&lt;/h2&gt;

&lt;p&gt;Authentication is hard, and definitely out of scope for this blog post (but we will do our best to write a deep dive on this in the future). However, ZAP can handle pretty much any authentication mechanisms - more details on the &lt;a href="https://www.zaproxy.org/docs/authentication/"&gt;ZAP website&lt;/a&gt;. And the good news is that you can test authentication handling in the ZAP desktop where you can see exactly what is going on, and when you create a plan using that context all of the authentication configuration will be imported into the plan.&lt;/p&gt;

&lt;h2&gt;
  
  
  Testing Your Plan Locally
&lt;/h2&gt;

&lt;p&gt;You can run the plan you have created in ZAP using the “Run Plan…” button.&lt;br&gt;
You will then see the status of the jobs as the plan runs:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--0VsySef5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hcqi9wr8162rbmey1rff.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--0VsySef5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hcqi9wr8162rbmey1rff.png" alt="Image description" width="880" height="203"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If any of the jobs fail then you can investigate them in the ZAP desktop.&lt;/p&gt;

&lt;p&gt;However, jobs may appear to succeed while still not doing what you expect.&lt;br&gt;
That is why the AF supports &lt;a href="https://www.zaproxy.org/docs/desktop/addons/automation-framework/tests/"&gt;job outcome tests&lt;/a&gt; - these tests can be added to any job and can do things like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Check any of the &lt;a href="https://www.zaproxy.org/docs/internal-statistics/"&gt;ZAP statistics&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Check if specific alerts are present or absent&lt;/li&gt;
&lt;li&gt;Check if specific URLs have been found and optionally check their content&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Statistics tests are added by default to the spider jobs, but you can add more tests to any of the jobs using the “Add Test…” button (there’s also a “Remove Test…” button). Don’t forget that you can double-click on any of the tests in the plan in order to edit them.&lt;/p&gt;
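&lt;p&gt;As a sketch, a statistics test attached to the spider job looks like this in the plan (the context name, test name, and the value of 100 are arbitrary examples):&lt;/p&gt;

```yaml
# Example job outcome test - thresholds and names are illustrative.
- type: spider
  parameters:
    context: "My App"
  tests:
    - name: "At least 100 URLs found"          # example test name
      type: stats
      statistic: "automation.spider.urls.added" # internal ZAP statistic
      operator: ">="
      value: 100
      onFail: "warn"                            # info / warn / error
```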

&lt;h2&gt;
  
  
  Running in CI/CD
&lt;/h2&gt;

&lt;p&gt;You can run AF tests from the command line using a command like:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;zap.sh -cmd -autorun plan.yaml&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;You can also run them in docker. If you are using the stable image then we recommend updating ZAP using a separate command:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;docker run -v $(pwd):/zap/wrk/:rw -t owasp/zap2docker-stable bash -c "zap.sh -cmd -addonupdate; zap.sh -cmd -autorun /zap/wrk/plan.yaml"&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;$(pwd):/zap/wrk/:rw&lt;/code&gt; part of the command maps your local CWD to the directory &lt;code&gt;/zap/wrk&lt;/code&gt; in the docker container.&lt;/p&gt;

&lt;p&gt;If you use any scripts in your plan then you will need to make sure they are in or below your local CWD and change your plan so that they are referenced via the location they appear in the docker container (i.e. under &lt;code&gt;/zap/wrk&lt;/code&gt;).&lt;/p&gt;

&lt;h2&gt;
  
  
  OWASP ZAP + Jit Integration
&lt;/h2&gt;

&lt;p&gt;While ZAP is an extremely powerful tool, it is very much a “point solution” and does not provide any scan scheduling, history or an online interface. Jit is a &lt;a href="https://jit.io"&gt;DevSecOps orchestration platform&lt;/a&gt; which integrates &lt;a href="https://jit.io/zap"&gt;ZAP&lt;/a&gt; and other security tools.  By using Jit you can have your ZAP findings available in an aggregated dashboard with the rest  of your security tooling, and receive greater context for the overall security posture of your application.&lt;/p&gt;

</description>
      <category>owasp</category>
      <category>opensource</category>
      <category>security</category>
      <category>appsec</category>
    </item>
    <item>
      <title>What is Minimum Viable Security (MVS) and how does it improve the life of developers?</title>
      <dc:creator>David Melamed</dc:creator>
      <pubDate>Tue, 05 Jul 2022 13:33:27 +0000</pubDate>
      <link>https://dev.to/jit/what-is-minimum-viable-security-mvs-and-how-does-it-improve-the-life-of-developers-5cf6</link>
      <guid>https://dev.to/jit/what-is-minimum-viable-security-mvs-and-how-does-it-improve-the-life-of-developers-5cf6</guid>
<description>&lt;p&gt;Last year, Google shook up the cybersecurity and software development community by launching the &lt;a href="https://mvsp.dev/"&gt;Minimum Viable Security Product&lt;/a&gt; (MVSP). &lt;/p&gt;

&lt;p&gt;Developed in collaboration with Salesforce, Slack, Okta and others, MVSP's goal is to create baseline security standardization for third-party software developers, ensuring companies in the supply chain can rely on a minimum level of security practices and standards when building their products.&lt;/p&gt;

&lt;p&gt;For fast-paced startups building software products in the B2B, B2C, or even B2D space (like ourselves at &lt;a href="https://www.jit.io/"&gt;Jit&lt;/a&gt;), MVSP is great, but it represents a significant development that needs consideration. &lt;/p&gt;

&lt;p&gt;What exactly is the concept of Minimum Viable Security? And how can developers successfully learn to follow and comply with these new high-level requirements?&lt;/p&gt;

&lt;p&gt;It’s no news that with the need to deliver software products quickly and continuously, the tech world has seen a shift in operations towards DevOps, DevSecOps and ‘Shift Left Everything’ approaches. These practices were created to support short, iterative and continuous cycles, and also to avoid running quality and security tests as an afterthought that delays the release.&lt;/p&gt;

&lt;p&gt;But the reality isn’t running as smoothly as the theory. &lt;/p&gt;

&lt;p&gt;Due to multiple issues, many professionals in the industry are starting to think about taking Shift Left practices a step further through 'Born Left.' This happens today mainly in software testing that is entirely owned by the engineering team as a native function, rather than by siloed QA or Ops teams. &lt;/p&gt;

&lt;p&gt;“Born Left” means that the engineering organization takes full ownership of testing as part of its processes, via Continuous Integration (CI), and of operations through Continuous Deployment (CD). &lt;/p&gt;

&lt;p&gt;But what about security? &lt;/p&gt;

&lt;p&gt;The natural progression of this strategy puts security next in line, with Continuous Security (CS) becoming an emerging standard.&lt;/p&gt;

&lt;h2&gt;
  
  
  The problem with shift-left security (or anything else)
&lt;/h2&gt;

&lt;p&gt;The problem with making developers responsible for more and more areas of the software cycle is the potential for overwhelming the team with added tasks outside their domain expertise, frustrating them and delaying their main coding tasks. Quality, operations, security: the requirements quickly add up, and these domains often require expert knowledge. This is particularly true of security, where the landscape is constantly evolving, with a range of new threats to consider and a proliferation of new shift-left security tools designed to combat them.&lt;/p&gt;

&lt;p&gt;Herein lies the problem: how can software based companies achieve &lt;a href="https://www.jit.io/blog/is-balancing-dev-owned-security-and-velocity-possible"&gt;dev-native security&lt;/a&gt; while maintaining development velocity? &lt;/p&gt;

&lt;p&gt;That's where the  Minimum Viable Security (MVS) approach comes into play.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Minimum Viable Security and how does it relate to software development?
&lt;/h2&gt;

&lt;p&gt;We are all familiar with, and many of us follow, the concept of Minimum Viable Product (MVP): a product is initially built with the minimum set of features needed to test market fit and validate the business strategy without first expending all the resources. Then the product continues to be optimized with an MVP mindset, adding minimum viable new features or capabilities every cycle. In fact, many of the most popular software products from brands we all know and respect are built that way. &lt;/p&gt;

&lt;p&gt;During software development, this is done iteratively, focusing on delivering a minimum baseline of value with every single version.&lt;/p&gt;

&lt;p&gt;The MVP approach to the product is analogous to the MVS approach to security.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F72gsnmdqqecn6xzu1k27.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F72gsnmdqqecn6xzu1k27.jpeg" alt="The MVP concept, each release is a standalone viable one.&amp;lt;br&amp;gt;
" width="800" height="270"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The MVP concept, each release is a standalone viable one.&lt;br&gt;
‍&lt;br&gt;
For developers to be willing to take over security responsibilities and fully own them, the process must work like any other aspect they are familiar with: starting small/lean, improving in a continuous and agile manner, automating as much as possible along the way, and running security 'as code.'&lt;/p&gt;

&lt;p&gt;Let’s take this a step further. &lt;/p&gt;

&lt;h2&gt;
  
  
  Minimum Viable Security in Detail
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Starting with a Minimum Viable Plan
&lt;/h3&gt;

&lt;p&gt;While engineering leaders can find various security checklists all over the web, such as Google's newly developed MVSP mentioned above, they are hardly helpful if you want to come up with a minimum security plan that is operational. &lt;/p&gt;

&lt;p&gt;It is crucial to always keep in mind the distinction between a high level security checklist and a product-tailored actionable plan.&lt;/p&gt;

&lt;p&gt;A Minimum Viable Security Plan is not a checklist, but a detailed, actionable, step-by-step plan that includes all of the processes and needed tools but, most importantly, defines the minimum number of steps developers should take to make a product secure enough for a specific purpose - just in time. &lt;/p&gt;

&lt;p&gt;For instance, consider a security baseline that is based on a checklist and codifies the most up-to-date knowledge and strategies for dealing with specific threats to a company's tech stack. It needs to be as simple as possible while still covering the entire product boundary, be continuously updated, and follow GitOps principles (with customizable code).&lt;/p&gt;

&lt;p&gt;You can’t expect developers to master such a task without properly equipping them. &lt;/p&gt;

&lt;p&gt;The first obstacle developers face is knowledge. They need to know the security threat landscape, in addition to the relevant tooling (and there is a lot of it). They must stay updated, codify the plan, and keep the codified plans evergreen. &lt;/p&gt;

&lt;p&gt;Unlike a checklist, an MVS plan should easily codify this knowledge and create the initial capability to continuously and automatically update the product’s security. &lt;/p&gt;

&lt;p&gt;As mentioned above, the plan must also constantly evolve and include additional plans at each stage to support the constantly maturing product.  A serverless plan, for instance, isn’t a SOC2 compliance plan, and isn’t an &lt;a href="https://owasp.org/www-project-top-ten/"&gt;OWASP Top 10&lt;/a&gt; plan and so forth. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9rlqv7snjzt32hk4kxv7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9rlqv7snjzt32hk4kxv7.png" alt="Jit.io- Minimum Viable Security Plans" width="800" height="322"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A codified plan is a necessity for developers who lack security domain expertise.&lt;/p&gt;

&lt;p&gt;The images below are taken from the &lt;a href="https://jit.io/"&gt;Jit platform&lt;/a&gt;: a couple of different MVS (minimum viable security) plans that can be activated automatically:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3an2kmkjms1604zwbr9y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3an2kmkjms1604zwbr9y.png" alt="Jit.io-Security actions within a codified plan&amp;lt;br&amp;gt;
" width="800" height="78"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Born left MVS? Security as Code
&lt;/h2&gt;

&lt;p&gt;Identifying and selecting the optimal tools (open source or not) required to implement the plan is a resource-intensive, tedious task that takes a lot of time and effort. Integrating OSS tools into the relevant stacks, testing them, and plugging them in to run automatically via CI/CD in a security-as-code format is another heavy task, and also a key part of the born-left, dev-owned security mindset.&lt;/p&gt;

&lt;p&gt;On top of that, to be effective, tool selection and integration must be continuously updated due to the nature of cyber threats and security vulnerabilities. They should therefore be fully automated and properly orchestrated, both as part of the development environment and as part of the pipeline - following the concept of MVS as code.&lt;/p&gt;

&lt;p&gt;If you expect developers to initiate the above on their own, a common problem is overstretching an already busy team. &lt;/p&gt;

&lt;p&gt;Adding new responsibilities in fields where developers aren’t experts takes a toll, resulting in ‘Shift Left fatigue’ (as seen in many discussions). That makes the case for a Born-Left approach even more compelling: it is an alternative that offers the relevant tooling to actually do the heavy lifting, including the automation and orchestration. &lt;/p&gt;

&lt;p&gt;To summarize, automating and implementing product security plans as code and following GitOps principles in familiar development environments significantly reduces the Shift Left burden. Image below: The inventory of security actions that are included in specific MVS plans; some are shared across plans:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff52fa8b8edqkcezsv1yb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff52fa8b8edqkcezsv1yb.png" alt="Screen: Jit.io MVS Github experience&amp;lt;br&amp;gt;
" width="800" height="582"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Maintaining Velocity and Avoiding Developer Burnout
&lt;/h2&gt;

&lt;p&gt;To meet MVSP requirements while maintaining development velocity and not burning out your developers, adopting an MVS mindset and taking an automated approach to product development is essential.&lt;/p&gt;

&lt;p&gt;This includes automation of:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Continuously updated and constantly evolving MVS plans&lt;/li&gt;
&lt;li&gt;MVS plans-as-code, with security tests generated by multiple tools. &lt;/li&gt;
&lt;li&gt;Integration and orchestration of multiple security controls, in a unified and consolidated interface, as part of the dev environment and pipelines. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;There are many things to consider when it comes to MVS requirements. The tech industry has already united to formalize some guiding principles and define standardization practices that match the evolving threat landscape; the next step is the implementation as code. &lt;/p&gt;

&lt;p&gt;Feel free to get started here &amp;gt;&amp;gt; &lt;a href="https://jit.io"&gt;www.jit.io&lt;/a&gt;&lt;/p&gt;

</description>
      <category>security</category>
      <category>appsec</category>
      <category>devsecops</category>
      <category>mvs</category>
    </item>
    <item>
      <title>Bootstrapping a Secure AWS as-Code Environment - Your MVS Checklist</title>
      <dc:creator>David Melamed</dc:creator>
      <pubDate>Tue, 22 Mar 2022 17:32:58 +0000</pubDate>
      <link>https://dev.to/jit/bootstrapping-a-secure-aws-as-code-environment-your-mvs-checklist-5bp2</link>
      <guid>https://dev.to/jit/bootstrapping-a-secure-aws-as-code-environment-your-mvs-checklist-5bp2</guid>
      <description>&lt;p&gt;Infrastructure as Code (IaC) has changed the way we manage our cloud operations, by making it infinitely easier and quicker to roll out infrastructure on demand––with a single config file.&lt;/p&gt;

&lt;p&gt;In this article, we’ll delve into both the benefits and security challenges introduced to the underlying stack that comes with adopting an AWS anything-as-code model. We’ll also introduce the &lt;a href="https://www.jit.io/blog/born-left-vs-shift-left-security-and-your-1st-security-developer-architect"&gt;minimum viable security&lt;/a&gt; (&lt;a href="https://www.jit.io/blog/born-left-vs-shift-left-security-and-your-1st-security-developer-architect"&gt;MVS&lt;/a&gt;) approach that delivers baseline security controls for any stack. &lt;/p&gt;

&lt;h2&gt;
  
  
  Embracing an Everything-as-Code Model
&lt;/h2&gt;

&lt;p&gt;In line with the principles of IaC, organizations are increasingly adopting as-code frameworks for different components of a tech stack, including security, policy, compliance, configuration, and operations. AWS supports various as-code frameworks, including its own CloudFormation, as well as Terraform and Pulumi, and has even recently rolled out its next-gen IaC in the form of the AWS CDK. By providing an API to provision and manage resources, these frameworks make it possible to spin up a complex cloud architecture from simple, code-based templates. &lt;/p&gt;

&lt;p&gt;With environment-as-code pipelines, organizations can then manage and extend their deployment environment across multiple regions and accounts through a single workflow, leveraging the same code. &lt;/p&gt;

&lt;h2&gt;
  
  
  Adopting Minimum Viable Security for Baseline Security Controls
&lt;/h2&gt;

&lt;p&gt;While IaC frameworks come with the benefits of automation across the entire stack for more rapid delivery and tighter controls, securing each environment comes with its own unique set of challenges. On top of this, with all of the noise and panic constantly generated around security and exploits, it is hard for organizations that are just starting up to understand the minimum critical controls, and what should be out of scope. The end result is that emerging companies striving to launch the first version of their product have little understanding of the baseline security they actually need to implement to get ramped up. &lt;/p&gt;

&lt;p&gt;To solve this, the MVS approach offers a vendor-neutral security baseline that reduces the complexity and overhead when deploying infrastructure, and specifically cloud (native) environments. Similar to Agile methods, MVS focuses on a minimal shortlist of critical security controls that adds some initial security to the launched product to tackle the most common threats. This approach helps organizations establish a sufficient security posture while integrating seamlessly into existing automation tooling and pipelines used for configuring today’s complex cloud-based environments.&lt;/p&gt;

&lt;p&gt;In order to demonstrate this in practice, we’ll show how this actually applies and works when securing AWS instances (as the most popular and most widely adopted cloud), through the automated MVS approach. &lt;/p&gt;

&lt;h2&gt;
  
  
  Best Practices for How to Secure AWS Environments as Code
&lt;/h2&gt;

&lt;p&gt;At &lt;a href="https://jit.io"&gt;Jit&lt;/a&gt;, we have identified a few layers upon which we focus our security controls for AWS environments that provide the baseline security required that can be expressed as code to automate the bootstrapping of your AWS environments without compromising velocity. &lt;/p&gt;

&lt;p&gt;These include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Account Structure &lt;/li&gt;
&lt;li&gt;Identity and Access Management &lt;/li&gt;
&lt;li&gt;User Creation and Secret Management&lt;/li&gt;
&lt;li&gt;Hierarchies, Governance, and Policies&lt;/li&gt;
&lt;li&gt;Access Controls&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Below we’ll dive into each individually and how we can automate these eventually within your existing IaC and automated pipelines.&lt;/p&gt;

&lt;h2&gt;
  
  
  Build a Secure AWS Account Structure
&lt;/h2&gt;

&lt;p&gt;Many of the practices we will list below are tried and true, and applied at Jit for our own security controls.  First off, we  split our AWS accounts into three primary organizational units (OUs): users, a sandbox, and workloads. &lt;/p&gt;

&lt;p&gt;The users OU lets us host a dedicated account to set up all users. &lt;/p&gt;

&lt;p&gt;The sandbox unit is for developing or testing new code changes. This OU can also host accounts for experimenting with as-code templates and CI/CD pipelines. &lt;/p&gt;

&lt;p&gt;Workload units include staging/production environments and contain various accounts that run external-facing services.&lt;br&gt;
As cloud workloads grow, DevOps teams are inclined to set up multiple accounts for rapid innovation and flexible controls. The use of multiple AWS accounts helps DevOps teams achieve isolation and independence by providing natural boundaries for billing, security, and access to resources. &lt;/p&gt;

&lt;p&gt;While building AWS accounts, the below practices are recommended: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Use organizational units (OUs) to group accounts into logical and hierarchical structures. Accounts should be organized based on similar and related functions rather than an organization’s reporting hierarchy. Although AWS supports an OU depth of up to five levels, it is best to keep the structure as shallow as possible to avoid complexity.&lt;/li&gt;
&lt;li&gt;Maintain a master account for managing all organizational units and the related billing, for cost control and ease of maintenance.&lt;/li&gt;
&lt;li&gt;Assign only limited cloud resources, data, or workloads to the organization’s management (master) account, since the organization’s service control policies (SCPs) do not apply to the management account.&lt;/li&gt;
&lt;li&gt;Isolate production and non-production workload environments from each other. AWS workloads are typically contained in accounts, where each account can host more than one workload. Production accounts should have either one or a few closely related workloads. By separating workload environments, administrators can protect production from unauthorized access.&lt;/li&gt;
&lt;/ul&gt;
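&lt;p&gt;To make “expressed as code” concrete, here is a minimal Python sketch of the three-OU split above as plain data, with a small helper that reports the hierarchy depth. The account names are hypothetical placeholders, not a prescribed naming scheme:&lt;/p&gt;

```python
# Sketch of the three-OU split described above, expressed as plain data.
# Account names are hypothetical placeholders.
OU_LAYOUT = {
    "users": ["identity"],                   # dedicated account to set up all users
    "sandbox": ["dev-experiments"],          # testing code changes, IaC templates, pipelines
    "workloads": ["staging", "production"],  # external-facing services, isolated per stage
}

def ou_depth(tree, depth=1):
    """Return the depth of a nested dict/list OU hierarchy."""
    if isinstance(tree, dict):
        return max(ou_depth(value, depth + 1) for value in tree.values())
    return depth  # a list of accounts is a leaf level

# AWS supports an OU depth of up to five levels; keep it as shallow as possible.
print("OU depth:", ou_depth(OU_LAYOUT))
```

Keeping the layout in a reviewable data structure like this is what lets a bootstrap pipeline validate depth and naming before any account is actually created.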

&lt;h2&gt;
  
  
  Follow Identity and Access Management Practices
&lt;/h2&gt;

&lt;p&gt;For managing access to and permissions for AWS resources, Identity and Access Management (IAM) offers a first line of defense by streamlining the creation of users, roles, and groups. When provisioning an AWS environment through automation, organizations should leverage existing modules to manage IAM users, roles, and permissions.&lt;/p&gt;

&lt;p&gt;Administering robust security through IAM typically relies on a set of common practices that we also apply internally at Jit:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Make sure IAM policies for a user, group, or role grant only the permissions needed to accomplish a given task. This approach is dubbed “least privilege,” and there is plenty of excellent material about it. Permissions should initially contain only the minimum set of privileges required; these can be expanded later if necessary. &lt;/li&gt;
&lt;li&gt;Create separate roles for different tasks for each IAM user. &lt;/li&gt;
&lt;li&gt;Use &lt;a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp_request.html"&gt;session tokens&lt;/a&gt; as temporary credentials for authorization. You should additionally configure a session token to have a short lifetime to prevent misuse in the event of a compromise. &lt;/li&gt;
&lt;li&gt;Do not use a root user’s access key to perform regular activities or any programmatic task, as the root access key grants full access to all AWS services and resources. You should also rotate the root user’s access key regularly to prevent misuse. &lt;/li&gt;
&lt;li&gt;If account users are allowed to select their own password, make sure there is a strong baseline password policy and the requirement to periodically change it. &lt;/li&gt;
&lt;li&gt;Implement multi-factor authentication (MFA) for additional security. MFA adds another layer of authentication on top of the user’s credentials and will keep a resource protected even if those credentials are compromised.&lt;/li&gt;
&lt;/ul&gt;
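&lt;p&gt;To make “least privilege” concrete, here is a minimal Python sketch that builds an IAM policy document granting only read access to a single S3 bucket instead of a wildcard. The bucket name is a hypothetical placeholder:&lt;/p&gt;

```python
import json

def least_privilege_s3_policy(bucket_name):
    """Build an IAM policy document granting only read access to one bucket."""
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            # Only the actions the task actually needs, never "s3:*" or "*".
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                f"arn:aws:s3:::{bucket_name}",
                f"arn:aws:s3:::{bucket_name}/*",
            ],
        }],
    }

# "reports-bucket" is a hypothetical bucket name for illustration.
print(json.dumps(least_privilege_s3_policy("reports-bucket"), indent=2))
```

Generating policies from a function like this makes them reviewable in code review and easy to validate in CI before they reach any environment.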

&lt;h2&gt;
  
  
  Automate User Creation with Encrypted Secrets
&lt;/h2&gt;

&lt;p&gt;To cut down the risks associated with manual effort, it is strongly recommended that organizations embrace automation for user creation. This ensures that all stages of the process flow, including account creation, configuration, and assignment to an OU, require minimal manual intervention.&lt;/p&gt;

&lt;p&gt;Automation also helps streamline the user experience by integrating with onboarding and offboarding workflows. The mechanism strikes a fine balance between agility and control by permitting automated configuration and validation of IAM policies across multiple environments (dev, staging, or production).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F28b1r0ykco2wqalirqil.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F28b1r0ykco2wqalirqil.png" alt="AWS IAM Roles" width="512" height="311"&gt;&lt;/a&gt;&lt;br&gt;
Figure 1: A typical user creation process flow (Source: Amazon)&lt;/p&gt;

&lt;p&gt;Apart from user creation, you should also automate identity federation and secret provisioning to ensure a comprehensive user creation cycle. A typical workflow resembles the process flow above, leveraging tools such as &lt;a href="https://keybase.io/"&gt;Keybase&lt;/a&gt; for the automatic encryption of credentials and keypairs, supported by IaC frameworks like Terraform.&lt;/p&gt;
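&lt;p&gt;As a minimal sketch of what automated user creation can look like, the hypothetical helper below renders Terraform aws_iam_user resources from a list of usernames, turning onboarding into a template change rather than a manual console action (credential encryption via a tool like Keybase would be configured on the related access-key resources, which are omitted here):&lt;/p&gt;

```python
def render_iam_users(usernames):
    """Render one Terraform aws_iam_user resource per username (illustrative)."""
    blocks = []
    for name in usernames:
        blocks.append(
            f'resource "aws_iam_user" "{name}" {{\n'
            f'  name = "{name}"\n'
            '}\n'
        )
    return "\n".join(blocks)

# Hypothetical usernames for illustration.
print(render_iam_users(["alice", "bob"]))
```

With this approach, offboarding is the same workflow in reverse: remove the name from the list and let the pipeline apply the change.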

&lt;h2&gt;
  
  
  Create Hierarchical Structure &amp;amp; Policies with AWS Organizations
&lt;/h2&gt;

&lt;p&gt;AWS Organizations helps you implement granular controls to structure accounts in a manageable way. The service offers enhanced flexibility and a hierarchical structure for AWS resources based on organizational units (OUs). For any AWS organization, it is recommended to start with a basic OU structure containing core OUs such as infrastructure and security.&lt;/p&gt;

&lt;p&gt;You should also create a policy inheritance framework that allows maximum access to OUs at the foundation level and then gradually limits access with each layer of the OU. This layering of policies can continue further down to the account and instance levels. &lt;/p&gt;

&lt;p&gt;Organizations should also apply service control policies (SCPs) to OUs rather than to individual accounts. SCPs offer a multi-layered approach to access management, providing a redundant security check that takes precedence over IAM policies. &lt;/p&gt;
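&lt;p&gt;For illustration, here is a sketch of a small SCP document that could be attached at the OU level. Denying cloudtrail:StopLogging is a common guardrail: because SCPs take precedence, no IAM policy in a member account can override it. The Sid is illustrative:&lt;/p&gt;

```python
import json

# Sketch of a service control policy attached to an OU (not to one account):
# it denies disabling CloudTrail regardless of the IAM permissions that a
# user or role in the member accounts may hold.
scp = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyCloudTrailStop",
        "Effect": "Deny",
        "Action": ["cloudtrail:StopLogging", "cloudtrail:DeleteTrail"],
        "Resource": "*",
    }],
}

print(json.dumps(scp, indent=2))
```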

&lt;p&gt;As a best practice, it is recommended to use trusted access for authorizing services across your organization. This mechanism helps to grant permissions to only designated services without affecting the overall permissions of users or roles. As workloads grow, you can include other organizational units based on common themes, such as: policy staging, suspended accounts, individual users, deployments, and transitional accounts.&lt;/p&gt;

&lt;h2&gt;
  
  
  Secure Remote Access to the AWS Console
&lt;/h2&gt;

&lt;p&gt;Securing remote access to the AWS console is one of the easiest yet most crucial parts of maintaining security in an AWS as-code environment. A minimal approach here is to leverage the AWS Management Console and AWS Directory Service to enforce IAM policies on account switching. Once logged in, individual users can then switch accounts from within the console, based on their role (read-only or read-write access). &lt;/p&gt;

&lt;p&gt;Additionally, you can also enforce MFA through a trust policy between the user’s account and the target account to ensure only users with MFA enabled can access the target account.&lt;/p&gt;
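&lt;p&gt;A sketch of such a trust policy is shown below: the aws:MultiFactorAuthPresent condition key makes the role in the target account assumable only by principals that authenticated with MFA. The account ID is a placeholder:&lt;/p&gt;

```python
import json

# Trust policy for a role in the target account; 111122223333 is a
# placeholder for the users account ID.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::111122223333:root"},
        "Action": "sts:AssumeRole",
        # Only principals that signed in with MFA can assume the role.
        "Condition": {"Bool": {"aws:MultiFactorAuthPresent": "true"}},
    }],
}

print(json.dumps(trust_policy, indent=2))
```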

&lt;h2&gt;
  
  
  Enforce Secure Access of AWS APIs
&lt;/h2&gt;

&lt;p&gt;Since the majority of API endpoints are public-facing, securing them is crucial. It is always recommended to limit unauthenticated API routes by enforcing a robust authentication and authorization mechanism for accessing the APIs. Apart from leveraging AWS’ various built-in mechanisms to safeguard both public and private API endpoints, you should also adopt minimal security controls such as requiring MFA for the AWS CLI or using AWS Vault to secure keypairs.&lt;/p&gt;

&lt;p&gt;Apart from this, there are several approaches to achieve controlled access to APIs. These include: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;IAM-based role and policy permissions&lt;/li&gt;
&lt;li&gt;Lambda authorizers&lt;/li&gt;
&lt;li&gt;Client-side SSL certificates&lt;/li&gt;
&lt;li&gt;Robust web application firewall (WAF) rules&lt;/li&gt;
&lt;li&gt;Throttling targets&lt;/li&gt;
&lt;li&gt;JWT authorizers&lt;/li&gt;
&lt;li&gt;Creating resource-based policies to allow access from specific IPs or VPCs&lt;/li&gt;
&lt;li&gt;API keys&lt;/li&gt;
&lt;/ul&gt;
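&lt;p&gt;As an example of the resource-based approach, here is a sketch of an API Gateway-style resource policy that allows invocation only from a specific IP range. The CIDR block and the Resource value are placeholders:&lt;/p&gt;

```python
import json

# Resource-based policy restricting API invocation to one CIDR range.
# "execute-api:/*" stands in for the real API's ARN; 203.0.113.0/24 is a
# documentation range used here as a placeholder.
resource_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": "*",
        "Action": "execute-api:Invoke",
        "Resource": "execute-api:/*",
        "Condition": {"IpAddress": {"aws:SourceIp": "203.0.113.0/24"}},
    }],
}

print(json.dumps(resource_policy, indent=2))
```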

&lt;h2&gt;
  
  
  AWS Security - the TL;DR
&lt;/h2&gt;

&lt;p&gt;The as-code model for various computing components allows you to automatically, consistently, and predictably spin up deployment environments using manifest files. While the everything-as-code approach simplifies the deployment and management of resources on AWS, security can’t be ignored as part of this process and should also benefit from the guardrails automation can provide. &lt;/p&gt;

&lt;p&gt;This article delved into the MVS approach and how it can be applied as code. In the next article of this series, we will give concrete code examples of how to bootstrap a secure AWS environment using Terraform in practice.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>security</category>
      <category>cloud</category>
      <category>devops</category>
    </item>
    <item>
      <title>5 Open Source Security Tools All Developers Should Know About</title>
      <dc:creator>David Melamed</dc:creator>
      <pubDate>Wed, 26 Jan 2022 08:33:27 +0000</pubDate>
      <link>https://dev.to/jit/5-open-source-security-tools-all-developers-should-know-about-4bhe</link>
      <guid>https://dev.to/jit/5-open-source-security-tools-all-developers-should-know-about-4bhe</guid>
      <description>&lt;p&gt;With product security becoming an increasingly important aspect of software development, “shift left” is gaining wide acceptance as a best practice to ensure security is baked into development early. More and more traditional (read: incumbent) security companies are releasing shift-left products and capabilities, and the practice is becoming almost de facto for engineering teams. &lt;/p&gt;

&lt;p&gt;However, the industry has begun to realize that simply “shifting left” is hardly enough for a continuous delivery world. High-velocity, progressive development teams are embracing a new and trending “born left” security approach, where security aspects - like more and more product-related aspects - are addressed starting from the first line of code. This means product security isn’t just delivered by the development team, but rather owned by it. &lt;/p&gt;

&lt;p&gt;Understanding this shift comes with the realization that already burdened developers are faced with additional responsibilities continuously dropped in their laps. This has led the industry to hunt for solutions and tools that help developers manage this growing workload, including security, while maintaining velocity. &lt;/p&gt;

&lt;p&gt;We acknowledge that current open source “shift left” tooling doesn’t solve the overhead put on developers, due to the noise these tools create and the burden of learning both security in general and the ropes of each open source tool. That is on our shoulders to solve. &lt;/p&gt;

&lt;p&gt;But still, not all open source tools are created equal, and there are quite a few open source security tools that are not only developer-friendly but also provide much-needed security controls early in the development cycle. That’s why we’ve compiled a list of 5 security tools that we believe all developers should know about, and consider adopting if they don’t already have such a control in place. This post will first cover what makes a tool developer-friendly. We will then introduce one tool per security domain and its coverage, do a quick dive into how each works, and explain why you should consider adopting it into your toolchain.&lt;/p&gt;

&lt;h1&gt;
  
  
  What makes a tool developer-friendly?
&lt;/h1&gt;

&lt;p&gt;Let’s start by defining what makes tools ‘dev-friendly’ in the first place. &lt;/p&gt;

&lt;p&gt;To me, a dev-friendly tool sets out to make developers’ (and dev leaders’) lives easier by either simplifying tasks or speeding up processes.&lt;/p&gt;

&lt;h2&gt;
  
  
  Open source
&lt;/h2&gt;

&lt;p&gt;One of the greatest benefits of open source tools is that they are free to use (of course, check the license first!), so there is no need for budget approval, and you can try a tool out locally without having to commit to it (though you should verify it before running it against any company resources - more on that later). Instead of lengthy selection processes, you can simply try it out and see how you like it. In addition - and this is particularly critical for security tools - open source, as the name implies, gives you access to the entire codebase, so there are no surprises about what actions the tool performs when you run it in your environment.&lt;/p&gt;

&lt;h2&gt;
  
  
  Runs locally first...
&lt;/h2&gt;

&lt;p&gt;Running code locally from the terminal lets developers launch and test a tool with one simple command. Running a tool locally ensures you get immediate feedback and can easily tweak the configuration. When the tool is launched from a container, you don’t even have to bother with possible environment issues related to compilation.&lt;/p&gt;

&lt;h2&gt;
  
  
  ... and then in the CI/CD pipeline
&lt;/h2&gt;

&lt;p&gt;Tools that can be integrated into the CI/CD pipeline have higher value. Once I have used a tool locally and found it useful, my next move would likely be to run it continuously as part of my development lifecycle - and not only on my local machine, consuming my local resources. Of course, once a tool and process are part of the pipeline, the benefits extend across the entire dev team and codebase. So, starting locally is an advantage, but being able to easily integrate the new tool into existing environments and processes is an advantage as well.&lt;/p&gt;

&lt;h2&gt;
  
  
  Part of the developer work environment
&lt;/h2&gt;

&lt;p&gt;Developers should not be wasting time switching between development tools and security tools. All the tools on this list either run in the CI/CD pipeline (e.g. Github Actions) or as a plugin into the IDE. Context switching has been proven to adversely affect flow and productivity.  The less context switching, the greater the development velocity, and happiness.&lt;/p&gt;

&lt;h2&gt;
  
  
  Great Documentation
&lt;/h2&gt;

&lt;p&gt;I believe this requires little explanation... if a user doesn’t know how to use your tool in practice, then you’ve gained very little by releasing it.&lt;/p&gt;

&lt;p&gt;Readily available documentation made for dev professionals can make or break a smooth user experience. With great “how-to” documentation, ramp-up time is much shorter.&lt;/p&gt;

&lt;p&gt;The better the documentation, the smoother the learning curve and the easier the troubleshooting, making the tool significantly easier to adopt.&lt;/p&gt;

&lt;h2&gt;
  
  
  Configurable output formats
&lt;/h2&gt;

&lt;p&gt;If a tool can emit its output in multiple formats, another tool can ingest that output through an API or other form of integration, allowing you to manipulate and analyze the results elsewhere. If results are only human-readable, what you can do with them is limited and requires human effort - i.e. time that you simply don’t have. &lt;/p&gt;
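&lt;p&gt;For example, machine-readable output lets you post-process findings in a few lines of Python. The report shape below is a made-up, generic one rather than any specific tool’s schema:&lt;/p&gt;

```python
import json

# Hypothetical findings report; real tools each have their own JSON schema.
report = json.loads("""
{"results": [
  {"rule": "hardcoded-secret", "severity": "HIGH", "path": "app.py"},
  {"rule": "debug-enabled",    "severity": "LOW",  "path": "settings.py"}
]}
""")

# Keep only high-severity findings, e.g. to fail a CI job on them.
high = [r for r in report["results"] if r["severity"] == "HIGH"]
for finding in high:
    print(f"{finding['path']}: {finding['rule']}")
```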

&lt;p&gt;So without further ado... &lt;/p&gt;

&lt;h1&gt;
  
  
  5 Open Source Security Tools We Love - And You Should Too
&lt;/h1&gt;

&lt;p&gt;Based on the criteria above, I’ve collected five security tools that are dev-friendly and that I’ve enjoyed using as a security engineer.&lt;/p&gt;

&lt;p&gt;The list aims to cover the main code analysis domains that should be part of the minimal security requirements applied to development processes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Static application security testing (SAST)&lt;/li&gt;
&lt;li&gt;Dynamic application security testing (DAST)&lt;/li&gt;
&lt;li&gt;Hard-coded Secrets detection&lt;/li&gt;
&lt;li&gt;Infrastructure as Code analysis (IaC)&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  PyCharm Python Security Scanner
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://pycharm-security.readthedocs.io/en/latest/"&gt;PyCharm Python Security Scanner&lt;/a&gt; is a security scanner for Python code wrapped as a PyCharm plugin; it checks for vulnerabilities and also suggests fixes. Alongside acting as a comprehensive security scanner, it offers additional extensions that can run dependency-check analyses as well, which are quite useful.&lt;/p&gt;

&lt;p&gt;What makes it unique is that beyond being a plugin, it is also available as a CI/CD workflow for GitHub Actions in the &lt;a href="https://plugins.jetbrains.com/plugin/13609-python-security"&gt;Github Marketplace&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Semgrep
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://semgrep.dev/"&gt;Semgrep&lt;/a&gt; is a highly configurable SAST tool that looks for recurring patterns in the syntax tree. It can either run locally using Docker or be integrated into the CI/CD pipeline with Github Actions.&lt;/p&gt;
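&lt;p&gt;To give a flavor of that configurability, a minimal Semgrep rule looks roughly like this (the rule id and message are illustrative):&lt;/p&gt;

```yaml
rules:
  - id: use-of-eval
    languages: [python]
    severity: WARNING
    message: Avoid eval() on untrusted input
    pattern: eval(...)
```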

&lt;p&gt;Results are delivered as JSON files, allowing you to pipe them into other tools, like jq, in order to manipulate them.&lt;/p&gt;

&lt;h3&gt;
  
  
  Gitleaks
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://github.com/zricethezav/gitleaks"&gt;Gitleaks&lt;/a&gt; is a great project used to quickly detect hard-coded secrets, based on a configuration file containing hundreds of built-in regular expressions tailored to find the API keys of popular SaaS providers. It can run locally using Docker or be integrated into the CI/CD pipeline with Github Actions. Results are delivered in various formats. &lt;/p&gt;

&lt;p&gt;The rules can be easily extended to match your internal patterns and homegrown tools as well.&lt;/p&gt;
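&lt;p&gt;Detection of this kind ultimately boils down to regular expressions. Here is a Python sketch of the sort of pattern such a rule might use for AWS access key IDs; it is illustrative, not a copy of Gitleaks’ actual rules:&lt;/p&gt;

```python
import re

# Illustrative pattern for AWS access key IDs (they start with "AKIA"
# followed by 16 uppercase alphanumerics); not Gitleaks' exact rule.
AWS_KEY_ID = re.compile(r"\bAKIA[0-9A-Z]{16}\b")

def find_secrets(text):
    """Return all candidate AWS access key IDs found in the given text."""
    return AWS_KEY_ID.findall(text)

# "AKIAIOSFODNN7EXAMPLE" is AWS's documented example key ID.
sample = 'aws_access_key_id = "AKIAIOSFODNN7EXAMPLE"'
print(find_secrets(sample))
```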

&lt;h3&gt;
  
  
  ZAP
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://owasp.org/"&gt;OWASP&lt;/a&gt;'s &lt;a href="https://www.zaproxy.org/blog/2020-05-15-dynamic-application-security-testing-with-zap-and-github-actions/"&gt;Zed Attack Proxy (ZAP)&lt;/a&gt; is another open source tool, used for dynamic scanning (DAST), built by the OWASP team (the same folks who gave us the OWASP Top 10). It can run locally using Docker and provides a Github workflow to run in the CI/CD pipeline.&lt;/p&gt;

&lt;p&gt;The common output format for this tool is an HTML report, but you can also receive the output as JSON via an add-on.&lt;/p&gt;

&lt;h3&gt;
  
  
  KICS
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://checkmarx.com/product/opensource/kics-open-source-infrastructure-as-code-project/"&gt;KICS&lt;/a&gt; performs static analysis of infrastructure as code, and includes about 1,400 rules supporting platforms like Terraform, CloudFormation, Ansible, and Helm charts. It can run locally using Docker and can be integrated into the CI/CD pipeline with Github Actions. &lt;/p&gt;
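&lt;p&gt;To give a sense of what such scanners flag, a deliberately misconfigured, hypothetical Terraform snippet like the following would typically be reported for its public ACL:&lt;/p&gt;

```hcl
resource "aws_s3_bucket" "example" {
  bucket = "example-bucket"
  acl    = "public-read"   # flagged: bucket contents readable by anyone
}
```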

&lt;h2&gt;
  
  
  High Velocity Development and Security
&lt;/h2&gt;

&lt;p&gt;Development teams are being tasked with end-to-end responsibility for and ownership of their products - whether it’s production readiness, performance, or security - all while under pressure to ship code to production at high velocity. &lt;/p&gt;

&lt;p&gt;This growing challenge is what set us out on a mission at &lt;a href="https://www.jit.io/"&gt;Jit&lt;/a&gt;: to ease this growing burden on developers and make the ownership of product security much simpler - from planning through open source orchestration and more - based on an &lt;a href="https://www.jit.io/post/born-left-vs-shift-left-security-and-your-1st-security-developer-architect"&gt;MVS approach&lt;/a&gt; (Minimum Viable Security). Basically, this manifesto says to start small and constantly iterate: you don’t need to build a fortress on day one, but you should have baseline security controls in place and grow from there.&lt;/p&gt;

&lt;p&gt;As I mentioned above, while dev-friendly security tools offer great benefits, the growing responsibility assigned to developers requires a shift in today’s approach - one built on a minimum viable mindset and automated orchestration, so that devs can own product security without compromising velocity.&lt;/p&gt;

</description>
      <category>cloudnative</category>
      <category>devops</category>
      <category>security</category>
      <category>devsecops</category>
    </item>
  </channel>
</rss>
