<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: David Melamed</title>
    <description>The latest articles on DEV Community by David Melamed (@dvdmelamed).</description>
    <link>https://dev.to/dvdmelamed</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F223430%2Fb94823c6-9422-4ff8-be47-55ce046696e5.png</url>
      <title>DEV Community: David Melamed</title>
      <link>https://dev.to/dvdmelamed</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/dvdmelamed"/>
    <language>en</language>
    <item>
      <title>GenAI-Powered Digital Threads - AI Security Under the Hood, Part II</title>
      <dc:creator>David Melamed</dc:creator>
      <pubDate>Sun, 24 Mar 2024 14:50:18 +0000</pubDate>
      <link>https://dev.to/aws-builders/genai-powered-digital-threads-ai-security-under-the-hood-part-ii-5gk1</link>
      <guid>https://dev.to/aws-builders/genai-powered-digital-threads-ai-security-under-the-hood-part-ii-5gk1</guid>
      <description>&lt;p&gt;In our previous blog post on &lt;a href="https://www.jit.io/blog/genai-powered-digital-threads-part-1"&gt;AI Security&lt;/a&gt;, we spoke about borrowing the concept of Digital Threads from the manufacturing world, in order to aggregate disparate company data into a single source––a knowledge graph.  This knowledge graph can provide us with important security context through transparency and visibility of our organization's many different data sources, driving greater AI-powered security, risk management, and mitigation.  &lt;/p&gt;

&lt;p&gt;When we understand the source of risk, such as which repositories are actually running in production, or whether machines are exposed to the public web, we are better equipped to prioritize security risk and remediation for our organization. With alert fatigue growing across all engineering disciplines from DevOps to QA to Security, we need to work towards minimizing the noise and focusing on what really brings our organization value. This is the exact benefit that a human-language queryable graph database can deliver to our already bogged-down engineering teams.&lt;/p&gt;

&lt;p&gt;In this post, we’ll dive into the technical specifics under the hood of the graph database, including:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Graph database architecture&lt;/li&gt;
&lt;li&gt;Tools that helped launch the application&lt;/li&gt;
&lt;li&gt;Notebooks used to build the knowledge graph&lt;/li&gt;
&lt;li&gt;GenAI model that enables querying in human language and returns results in human language&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This technical example was built upon the AWS AI service suite to test its capabilities, and it was pretty impressive, with a minimal learning curve for the AI enthusiast. The example leverages &lt;a href="https://aws.amazon.com/neptune/"&gt;Neptune&lt;/a&gt; as the graph database and &lt;a href="https://aws.amazon.com/bedrock/claude/"&gt;Bedrock’s Claude v3&lt;/a&gt; as our GenAI model and LLM, along with out-of-the-box security notebooks to populate the data. This, coupled with excellent docs and some tinkering, helped wire the example into common open-source tools like &lt;a href="https://www.langchain.com/"&gt;Langchain&lt;/a&gt; and query languages like &lt;a href="https://opencypher.org/"&gt;openCypher&lt;/a&gt;, to test out GenAI-powered, context-based security in action.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Graph Database Architecture
&lt;/h2&gt;

&lt;p&gt;In this example, we start with a &lt;a href="https://github.com/dvdmelamed/genai-kg"&gt;Github repository&lt;/a&gt; with two folders. In one folder you’ll find a &lt;a href="https://github.com/dvdmelamed/genai-kg/blob/main/terraform/main.tf"&gt;Terraform&lt;/a&gt; file that brings up an end-to-end Neptune environment, which is particularly useful for those just getting started (I had to learn to configure this from scratch). This Terraform spins up a cluster and an EC2 proxy for connecting locally. Neptune comes packaged with several notebooks that include data sources; one of them, “security graph”, provides the dataset leveraged for this example.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl81oluqg9jast85yhb00.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl81oluqg9jast85yhb00.png" alt="Image description" width="800" height="435"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The notebook is the basis from which the data is ingested into the graph, enabling you to query all of the data available in it. A nifty feature: inside the notebook you also have the option to visualize the data as a graph, so you can understand the relationships between the data sources.&lt;/p&gt;

&lt;p&gt;In this sample app and demo, one additional step was added: an EC2 instance in the same subnet, to enable communicating with Neptune remotely, because by default the cluster sits inside the VPC and is not otherwise reachable from outside. Once we have these tools and our cluster set up, it’s time to connect all of this to GenAI and see what happens.&lt;/p&gt;
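
&lt;p&gt;For context, Neptune also exposes an openCypher HTTPS endpoint on port 8182, so once connectivity is in place you can hit it directly. Below is a minimal, hypothetical Python sketch of how such a request could be shaped; the host name is illustrative, and the path follows Neptune’s documented openCypher route.&lt;/p&gt;

```python
# Hypothetical sketch: shaping a direct call to Neptune's openCypher HTTPS
# endpoint (https://HOST:8182/openCypher). The host below is illustrative;
# in this setup the request would travel through the SSH tunnel.

def opencypher_request(host: str, query: str, port: int = 8182):
    """Return the URL and form payload for a Neptune openCypher call."""
    url = f"https://{host}:{port}/openCypher"
    payload = {"query": query}
    return url, payload

url, payload = opencypher_request(
    "db-neptune-instance.us-east-1.neptune.amazonaws.com",
    "MATCH (n) RETURN count(n)",
)
```

&lt;p&gt;With a tunnel to the cluster in place, the request itself could then be sent with any HTTP client, such as the third-party &lt;code&gt;requests&lt;/code&gt; library.&lt;/p&gt;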

&lt;h2&gt;
  
  
  Querying our Knowledge Graph 
&lt;/h2&gt;

&lt;p&gt;Open an SSH tunnel to connect to your Neptune cluster locally. For that, use the address from the Terraform output and make sure to add it to your /etc/hosts file, pointing to 127.0.0.1. Then run the following command:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;ssh -i neptune.pem -L 8182:&amp;lt;db-neptune-instance-id&amp;gt;.us-east-1.neptune.amazonaws.com:8182 &amp;lt;EC2 Proxy IP&amp;gt;&lt;br&gt;
&lt;/code&gt;&lt;br&gt;
Now it is possible to run the application. The application starts with a default question “Which machines do I have running in this data set?”&lt;/p&gt;

&lt;p&gt;Behind the scenes, it runs a Cypher query along the lines of &lt;code&gt;MATCH (m:`ec2:instance`) RETURN m&lt;/code&gt;. But how does it know to run this specific query? The database schema is included in the prompt alongside the question, and the LLM converts the question into a Cypher query. Not only will you see the data output as JSON in the CLI, but if you return to the application, you will also see the generated response in natural human language. &lt;/p&gt;
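
&lt;p&gt;To make the schema-in-the-prompt idea concrete, here is a minimal, hypothetical sketch of that text-to-Cypher step in Python. The prompt wording and the schema string are illustrative assumptions, not the actual prompts used by the library.&lt;/p&gt;

```python
# Illustrative text-to-Cypher prompt: the graph schema rides along with the
# question, and the LLM is instructed to emit only an openCypher query.
PROMPT_TEMPLATE = """You are an expert in openCypher.
Graph schema:
{schema}

Write a single openCypher query that answers the question below.
Return only the query, with no explanation.

Question: {question}
"""

def build_cypher_prompt(schema: str, question: str) -> str:
    """Embed the database schema and the user question into one LLM prompt."""
    return PROMPT_TEMPLATE.format(schema=schema, question=question)

prompt = build_cypher_prompt(
    schema="(:ec2_instance)-[:resides_in]-(:ec2_subnet)",  # made-up schema
    question="Which machines do I have running in this data set?",
)
```

&lt;p&gt;The model’s reply is then executed against Neptune, and a second pass over the results turns the raw JSON back into prose.&lt;/p&gt;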

&lt;p&gt;And this is just one example. &lt;/p&gt;

&lt;p&gt;Eventually it is possible to query the application and receive a relevant response about any data point available in the knowledge graph. To take a security example of how we can achieve better AI-based security for our systems: we can find out whether any of these machines have public IPs, or get a list of all the types of cloud resources in our stack (an inventory), which the engine can aggregate based on the node labels, essentially a list of all of the different object types we have in our graph.&lt;/p&gt;
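
&lt;p&gt;As a rough sketch of the inventory idea, the aggregation by node labels can be simulated in a few lines of Python. The label names and the result shape are illustrative assumptions, not the actual security-graph dataset.&lt;/p&gt;

```python
from collections import Counter

# Hypothetical result rows, each carrying the labels of a returned node.
nodes = [
    {"~labels": ["ec2_instance"]},
    {"~labels": ["ec2_instance"]},
    {"~labels": ["s3_bucket"]},
]

def inventory(rows):
    """Count nodes per label, i.e. one bucket per cloud resource type."""
    return Counter(label for row in rows for label in row["~labels"])

print(inventory(nodes))  # Counter({'ec2_instance': 2, 's3_bucket': 1})
```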

&lt;h2&gt;
  
  
  A Word on Limitations &amp;amp; Good Practices for Optimal Results
&lt;/h2&gt;

&lt;p&gt;When I first started playing around with this stack and testing different AI models, it worked well until one of the queries grew too large (meaning the prompt, which includes the graph database schema, exceeded the context window), and the library threw an error. With some other questions I threw at the engine, the resulting openCypher query was incorrect (a syntax error). This raises the question: which model should we use?&lt;/p&gt;

&lt;p&gt;There are several models, but not all work the same way. In this example, we leveraged Bedrock and compared several of them. Claude 3 and GPT-4 from OpenAI delivered the best results for this example, so if you want to get started and play around, these are the two recommended models. &lt;br&gt;
Benchmarking and fine-tuning the AI models is a must, as some will perform better than others based on your use case.&lt;/p&gt;

&lt;p&gt;In the library itself, it’s possible to play around with a variety of parameters, including the prompts sent to the engine, where a certain amount of tuning will noticeably impact the quality of the results.&lt;/p&gt;

&lt;p&gt;In addition, the more you enrich the graph with data, the richer the queries you’ll be able to run, and the more value you’ll receive from each query. By connecting the graph to third-party resources, you can gain additional business context. For example, you can query an HR system for active employees and see who still has access to different cloud resources.&lt;/p&gt;

&lt;h2&gt;
  
  
  AWS Neptune + Bedrock for Better AI Security
&lt;/h2&gt;

&lt;p&gt;In this example, we demonstrated how, with a simple AWS-based stack of Neptune and Bedrock plus common OSS libraries like Langchain and Streamlit, it was possible to build a knowledge graph that delivers the same value as a digital thread in manufacturing. By consolidating disparate organizational data into a single graph database, we can leverage the power of GenAI to query this data and receive accurate results grounded in important organizational and system context, all in human language, minimizing the learning curve of acquiring a new syntax or analyzing complex raw JSON results.&lt;/p&gt;

&lt;p&gt;The more we converge data and visualize the relationships between the different data sources, the more rapidly we can understand issues when they arise, from breaches to outages. This has immense benefits for engineering, security, and operations alike, giving us richer context for root-cause analysis through greater data- and context-driven visibility into our systems.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>security</category>
      <category>cloud</category>
      <category>devsecops</category>
    </item>
    <item>
      <title>GenAI-Powered Digital Threads - A Novel Approach to AI Security, Part I</title>
      <dc:creator>David Melamed</dc:creator>
      <pubDate>Mon, 18 Mar 2024 18:46:00 +0000</pubDate>
      <link>https://dev.to/aws-builders/genai-powered-digital-threads-a-novel-approach-to-ai-security-part-i-560g</link>
      <guid>https://dev.to/aws-builders/genai-powered-digital-threads-a-novel-approach-to-ai-security-part-i-560g</guid>
      <description>&lt;p&gt;Engineering organizations today are becoming increasingly data-reliant.  All of our tools and stacks accrue large amounts of data that are distributed among tools and platforms––from our code and our repos, to our specs and requirements, CI/CD workflows, governance and policies, configurations across clouds, environments, and everything else. This growing amount of data is continuously used throughout our software development lifecycle (SDLC).&lt;/p&gt;

&lt;p&gt;As organizations become greater data producers and consumers, they are increasingly data-driven, making educated decisions with real business impact. However, much like Conway’s Law, as our teams grow more distributed, so do our systems, and ultimately, as a byproduct, the data these systems produce. We have data scattered across our organization and stacks.&lt;/p&gt;

&lt;p&gt;Why does this matter?&lt;/p&gt;

&lt;p&gt;If our data were more consolidated, with greater communication and visibility, we could understand a lot more about cause and effect in our systems and make decisions that address the real problems and challenges we’re facing. For example, if we know which repositories are actually running in production, then from a security perspective we know which systems are actually exposed and pose risk to our organization. We can understand what is actually happening in our environments and make the right decisions at the right time.&lt;/p&gt;

&lt;p&gt;This matters because, as evolved as security tools have become over the years, they still treat everything they scan or monitor the same: all repos are treated equally, as are all parts of the code, infrastructure, and anything else. Without sufficient context, these tools are very noisy and add a lot of complexity, with a hard time distinguishing between real threats and alerts we can safely ignore (or should not receive at all, to begin with). This also creates a lot of cognitive load in managing security at scale: knowing where it’s valuable to invest effort, and what we can skip.&lt;/p&gt;

&lt;p&gt;The data however is all there.  It’s just a matter of connecting these distributed and scattered data points, to provide us with more helpful and context-based insights about our systems.&lt;/p&gt;

&lt;h2&gt;
  
  
  Borrowing from the World of Manufacturing (Again)
&lt;/h2&gt;

&lt;p&gt;In the same way that DevOps borrowed assembly line concepts to streamline development through operations, by removing friction in workflows and pipelines, and adding much-needed automation, there is plenty more to learn from manufacturing to apply to technology concepts.  Another concept popular in the world of manufacturing is digital threads.  &lt;/p&gt;

&lt;p&gt;A digital thread can be summarized as a closed loop between the digital and physical worlds that helps optimize products, people, processes, and places (you can read more about it &lt;a href="https://www.ptc.com/en/blogs/corporate/what-is-a-digital-thread"&gt;here&lt;/a&gt;). Connecting these different “endpoints”, or threads, gives you a much more holistic and comprehensive view of your business, which helps to answer the right questions.&lt;/p&gt;

&lt;p&gt;Let’s take the example of a product defect. By connecting data from different departments, disciplines, machines, and resources, it’s possible to track down whether the defect originated in the requirements and design, in engineering, or in the execution and production. (Starting to see the similarities with our technology stacks?)&lt;/p&gt;

&lt;h2&gt;
  
  
  Knowledge Graphs for Technology Context
&lt;/h2&gt;

&lt;p&gt;If we take a look at technology stacks, the place that would connect all of our disparate worlds of data is called a knowledge graph, which can be built into a graph database. A graph database is a tool that is very helpful in aggregating data from multiple data sources into a single unified place.  &lt;br&gt;
Below is an example of what an engineering knowledge graph would look like.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffjzcsfpwz1nzalfp0xsf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffjzcsfpwz1nzalfp0xsf.png" alt="Image description" width="800" height="452"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This is an example model of a service application that gives us a good understanding of the essential parts of our systems and the data they produce, consume, and store. In this model, you can see that there is a GitHub team that owns several repositories. One of the repositories deploys a Lambda function through a GitHub Actions workflow that sits in the repository; the function is exposed to the internet because there’s an API gateway in the middle exposing some of its endpoints.&lt;/p&gt;

&lt;p&gt;With such a knowledge graph and diagram, it’s quite easy to distinguish that one repository is exposed to the internet and has a production impact, while the other does not. The hard part is to actually build this graph, particularly because as the graph grows, it’s harder to control the data and queries.&lt;/p&gt;

&lt;h2&gt;
  
  
  GenAI + Graph Database for Human Language Data Management
&lt;/h2&gt;

&lt;p&gt;With generative AI (GenAI) becoming all the rage, with a diversity of applications across organizations, it’s no surprise that another useful application for GenAI is querying the digital threads compiled in a consolidated knowledge graph. If we’d like to leverage the knowledge graph without having to learn its entire syntax or query language (unfortunately, there isn’t yet one single standard), enter GenAI.&lt;/p&gt;

&lt;h3&gt;
  
  
  Why is GenAI interesting in this context?
&lt;/h3&gt;

&lt;p&gt;Well, for starters, GenAI is smart. It’s particularly good at being taught specific tasks and evolving that capability. This means we can teach GenAI how to create queries for our graph database.&lt;/p&gt;

&lt;p&gt;The knowledge graph provides the foundation for teaching GenAI how to query it in human language. GenAI is known to hallucinate and sometimes be creative with its answers; however, when coupled with a knowledge graph of structured data, it can achieve nearly 100% data accuracy. &lt;/p&gt;

&lt;p&gt;Without this kind of accurate data, together with relevant information such as the type of environment (i.e. dev vs. prod), responses are rarely context-based and largely vague or generic, making risk and mitigation much harder to understand. This approach therefore brings the best of both worlds: much higher confidence in the data, plus the ability to ask and receive answers in human language about our very own stacks. &lt;/p&gt;

&lt;p&gt;We can have GenAI orchestrate the querying: it translates the intent of a question into a graph query, runs it against the knowledge graph, and ultimately converts the results into useful, human-language output. So, how do we do this without receiving raw JSON?&lt;/p&gt;
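
&lt;p&gt;The orchestration loop can be sketched end to end with stubbed components. Everything below is illustrative: in practice the two LLM steps are model calls and the query step hits the graph database.&lt;/p&gt;

```python
# Conceptual flow: question -> Cypher (LLM) -> graph results -> prose (LLM).

def llm_to_cypher(question: str, schema: str) -> str:
    # Stub: a real implementation prompts the LLM with schema + question.
    return "MATCH (m:machine) RETURN m.name"

def run_query(cypher: str, graph: list) -> list:
    # Stub: a real implementation sends the query to the graph database.
    return [row for row in graph if row["type"] == "machine"]

def llm_to_answer(question: str, rows: list) -> str:
    # Stub: a real implementation asks the LLM to phrase the rows as prose.
    names = ", ".join(row["name"] for row in rows)
    return f"You have {len(rows)} machine(s): {names}."

graph = [{"type": "machine", "name": "web-1"}, {"type": "bucket", "name": "logs"}]
question = "Which machines do I have?"
rows = run_query(llm_to_cypher(question, schema="(:machine)"), graph)
print(llm_to_answer(question, rows))  # You have 1 machine(s): web-1.
```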

&lt;p&gt;In this example we leveraged &lt;a href="https://aws.amazon.com/neptune/"&gt;AWS Neptune&lt;/a&gt; coupled with GenAI. This can be done with OpenAI or one of the models available in &lt;a href="https://aws.amazon.com/bedrock/"&gt;Amazon Bedrock&lt;/a&gt; (in our example, we picked the brand new &lt;a href="https://aws.amazon.com/bedrock/claude/"&gt;Claude 3&lt;/a&gt;), where the value-add of the AWS stack is that it comes with several models out of the box. The querying layer is built upon the open-source library Langchain, which contains a module dedicated to querying graph databases.&lt;/p&gt;

&lt;h2&gt;
  
  
  We’re Just Getting Started
&lt;/h2&gt;

&lt;p&gt;This example was built and run pretty simply with a few out-of-the-box tools made available through the newly minted AWS suite of AI services, from Neptune to Bedrock, built to work natively together and with the open-source AI ecosystem. These worked pretty well for creating a first sample app and a queryable knowledge graph in the digital-threads approach, enabling the extraction of important data points in human language for greater context-driven security powered by AI.&lt;/p&gt;

&lt;p&gt;In our next post, we’ll dive into the architecture and technical resources used to make this possible, walking through an example built upon AWS Neptune with Bedrock, and the open-source tools Langchain and Streamlit.  This is a replicable example that will enable you to get started and test drive how you can and should do this at home (just mind the cost!), and gain better insights into your organizational security.&lt;/p&gt;

&lt;p&gt;Stay tuned for the second more technical post in this two-part series.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>genai</category>
      <category>security</category>
      <category>ai</category>
    </item>
    <item>
      <title>A Deep Dive into OCSF &amp; VEX - Unified Standards for Security Management</title>
      <dc:creator>David Melamed</dc:creator>
      <pubDate>Wed, 21 Feb 2024 07:31:34 +0000</pubDate>
      <link>https://dev.to/aws-builders/a-deep-dive-into-ocsf-vex-unified-standards-for-security-management-20fl</link>
      <guid>https://dev.to/aws-builders/a-deep-dive-into-ocsf-vex-unified-standards-for-security-management-20fl</guid>
      <description>&lt;p&gt;Over the last year, AWS has announced its &lt;a href="https://www.jit.io/blog/amazon-security-lake-centralized-data-management-for-modern-devsecops-toolchains"&gt;Security Data Lake&lt;/a&gt;, aimed at providing a unified, flexible, and scalable data lake for security data sources.  The backbone of this service is the open-source &lt;a href="https://github.com/ocsf"&gt;OCSF Framework&lt;/a&gt; (the Open Cybersecurity Framework), launched by Splunk and built upon Symantec’s ICD Schema which is the core to providing a vendor-agnostic format for security data management.&lt;/p&gt;

&lt;p&gt;With the advancement in all areas of security––cloud-native security, application security, DevSecOps, and more––many exciting tools have emerged to provide robust security on the many layers of our product stacks. However, with this incredible evolution comes the challenge of uniformity. Each tool comes with its own proprietary syntax, format, schema, and more. This becomes increasingly difficult to manage at scale, with disparate dashboards that make it a complex undertaking to correlate and cross-reference information to make security decisions with the required context. &lt;br&gt;
&lt;br&gt;
With context-based security becoming a central topic in applying good security practices and hygiene, it’s imperative to consolidate the data from these many sources into a single uniform data source and structure. As a &lt;a href="https://jit.io/?_gl=1*4jekc5*_gcl_au*MTc2NzQxOTM3Mi4xNzA3MDU1Mjk1"&gt;DevSecOps orchestration platform&lt;/a&gt; intended to secure an entire product stack, we encountered just this same challenge when looking to normalize and unify data into a single dashboard. &lt;/p&gt;

&lt;p&gt;When you need to interchange information or data, you often find that the formats don’t line up, and this alignment is critical. OCSF provides this important unification layer and schema to power a modern security data lake, alongside additional emerging industry standards like &lt;a href="https://www.cisa.gov/sites/default/files/2023-01/VEX_Use_Cases_Aprill2022.pdf"&gt;Vulnerability Exploitability eXchange&lt;/a&gt; (AKA VEX), which provides much-needed alignment of vulnerability management and context, alongside the benefits of OCSF. &lt;/p&gt;

&lt;h2&gt;
  
  
  What is OCSF?
&lt;/h2&gt;

&lt;p&gt;The Open Cybersecurity Schema Framework is an open-source project, “delivering an extensible framework for developing schemas, along with a vendor-agnostic core security schema”. It’s intended to enable both vendors and data producers to adopt and extend the schema –– making them more applicable in domain-specific contexts.  This then makes it possible for data analysts, engineers, researchers, or anyone else who needs to manipulate the data to map differing schemas to create a common language for threat detection and investigation. &lt;/p&gt;

&lt;p&gt;The framework consists of a set of data types, an attribute dictionary, and a taxonomy that is not restricted to the cybersecurity domain nor to events, however, the initial focus is for these to enrich the visibility, causality, and correlation of cybersecurity events. &lt;/p&gt;

&lt;p&gt;The intent of OCSF is to make it possible to unify data from the many data sources in use today––whether AWS sources and lakes like CloudTrail and S3, services such as Lambda and Route 53 and their logs, or on-prem and SaaS providers. By creating a unified schema and format, it becomes possible to centralize and then analyze security data at scale. The end goal is to provide a simplified, vendor-agnostic taxonomy that removes the overhead of data normalization for security data analysis.&lt;/p&gt;

&lt;p&gt;With a single standard for security data, it now becomes programmable and queryable in ways that weren’t possible before, which is a huge benefit for developers. That is why it is no surprise that some of the largest security vendors in the industry are getting involved in this project, with initial contributions from major players including Cloudflare, CrowdStrike, IBM, JupiterOne, Okta, Palo Alto Networks, Rapid7, Salesforce, Sumo Logic, Trend Micro, and Zscaler.&lt;/p&gt;

&lt;h2&gt;
  
  
  OCSF - Digging Deeper
&lt;/h2&gt;

&lt;p&gt;In order for OCSF to become a truly useful vendor-agnostic schema, it needs to support a wide range of data sources, and this is exactly what it set out to do.  The OCSF framework currently supports a number of event classes relevant to cloud-native ops including:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;System Activity&lt;/strong&gt;: File system, Kernel, Memory, Scheduled job, Process&lt;/li&gt;
&lt;li&gt;Security Findings&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Admin Events&lt;/strong&gt;: i.e. account_change, authn, authz, group_mgt&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Network Activity&lt;/strong&gt;: http, dns, dhcp, rdp, smb, ssh, ftp, email&lt;/li&gt;
&lt;li&gt;Device Inventory&lt;/li&gt;
&lt;li&gt;App Lifecycle&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;API Activity&lt;/strong&gt;: e.g. web resources access&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is just a snapshot of the different event classes that OCSF supports, and if we take a look at a single finding, we’ll see how it emphasizes the scale of the problem.&lt;br&gt;
A single security finding will often need to track many different types of information:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Finding Details: Remediation/events&lt;/li&gt;
&lt;li&gt;Attack: Tactics and Techniques&lt;/li&gt;
&lt;li&gt;Compliance details&lt;/li&gt;
&lt;li&gt;Enrichments &lt;/li&gt;
&lt;li&gt;Malware: CVE, CVSS&lt;/li&gt;
&lt;li&gt;Metadata: Product / Feature&lt;/li&gt;
&lt;li&gt;Observables: IP, Geo&lt;/li&gt;
&lt;li&gt;Process: Attr/File/Parent Process/User/Session&lt;/li&gt;
&lt;li&gt;CIS Control&lt;/li&gt;
&lt;li&gt;Kill Chain&lt;/li&gt;
&lt;li&gt;Resources&lt;/li&gt;
&lt;li&gt;Analytics: Rule, behavioral, stats, learning&lt;/li&gt;
&lt;li&gt;Vulnerabilities: CVE/CWE/Package&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5gsmrcwx11xi28jjctu3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5gsmrcwx11xi28jjctu3.png" alt="Image description" width="680" height="386"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;All of the fields are optional, which is a double-edged sword. On the one hand, this makes the format very flexible, and the upside is that you can enrich the finding with a lot of data. The downside is that a query may not turn up relevant results when the data was never included or enriched.&lt;/p&gt;
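
&lt;p&gt;A small sketch makes the double edge concrete. The field names below loosely follow the OCSF Security Finding class but are simplified assumptions, not the full schema.&lt;/p&gt;

```python
# Two OCSF-style findings: one enriched with observables, one minimal.
enriched = {
    "class_uid": 2001,  # assumed Security Finding class id, for illustration
    "finding": {"title": "Public S3 bucket"},
    "observables": [{"type": "IP Address", "value": "203.0.113.7"}],
}
minimal = {
    "class_uid": 2001,
    "finding": {"title": "Outdated AMI"},
    # no observables were ever attached
}

def findings_with_ip(findings):
    """A query over observables silently misses findings that never set them."""
    return [
        f for f in findings
        if any(o.get("type") == "IP Address" for o in f.get("observables", []))
    ]

print(len(findings_with_ip([enriched, minimal])))  # 1
```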

&lt;p&gt;Now imagine having this amount of data for tons of vulnerabilities and alerts from myriad services, sources, and tools - how do we then correlate, contextualize, and cross-reference this data to make better security remediation decisions? This is compounded when engineering resources are already limited and bogged down.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is VEX?
&lt;/h2&gt;

&lt;p&gt;VEX (Vulnerability Exploitability eXchange) introduces a common and universal way for products to identify whether vulnerable components actually affect their own product’s stack and its potential for exploitability. This provides a unified way for tooling and vendors to align around the SBOM (software bill of materials) and supply chain security, which is becoming an increasingly critical part of securing our end-to-end stacks. With new vulnerabilities being discovered daily, developers cannot truly gain the value they require from their SBOM without understanding, on an ongoing basis, what is really being used and what is directly impacted when a vulnerability is discovered.&lt;br&gt;
VEX helps organizations understand which vulnerabilities are relevant to their systems, enabling them to prioritize their response efforts effectively.&lt;/p&gt;
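
&lt;p&gt;For a feel of the format, here is a minimal OpenVEX-style statement sketched in Python. The document ID and product identifier are made up for illustration; the status and justification values follow the published VEX vocabulary.&lt;/p&gt;

```python
# A minimal OpenVEX-style document: one statement asserting that a known CVE
# does not affect this product. IDs below are illustrative.
vex_doc = {
    "@context": "https://openvex.dev/ns",
    "@id": "https://example.com/vex/2024-001",
    "statements": [
        {
            "vulnerability": {"name": "CVE-2021-44228"},
            "products": [{"@id": "pkg:golang/example.com/app@1.2.3"}],
            "status": "not_affected",
            "justification": "vulnerable_code_not_in_execute_path",
        }
    ],
}

def needs_action(doc):
    """Only statements whose status is 'affected' demand remediation effort."""
    return [s for s in doc["statements"] if s["status"] == "affected"]

print(len(needs_action(vex_doc)))  # 0
```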

&lt;h2&gt;
  
  
  OCSF + VEX for More Streamlined Security
&lt;/h2&gt;

&lt;p&gt;By incorporating VEX information within the OCSF schema, organizations can more easily share detailed information about the exploitability of specific vulnerabilities in different products. This can lead to more informed decision-making processes regarding patch management and vulnerability mitigation strategies.&lt;/p&gt;

&lt;p&gt;OCSF can facilitate the exchange of VEX data across different cybersecurity tools and platforms. This ensures that vulnerability exploitability information is readily accessible and usable by all parts of an organization's cybersecurity infrastructure, from threat intelligence platforms to security information and event management (SIEM) systems.&lt;/p&gt;

&lt;p&gt;The combination of VEX and OCSF can streamline cybersecurity operations by reducing the need for custom integration efforts. With standardized formats for sharing exploitability information, organizations can save time and resources, focusing more on addressing vulnerabilities rather than managing data inconsistencies.&lt;/p&gt;

&lt;p&gt;Integrating VEX data into the OCSF framework can improve risk assessment capabilities by providing a more comprehensive view of an organization's security posture. Knowing which vulnerabilities are actually exploitable in the context of their environment helps security teams prioritize risks more effectively.&lt;/p&gt;

&lt;p&gt;If we take an example from the AWS suite of tools, it’s now possible to leverage OCSF as a unified standard via the AWS Security Lake, enabling the many data producers and vendors around the globe to have a universal standard for sharing and interchanging information that is relevant for security gathering, detection, analysis, prioritization, and ultimately remediation.  &lt;/p&gt;

&lt;p&gt;AWS is betting big on OCSF as the leading standard for security data management, along with many other leading vendors in the industry. The &lt;a href="https://openssf.org/"&gt;OpenSSF&lt;/a&gt;, with its &lt;a href="https://github.com/openvex"&gt;OpenVEX&lt;/a&gt; project, and other tools leveraging VEX, everything from &lt;a href="https://github.com/aquasecurity/trivy"&gt;Trivy&lt;/a&gt; to &lt;a href="https://github.com/kubescape/kubescape"&gt;Kubescape&lt;/a&gt;, are making it an emerging leading standard for vulnerability prioritization and assessment. Together, these represent a step towards more unified, efficient, and effective cloud security practices. By facilitating the sharing of crucial vulnerability exploitability information alongside security data in a standardized format, organizations can enhance their ability to respond to and mitigate cyber threats in a timely and coordinated manner.&lt;/p&gt;

&lt;h2&gt;
  
  
  OCSF a Security Game Changer or Another Passing Trend?
&lt;/h2&gt;

&lt;p&gt;With major players &amp;amp; foundations in the industry heavily involved and invested in both the OCSF and VEX frameworks, they have the buy-in and backing to be a game changer for the security engineering ecosystem. However, we will need to wait and see whether adoption grows sufficiently for OCSF &amp;amp; VEX to provide tangible value for security engineering. &lt;br&gt;
&lt;br&gt;
They certainly have the potential, and we’re hopeful that greater unification and interchange of data will be possible across the security toolchain, to ensure our decisions are made based on actual, important, and relevant context, giving us greater confidence overall in our security programs and coverage.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>security</category>
      <category>devsecops</category>
      <category>cloudsecurity</category>
    </item>
    <item>
      <title>Amazon Security Lake: Centralized Data Management for Modern DevSecOps Toolchains</title>
      <dc:creator>David Melamed</dc:creator>
      <pubDate>Thu, 08 Feb 2024 16:55:37 +0000</pubDate>
      <link>https://dev.to/aws-builders/amazon-security-lake-centralized-data-management-for-modern-devsecops-toolchains-40p1</link>
      <guid>https://dev.to/aws-builders/amazon-security-lake-centralized-data-management-for-modern-devsecops-toolchains-40p1</guid>
      <description>&lt;p&gt;AWS introduced its &lt;a href="https://aws.amazon.com/security-lake/"&gt;Amazon Security Lake&lt;/a&gt; service in May 2023 as the heir to &lt;a href="https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-lake.html"&gt;AWS CloudTrail Lake&lt;/a&gt;: a new data lake that augments many of the capabilities, services, sources, analysis, and transformation that CloudTrail Lake provides for security management.  While researching this service, which is gaining adoption, I stumbled upon the roundup below, which provides a good comparison between the two services. In this post, I’d like to dive into the Amazon Security Lake capabilities, explain why this is an excellent new service for powering up security engineering in AWS-based operations, and wrap up with a useful example of how to get started. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqhqjeqsby58sijixm0zj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqhqjeqsby58sijixm0zj.png" alt="Image description" width="483" height="512"&gt;&lt;/a&gt;&lt;br&gt;
Source: &lt;a href="https://isaaczapata.notion.site/Data-Lake-Dilemma-Amazon-Security-Lake-vs-AWS-CloudTrail-Lake-54ce57e4045b4de5adedc3e3696eead7"&gt;https://isaaczapata.notion.site/Data-Lake-Dilemma-Amazon-Security-Lake-vs-AWS-CloudTrail-Lake-54ce57e4045b4de5adedc3e3696eead7&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Why Do We Need Another Data Lake?
&lt;/h2&gt;

&lt;p&gt;If we look at the current AWS service catalog, there are quite a number of data sources we leverage on a day-to-day basis to power our cloud operations––S3, CloudTrail, Route53, VPC, AWS Lambda, Security Hub––as well as third-party tooling and services. All of these data sources rely on different, proprietary formats and fields. Normalizing this data makes it possible to provide additional capabilities on top, such as dashboarding and automation, which are becoming increasingly important for security management and visibility.&lt;/p&gt;

&lt;p&gt;This is something we learned early on when building our own &lt;a href="https://jit.io"&gt;DevSecOps platform&lt;/a&gt;, which ingests data from multiple tools and then visualizes the output in a unified dashboard.  Every vendor and tool has its own syntax and proprietary data format.  When looking to apply product security in a uniform way, one of the first challenges we encountered was how to normalize and align the data from several best-of-breed tools into a single schema, source, and platform.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhgceekdo6wrhegyy1au8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhgceekdo6wrhegyy1au8.png" alt="Image description" width="512" height="286"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Our cloud operations today are facing the same challenge.  The question is - how do we do this at scale?  &lt;/p&gt;

&lt;p&gt;This is exactly what the security data lake comes to solve.&lt;/p&gt;

&lt;p&gt;Amazon Security Lake provides a unification service that knows how to ingest logs and data from myriad sources––native AWS services, integrated SaaS products, internal homegrown custom sources, or even on-prem systems. It takes these data sources’ output in the unified &lt;a href="https://docs.aws.amazon.com/securityhub/latest/userguide/securityhub-findings-format.html"&gt;ASFF&lt;/a&gt; (AWS Security Finding Format), transforms it into parquet using the &lt;a href="https://github.com/ocsf"&gt;OCSF&lt;/a&gt; schema framework’s format, which is the backbone of Amazon Security Lake, and stores it in S3.&lt;/p&gt;

&lt;p&gt;AWS is betting heavily on OCSF, an open source framework launched by Splunk, built upon Symantec’s ICD Schema, and one that AWS contributes to significantly. OCSF provides a vendor-agnostic, unified schema for security data management.  The idea is for the OCSF format to provide the framework for security data management that organizations require today.  &lt;/p&gt;
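&lt;p&gt;As a rough illustration of what such normalization buys you, here is a hedged Python sketch that maps two hypothetical vendor payloads onto a handful of shared, OCSF-style field names. The real OCSF schema defines far richer event classes; every field name and mapping below is an assumption for illustration only.&lt;/p&gt;

```python
# Illustrative sketch of schema normalization: two hypothetical vendor
# payloads mapped onto a few shared, OCSF-style field names. The real OCSF
# schema defines full event classes; these field names are assumptions.

def normalize_vendor_a(event):
    # Vendor A uses short keys and lowercase severities (hypothetical format).
    return {
        "severity": event["sev"].upper(),
        "message": event["msg"],
        "resource_uid": event["asset_id"],
    }

def normalize_vendor_b(event):
    # Vendor B uses verbose keys and uppercase severities (hypothetical format).
    return {
        "severity": event["priority"],
        "message": event["description"],
        "resource_uid": event["arn"],
    }

raw_a = {"sev": "high", "msg": "Port 22 open to 0.0.0.0/0", "asset_id": "i-123"}
raw_b = {"priority": "HIGH", "description": "Root login enabled", "arn": "i-456"}

# Both events now share one schema and can be stored and queried together.
unified = [normalize_vendor_a(raw_a), normalize_vendor_b(raw_b)]
```

&lt;p&gt;Multiply this by dozens of producers and you get the core value proposition: one mapping per vendor at ingestion time, instead of per-vendor logic in every dashboard and query downstream.&lt;/p&gt;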
&lt;h2&gt;
  
  
  Getting Started: Security Data Lake in Action
&lt;/h2&gt;

&lt;p&gt;Once the data is normalized and unified into the OCSF schema––which can be achieved by leveraging an ETL service like &lt;a href="https://aws.amazon.com/glue/"&gt;Glue&lt;/a&gt;––it is partitioned and stored in parquet format in S3, and any number of AWS services can be leveraged for additional data enrichment. These include Athena for querying the data, OpenSearch for search and visualization capabilities, and even tools like SageMaker for machine learning to detect patterns and anomalies.  &lt;/p&gt;
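&lt;p&gt;For example, once findings land in the lake you could query them from Athena. The sketch below only builds an example SQL string; the database, table, and column names are assumptions for illustration, so check your own Security Lake table names before running it via the Athena console or API.&lt;/p&gt;

```python
import textwrap

# Build an example Athena query over a Security Lake table. The database,
# table, and column names below are illustrative assumptions; your actual
# Security Lake table names will differ.
def build_findings_query(database, table, min_severity_id=4, limit=10):
    return textwrap.dedent(f"""\
        SELECT time, severity_id, metadata
        FROM {database}.{table}
        WHERE severity_id >= {min_severity_id}
        ORDER BY time DESC
        LIMIT {limit}
    """)

query = build_findings_query("amazon_security_lake_glue_db", "sh_findings")
# Pass `query` to Athena (console, or StartQueryExecution via boto3).
```

&lt;p&gt;Because the data is already in one schema, the same query shape works regardless of which vendor originally produced the finding.&lt;/p&gt;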

&lt;p&gt;You can even bring your own analytics and BI tools for deeper analysis of the data.  Because the ingested security data is stored in a flexible, column-based format, queries are economical and bypass the need to load the entire dataset in-memory, making it possible to connect analytics and BI tools as subscribers on top of the lake. (A caveat: the service itself is free, but you will pay on a consumption basis for all the rest of the AWS tooling––S3, Glue, Athena, SageMaker, ...).&lt;/p&gt;

&lt;p&gt;Another important benefit is compliance monitoring and reporting on a global scale.  This data lake makes it possible for organizations with many engineering groups and regions to apply the service globally. Engineering organizations with many accounts and regions do not have to configure it separately in every account; they can do it a single time by &lt;a href="https://docs.aws.amazon.com/security-lake/latest/userguide/manage-regions.html#add-rollup-region"&gt;creating a rollup region&lt;/a&gt;. This means you can roll up all of your global organizational data into a single ingestion feed into your security data lake. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6sjlw3s81dh01zuwtoht.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6sjlw3s81dh01zuwtoht.png" alt="Image description" width="512" height="123"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;What is unique is that once the data is partitioned and stored in this format, it becomes easily queryable and re-usable for many data enrichment purposes. The Security Lake essentially makes it possible to centralize security data at scale both on a source level and infrastructure level––from your own cloud workloads and data sources, custom and on-prem resources, SaaS providers, as well as multiple regions and accounts.  &lt;/p&gt;

&lt;p&gt;As a strategic new service for AWS, it launched with 50+ out-of-the-box integrations and services from many security vendors, from Cisco to Palo Alto Networks, CrowdStrike, and others, to help support its adoption and applicability to real engineering stacks.&lt;/p&gt;
&lt;h2&gt;
  
  
  A DevSecOps Application of the Security Data Lake
&lt;/h2&gt;

&lt;p&gt;In order to understand how you can truly harness the power of the AWS Security Lake, we’d like to walk through a short example that captures just the tip of the iceberg of what this security lake makes possible.&lt;/p&gt;

&lt;p&gt;In this example, we’ll demonstrate how to use the Security Data Lake with one of the most popular security tools for secret detection: &lt;a href="https://github.com/gitleaks/gitleaks"&gt;Gitleaks&lt;/a&gt;.  We will use GitHub Actions to add Gitleaks to our CI/CD pipeline to detect secrets.  &lt;/p&gt;

&lt;p&gt;Once our CI/CD runs, it sends the data to &lt;a href="https://docs.aws.amazon.com/securityhub/latest/userguide/what-is-securityhub.html"&gt;Security Hub&lt;/a&gt;, which is also auto-configured to send data to our security lake.  The findings are stored in an S3 bucket, and the &lt;a href="https://aws.amazon.com/glue/"&gt;Glue ETL service&lt;/a&gt; is leveraged to transform the ingested ASFF data into the OCSF schema.  A &lt;a href="https://docs.aws.amazon.com/glue/latest/dg/add-crawler.html"&gt;Glue crawler&lt;/a&gt; monitors the S3 bucket, and once the data is transformed, it is registered in the Glue Catalog, which holds the database schema. This data is now queryable via Athena to extract important information, such as secrets detected in certain workloads.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvq9ccr1o6uglzlei7t5r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvq9ccr1o6uglzlei7t5r.png" alt="Image description" width="512" height="284"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  The Repo
&lt;/h3&gt;

&lt;p&gt;This repo consists of a simple Gitleaks example, including planted secrets to detect, to demo how the scan works and sends the data to Security Hub.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm67i6lna3jdpalu3cyte.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm67i6lna3jdpalu3cyte.png" alt="Image description" width="512" height="240"&gt;&lt;/a&gt;&lt;br&gt;
Link: &lt;a href="https://github.com/security-lake-demo/gitleaks-to-security-hub/tree/main"&gt;https://github.com/security-lake-demo/gitleaks-to-security-hub/tree/main&lt;/a&gt;&lt;/p&gt;
&lt;h4&gt;
  
  
  Configuring Gitleaks
&lt;/h4&gt;

&lt;p&gt;Next, we configure Gitleaks to send the detected secrets to AWS Security Hub:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;Gitleaks&lt;/span&gt; &lt;span class="n"&gt;Scan&lt;/span&gt;

&lt;span class="n"&gt;on&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
  &lt;span class="n"&gt;push&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="n"&gt;branches&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
      &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="n"&gt;main&lt;/span&gt;
&lt;span class="n"&gt;permissions&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
  &lt;span class="nb"&gt;id&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;token&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;write&lt;/span&gt;   &lt;span class="c1"&gt;# This is required for requesting the JWT
&lt;/span&gt;  &lt;span class="n"&gt;contents&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;read&lt;/span&gt; 
&lt;span class="n"&gt;jobs&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
  &lt;span class="n"&gt;gitleaks_scan&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="n"&gt;runs&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;on&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;ubuntu&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;latest&lt;/span&gt;
    &lt;span class="n"&gt;env&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
      &lt;span class="n"&gt;AWS_ACCESS_KEY&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="err"&gt;$&lt;/span&gt;&lt;span class="p"&gt;{{&lt;/span&gt; &lt;span class="n"&gt;secrets&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;AWS_ACCESS_KEY&lt;/span&gt; &lt;span class="p"&gt;}}&lt;/span&gt;
      &lt;span class="n"&gt;AWS_SECRET_ACCESS_KEY&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="err"&gt;$&lt;/span&gt;&lt;span class="p"&gt;{{&lt;/span&gt; &lt;span class="n"&gt;secrets&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;AWS_SECRET_ACCESS_KEY&lt;/span&gt; &lt;span class="p"&gt;}}&lt;/span&gt;      
    &lt;span class="n"&gt;steps&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;Checkout&lt;/span&gt; &lt;span class="n"&gt;code&lt;/span&gt;
      &lt;span class="n"&gt;uses&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;actions&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;checkout&lt;/span&gt;&lt;span class="nd"&gt;@v3&lt;/span&gt;

&lt;span class="c1"&gt;#     - name: configure aws credentials
#       uses: aws-actions/configure-aws-credentials@v2.0.0
#       with:
#         role-to-assume: arn:aws:iam::950579715744:role/security-lake-demo-github-action
#         role-session-name: GitHub_to_AWS_via_FederatedOIDC
#         aws-region: "us-east-1"
#       # Hello from AWS: WhoAmI
&lt;/span&gt;
&lt;span class="c1"&gt;#     - name: Sts GetCallerIdentity
#       run: |
#         aws sts get-caller-identity
&lt;/span&gt;
    &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;Install&lt;/span&gt; &lt;span class="n"&gt;Gitleaks&lt;/span&gt;
      &lt;span class="n"&gt;run&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt;
        &lt;span class="n"&gt;wget&lt;/span&gt; &lt;span class="n"&gt;https&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="o"&gt;//&lt;/span&gt;&lt;span class="n"&gt;github&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;com&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;gitleaks&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;gitleaks&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;releases&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;download&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;v8&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mf"&gt;17.0&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;gitleaks_8&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mf"&gt;17.0&lt;/span&gt;&lt;span class="n"&gt;_linux_x64&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;tar&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;gz&lt;/span&gt;
        &lt;span class="n"&gt;tar&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;xzvf&lt;/span&gt; &lt;span class="n"&gt;gitleaks_8&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mf"&gt;17.0&lt;/span&gt;&lt;span class="n"&gt;_linux_x64&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;tar&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;gz&lt;/span&gt;
        &lt;span class="n"&gt;chmod&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt;&lt;span class="n"&gt;x&lt;/span&gt; &lt;span class="n"&gt;gitleaks&lt;/span&gt;

    &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;Run&lt;/span&gt; &lt;span class="n"&gt;Gitleaks&lt;/span&gt;
      &lt;span class="n"&gt;run&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt;
        &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;gitleaks&lt;/span&gt; &lt;span class="n"&gt;detect&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;v&lt;/span&gt; &lt;span class="o"&gt;--&lt;/span&gt;&lt;span class="n"&gt;report&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nb"&gt;format&lt;/span&gt; &lt;span class="n"&gt;json&lt;/span&gt; &lt;span class="o"&gt;--&lt;/span&gt;&lt;span class="n"&gt;redact&lt;/span&gt; &lt;span class="o"&gt;--&lt;/span&gt;&lt;span class="n"&gt;no&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;git&lt;/span&gt; &lt;span class="o"&gt;--&lt;/span&gt;&lt;span class="n"&gt;source&lt;/span&gt; &lt;span class="p"&gt;.&lt;/span&gt; &lt;span class="o"&gt;--&lt;/span&gt;&lt;span class="n"&gt;report&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;path&lt;/span&gt; &lt;span class="n"&gt;report&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;json&lt;/span&gt; &lt;span class="o"&gt;--&lt;/span&gt;&lt;span class="nb"&gt;exit&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;code&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;

    &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;Upload&lt;/span&gt; &lt;span class="n"&gt;to&lt;/span&gt; &lt;span class="n"&gt;security&lt;/span&gt; &lt;span class="n"&gt;Hub&lt;/span&gt;
      &lt;span class="n"&gt;run&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt;
        &lt;span class="n"&gt;pip&lt;/span&gt; &lt;span class="n"&gt;install&lt;/span&gt; &lt;span class="n"&gt;boto3&lt;/span&gt;&lt;span class="o"&gt;==&lt;/span&gt;&lt;span class="mf"&gt;1.27&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt; &lt;span class="n"&gt;pydantic&lt;/span&gt;&lt;span class="o"&gt;==&lt;/span&gt;&lt;span class="mf"&gt;2.0&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;
        &lt;span class="n"&gt;python&lt;/span&gt; &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;upload_data_to_security_hub&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;py&lt;/span&gt;



&lt;span class="n"&gt;Link&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;https&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="o"&gt;//&lt;/span&gt;&lt;span class="n"&gt;github&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;com&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;security&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;lake&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;demo&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;gitleaks&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;to&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;security&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;hub&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;blob&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;main&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;github&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;workflows&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;gitleaks&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;yml&lt;/span&gt; 



&lt;span class="n"&gt;The&lt;/span&gt; &lt;span class="n"&gt;Security&lt;/span&gt; &lt;span class="n"&gt;Hub&lt;/span&gt; &lt;span class="n"&gt;Schema&lt;/span&gt;
&lt;span class="n"&gt;The&lt;/span&gt; &lt;span class="n"&gt;Security&lt;/span&gt; &lt;span class="n"&gt;Hub&lt;/span&gt; &lt;span class="n"&gt;schema&lt;/span&gt; &lt;span class="ow"&gt;is&lt;/span&gt; &lt;span class="n"&gt;configurable&lt;/span&gt; &lt;span class="k"&gt;with&lt;/span&gt; &lt;span class="n"&gt;simple&lt;/span&gt; &lt;span class="n"&gt;Python&lt;/span&gt; &lt;span class="n"&gt;code&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;

&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;json&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;datetime&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;datetime&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;os&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;boto3&lt;/span&gt;

&lt;span class="c1"&gt;# AWS Credentials
# Make sure you've set these up in your environment
&lt;/span&gt;&lt;span class="n"&gt;region_name&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;us-east-1&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;  &lt;span class="c1"&gt;# set your AWS region
&lt;/span&gt;&lt;span class="n"&gt;account_id&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;950579715744&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;

&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;pydantic&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;BaseModel&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;typing&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Optional&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;List&lt;/span&gt;


&lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;AwsSecurityHubFinding&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;BaseModel&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="n"&gt;SchemaVersion&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;
    &lt;span class="n"&gt;Id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;
    &lt;span class="n"&gt;ProductArn&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;
    &lt;span class="n"&gt;GeneratorId&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;
    &lt;span class="n"&gt;AwsAccountId&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;
    &lt;span class="n"&gt;Types&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;List&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
    &lt;span class="n"&gt;FirstObservedAt&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;
    &lt;span class="n"&gt;LastObservedAt&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;
    &lt;span class="n"&gt;CreatedAt&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;
    &lt;span class="n"&gt;UpdatedAt&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;
    &lt;span class="n"&gt;Severity&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;dict&lt;/span&gt;
    &lt;span class="n"&gt;Title&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;
    &lt;span class="n"&gt;Description&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;
    &lt;span class="n"&gt;Resources&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;List&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nb"&gt;dict&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
    &lt;span class="n"&gt;SourceUrl&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;Optional&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
    &lt;span class="n"&gt;ProductFields&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;Optional&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nb"&gt;dict&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
    &lt;span class="n"&gt;UserDefinedFields&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;Optional&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nb"&gt;dict&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
    &lt;span class="n"&gt;Malware&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;Optional&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;List&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nb"&gt;dict&lt;/span&gt;&lt;span class="p"&gt;]]&lt;/span&gt;
    &lt;span class="n"&gt;Network&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;Optional&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nb"&gt;dict&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
    &lt;span class="n"&gt;Process&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;Optional&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nb"&gt;dict&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
    &lt;span class="n"&gt;ThreatIntelIndicators&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;Optional&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;List&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nb"&gt;dict&lt;/span&gt;&lt;span class="p"&gt;]]&lt;/span&gt;
    &lt;span class="n"&gt;RecordState&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;
    &lt;span class="n"&gt;RelatedFindings&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;Optional&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;List&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nb"&gt;dict&lt;/span&gt;&lt;span class="p"&gt;]]&lt;/span&gt;
    &lt;span class="n"&gt;Note&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;Optional&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nb"&gt;dict&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;

&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;reas_report&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;
    &lt;span class="k"&gt;with&lt;/span&gt; &lt;span class="nf"&gt;open&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;report.json&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;f&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;json&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;load&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;f&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;transform_gitleaks_output_to_security_hub&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="n"&gt;output&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[]&lt;/span&gt;
    &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;record&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="n"&gt;output&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;append&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
            &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;SchemaVersion&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;2018-10-08&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Id&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;record&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;RuleID&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;-&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="n"&gt;record&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;File&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
            &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;ProductArn&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;arn:aws:securityhub:&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;region_name&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;:&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;account_id&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;:product/&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;account_id&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;/default&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Types&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;record&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;RuleID&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]],&lt;/span&gt;
            &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;GeneratorId&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;gitleaks&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;AwsAccountId&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;account_id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;CreatedAt&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;datetime&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;utcnow&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;strftime&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;%Y-%m-%dT%H:%M:%S.%f&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)[:&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Z&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;UpdatedAt&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;datetime&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;utcnow&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;strftime&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;%Y-%m-%dT%H:%M:%S.%f&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)[:&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Z&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Severity&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Label&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;HIGH&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;
            &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Title&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;record&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Fingerprint&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
            &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Description&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;record&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;RuleID&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
            &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Resources&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[{&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Type&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Other&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Id&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;record&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;File&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]}]&lt;/span&gt;
        &lt;span class="p"&gt;})&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;output&lt;/span&gt;

&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;__name__&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;__main__&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="n"&gt;securityhub&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;boto3&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;client&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;securityhub&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                               &lt;span class="n"&gt;aws_access_key_id&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;environ&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;AWS_ACCESS_KEY&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
                               &lt;span class="n"&gt;aws_secret_access_key&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;environ&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;AWS_SECRET_ACCESS_KEY&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
                               &lt;span class="n"&gt;region_name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;region_name&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="c1"&gt;# Get the report
&lt;/span&gt;    &lt;span class="n"&gt;data&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;transform_gitleaks_output_to_security_hub&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;reas_report&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;

    &lt;span class="c1"&gt;# Then use the AWS SDK
&lt;/span&gt;    &lt;span class="n"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;securityhub&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;batch_import_findings&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="c1"&gt;# Findings=[finding.dict()]
&lt;/span&gt;        &lt;span class="n"&gt;Findings&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;data&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Link: &lt;a href="https://github.com/security-lake-demo/gitleaks-to-security-hub/blob/main/upload_data_to_security_hub.py"&gt;https://github.com/security-lake-demo/gitleaks-to-security-hub/blob/main/upload_data_to_security_hub.py&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Detected secrets in action:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fapjiwl13lsd3mp35stdn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fapjiwl13lsd3mp35stdn.png" alt="Image description" width="512" height="142"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can then navigate to Security Hub and see the findings there:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faiplmmxkygy1skeso6ni.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faiplmmxkygy1skeso6ni.png" alt="Image description" width="512" height="211"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdlqq2offfgilvgteqyks.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdlqq2offfgilvgteqyks.png" alt="Image description" width="512" height="245"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;While useful for visualizing findings and confirming that our configurations work as expected, the queries available in Security Hub are basic, and it’s not possible to enrich the data. We want to know whether a given secret, in the context of our own systems, is even interesting and needs to be prioritized for remediation.&lt;/p&gt;

&lt;p&gt;Let’s navigate to the Security Lake.&lt;/p&gt;

&lt;p&gt;In our Security Lake, it’s possible to see all of the configured sources:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F385knw0t7vnv9algfun6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F385knw0t7vnv9algfun6.png" alt="Image description" width="512" height="265"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once in our Security Lake we can search for the Athena service, and find our data source.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fankrku0yhv81sm22m6c2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fankrku0yhv81sm22m6c2.png" alt="Image description" width="512" height="266"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We locate our data source and can then see all of the tables we are able to query; each data source has its own table.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4ixobeq4qgcpxq2f5x6e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4ixobeq4qgcpxq2f5x6e.png" alt="Image description" width="512" height="218"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We then run our query to find high-severity secrets in a specific region.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmt8kl3wxa96s4nddmksv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmt8kl3wxa96s4nddmksv.png" alt="Image description" width="512" height="255"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And we can see the resulting output:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7dil5ibzkp3kbdhy547l.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7dil5ibzkp3kbdhy547l.png" alt="Image description" width="512" height="211"&gt;&lt;/a&gt;&lt;/p&gt;
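&lt;p&gt;The same kind of query can also be submitted programmatically through the Athena API. Below is a minimal sketch using boto3; the table name, column names, and S3 output location are assumptions that depend on how Security Lake is configured in your account, so substitute your own:&lt;/p&gt;

```python
# Sketch: querying Security Lake findings through Athena with boto3.
# The database/table/column names below are illustrative assumptions --
# use the ones Security Lake created in your account.

FIND_HIGH_SEVERITY_SECRETS = """
SELECT time, severity, region
FROM amazon_security_lake_table_us_east_1_sh_findings_1_0
WHERE severity = 'High'
  AND region = 'us-east-1'
LIMIT 50
"""

def run_athena_query(query, database, output_location):
    """Submit a query to Athena and return its execution id."""
    import boto3  # imported here so the module loads even without boto3 installed
    athena = boto3.client("athena")
    response = athena.start_query_execution(
        QueryString=query,
        QueryExecutionContext={"Database": database},
        ResultConfiguration={"OutputLocation": output_location},
    )
    return response["QueryExecutionId"]
```

&lt;p&gt;The returned execution id can then be polled with &lt;code&gt;get_query_execution&lt;/code&gt; until the query completes and results are available.&lt;/p&gt;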

&lt;p&gt;With the data sources now available in a single queryable location, cloud workload data alongside DevSecOps toolchain output, it’s possible to run complex queries on everything from IP reputation to severity. With the many findings our tooling outputs and alerts on today, we can narrow the noise down to the relevant context and prioritize remediation.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Security Data Lake is Exciting
&lt;/h2&gt;

&lt;p&gt;The Security Data Lake is set to tame the heterogeneity of security data formats.  By creating a single, unified standard, it becomes easier for developers to leverage, enrich, and build upon this data, and likewise to test and launch services on top of it.&lt;/p&gt;

&lt;p&gt;By providing a scalable solution for both the data sources and global resource coverage, engineering organizations can apply data enrichment across services, tooling, and regions, gaining greater context and correlation of security findings.  Together, this simplifies the compliance monitoring &amp;amp; reporting, programmability, and automation that make for more resilient and robust DevSecOps programs.&lt;/p&gt;

</description>
      <category>devsecops</category>
      <category>security</category>
      <category>aws</category>
    </item>
    <item>
      <title>What is Minimum Viable Security (MVS) and how does it improve the life of developers?</title>
      <dc:creator>David Melamed</dc:creator>
      <pubDate>Tue, 05 Jul 2022 13:33:27 +0000</pubDate>
      <link>https://dev.to/jit/what-is-minimum-viable-security-mvs-and-how-does-it-improve-the-life-of-developers-5cf6</link>
      <guid>https://dev.to/jit/what-is-minimum-viable-security-mvs-and-how-does-it-improve-the-life-of-developers-5cf6</guid>
      <description>&lt;p&gt;Last year, Google shook up the cybersecurity and software development community by launching the &lt;a href="https://mvsp.dev/"&gt;Minimum Viable Security Product&lt;/a&gt; (MSVP). &lt;/p&gt;

&lt;p&gt;Developed in collaboration with Salesforce, Slack, Okta and others, MVSP's goal is to create baseline security standardization for third-party software developers, ensuring companies in the supply chain can rely on a minimum level of security practices and standards when building their products.&lt;/p&gt;

&lt;p&gt;For fast-paced startups building software products in the B2B, B2C, or even B2D space (like ourselves at &lt;a href="https://www.jit.io/"&gt;Jit&lt;/a&gt;), MVSP is great, but it represents a significant development that needs consideration. &lt;/p&gt;

&lt;p&gt;What exactly is the concept of Minimum Viable Security? And how can developers successfully learn to follow and comply with these new high-level requirements?&lt;/p&gt;

&lt;p&gt;It’s no news that with the need to deliver software products quickly and continuously, the tech world has seen a shift in operations towards DevOps, DevSecOps, and ‘Shift Left Everything’ approaches. These practices were created to support short, iterative, and continuous cycles, and to avoid running quality and security tests as an afterthought that delays the release.&lt;/p&gt;

&lt;p&gt;But the reality isn’t running as smoothly as the theory. &lt;/p&gt;

&lt;p&gt;Due to multiple issues, many professionals in the industry are starting to think about taking Shift Left practices a step further with 'Born Left.' Today this happens mainly in software testing, which is entirely owned by the engineering team as a native function rather than by siloed QA or Ops teams. &lt;/p&gt;

&lt;p&gt;“Born Left” means that the engineering organization takes full ownership of the testing as part of the processes, known as Continuous Integration (CI), and operations through Continuous Deployment (CD). &lt;/p&gt;

&lt;p&gt;But what about security? &lt;/p&gt;

&lt;p&gt;The natural progression of this strategy puts security next in line, with Continuous Security (CS) becoming an emerging standard.&lt;/p&gt;

&lt;h2&gt;
  
  
  The problem with shift-left security (or anything else)
&lt;/h2&gt;

&lt;p&gt;The problem with making developers responsible for more and more areas of the software cycle is the potential for overwhelming the team with added tasks outside their domain expertise, frustrating them and delaying their main coding work. Quality, operations, security – the requirements quickly add up, and these domains often require expert knowledge. This is particularly true of security: the cybersecurity landscape is constantly evolving, with a range of new threats to consider and a proliferation of new shift-left security tools designed to combat them.&lt;/p&gt;

&lt;p&gt;Herein lies the problem: how can software based companies achieve &lt;a href="https://www.jit.io/blog/is-balancing-dev-owned-security-and-velocity-possible"&gt;dev-native security&lt;/a&gt; while maintaining development velocity? &lt;/p&gt;

&lt;p&gt;That's where the Minimum Viable Security (MVS) approach comes into play.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Minimum Viable Security and how does it relate to software development?
&lt;/h2&gt;

&lt;p&gt;We are all familiar with, and many of us follow, the concept of a Minimum Viable Product (MVP)—a product is initially built with the minimum set of features needed to test market fit and validate the business strategy without first expending all the resources. Then, the product continues to be optimized with an MVP mindset, adding minimum viable new features or capabilities every cycle. In fact, many of the most popular software products from brands we all know and respect are built that way. &lt;/p&gt;

&lt;p&gt;During software development, this is done iteratively, focusing on delivering a minimum baseline value with every single version.&lt;/p&gt;

&lt;p&gt;The MVP approach to the product is analogous to the MVS approach to security.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F72gsnmdqqecn6xzu1k27.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F72gsnmdqqecn6xzu1k27.jpeg" alt="The MVP concept: each release is a standalone viable one." width="800" height="270"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The MVP concept: each release is a standalone viable one.&lt;/p&gt;

&lt;p&gt;For developers to be willing to take over security responsibilities and fully own them, the process must work like any other aspect they are familiar with: starting small/lean, improving in a continuous and agile manner, automating as much as possible along the way, and running security 'as code.'&lt;/p&gt;

&lt;p&gt;Let’s take this a step further. &lt;/p&gt;

&lt;h2&gt;
  
  
  Minimum Viable Security in Detail
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Starting with a Minimum Viable Plan
&lt;/h3&gt;

&lt;p&gt;While engineering leaders have access to the various security checklists found all over the web, such as Google's newly developed MVSP mentioned above, these are hardly helpful if you want to come up with a minimum security plan that is operational. &lt;/p&gt;

&lt;p&gt;It is crucial to always keep in mind the distinction between a high level security checklist and a product-tailored actionable plan.&lt;/p&gt;

&lt;p&gt;A Minimum Viable Security plan is not a checklist but a detailed, actionable, step-by-step plan that includes all of the needed processes and tools and, most importantly, defines the minimum set of steps developers should take to make a product secure enough for a specific purpose - just in time. &lt;/p&gt;

&lt;p&gt;For instance, consider a security baseline that starts from a checklist but codifies the most up-to-date knowledge and strategies for dealing with specific threats to a company's tech stack. It needs to be as simple as possible while still covering the entire product boundary, being continuously updated, and following GitOps principles (with customizable code).&lt;/p&gt;

&lt;p&gt;You can’t expect developers to master such a task without properly equipping them. &lt;/p&gt;

&lt;p&gt;The first obstacle developers face is knowledge. They need to know the security threat landscape, in addition to the relevant tooling (and there is a lot of it). They must stay updated, codify the plan, and keep the codified plans evergreen. &lt;/p&gt;

&lt;p&gt;Unlike a checklist, an MVS plan should easily codify this knowledge and create the initial capability to continuously and automatically update the product’s security. &lt;/p&gt;

&lt;p&gt;As mentioned above, the plan must also constantly evolve, including additional plans at each stage to support the maturing product.  A serverless plan, for instance, isn’t a SOC2 compliance plan, and isn’t an &lt;a href="https://owasp.org/www-project-top-ten/"&gt;OWASP Top 10&lt;/a&gt; plan, and so forth. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9rlqv7snjzt32hk4kxv7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9rlqv7snjzt32hk4kxv7.png" alt="Jit.io- Minimum Viable Security Plans" width="800" height="322"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A codified plan is a necessity for developers who lack security domain expertise&lt;/p&gt;

&lt;p&gt;The images below, taken from the &lt;a href="https://jit.io/"&gt;Jit platform&lt;/a&gt;, show a couple of different MVS (Minimum Viable Security) plans available to activate automatically:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3an2kmkjms1604zwbr9y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3an2kmkjms1604zwbr9y.png" alt="Jit.io - Security actions within a codified plan" width="800" height="78"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Born left MVS? Security as Code
&lt;/h2&gt;

&lt;p&gt;Identifying and selecting the optimal tools (open source or not) required to implement the plan is a resource-intensive, tedious task that takes a lot of time and effort. Integrating OSS tools into the relevant stacks, testing them, and plugging them in to run automatically via CI/CD in a security-as-code format is another heavy task, and a key part of the born-left, dev-owned security mindset.&lt;/p&gt;

&lt;p&gt;On top of that, to be effective, tool selection and integration must be continuously updated due to the nature of cyber threats and security vulnerabilities. They should therefore be fully automated and properly orchestrated, both as part of the development environment and as part of the pipeline, following the concept of MVS as code.&lt;/p&gt;

&lt;p&gt;If you expect developers to initiate the above on their own, a common problem is overstretching an already busy team. &lt;/p&gt;

&lt;p&gt;Adding new responsibilities in fields where developers aren’t experts takes a toll, resulting in 'Shift Left fatigue’ (as seen in many discussions). That makes the case for a born-left approach even more compelling: it offers the relevant tooling to actually do the heavy lifting, including the automation and orchestration. &lt;/p&gt;

&lt;p&gt;To summarize, automating and implementing product security plans as code and following GitOps principles in familiar development environments significantly reduces the Shift Left burden. Image below: The inventory of security actions that are included in specific MVS plans; some are shared across plans:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff52fa8b8edqkcezsv1yb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff52fa8b8edqkcezsv1yb.png" alt="Screen: Jit.io MVS Github experience" width="800" height="582"&gt;&lt;/a&gt;&lt;/p&gt;
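&lt;p&gt;To make the 'security as code' idea concrete, here is a minimal sketch of a plan-as-code runner: the plan is just data, and each check is a command the pipeline executes. The plan structure and the tool names in it (gitleaks, bandit) are illustrative assumptions, not a prescribed toolchain:&lt;/p&gt;

```python
import subprocess

# Illustrative "security as code" sketch: a minimal plan is a list of
# checks, each one a command the CI pipeline runs. The tool names below
# (gitleaks, bandit) are examples only -- substitute your own toolchain.
SECURITY_PLAN = [
    {"name": "secret-scan", "cmd": ["gitleaks", "detect", "--no-banner"]},
    {"name": "sast", "cmd": ["bandit", "-r", "src/"]},
]

def run_plan(plan):
    """Run each check and collect pass/fail results for the pipeline."""
    results = {}
    for check in plan:
        try:
            proc = subprocess.run(check["cmd"], capture_output=True)
            results[check["name"]] = proc.returncode == 0
        except FileNotFoundError:
            # Tool not installed in this environment
            results[check["name"]] = None
    return results
```

&lt;p&gt;In a real pipeline this would run on every push, failing the build when a check fails, so the plan stays enforced rather than aspirational.&lt;/p&gt;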

&lt;h2&gt;
  
  
  Maintaining Velocity and Avoiding Developer Burnout
&lt;/h2&gt;

&lt;p&gt;To meet MVSP requirements while maintaining development velocity and not burning out your developers, adopting an MVS mindset and taking an automated approach to product development is essential.&lt;/p&gt;

&lt;p&gt;This includes automation of:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Continuously updated and constantly evolving MVS plans&lt;/li&gt;
&lt;li&gt;MVS plans-as-code, with security tests generated by multiple tools. &lt;/li&gt;
&lt;li&gt;Integration and orchestration of multiple security controls, in a unified and consolidated interface, as part of the dev environment and pipelines. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;There are many things to consider when it comes to MVS requirements. The tech industry has already united to formalize some guiding principles and define standardization practices that match the evolving threat landscape; the next step is the implementation as code. &lt;/p&gt;

&lt;p&gt;Feel free to get started here &amp;gt;&amp;gt;  &lt;a href="https://jit.io"&gt;www.jit.io&lt;/a&gt;&lt;/p&gt;

</description>
      <category>security</category>
      <category>appsec</category>
      <category>devsecops</category>
      <category>mvs</category>
    </item>
    <item>
      <title>Bootstrapping a Secure AWS as-Code Environment - Your MVS Checklist</title>
      <dc:creator>David Melamed</dc:creator>
      <pubDate>Tue, 22 Mar 2022 17:32:58 +0000</pubDate>
      <link>https://dev.to/jit/bootstrapping-a-secure-aws-as-code-environment-your-mvs-checklist-5bp2</link>
      <guid>https://dev.to/jit/bootstrapping-a-secure-aws-as-code-environment-your-mvs-checklist-5bp2</guid>
      <description>&lt;p&gt;Infrastructure as Code (IaC) has changed the way we manage our cloud operations, by making it infinitely easier and quicker to roll out infrastructure on demand––with a single config file.&lt;/p&gt;

&lt;p&gt;In this article, we’ll delve into both the benefits and security challenges introduced to the underlying stack that comes with adopting an AWS anything-as-code model. We’ll also introduce the &lt;a href="https://www.jit.io/blog/born-left-vs-shift-left-security-and-your-1st-security-developer-architect"&gt;minimum viable security&lt;/a&gt; (&lt;a href="https://www.jit.io/blog/born-left-vs-shift-left-security-and-your-1st-security-developer-architect"&gt;MVS&lt;/a&gt;) approach that delivers baseline security controls for any stack. &lt;/p&gt;

&lt;h2&gt;
  
  
  Embracing an Everything-as-Code Model
&lt;/h2&gt;

&lt;p&gt;In line with the principles of IaC, organizations are increasingly adopting as-code frameworks for different components of a tech stack, including security, policy, compliance, configuration, and operations. AWS supports various as-code frameworks, including its own CloudFormation, as well as Terraform and Pulumi, and has even recently rolled out its next-gen IaC in the form of the AWS CDK. By providing an API to provision and manage resources, it’s now possible to spin up a complex cloud architecture by defining simple, code-based templates. &lt;/p&gt;

&lt;p&gt;With environment-as-code pipelines, organizations can then manage and extend their deployment environment across multiple regions and accounts through a single workflow, leveraging the same code. &lt;/p&gt;
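&lt;p&gt;As a minimal illustration (a generic sketch, not any specific production setup), an as-code template is just structured data that can be generated, reviewed, and versioned like any other code, then deployed to any region or account:&lt;/p&gt;

```python
import json

# Sketch: a CloudFormation template is plain structured data, so it can
# be generated programmatically. The logical id and resource choice here
# are illustrative, not a recommended architecture.
def make_template(bucket_logical_id):
    """Generate a minimal CloudFormation template as a Python dict."""
    return {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Resources": {
            bucket_logical_id: {
                "Type": "AWS::S3::Bucket",
                "Properties": {"VersioningConfiguration": {"Status": "Enabled"}},
            }
        },
    }

# The serialized body is what gets handed to the deployment pipeline.
template_body = json.dumps(make_template("AppDataBucket"), indent=2)
```

&lt;p&gt;Higher-level tools such as the AWS CDK take this one step further, synthesizing templates like this from richer application code.&lt;/p&gt;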

&lt;h2&gt;
  
  
  Adopting Minimum Viable Security for Baseline Security Controls
&lt;/h2&gt;

&lt;p&gt;While IaC frameworks come with the benefits of automation across the entire stack for more rapid delivery and tighter controls, securing each environment comes with its own unique set of challenges. On top of this, with all of the noise and panic constantly generated around security and exploits, it is hard for organizations that are just starting up to understand the minimum critical controls and what should be out of scope. The end result is that emerging companies striving to launch the first version of their product have little understanding of the baseline security they actually need to implement to get ramped up. &lt;/p&gt;

&lt;p&gt;To solve this, the MVS approach offers a vendor-neutral security baseline that reduces the complexity and overhead of deploying infrastructure, specifically cloud-native environments. Similar to Agile methods, MVS focuses on a minimal shortlist of critical security controls that add initial security to the launched product and tackle the most common threats. This approach helps organizations establish a sufficient security posture while integrating seamlessly into the existing automation tooling and pipelines used for configuring today’s complex cloud-based environments.&lt;/p&gt;

&lt;p&gt;To demonstrate this in practice, we’ll show how it actually applies when securing AWS environments (AWS being the most popular and widely adopted cloud) through the automated MVS approach. &lt;/p&gt;

&lt;h2&gt;
  
  
  Best Practices for How to Secure AWS Environments as Code
&lt;/h2&gt;

&lt;p&gt;At &lt;a href="https://jit.io"&gt;Jit&lt;/a&gt;, we have identified a few layers on which to focus our security controls for AWS environments. Together they provide the required security baseline, and they can be expressed as code to automate the bootstrapping of your AWS environments without compromising velocity. &lt;/p&gt;

&lt;p&gt;These include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Account Structure &lt;/li&gt;
&lt;li&gt;Identity and Access Management &lt;/li&gt;
&lt;li&gt;User Creation and Secret Management&lt;/li&gt;
&lt;li&gt;Hierarchies, Governance, and Policies&lt;/li&gt;
&lt;li&gt;Access Controls&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Below we’ll dive into each individually and show how to automate them within your existing IaC and automated pipelines.&lt;/p&gt;

&lt;h2&gt;
  
  
  Build a Secure AWS Account Structure
&lt;/h2&gt;

&lt;p&gt;Many of the practices we list below are tried and true, and applied at Jit for our own security controls.  First off, we split our AWS accounts into three primary organizational units (OUs): users, a sandbox, and workloads. &lt;/p&gt;

&lt;p&gt;The users OU lets us host a dedicated account in which to set up all users. &lt;/p&gt;

&lt;p&gt;The sandbox unit is for developing or testing new code changes. This OU can also host accounts for experimenting with as-code templates and CI/CD pipelines. &lt;/p&gt;

&lt;p&gt;Workload units include staging/production environments and contain the various accounts that run external-facing services.&lt;/p&gt;

&lt;p&gt;As cloud workloads grow, DevOps teams are inclined to set up multiple accounts for rapid innovation and flexible controls. Using multiple AWS accounts helps DevOps teams achieve isolation and independence by providing natural boundaries for billing, security, and access to resources. &lt;/p&gt;

&lt;p&gt;While building AWS accounts, the practices below are recommended: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Use organizational units (OUs) to group accounts into logical, hierarchical structures. Organize accounts by similar and related functions rather than by the organization’s reporting hierarchy. Although AWS supports an OU depth of up to five levels, keep the structure as shallow as possible to avoid complexity.&lt;/li&gt;
&lt;li&gt;Maintain a management (master) account for administering all organizational units and the related billing, for cost control and ease of maintenance.&lt;/li&gt;
&lt;li&gt;Assign as few cloud resources, data, or workloads as possible to the organization’s management (master) account, since the organization’s service control policies (SCPs) do not apply to it.&lt;/li&gt;
&lt;li&gt;Isolate production and non-production workload environments from each other. AWS workloads are typically contained in accounts, where each account can hold more than one workload; production accounts should hold one or only a few closely related workloads. By separating workload environments, administrators can protect production from unauthorized access.&lt;/li&gt;
&lt;/ul&gt;
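&lt;p&gt;One way to keep that depth guideline honest is to model the OU layout as code and check it mechanically. A minimal Python sketch (the OU names below are illustrative, not a prescribed layout):&lt;/p&gt;

```python
# Toy model of an OU hierarchy, mirroring the users/sandbox/workloads
# split described above. AWS Organizations allows up to five levels of
# nesting; keeping the tree shallow avoids complexity.
OU_TREE = {
    "Root": {
        "Users": {},
        "Sandbox": {},
        "Workloads": {"Staging": {}, "Production": {}},
    }
}

def max_depth(tree):
    """Deepest nesting level of an OU tree (an empty dict is a leaf)."""
    if not tree:
        return 0
    return 1 + max(max_depth(children) for children in tree.values())

print(max_depth(OU_TREE))  # → 3, comfortably under the 5-level limit
```

&lt;p&gt;A check like this could run in CI against the same IaC that provisions the OUs.&lt;/p&gt;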

&lt;h2&gt;
  
  
  Follow Identity and Access Management Practices
&lt;/h2&gt;

&lt;p&gt;For managing access to and permissions for AWS resources, Identity and Access Management (IAM) offers a first line of defense by streamlining the creation of users, roles, and groups. When provisioning an AWS environment through automation, organizations should leverage existing modules to manage IAM users, roles, and permissions.&lt;/p&gt;

&lt;p&gt;Administering robust security through IAM typically relies on a set of common practices that we also apply internally at Jit:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Make sure IAM policies for a user, group, or role grant only the permissions needed to accomplish a given task––this approach is also dubbed “least privilege” and there is plenty of excellent material about it. Permissions should initially only contain the least number of privileges required; these can later be increased if necessary. &lt;/li&gt;
&lt;li&gt;Create separate roles for different tasks instead of attaching broad permissions to each IAM user. &lt;/li&gt;
&lt;li&gt;Use &lt;a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp_request.html"&gt;session tokens&lt;/a&gt; as temporary credentials for authorization. You should additionally configure a session token to have a short lifetime to prevent misuse in the event of a compromise. &lt;/li&gt;
&lt;li&gt;Do not use the root user’s access key for regular activities or any programmatic task, as it grants full access to all AWS services and resources. Rotate any existing root access key regularly - or, better yet, delete it entirely - to prevent misuse. &lt;/li&gt;
&lt;li&gt;If account users are allowed to select their own password, make sure there is a strong baseline password policy and a requirement to change it periodically. &lt;/li&gt;
&lt;li&gt;Implement multi-factor authentication (MFA) for additional security. MFA adds an extra layer of authentication on top of user credentials and will continue to protect a resource even if those credentials are compromised.&lt;/li&gt;
&lt;/ul&gt;
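&lt;p&gt;To make “least privilege” concrete, here is a minimal sketch of an IAM policy document granting a single action on a single bucket (the bucket name and helper function are hypothetical; the document format is standard IAM JSON):&lt;/p&gt;

```python
import json

def least_privilege_policy(bucket):
    """Build an IAM policy granting only s3:GetObject on one bucket.
    Start from the minimum and widen later only if a task requires it."""
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:GetObject"],
            "Resource": [f"arn:aws:s3:::{bucket}/*"],
        }],
    }

policy = least_privilege_policy("example-app-assets")
print(json.dumps(policy, indent=2))
```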

&lt;h2&gt;
  
  
  Automate User Creation with Encrypted Secrets
&lt;/h2&gt;

&lt;p&gt;To cut down on the risks associated with manual effort, organizations are strongly encouraged to embrace automation for user creation. This ensures that all stages of the process flow - account creation, configuration, and assignment to an OU - require minimal manual intervention.&lt;/p&gt;

&lt;p&gt;Automation also helps streamline the user experience by integrating with onboarding and offboarding workflows. The mechanism strikes a fine balance between agility and control by permitting automated configuration and validation of IAM policies across multiple environments (dev, staging, or production).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F28b1r0ykco2wqalirqil.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F28b1r0ykco2wqalirqil.png" alt="AWS IAM Roles" width="512" height="311"&gt;&lt;/a&gt;&lt;br&gt;
Figure 1: A typical user creation process flow (Source: Amazon)&lt;/p&gt;

&lt;p&gt;Apart from user creation, you should also automate identity federation and secret provisioning to ensure a comprehensive user creation cycle. A typical workflow resembles the process flow above, leveraging tools such as &lt;a href="https://keybase.io/"&gt;Keybase&lt;/a&gt; for the automatic encryption of credentials and keypairs, supported by IaC frameworks like Terraform.&lt;/p&gt;

&lt;h2&gt;
  
  
  Create Hierarchical Structure &amp;amp; Policies with AWS Organizations
&lt;/h2&gt;

&lt;p&gt;AWS Organizations helps you implement granular controls to structure accounts in a manageable way. The service brings flexibility and hierarchical structure to AWS resources through organizational units (OUs). For any AWS organization, it is recommended to start with a basic OU structure containing core OUs such as infrastructure and security.&lt;/p&gt;

&lt;p&gt;You should also create a policy inheritance framework that allows maximum access to OUs at the foundation level and then gradually limits access with each layer of the OU hierarchy. This layering of policies can continue down to the account and instance levels. &lt;/p&gt;

&lt;p&gt;Organizations should also apply service control policies (SCPs) on the OU rather than individual accounts. SCPs offer a multi-layered approach to access management, as they offer a redundant security check that takes precedence over IAM policies. &lt;/p&gt;
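&lt;p&gt;As an illustration, here is a sketch of an SCP that denies activity outside approved regions when attached at the OU level (the region list is an assumption for illustration; a production SCP would typically also exempt global services such as IAM):&lt;/p&gt;

```python
# Region guardrail SCP: attached to an OU, it bounds every account
# underneath, regardless of the IAM policies inside those accounts.
REGION_GUARDRAIL_SCP = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyOutsideApprovedRegions",
        "Effect": "Deny",
        "Action": "*",
        "Resource": "*",
        "Condition": {
            "StringNotEquals": {
                "aws:RequestedRegion": ["us-east-1", "eu-west-1"]
            }
        },
    }],
}
```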

&lt;p&gt;As a best practice, it is recommended to use trusted access for authorizing services across your organization. This mechanism helps to grant permissions to only designated services without affecting the overall permissions of users or roles. As workloads grow, you can include other organizational units based on common themes, such as: policy staging, suspended accounts, individual users, deployments, and transitional accounts.&lt;/p&gt;

&lt;h2&gt;
  
  
  Secure Remote Access to the AWS Console
&lt;/h2&gt;

&lt;p&gt;Securing remote access to the AWS console is one of the easiest yet most crucial parts of maintaining security in an AWS as-code environment. A minimal approach here leverages the AWS Management Console and AWS Directory Service to enforce IAM policies on account switching. Once logged in, individual users can switch accounts from within the console based on their role (read-only or read-write access). &lt;/p&gt;

&lt;p&gt;Additionally, you can enforce MFA through a trust policy between the user’s account and the target account, ensuring that only users with MFA enabled can access the target account.&lt;/p&gt;
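&lt;p&gt;A minimal sketch of such a trust policy, using the standard aws:MultiFactorAuthPresent condition key (the account ID and helper function are placeholders for illustration):&lt;/p&gt;

```python
def mfa_trust_policy(user_account_id):
    """Trust policy for a role in the target account: only principals
    from the user's account who signed in with MFA may assume it."""
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"AWS": f"arn:aws:iam::{user_account_id}:root"},
            "Action": "sts:AssumeRole",
            "Condition": {"Bool": {"aws:MultiFactorAuthPresent": "true"}},
        }],
    }

trust = mfa_trust_policy("111122223333")  # example account ID from AWS docs
```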

&lt;h2&gt;
  
  
  Enforce Secure Access of AWS APIs
&lt;/h2&gt;

&lt;p&gt;Since the majority of API endpoints are public-facing, securing them is crucial. It is always recommended to limit unauthenticated API routes by enforcing a robust authentication and authorization mechanism for API access. Apart from leveraging the various AWS built-in mechanisms that safeguard both public and private API endpoints, you should adopt minimal security controls such as requiring MFA for the AWS CLI or using AWS Vault to secure keypairs.&lt;/p&gt;

&lt;p&gt;Apart from this, there are several approaches to achieve controlled access to APIs. These include: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;IAM-based role and policy permissions&lt;/li&gt;
&lt;li&gt;Lambda authorizers&lt;/li&gt;
&lt;li&gt;Client-side SSL certificates&lt;/li&gt;
&lt;li&gt;Robust web application firewall (WAF) rules&lt;/li&gt;
&lt;li&gt;Throttling targets&lt;/li&gt;
&lt;li&gt;JWT authorizers&lt;/li&gt;
&lt;li&gt;Creating resource-based policies to allow access from specific IPs or VPCs&lt;/li&gt;
&lt;li&gt;API keys&lt;/li&gt;
&lt;/ul&gt;
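&lt;p&gt;As one example from the list, a resource-based policy restricting an API to specific source IPs can be sketched as follows (the ARN and CIDR range are placeholders; the allow-then-deny shape follows the standard API Gateway resource policy format):&lt;/p&gt;

```python
def ip_allowlist_policy(api_arn, allowed_cidrs):
    """API Gateway resource policy: permit invocation only from the
    given CIDR ranges by denying every other source IP."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Principal": "*",
                "Action": "execute-api:Invoke",
                "Resource": api_arn,
            },
            {
                "Effect": "Deny",
                "Principal": "*",
                "Action": "execute-api:Invoke",
                "Resource": api_arn,
                "Condition": {"NotIpAddress": {"aws:SourceIp": allowed_cidrs}},
            },
        ],
    }

api_policy = ip_allowlist_policy(
    "arn:aws:execute-api:us-east-1:111122223333:abcdef123/*",
    ["203.0.113.0/24"],  # documentation CIDR range
)
```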

&lt;h2&gt;
  
  
  AWS Security - the TL;DR
&lt;/h2&gt;

&lt;p&gt;The as-code model for various computing components allows you to spin up deployment environments automatically, consistently, and predictably using manifest files. While the everything-as-code approach simplifies the deployment and management of resources on AWS, security can’t be ignored as part of this process and should also benefit from the guardrails automation can provide. &lt;/p&gt;

&lt;p&gt;This article delved into the MVS approach and how it can be applied as code. In the next article of this series, we will give concrete code examples of how to bootstrap a secure AWS environment using Terraform in practice.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>security</category>
      <category>cloud</category>
      <category>devops</category>
    </item>
    <item>
      <title>5 Open Source Security Tools All Developers Should Know About</title>
      <dc:creator>David Melamed</dc:creator>
      <pubDate>Wed, 26 Jan 2022 08:33:27 +0000</pubDate>
      <link>https://dev.to/jit/5-open-source-security-tools-all-developers-should-know-about-4bhe</link>
      <guid>https://dev.to/jit/5-open-source-security-tools-all-developers-should-know-about-4bhe</guid>
      <description>&lt;p&gt;With product security becoming an increasingly important aspect of software development, “shift left” is gaining wide acceptance as a best practice to ensure security is baked into development early. More and more traditional (read: incumbent) security companies are releasing shift-left products and capabilities, and the practice is becoming almost de facto for engineering teams. &lt;/p&gt;

&lt;p&gt;However, the industry has begun to realize that simply “shifting left” is hardly enough for a continuous delivery world. High-velocity, progressive development teams are embracing a new “born left” security approach, where security aspects - like more and more product-related aspects - are addressed starting from the first line of code. This means product security isn’t just delivered by the development team, but rather owned by it. &lt;/p&gt;

&lt;p&gt;Understanding this shift comes with the realization that already-burdened developers face additional responsibilities continuously dropped in their lap. This has led the industry to hunt for solutions and tools that help developers manage this growing workload, including security, while maintaining velocity. &lt;/p&gt;

&lt;p&gt;We acknowledge that current open source “shift left” tooling doesn’t eliminate the overhead placed on developers - due to the noise these tools create, and the burden of learning both security in general and the ropes of each individual tool. That is on our shoulders to solve. &lt;/p&gt;

&lt;p&gt;Still, not all open source tools are created equal, and there are quite a few open source security tools that are not only developer-friendly but also provide much-needed security controls early in the development cycle. That’s why we’ve compiled a list of 5 security tools that we believe all developers should know about, and consider adopting if they do not have such a control in place. This post will first cover what makes a tool developer-friendly. We will then introduce one tool per security domain, take a quick dive into how each works, and explain why you should consider adopting it into your toolchain.&lt;/p&gt;

&lt;h1&gt;
  
  
  What makes a tool developer-friendly?
&lt;/h1&gt;

&lt;p&gt;Let’s start by defining what makes tools ‘dev-friendly’ in the first place. &lt;/p&gt;

&lt;p&gt;To me, a dev-friendly tool sets out to make developers’ (and dev leaders’) lives easier by either simplifying tasks or speeding up processes.&lt;/p&gt;

&lt;h2&gt;
  
  
  Open source
&lt;/h2&gt;

&lt;p&gt;One of the greatest benefits of open source tools is that they are free to use (of course, check the license first!), so there is no need for budget approval, and you can try a tool out locally without having to commit to it (though you should verify it before using it on any company resources - more on that later). Instead of lengthy selection processes, you can simply try it out and see how you like it. In addition - and this is particularly critical for security tools - open source, as the name implies, gives you access to the entire codebase, so there are no surprises about what actions the tool performs when you run it in your environment.&lt;/p&gt;

&lt;h2&gt;
  
  
  Runs locally first...
&lt;/h2&gt;

&lt;p&gt;Running code locally from your terminal allows software developers to launch and test code with one simple command. The ability to run a tool locally ensures that you can get immediate feedback and easily tweak the configuration. When launched from a container, you don't even have to bother with possible environment issues related to compilation.&lt;/p&gt;

&lt;h2&gt;
  
  
  ... and then in the CI/CD pipeline
&lt;/h2&gt;

&lt;p&gt;Tools that can be integrated into the CI/CD pipeline have higher value. Once I have used a tool locally and found it useful, my next move would likely be to run it continuously as part of my development lifecycle - and not only on my local machine, consuming local resources. Of course, once a tool and process is part of the pipeline, the benefits extend across the entire dev team and codebase. So starting locally is an advantage, and being able to easily integrate the new tool into existing environments and processes is an advantage as well.&lt;/p&gt;

&lt;h2&gt;
  
  
  Part of the developer work environment
&lt;/h2&gt;

&lt;p&gt;Developers should not be wasting time switching between development tools and security tools. All the tools on this list either run in the CI/CD pipeline (e.g. Github Actions) or as a plugin in the IDE. Context switching has been shown to adversely affect flow and productivity: the less context switching, the greater the development velocity - and happiness.&lt;/p&gt;

&lt;h2&gt;
  
  
  Great Documentation
&lt;/h2&gt;

&lt;p&gt;I believe this requires little explanation... if a user doesn’t know how to use your tool in practice, then you’ve gained very little by releasing it.&lt;/p&gt;

&lt;p&gt;Readily available documentation made for dev professionals can make or break a smooth user experience. With great “how-to” documentation, ramp-up time is much shorter.&lt;/p&gt;

&lt;p&gt;The better the documentation, the smoother the learning curve and the easier the troubleshooting, making the tool significantly easier to adopt.&lt;/p&gt;

&lt;h2&gt;
  
  
  Configurable output formats
&lt;/h2&gt;

&lt;p&gt;If a tool can emit its output in multiple formats, another tool can ingest that output through an API or other form of integration, allowing you to manipulate and analyze the results elsewhere. If results are only readable by humans, what you can do with them is limited and requires human effort - i.e., time that you simply don’t have. &lt;/p&gt;

&lt;p&gt;So without further ado... &lt;/p&gt;

&lt;h1&gt;
  
  
  5 Open Source Security Tools We Love - And You Should Too
&lt;/h1&gt;

&lt;p&gt;Based on the 5 criteria above, I’ve collected five security tools that are dev-friendly and that I’ve enjoyed using as a security engineer.&lt;/p&gt;

&lt;p&gt;The list aims to cover the various domains of code analysis tooling that should be part of the minimal security requirements for development processes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Static application security testing (SAST)&lt;/li&gt;
&lt;li&gt;Dynamic application security testing (DAST)&lt;/li&gt;
&lt;li&gt;Hard-coded Secrets detection&lt;/li&gt;
&lt;li&gt;Infrastructure as Code analysis (IaC)&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Pycharm Python Security Scanner
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://pycharm-security.readthedocs.io/en/latest/"&gt;Pycharm Python Security Scanner&lt;/a&gt; is a security scanner for Python code wrapped as a Pycharm plugin, checking for vulnerabilities while also suggesting fixes. Alongside acting as a comprehensive security scanner, it also offers some additional extensions that can run dependency check analyses as well, which are quite useful.&lt;/p&gt;

&lt;p&gt;What makes it unique is that beyond being a plugin, it is also available as a CI/CD workflow for GitHub Actions in the &lt;a href="https://plugins.jetbrains.com/plugin/13609-python-security"&gt;Github Marketplace&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Semgrep
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://semgrep.dev/"&gt;Semgrep&lt;/a&gt; is a highly-configurable SAST tool that looks for recurring patterns in the syntax tree. It can either run locally using Docker or be integrated into the CI/CD pipeline with Github Actions.&lt;/p&gt;

&lt;p&gt;Results are delivered as JSON files, allowing you to pipe them into other tools, like jq, in order to manipulate them.&lt;/p&gt;
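&lt;p&gt;To get a feel for what “patterns in the syntax tree” means, here is a toy Python sketch of the kind of check a single Semgrep rule expresses declaratively - finding calls to eval (an illustration of the idea, not how Semgrep is implemented):&lt;/p&gt;

```python
import ast

def find_eval_calls(source):
    """Return line numbers of eval() calls found in the syntax tree."""
    return [
        node.lineno
        for node in ast.walk(ast.parse(source))
        if isinstance(node, ast.Call)
        and isinstance(node.func, ast.Name)
        and node.func.id == "eval"
    ]

sample = "x = eval(user_input)\ny = len(user_input)\n"
print(find_eval_calls(sample))  # → [1]
```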

&lt;h3&gt;
  
  
  Gitleaks
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://github.com/zricethezav/gitleaks"&gt;Gitleaks&lt;/a&gt; is a great project used to quickly detect hard-coded secrets based on a configuration file containing hundreds of built-in regex expressions tailored to find API keys of popular SaaS. It can run locally using Docker and or be integrated into the CI/CD pipeline with Github Actions. Results are delivered in various formats. &lt;/p&gt;

&lt;p&gt;The rules can be easily extended to match your internal patterns and homegrown tools as well.&lt;/p&gt;
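&lt;p&gt;The core idea is easy to sketch: scan text against known secret patterns. Below is a toy single-rule version using the well-known AWS access key ID pattern (Gitleaks itself ships hundreds of such rules, plus entropy checks):&lt;/p&gt;

```python
import re

# One illustrative rule: AWS access key IDs start with "AKIA" followed
# by 16 uppercase alphanumeric characters.
AWS_ACCESS_KEY_RE = re.compile(r"\bAKIA[0-9A-Z]{16}\b")

def find_secrets(text):
    """Return substrings that look like hard-coded AWS access key IDs."""
    return AWS_ACCESS_KEY_RE.findall(text)

leaked = 'aws_access_key_id = "AKIAIOSFODNN7EXAMPLE"'  # sample key from AWS docs
print(find_secrets(leaked))  # → ['AKIAIOSFODNN7EXAMPLE']
```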

&lt;h3&gt;
  
  
  ZAP
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://owasp.org/"&gt;OWASP&lt;/a&gt;'s &lt;a href="https://www.zaproxy.org/blog/2020-05-15-dynamic-application-security-testing-with-zap-and-github-actions/"&gt;Zed Attack Proxy (ZAP)&lt;/a&gt; is another open source tool, used for dynamic scanning (DAST) built by the OWASP team (the same folks who gave us the Top 10 Security Vulnerabilities). It can run locally using Docker and provides a Github workflow to run in the CI/CD pipeline.&lt;/p&gt;

&lt;p&gt;The common output for this tool is an HTML report, but you can also receive the output as JSON with an add-on.&lt;/p&gt;

&lt;h3&gt;
  
  
  KICS
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://checkmarx.com/product/opensource/kics-open-source-infrastructure-as-code-project/"&gt;KICS&lt;/a&gt; is used to perform code static analysis of infrastructure, and includes about 1,400 rules supporting various platforms like Terraform, CloudFormation, Ansible or Helm Charts. It can run locally using Docker and can be integrated into the CI/CD pipeline with Github Actions. &lt;/p&gt;

&lt;h2&gt;
  
  
  High Velocity Development and Security
&lt;/h2&gt;

&lt;p&gt;Development teams are being tasked with end-to-end responsibility and ownership of their products - whether it’s production readiness, performance, or security - all while under pressure to ship code to production with high velocity. &lt;/p&gt;

&lt;p&gt;This growing challenge is what set us out on a mission at &lt;a href="https://www.jit.io/"&gt;Jit&lt;/a&gt;: to ease this growing burden on developers by making the ownership of product security much simpler - from planning, through open source orchestration, and more - based on an &lt;a href="https://www.jit.io/post/born-left-vs-shift-left-security-and-your-1st-security-developer-architect"&gt;MVS approach&lt;/a&gt; (Minimum Viable Security). Basically, this manifesto says to start small and iterate constantly: you don’t need to build a fortress on day one, but you should have baseline security controls and grow from there.&lt;/p&gt;

&lt;p&gt;As I mentioned above, while dev-friendly security tools offer great benefits, the growing responsibility assigned to developers requires a shift in today’s approach - one that requires a minimum viable mindset and automated orchestration, so that devs will be able to own product security without compromising velocity.&lt;/p&gt;

</description>
      <category>cloudnative</category>
      <category>devops</category>
      <category>security</category>
      <category>devsecops</category>
    </item>
  </channel>
</rss>
