<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Daniel Tobin</title>
    <description>The latest articles on DEV Community by Daniel Tobin (@dant24).</description>
    <link>https://dev.to/dant24</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F368537%2F699607c9-405e-4581-9d68-3f33830c0632.jpg</url>
      <title>DEV Community: Daniel Tobin</title>
      <link>https://dev.to/dant24</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/dant24"/>
    <language>en</language>
    <item>
      <title>Using Container-Native Load Balancing for High Performance Networking in Kubernetes</title>
      <dc:creator>Daniel Tobin</dc:creator>
      <pubDate>Mon, 03 Aug 2020 20:18:06 +0000</pubDate>
      <link>https://dev.to/cyral/using-container-native-load-balancing-for-high-performance-networking-in-kubernetes-50om</link>
      <guid>https://dev.to/cyral/using-container-native-load-balancing-for-high-performance-networking-in-kubernetes-50om</guid>
      <description>&lt;p&gt;By Achintya Sharma&lt;/p&gt;

&lt;p&gt;At Cyral, one of our many supported deployment targets is Kubernetes. We use Helm to deploy our sidecars on Kubernetes. To ensure high availability, we usually run multiple replicas of our sidecar as a ReplicaSet, and traffic to the sidecar’s replicas is distributed by a load balancer. As a deliberate design choice, we do not mandate a specific load balancer, leaving that choice to the user.&lt;/p&gt;

&lt;h2&gt;
  
  
  Classic Load Balancing
&lt;/h2&gt;

&lt;p&gt;In a distributed system, load balancers can be placed wherever traffic distribution is needed; an n-tiered stack might end up with n load balancers.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fe1ju9cfnxw9yvw739q7b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fe1ju9cfnxw9yvw739q7b.png" alt="Classic Load Balancing" width="800" height="469"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Using load balancers as illustrated above has proven benefits, and the architecture shown is a popular choice for modern distributed systems. Traditionally, load balancing assumed machines as its targets: traffic originating from outside would be distributed amongst a dynamic pool of machines. However, container orchestration tools like Kubernetes do not provide a one-to-one mapping between machines and pods. There may be more than one pod on a machine, or the pod available to serve traffic may reside on a different machine. Standard load balancers still route traffic to machine instances, where iptables rules then route it to the individual pods running on those machines. This introduces at least one additional network hop, adding latency to the packet’s journey from load balancer to pod.&lt;/p&gt;

&lt;h2&gt;
  
  
  Routing Traffic Directly to Pods
&lt;/h2&gt;

&lt;p&gt;Google introduced container-native load balancing at its Next ’18 event and made it generally available earlier this year. The key concept introduced is a new data model called the Network Endpoint Group (NEG), which can use different targets for routing traffic instead of routing only to machines. One possible target is the pod handling the traffic for a service. So, instead of routing to the machine and then relying on iptables to reach the pod as illustrated above, with NEGs the traffic goes straight to the pod. This leads to decreased latency and increased throughput compared to traffic routed with vanilla load balancers.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Flpxlm0gvzak153yyypdn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Flpxlm0gvzak153yyypdn.png" alt="iptables network endpoint group" width="800" height="342"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To reduce the hops to a minimum, we utilized Google Cloud Platform’s (GCP) internal load balancer and configured it with NEGs to route database traffic directly to our pods servicing the traffic. In our tests, the above combination led to a significant gain in performance of our sidecar for both encrypted and unencrypted traffic.&lt;/p&gt;

&lt;h2&gt;
  
  
  Real World Example of NEG
&lt;/h2&gt;

&lt;p&gt;As mentioned above, we use Helm to deploy on Kubernetes, and we use annotations passed via values files to configure our charts for cloud-specific deployments. However, for this post we’ll use plain Kubernetes configuration files since they provide a clearer view of the underlying concepts. The following configuration runs an nginx deployment with 5 replicas.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Service
apiVersion: v1
kind: Service
metadata:
  name: nginx-internal-example
  annotations:
    cloud.google.com/load-balancer-type: "Internal"
    cloud.google.com/neg: '{
      "exposed_ports":{
        "80":{}
      }
    }'  
spec:
  type: LoadBalancer
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
      name: http-port
  selector:
    run: neg-routing-enabled  
---
# Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:      
      run: neg-routing-enabled
  replicas: 5 # tells deployment to run 5 pods matching the template
  template:
    metadata:
      labels:
        run: neg-routing-enabled
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As shown above, port 80 is exposed through the NEG annotation, and a corresponding Network Endpoint Group is created for it.&lt;/p&gt;

&lt;p&gt;In the example above, the annotation:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cloud.google.com/load-balancer-type: "Internal"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;configures the load balancer to be an internal load balancer on GCP. The mapping of ports to Network Endpoint Groups is done with the following annotation:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cloud.google.com/neg: '{
      "exposed_ports":{
        "80":{}   
      }
    }'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
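
&lt;p&gt;Once the service is up, the controller writes a &lt;code&gt;cloud.google.com/neg-status&lt;/code&gt; annotation back onto the service listing the groups it provisioned. As a quick sanity check (assuming the &lt;code&gt;gcloud&lt;/code&gt; CLI is configured for the cluster’s project), you can also inspect the groups and their pod endpoints directly:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# List the NEGs the annotation created (one per zone)
gcloud compute network-endpoint-groups list

# Show the pod IP:port endpoints inside a group; the group name and zone
# come from the service's neg-status annotation
gcloud compute network-endpoint-groups list-network-endpoints \
    k8s1-53567703-default-nginx-internal-example-80-8869d138 \
    --zone us-central1-c
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;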



&lt;h2&gt;
  
  
  Notes on running the example on GCP:
&lt;/h2&gt;

&lt;p&gt;Running an NEG deployment as an internal load balancer on GCP requires explicit firewall rules for health and reachability checks. The documentation describing this process is &lt;a href="https://cloud.google.com/kubernetes-engine/docs/how-to/standalone-neg#attaching-int-https-lb" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Here’s the YAML for the running service as an internal load balancer. To retrieve it with kubectl, run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get service nginx-internal-example -o yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Service
metadata:
  annotations:
    cloud.google.com/load-balancer-type: Internal
    cloud.google.com/neg: '{ "exposed_ports":{ "80":{} } }'
    cloud.google.com/neg-status: '{"network_endpoint_groups":{"80":"k8s1-53567703-default-nginx-internal-example-80-8869d138"},"zones":["us-central1-c"]}'
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/load-balancer-type":"Internal","cloud.google.com/neg":"{ \"exposed_ports\":{ \"80\":{} } }"},"name":"nginx-internal-example","namespace":"default"},"spec":{"ports":[{"name":"http-port","port":80,"protocol":"TCP","targetPort":80}],"selector":{"run":"neg-routing-enabled"},"type":"LoadBalancer"}}
  creationTimestamp: "2020-08-05T22:52:31Z"
  finalizers:
  - service.kubernetes.io/load-balancer-cleanup
  name: nginx-internal-example
  namespace: default
  resourceVersion: "388391"
  selfLink: /api/v1/namespaces/default/services/nginx-internal-example
  uid: d6e52c50-bf46-4dc3-99a0-7746065b8e6f
spec:
  clusterIP: 10.0.11.168
  externalTrafficPolicy: Cluster
  ports:
  - name: http-port
    nodePort: 32431
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    run: neg-routing-enabled
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
    - ip: 10.128.0.6
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Challenges and Limitations
&lt;/h2&gt;

&lt;p&gt;NEG is a relatively new concept, so the tooling around it is still evolving, and we had to work around some quirks to get NEG working correctly for Helm deployments. For example, if your container has multiple ports and you want users to configure them at install time by providing a list of ports, getting the NEG annotation right can be tricky because of its expected format: every entry except the last needs a trailing comma for the annotation to be valid JSON. We use Helm’s built-in &lt;code&gt;first&lt;/code&gt; and &lt;code&gt;rest&lt;/code&gt; template functions to split the list of ports so the rendered annotation matches that format. The following is an example from our Helm charts:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;annotations:
    cloud.google.com/load-balancer-type: "Internal"
    cloud.google.com/neg: '{
      "exposed_ports":{
    {{- $tail := rest $.Values.serviceSidecarData.dataPorts -}}   
    {{- range $tail }}
            "{{ . }}":{},
    {{- end }}
            "{{ first $.Values.serviceSidecarData.dataPorts }}":{}
      }
    }'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
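
&lt;p&gt;To make the shape of the output concrete: with a hypothetical values file setting &lt;code&gt;dataPorts&lt;/code&gt; to &lt;code&gt;[3306, 5432]&lt;/code&gt;, the template above would render roughly to:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;annotations:
    cloud.google.com/load-balancer-type: "Internal"
    cloud.google.com/neg: '{
      "exposed_ports":{
            "5432":{},
            "3306":{}
      }
    }'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The &lt;code&gt;first&lt;/code&gt; port is emitted last, without a trailing comma, which keeps the annotation valid JSON.&lt;/p&gt;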



&lt;p&gt;Another challenge we faced was the five-port limit on Google’s internal load balancer, which restricts our ability to handle traffic from multiple databases with a single sidecar deployment when an internal load balancer is used. Zonal NEGs are also not available as a backend for internal TCP load balancers or external TCP/UDP network load balancers on GCP.&lt;/p&gt;

&lt;p&gt;Further Reading on Network Endpoint Groups&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://medium.com/google-cloud/container-load-balancing-on-google-kubernetes-engine-gke-4cbfaa80a6f6" rel="noopener noreferrer"&gt;Google’s Medium post on NEG&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;NEG supports multiple backend types, including Internet, zonal, and serverless groups. Detailed documentation is available &lt;a href="https://cloud.google.com/load-balancing/docs/negs" rel="noopener noreferrer"&gt;here&lt;/a&gt;.
&lt;/li&gt;
&lt;li&gt;&lt;a href="https://medium.com/google-cloud-platform-by-cloud-ace/container-native-load-balancing-on-gcp-how-does-it-matter-adbc62c366f5" rel="noopener noreferrer"&gt;A comparison of NEG performance when used with http traffic by Cloud Ace&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>devops</category>
      <category>architecture</category>
      <category>kubernetes</category>
      <category>googlecloud</category>
    </item>
    <item>
      <title>API Security for the Data Layer</title>
      <dc:creator>Daniel Tobin</dc:creator>
      <pubDate>Wed, 06 May 2020 23:17:53 +0000</pubDate>
      <link>https://dev.to/cyral/api-security-for-the-data-layer-4c62</link>
      <guid>https://dev.to/cyral/api-security-for-the-data-layer-4c62</guid>
      <description>&lt;p&gt;Over the past week, there has been an average of at least 1 data breach news story per day around the world. The companies involved include mobile providers in India and Pakistan, a health care system based in St. Louis, law firms in the UK and more. As data privacy and security become front and center it’s time to take a step back and think about how one can successfully secure their data in a cloud-native world.&lt;/p&gt;

&lt;p&gt;Cloud-native data endpoints are public facing. As one thinks about securing them, it makes sense to look at another public-facing endpoint - an API - and glean lessons from it.&lt;/p&gt;

&lt;p&gt;One increasingly popular approach to API security is to centralize it using an API gateway. API gateways simplify the entry points into applications. They allow dynamic routing based on the requestor, and they allow centralized security standards and policies to be applied; these rules can be auto-updated based on learnings. The gateway also logs all accesses to the API, and these logs provide an audit trail, enable analytics, and can be used for forensics when something bad happens. An API gateway leverages identities to control authentication and authorization against the API. Ideally, it provides all of these capabilities in a generic fashion across APIs of all kinds.&lt;/p&gt;

&lt;p&gt;If we are to prevent data layer security breaches, then we need something akin to this for the data layer. We need technology that works across any type of data endpoint, whether it is SQL, NoSQL, a data warehouse, a pipeline, etc. This technology must be able to log all activity, detect anomalous behavior, and enforce fine-grained access control on data. The need for this solution only grows as companies adopt a truly granular microservices architecture, where there is a database per service.&lt;/p&gt;

&lt;p&gt;So, how exactly would technology like this have prevented the breaches we’ve recently seen?&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;By enforcing policies, it could have prevented access to certain attributes altogether.&lt;/li&gt;
&lt;li&gt;It could have alerted security teams or blocked certain connections after detecting exfiltration.&lt;/li&gt;
&lt;li&gt;It could have applied masking so that even if data left the system it did so in a fashion that made it useless for an attacker.&lt;/li&gt;
&lt;li&gt;It could have logged all accesses to the data leaving behind a rich trail for forensics.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;As with other security solutions, to be effective, any such data layer security solution must be able to keep pace in a DevOps environment. It must be easily deployed in an infrastructure-as-code environment, it should enable automation, and it must work in a highly available and scalable environment. To truly enable foundational security, it must be built in from the ground up and be central to the architecture from development all the way to production.&lt;/p&gt;

&lt;p&gt;Whenever we think about security in a general context, the data exposed is what drives the impact of the breach. Now more than ever, so much of our lives exist online. We are entrusting our data to companies and implicitly trusting that they will keep it secure. For the longest time, I was most excited to work at the intersection of security and finance because of the real and tangible protections needed there, protections that impacted our everyday lives. Those protections are still needed, but our online data is worth so much more than just dollars and cents. Our privacy, our memories, our connections, our friends are all impacted. As a new parent, I would be devastated if all of the photos I have of my child were shared without my permission. I still see gaps in the way that companies are protecting our data, and that is why I am excited to be working with &lt;a href="https://www.cyral.com/" rel="noopener noreferrer"&gt;Cyral&lt;/a&gt;, to focus on building a product that monitors and protects the core of what we all care about: our data.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Image by Abraham Pena via the OpenIDEO Cybersecurity Visuals Challenge under a Creative Commons Attribution 4.0 International License&lt;/em&gt;&lt;/p&gt;

</description>
      <category>security</category>
      <category>privacy</category>
      <category>devops</category>
      <category>architecture</category>
    </item>
    <item>
      <title>From Firewalls to Security Groups</title>
      <dc:creator>Daniel Tobin</dc:creator>
      <pubDate>Fri, 24 Apr 2020 19:59:53 +0000</pubDate>
      <link>https://dev.to/cyral/from-firewalls-to-security-groups-2d5i</link>
      <guid>https://dev.to/cyral/from-firewalls-to-security-groups-2d5i</guid>
      <description>&lt;p&gt;&lt;em&gt;Originally posted at&lt;/em&gt; &lt;a href="https://cyral.com/blog/from-firewalls-security-groups" rel="noopener noreferrer"&gt;&lt;em&gt;https://cyral.com/blog/from-firewalls-security-groups&lt;/em&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Several large enterprises we work with at Cyral are shifting to a fully cloud-native architecture, and they lean on us as a partner to help them fully leverage all the tools at their disposal (one of our engineers recently shared this excellent &lt;a href="https://www.linkedin.com/feed/update/urn:li:activity:6658825750139019267/" rel="noopener noreferrer"&gt;presentation&lt;/a&gt; he had given to a bank). One of the common themes we see is security teams worrying about firewalls becoming less effective in the cloud-native world, and we often find ourselves explaining how tools like AWS security groups are even more powerful and can be used instead. We thought it was worthwhile to write a blog post on this topic.&lt;/p&gt;

&lt;p&gt;For decades, companies relied solely on physical network devices called firewalls to wall off and protect their digital footprint. With the meteoric rise and adoption of cloud computing, these devices have been replaced by software-defined access controls that now protect increasingly complex cloud-native infrastructure. In a traditional network environment, a firewall was placed at the perimeter between the trusted and untrusted zones of the network, monitoring and typically blocking most traffic. In a cloud-native world, modern applications scale ephemeral resources up and down in response to traffic. These ephemeral instances no longer exist solely in a trusted network on site at a corporate office or dedicated data center, and they require new controls to protect them.&lt;/p&gt;

&lt;p&gt;Traditional firewalls were designed to parallel physical security controls. The term firewall was first used by T. Lightoler in 1764 [1] in the design of buildings, to separate rooms from those most likely to catch fire, such as a kitchen. Firewalls are still used in modern construction to this day. Depending on the type of construction, firewalls slow the spread of a fire between rooms in a single-family home, or between adjoining buildings such as row homes or townhomes, so that the occupants can escape [&lt;a href="https://ncma.org/resource/detailing-concrete-masonry-fire-walls/" rel="noopener noreferrer"&gt;2&lt;/a&gt;]. In the physical world, firewalls are rated on how long they can slow down a fire [&lt;a href="https://en.wikipedia.org/wiki/Firewall_(construction)#Performance_based_design" rel="noopener noreferrer"&gt;3&lt;/a&gt;]. In the digital world, though, firewalls are often thought of as completely blocking outside threats rather than as a temporary barrier that can eventually be breached.&lt;/p&gt;

&lt;p&gt;Network device firewalls were implemented to function as the gate or walls of a castle, as put forth in the &lt;a href="https://www.sciencedirect.com/science/article/abs/pii/S0740624X16300120" rel="noopener noreferrer"&gt;Castle Model of security&lt;/a&gt;. This methodology called for building walls that protected the perimeter but left the inside of the castle unprotected. Home networks and many corporate offices are still designed this way and still have devices acting as firewalls; these networks generally consist of end-user computers and do not serve content to any other users. Computer network firewalls have existed “since about 1987,” as detailed in &lt;a href="https://www.cs.unm.edu/~treport/tr/02-12/firewall.pdf" rel="noopener noreferrer"&gt;&lt;em&gt;A History and Survey of Network Firewalls&lt;/em&gt;&lt;/a&gt;, published in 2002 by Kenneth Ingham and Stephanie Forrest. Firewalls have long been promised as a panacea for completely blocking attacks; instead, they should be viewed as most physical firewalls are: as a temporary barrier. To that end, many companies are now moving to a zero trust model, where a firewall is only the first barrier protecting the outside and the inside is no longer implicitly trusted [&lt;a href="https://www.usenix.org/system/files/login/articles/login_dec14_02_ward.pdf" rel="noopener noreferrer"&gt;4&lt;/a&gt;].&lt;/p&gt;

&lt;p&gt;The power of a virtual firewall is that it no longer needs to apply only coarse-grained filters at the edges of a network. Virtual firewalls can be assigned to groups of instances and be referenced by others in their configuration, and they are not beholden to the network segmentation or physical location they were first set up for. In the classic three-tier model, you can now create three virtual firewalls to protect the individual tiers and reference those tiers specifically: one each for the frontend, application, and data layers. In the frontend firewall configuration, you allow HTTPS access only to those instances responsible for serving frontend content. At the second tier, you reference the frontend firewall, allowing only it to access the application layer and blocking broad access to your application. Finally, at the data layer, you reference the application firewall and allow it direct access to the data layer while blocking all other access.&lt;/p&gt;

&lt;p&gt;In Amazon Web Services (AWS), these virtual firewalls are called security groups. One of the key differences between AWS security groups and classic firewalls is that you can only specify rules that allow traffic; all traffic is implicitly blocked except what your rules allow. The other key difference is that all rules are stateful: when you allow traffic in on a specific port, you do not need to specify allow rules for the return traffic. Security groups otherwise function similarly to the classic network firewall model, with allow rules specifying a protocol (TCP or UDP) and a port. Security groups cannot perform deep packet inspection based on the type of traffic they evaluate.&lt;/p&gt;

&lt;p&gt;In the examples below, we look at how you would configure three security groups for the classic three-tier architecture. In each, you either choose a predefined Type, which automatically fills out the Protocol and Port range, or choose the Protocol and Port range yourself.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Flxnuymc6pithg6usjg7h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Flxnuymc6pithg6usjg7h.png" alt="Alt Text" width="800" height="629"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Fig 1. MyWebServer security group allows access to HTTP and HTTPS from anywhere&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Ff750uuxwdqr6dwsbovyl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Ff750uuxwdqr6dwsbovyl.png" alt="Alt Text" width="800" height="513"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Fig 2. MyApplicationServer security group only allows access to HTTPS directly from the MyWebServer security group. All other access is implicitly blocked&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F2c6wx6yslmz421ah8euu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F2c6wx6yslmz421ah8euu.png" alt="Alt Text" width="800" height="532"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Fig 3. MyDatabaseServer security group only allows access from the MyApplicationServer security group. All other access is implicitly blocked&lt;/em&gt;&lt;/p&gt;
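
&lt;p&gt;The same three-tier setup can be scripted rather than clicked through. A minimal sketch using the AWS CLI (the VPC ID is a placeholder, and MySQL’s port 3306 is assumed for the data tier):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Web tier: HTTP and HTTPS from anywhere
WEB_SG=$(aws ec2 create-security-group --group-name MyWebServer \
    --description "Web tier" --vpc-id vpc-12345678 \
    --query GroupId --output text)
aws ec2 authorize-security-group-ingress --group-id "$WEB_SG" \
    --protocol tcp --port 80 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id "$WEB_SG" \
    --protocol tcp --port 443 --cidr 0.0.0.0/0

# Application tier: HTTPS only from the web tier's security group
APP_SG=$(aws ec2 create-security-group --group-name MyApplicationServer \
    --description "Application tier" --vpc-id vpc-12345678 \
    --query GroupId --output text)
aws ec2 authorize-security-group-ingress --group-id "$APP_SG" \
    --protocol tcp --port 443 --source-group "$WEB_SG"

# Data tier: database port only from the application tier's security group
DB_SG=$(aws ec2 create-security-group --group-name MyDatabaseServer \
    --description "Data tier" --vpc-id vpc-12345678 \
    --query GroupId --output text)
aws ec2 authorize-security-group-ingress --group-id "$DB_SG" \
    --protocol tcp --port 3306 --source-group "$APP_SG"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;All other traffic to each group is implicitly denied, matching the figures above.&lt;/p&gt;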

&lt;p&gt;AWS has now consolidated security group configuration at the VPC level; in the console, you can access it from the EC2 page or the VPC page. VPCs can also implement &lt;a href="https://docs.aws.amazon.com/vpc/latest/userguide/vpc-network-acls.html" rel="noopener noreferrer"&gt;network access control lists&lt;/a&gt; (ACLs), which provide yet another layer of security akin to a traditional firewall device. Network ACLs follow the standard firewall conventions you are familiar with, including inbound and outbound rules and rules applied in order. Network ACLs are best used to enforce separation of duties: use network ACLs to enforce a minimum policy and security groups for fine-grained control of instances. For example, a network ACL could be used to enforce SSH access only from a bastion host, preventing a security group from opening up direct SSH. Security groups are much more flexible, whereas network ACLs should be used as a backup mechanism.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F152z69m560gslqrbk06v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F152z69m560gslqrbk06v.png" alt="Alt Text" width="707" height="290"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Fig 4. Default Network ACL giving your instances network access&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;As your footprint grows, your security groups can quickly get out of hand. We’ve found that managing security groups as code with &lt;a href="https://www.terraform.io/docs/providers/aws/r/security_group.html" rel="noopener noreferrer"&gt;Terraform&lt;/a&gt; or similar helps with this. You should also be mindful of security group quotas: the defaults are 2,500 groups per region and 60 inbound and 60 outbound rules per group, counted separately for IPv4 and IPv6. If enabled, Trusted Advisor will flag security groups with more than 50 total rules for performance reasons.&lt;/p&gt;

&lt;p&gt;AWS has recognized many of the pitfalls associated with managing security groups per VPC per account and announced their &lt;a href="https://aws.amazon.com/firewall-manager/pricing/" rel="noopener noreferrer"&gt;AWS Firewall Manager&lt;/a&gt; service in 2018. This is an add on service to AWS Shield and AWS WAF. AWS Firewall Manager for security groups allows you to manage “security groups for your Amazon VPC across multiple AWS accounts and resources from a single place”. Read more on this service &lt;a href="https://docs.aws.amazon.com/waf/latest/developerguide/getting-started-fms-security-group.html" rel="noopener noreferrer"&gt;here&lt;/a&gt; or watch this &lt;a href="https://www.youtube.com/watch?v=w-zbsmpi7vw" rel="noopener noreferrer"&gt;tech talk&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;AWS security groups are an incredibly powerful tool when used in the context of a cloud-native environment. Their simplicity and focus on pure network traffic are a forcing function for clear separation of infrastructure tiers. Their simplicity also gives you the guarantee that they will not interfere with the speed with which you can scale your application. A cloud-native infrastructure provides you with the flexibility to leave the old guard behind and focus on what matters most.&lt;/p&gt;

&lt;p&gt;[1] Lightoler, T. 1764. &lt;em&gt;The gentleman and farmer’s architect. A new work. Containing a great variety of ... designs. Being correct plans and elevations of parsonage and farm houses, lodges for parks, pinery, peach, hot and green houses, with the fire-wall, tan-pit, &amp;amp;c particularly described ...&lt;/em&gt; R. Sayer, London, UK&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Image by Elio Reichert via the OpenIDEO Cybersecurity Visuals Challenge under a Creative Commons Attribution 4.0 International License&lt;/em&gt;&lt;/p&gt;

</description>
      <category>security</category>
      <category>aws</category>
      <category>devops</category>
      <category>cloud</category>
    </item>
  </channel>
</rss>
