<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Nikolai Main</title>
    <description>The latest articles on DEV Community by Nikolai Main (@neakoh).</description>
    <link>https://dev.to/neakoh</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2129085%2F2ec1b2c9-80ca-4a11-9a5b-2d2b673c199e.jpg</url>
      <title>DEV Community: Nikolai Main</title>
      <link>https://dev.to/neakoh</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/neakoh"/>
    <language>en</language>
    <item>
      <title>Using Grafana &amp; Prometheus</title>
      <dc:creator>Nikolai Main</dc:creator>
      <pubDate>Sat, 02 Nov 2024 15:11:01 +0000</pubDate>
      <link>https://dev.to/neakoh/using-grafana-prometheus-2mkn</link>
      <guid>https://dev.to/neakoh/using-grafana-prometheus-2mkn</guid>
      <description>&lt;p&gt;Seeing as Grafana and Prometheus go hand in hand, I'll include them in the same post and keep each section relatively short. I'll provide a brief overview of each tool along with some basic getting started instructions. The finer details can, of course, be found in their respective documentation. Additionally, I'll briefly cover setting up alerts with Grafana and Slack.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2otq0q2ehq6f96gjkpgl.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2otq0q2ehq6f96gjkpgl.jpg" alt="image.png" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;Prometheus&lt;/h2&gt;

&lt;p&gt;Prometheus is a monitoring and alerting tool providing a wealth of insights for almost any system. It records time-series data: simply put, a series of changes to a given metric over a given time period. Common measurements include request durations, active connections, and resource usage.&lt;/p&gt;

&lt;p&gt;Prometheus' main components are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Prometheus Server&lt;/strong&gt;: Responsible for 'scraping' and storing time-series data.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Client Libraries&lt;/strong&gt;: Used to instrument application code.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Alertmanager&lt;/strong&gt;: Handles alerts raised by the server.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Prometheus supports machine-based metrics as well as dynamic, service-oriented architectures. Since each Prometheus server is standalone, you can rely on it to diagnose issues with your other systems even when they experience outages.&lt;/p&gt;

&lt;h3&gt;Using Prometheus&lt;/h3&gt;

&lt;p&gt;Collecting metrics is done via a process called 'scraping'. A metric source, or 'instance', usually corresponds to a single process. Once you link an instance to your Prometheus server, it will begin collecting data based on a configuration you define in a YAML file.&lt;/p&gt;

&lt;p&gt;An example Prometheus configuration could look like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;global:
  scrape_interval: 15s
  evaluation_interval: 15s

alerting:
  alertmanagers:
    - static_configs:
        - targets:
            - alertmanager:9093

rule_files:
  - rules.yml

scrape_configs:
  - job_name: "prometheus"
    static_configs:
      - targets: ["localhost:9090"]

  - job_name: node_exporter
    static_configs:
      - targets: ["localhost:9100"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Notes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Global Config&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;Contains global defaults like scrape_interval, evaluation_interval, etc.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;Alerting&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;Specifies where alerts are sent for further handling (here, an Alertmanager instance on port 9093).&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;Rules&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;Files containing the metric thresholds that trigger alerts.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;Scrape Configs&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;A list of every location from which metrics are gathered.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;There are several other configuration settings that can be defined; however, I won't cover them in this post.&lt;/p&gt;
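
&lt;p&gt;For reference, a rules file like the rules.yml referenced above contains the alerting rules themselves. A minimal sketch (the metric name, threshold, and alert name here are purely illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;groups:
  - name: example-alerts
    rules:
      - alert: HighRequestLatency
        expr: job:request_latency_seconds:mean5m &amp;gt; 0.5
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: Request latency has been above 500ms for 10 minutes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;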

&lt;h3&gt;Metric Types&lt;/h3&gt;

&lt;p&gt;Prometheus supports four different metric types:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Counter&lt;/strong&gt;: A single value that can only increase or be reset to zero. A simple metric that can be used for counting network requests.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Gauge&lt;/strong&gt;: A single value that can both increase and decrease. Commonly used for counting pods in a K8s cluster or messages in an SQS queue.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Histogram&lt;/strong&gt;: Collects observations into a set of cumulative 'buckets' defined by the user. Each recorded value increments every bucket whose upper bound is greater than or equal to that value. For example, say you have the buckets 0.1, 0.5, 1 and 5: if a value of 0.3 is recorded, buckets 0.5, 1 and 5 are incremented. From here you can deduce the distribution of values by subtracting the count of the previous bucket from the current:

&lt;ul&gt;
&lt;li&gt;  0.1 has a count of 0, so no values fall at or below 0.1.&lt;/li&gt;
&lt;li&gt;  0.5 has a count of 1 and the preceding bucket 0, so 1 value falls between 0.1 and 0.5.&lt;/li&gt;
&lt;li&gt;  1 has a count of 1 and the preceding bucket 1, so 0 values fall between 0.5 and 1.&lt;/li&gt;
&lt;li&gt;  5 has a count of 1 and the preceding bucket 1, so 0 values fall between 1 and 5. As such, 100% of all recorded values fall at or below 0.5.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;&lt;code&gt;Now an example with a single value doesn't really demonstrate a histogram's utility, but take an organization required to serve 95% of all network requests within 300ms. You could set up a bucket with an upper bound of 0.3 and an alert that triggers if the fraction of requests falling within that bucket drops below 0.95, enabling you to investigate the issue and notify relevant parties.&lt;/code&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Summary&lt;/strong&gt;: Similar in function to a histogram, but instead of collecting data in predefined buckets, quantiles are estimated directly by the instrumented client rather than by the Prometheus server.&lt;/li&gt;
&lt;/ul&gt;
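
&lt;p&gt;Tying this back to the 300ms example: given a histogram metric (here the illustrative name http_request_duration_seconds), the fraction of requests served within 300ms over the last 5 minutes can be computed in PromQL as:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sum(rate(http_request_duration_seconds_bucket{le="0.3"}[5m]))
  /
sum(rate(http_request_duration_seconds_count[5m]))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;An alert can then fire whenever this ratio drops below 0.95.&lt;/p&gt;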

&lt;h3&gt;General Usage&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Configure metric collection locations.

&lt;ol&gt;
&lt;li&gt;   If using Kubernetes, Prometheus can automatically scrape clusters to retrieve resource-level metrics.&lt;/li&gt;
&lt;li&gt; If you wish to collect application-specific metrics like HTTP request failures, you need to configure instrumentation within the application using the relevant client library.&lt;/li&gt;
&lt;/ol&gt;


&lt;/li&gt;

&lt;li&gt; Define scrape config and any other parameters (as described above) in the prometheus.yml file.&lt;/li&gt;

&lt;li&gt; Access your Prometheus dashboard (localhost:9090 by default). From here you can run queries and view the data as simple graphs.&lt;/li&gt;

&lt;/ol&gt;
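
&lt;p&gt;As a quick sanity check, assuming node_exporter is scraped as in the configuration above, a query like the following in the dashboard's expression box graphs per-mode CPU usage:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;rate(node_cpu_seconds_total[5m])
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;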

&lt;p&gt;Prometheus also provides the ability to create alerts; however, the process of creating and handling alerts is much more intuitive and streamlined in Grafana.&lt;/p&gt;




&lt;h2&gt;Grafana&lt;/h2&gt;

&lt;p&gt;Grafana is a monitoring and visualization tool that can ingest data from a wide range of sources: from MySQL, MongoDB, and Postgres to Kubernetes, AWS, Docker, and seemingly everything else.&lt;/p&gt;

&lt;p&gt;Grafana offers several core features:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Panels and Visualizations&lt;/strong&gt;: These are composed of a query (a specific metric from a data source) and a visualization. Grafana offers several types of visualization, from standard time-series line graphs to heatmaps.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Dashboards&lt;/strong&gt;: These are a set of one or more panels that provide a quick overview of the information related to a specific metric or data source.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Alerting&lt;/strong&gt;: Define alerts based on metrics from your data sources and send alerts via email or to other messaging solutions like Slack.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Querying&lt;/strong&gt;: Since Grafana can ingest data from several different sources, many of which use different query languages, it offers a powerful query engine that allows you to create custom, complex queries.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcodahosted.io%2Fdocs%2FFWvqC2buDc%2Fblobs%2Fbl-juXrWYkCYA%2F04076ed358a7ab48ecd81db85bf46ab1443831147e320e135cf089b41f99f4ed00bf0732acdccb8f44edfd5952c881676bf0e9154d45812fcb5fcffdf1cda102dd4ddfbcb953bfd3b6b4ce1c047fa3dcb6b4679105436133131c1a5ad64f3b31d2c7925d" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcodahosted.io%2Fdocs%2FFWvqC2buDc%2Fblobs%2Fbl-juXrWYkCYA%2F04076ed358a7ab48ecd81db85bf46ab1443831147e320e135cf089b41f99f4ed00bf0732acdccb8f44edfd5952c881676bf0e9154d45812fcb5fcffdf1cda102dd4ddfbcb953bfd3b6b4ce1c047fa3dcb6b4679105436133131c1a5ad64f3b31d2c7925d" alt="image.png" width="800" height="446"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;Security&lt;/h3&gt;

&lt;h4&gt;Authentication, Authorization&lt;/h4&gt;

&lt;p&gt;First, there are three types of roles in Grafana:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Viewer&lt;/strong&gt;: Has read-only access to dashboards and panels.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Editor&lt;/strong&gt;: Can create, edit, and delete dashboards. Also has access to annotations and alerting.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Admin&lt;/strong&gt;: Full access to the Grafana instance. All of the above, plus control over the user base.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Grafana provides the ability to configure basic authentication and authorization within the app. You can create individual users with custom username/password credentials.&lt;/p&gt;

&lt;p&gt;Alternatively, you can use an existing identity federation or implement one of several OAuth providers like Google or GitHub.&lt;/p&gt;

&lt;p&gt;If you have certain teams within your organization that should only see data relevant to them you can create groups to enable this.&lt;/p&gt;

&lt;h4&gt;Best Practices&lt;/h4&gt;

&lt;p&gt;If you are running other services on the same server as Grafana, it's best to incorporate additional measures to secure your applications:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Configure Grafana to only allow data from trusted sources.&lt;/li&gt;
&lt;li&gt;  Restrict Grafana's communication with other services running on the server.&lt;/li&gt;
&lt;li&gt;  Configure a proxy server to filter all network traffic.&lt;/li&gt;
&lt;li&gt;  Those with the viewer role can query any data source in an organization, so it's important to be selective with the data you expose.&lt;/li&gt;
&lt;li&gt;  Disallow anonymous access.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;Basic Usage&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt; Start the Grafana server. 
&lt;code&gt;Assuming you're self-hosting and not using Grafana Cloud, access the console at localhost:3000&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt; Add a new data source. 
&lt;code&gt;If you're using Grafana for the first time you should see a panel with some getting-started buttons. Otherwise, click the menu on the left-hand side &amp;gt; Connections &amp;gt; Data sources&lt;/code&gt;

&lt;ol&gt;
&lt;li&gt; Enter connection details. If using Prometheus with default settings, enter &lt;a href="http://localhost:9090" rel="noopener noreferrer"&gt;http://localhost:9090&lt;/a&gt;.&lt;/li&gt;
&lt;/ol&gt;


&lt;/li&gt;

&lt;li&gt; Create a dashboard

&lt;ol&gt;
&lt;li&gt; In the query search box find the relevant metric.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcodahosted.io%2Fdocs%2FFWvqC2buDc%2Fblobs%2Fbl-EPASLgrsXk%2F530498f419f0c5d51ef1392409aa89d8a22ddfe6e493c804bacf30737388f32b47022bd4d9bc227143e6d34c79c2d8bfbb3d46e759aa8bd303dddba294d557c00ae5b51c7f9c57c83ade528f9a0bf1618fcaf323860acf31a7b7193f76e1b2886270e58e" alt="image.png" width="697" height="211"&gt;
&lt;/li&gt;
&lt;li&gt; Click run queries and your data should show up in the panel above.&lt;/li&gt;
&lt;/ol&gt;


&lt;/li&gt;

&lt;li&gt; Once done click save.&lt;/li&gt;

&lt;/ol&gt;

&lt;h3&gt;Alerting&lt;/h3&gt;

&lt;p&gt;As mentioned above, Grafana also provides alerting capabilities similar to Prometheus. The process, however, is much simpler, as it can all be done within Grafana and doesn't require configuring another component. I'll explain briefly how to send alerts to Slack:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Slack&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; Create a Slack workspace.&lt;/li&gt;
&lt;li&gt; Go to Slack API and create an app. (&lt;a href="https://api.slack.com/" rel="noopener noreferrer"&gt;https://api.slack.com/&lt;/a&gt;)
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcodahosted.io%2Fdocs%2FFWvqC2buDc%2Fblobs%2Fbl-xyRBRCZrIv%2F17f565f33acaf1500bba2a8371a98e8c5c80fb3ba8694db37b01d7895b3dd5057d5b730798c3919c4bf93aae95f41a1e452cefa500219c0bf50905c0c83553b08e0a7e2fa1eccc84f8da1f84456b7c0440e68c75dbffd9c9c3465c3690d05ccb33855cf8" alt="image.png" width="800" height="209"&gt;
&lt;/li&gt;
&lt;li&gt; Click on incoming webhooks and enable them.

&lt;ol&gt;
&lt;li&gt; Click add a new webhook, select your workspace and relevant channel.&lt;/li&gt;
&lt;li&gt; Copy the webhook URL.&lt;/li&gt;
&lt;/ol&gt;


&lt;/li&gt;

&lt;/ol&gt;
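
&lt;p&gt;Before wiring the webhook into Grafana, you can sanity-check it from the command line (substituting your actual webhook URL for the placeholder):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl -X POST -H 'Content-type: application/json' \
  --data '{"text": "Test message from the webhook"}' \
  https://hooks.slack.com/services/&amp;lt;your-webhook-path&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;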

&lt;p&gt;&lt;strong&gt;Grafana&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; Go to the Grafana console and, from the menu on the left, select Alerting &amp;gt; Contact points

&lt;ol&gt;
&lt;li&gt; Create a contact point &amp;gt; Select Slack under integrations &amp;gt; Enter the webhook URL.&lt;/li&gt;
&lt;/ol&gt;


&lt;/li&gt;

&lt;li&gt; From the same menu, go to Alerting &amp;gt; Alert rules and create a new alert rule.

&lt;ol&gt;
&lt;li&gt; Enter a name&lt;/li&gt;
&lt;li&gt; Define query and alert condition

&lt;ol&gt;
&lt;li&gt; Select the metric you wish to create an alert for.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcodahosted.io%2Fdocs%2FFWvqC2buDc%2Fblobs%2Fbl-RJ3Qk5e_MA%2F94bf30212554ef9aadc234a9be988a0c22beb923a53c0e2a152808046572ded456e4990a95ba5dc7eeaece336aaa5cdc98474a2ca67d5d6526ec4bbd00bd8cb32373340352e67975b557e59a16a5d78d3c32fec62f9fa0706f018ed593266754504c598a" alt="image.png" width="775" height="205"&gt;
&lt;/li&gt;
&lt;li&gt; Create an expression. For example, your metric value crossing a certain threshold.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcodahosted.io%2Fdocs%2FFWvqC2buDc%2Fblobs%2Fbl-1Hy0FojiuS%2Fb392bf1ebc35cb7469cf0a66c1af76d6d6a9f76f77ece293e224b06559e1fac726fc351c0ddfe2cacb1d911e922fd12ff2805e37e22b4f4d69c102c7c10d621b1941e378dd1266870447b0e48d44859da1e21eca0a070dd05633bea35ff617008151c824" alt="image.png" width="776" height="166"&gt;
&lt;/li&gt;
&lt;li&gt; Set it as the alert condition.&lt;/li&gt;
&lt;/ol&gt;


&lt;/li&gt;

&lt;li&gt; Set the evaluation behavior.

&lt;ol&gt;
&lt;li&gt; If no rule folders exist, create one.&lt;/li&gt;
&lt;li&gt; Similarly, if no evaluation groups exist, create one.&lt;/li&gt;
&lt;li&gt; Set the pending period. For the purposes of this walkthrough, set it to none.&lt;/li&gt;
&lt;/ol&gt;


&lt;/li&gt;

&lt;li&gt; Configure labels and notifications.

&lt;ol&gt;
&lt;li&gt; For simplicity just select the Slack channel as the contact point.&lt;/li&gt;
&lt;li&gt; Don't bother with labels for now.&lt;/li&gt;
&lt;/ol&gt;


&lt;/li&gt;

&lt;li&gt; Enter a summary and description for the alert if necessary.&lt;/li&gt;

&lt;/ol&gt;

&lt;/li&gt;

&lt;/ol&gt;

&lt;p&gt;Now trigger your alert and you should see a notification in your Slack channel.&lt;/p&gt;

&lt;h2&gt;Final Notes&lt;/h2&gt;

&lt;p&gt;Keeping track of all types of metrics within your infrastructure, applications, pipeline, etc., is a vital practice in ensuring your entire system stays healthy. As such, the pairing of Prometheus and Grafana provides an excellent solution to this problem. Prometheus excels in collecting and storing time-series data, offering powerful querying capabilities, while Grafana enhances this by providing intuitive and customizable dashboards for visualizing these metrics.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Understanding Kubernetes</title>
      <dc:creator>Nikolai Main</dc:creator>
      <pubDate>Mon, 28 Oct 2024 15:09:28 +0000</pubDate>
      <link>https://dev.to/neakoh/understanding-kubernetes-5cpo</link>
      <guid>https://dev.to/neakoh/understanding-kubernetes-5cpo</guid>
<description>&lt;p&gt;This turned out to be a relatively lengthy post, but it represents the culmination of my recent study of Kubernetes, where I aimed to gain a comprehensive understanding of its core concepts and features. Throughout my DevOps research, I came to realise that prioritizing security considerations before any deployment is a more practical and effective approach to implementing technology. As a result, my study focused on two main areas: the fundamental structural components of Kubernetes and the essential security practices associated with it.&lt;/p&gt;

&lt;p&gt;Due to resource constraints, I was limited in how much of Kubernetes I could explore. However, I used two tools that provided me enough access to learn it at a moderate level:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Minikube&lt;/strong&gt;: A lightweight Kubernetes implementation that creates a virtual machine on any OS and deploys a simple cluster containing only one node.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Kind&lt;/strong&gt;: A tool for running local Kubernetes clusters using Docker container "nodes."&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;With Minikube, you are limited to a single cluster, which, for my purposes, was not an issue, as it is more than capable of running a simple three-tier application for reference.&lt;/p&gt;

&lt;p&gt;In contrast, Kind offers more flexibility, allowing you to create multiple clusters and control various aspects of them.&lt;/p&gt;

&lt;p&gt;Nonetheless, this is my account of Kubernetes at a basic yet sufficient level, highlighting its essential concepts, features, and tools.&lt;/p&gt;




&lt;h2&gt;Kubernetes Overview&lt;/h2&gt;

&lt;p&gt;Kubernetes is a container orchestration system that automatically manages the state of a fleet of containers. Some key features include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Service Discovery and Load Balancing&lt;/strong&gt;: Kubernetes can expose a container based on its DNS or IP. It can also automatically divert traffic to a different container if one is particularly busy.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Storage Orchestration&lt;/strong&gt;: Automatically mounts a storage option of your choice.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Automated Rollouts and Rollbacks&lt;/strong&gt;: Kubernetes will automatically meet your desired state in a controlled manner.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Automatic Bin Packing&lt;/strong&gt;: You can specify your CPU and memory requirements, and Kubernetes will automatically allocate the necessary resources to each container.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Self-Healing&lt;/strong&gt;: Kubernetes automatically restarts containers that have been shut down.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Secret Management&lt;/strong&gt;: Manages sensitive information such as passwords and tokens.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Batch Execution&lt;/strong&gt;: Kubernetes can manage batch and CI workloads.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Horizontal Scaling&lt;/strong&gt;: Easily scale applications up or down based on demand.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;IPv4/IPv6&lt;/strong&gt;: Allocation of IPv4/IPv6 addresses to pods and services.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;Infrastructure&lt;/h2&gt;

&lt;h3&gt;Clusters&lt;/h3&gt;

&lt;p&gt;Kubernetes coordinates a highly available cluster of computers that work together as a single unit. The Kubernetes cluster architecture consists of:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Control Plane&lt;/strong&gt;: This coordinates the cluster and handles tasks such as scheduling applications, maintaining the desired state, scaling applications, and rolling out updates. This is governed by the &lt;strong&gt;Master Node&lt;/strong&gt;, which contains several components:

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;API Server&lt;/strong&gt;: The front end for the control plane, handling all REST commands.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Scheduler&lt;/strong&gt;: Assigns work to nodes based on resource availability.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Controller Manager&lt;/strong&gt;: Manages controllers that regulate the state of the cluster.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;etcd&lt;/strong&gt;: A distributed key-value store that holds the cluster's state and configuration.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;  &lt;strong&gt;Worker Nodes&lt;/strong&gt;: A node is a VM or physical machine that serves as a worker in a cluster. Each node runs a kubelet, an agent that handles communication between the node and the control plane.&lt;/li&gt;

&lt;li&gt;  &lt;strong&gt;Pods&lt;/strong&gt;: Within each node, there exists one or more Pods that run application workloads. Each pod can contain multiple containers.&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;In production workloads, it's recommended to have at least three nodes to ensure you don't experience any downtime if a node goes down. If a node hosting a control plane instance fails, that instance is lost, so having redundant nodes is a sensible choice.&lt;/p&gt;

&lt;p&gt;I've come to learn that, as production workloads really start to scale, it's also sensible to have multiple control plane instances for the same reason.&lt;/p&gt;




&lt;h3&gt;Services&lt;/h3&gt;

&lt;p&gt;Services provide the ability to connect to a deployment. They are the abstraction that allows pods to die and replicate in Kubernetes without impacting your application. Some types of services include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;ClusterIP&lt;/strong&gt;: The default service type, exposing the service on a cluster-internal IP. Used for internal communication within the cluster.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;NodePort&lt;/strong&gt;: Exposes the service on a static port on each node's IP address, allowing external access.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;LoadBalancer&lt;/strong&gt;: Creates an external load balancer to distribute traffic to the service.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;ExternalName&lt;/strong&gt;: Maps the service to an external DNS name.&lt;/li&gt;
&lt;/ul&gt;
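
&lt;p&gt;Like most Kubernetes resources, a service is defined declaratively. A minimal sketch of a NodePort service (the names and ports here are illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Service
metadata:
  name: example-service
spec:
  type: NodePort
  selector:
    app: example
  ports:
    - port: 80
      targetPort: 8080
      nodePort: 30080
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;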




&lt;h3&gt;Jobs&lt;/h3&gt;

&lt;p&gt;A job is a resource that creates one or more pods to run a task through to completion. Kubernetes offers three types of job:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Single-Completion Job&lt;/strong&gt;: Self-explanatory: a single pod is created to complete a task.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Parallel Job&lt;/strong&gt;: Allows several pods to run concurrently.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Cron Job&lt;/strong&gt;: Runs jobs on a scheduled basis.&lt;/li&gt;
&lt;/ul&gt;
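
&lt;p&gt;A cron job, for instance, is defined with a schedule in standard cron syntax. A minimal sketch (the image and schedule are illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: batch/v1
kind: CronJob
metadata:
  name: example-cronjob
spec:
  schedule: "*/5 * * * *"   # every five minutes
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: example
              image: busybox
              command: ["echo", "hello"]
          restartPolicy: OnFailure
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;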




&lt;h3&gt;Namespaces&lt;/h3&gt;

&lt;p&gt;Namespaces are a means of creating a virtually separated section of a Kubernetes cluster. This is useful for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Isolating Resources&lt;/strong&gt;: Several teams may work on the same cluster and would want to keep their resources separate from other teams, preventing possible conflicts.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Environment Separation&lt;/strong&gt;: Allocate space for development, testing, and production.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Access Control&lt;/strong&gt;: Define who can access and manage resources.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Organizational Clarity&lt;/strong&gt;: Keep resources organized and easy to manage.&lt;/li&gt;
&lt;/ul&gt;
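
&lt;p&gt;Namespaces are simple to create and target with kubectl. For example (the staging namespace and file name are illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl create namespace staging
kubectl apply -f deployment.yml --namespace staging
kubectl get pods --namespace staging
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;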




&lt;h2&gt;Deployments&lt;/h2&gt;

&lt;p&gt;A deployment is a higher-level abstraction that manages the lifecycle of a set of replicas of a pod. In other words, it manages the desired state of a given application, defining how many replicas are needed to ensure constant uptime and efficient healing.&lt;/p&gt;

&lt;p&gt;Some key features of deployments include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Declarative Definition&lt;/strong&gt;: You can declare the desired state of your deployment in a Deployment configuration. Kubernetes then automatically tries to match that state.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Rolling Updates&lt;/strong&gt;: You can update your applications without experiencing any downtime. Kubernetes replaces old pods with new ones, ensuring that some instances are always available.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Rollback&lt;/strong&gt;: If an update fails, you can easily revert to a stable version, as Kubernetes keeps track of deployment history.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Scaling&lt;/strong&gt;: It's very easy to make changes to deployment requirements. Kubernetes will add or destroy pods automatically to match the desired number of replicas.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Self-Healing&lt;/strong&gt;: If a pod fails or is destroyed, the deployment controller automatically creates a new pod to replace it, maintaining the replica count.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A simple deployment may look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: example
  template:
    metadata:
      labels:
        app: example
    spec:
      containers:
      - name: example-container
        image: example-image
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Notes&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;code&gt;kind&lt;/code&gt; specifies the type of resource, in this case a Deployment.&lt;/li&gt;
&lt;li&gt;  &lt;code&gt;metadata&lt;/code&gt; is the standard metadata section.&lt;/li&gt;
&lt;li&gt;  &lt;code&gt;spec&lt;/code&gt; contains information regarding your desired state, like the number of desired replicas, the container image, etc.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;Scaling&lt;/h3&gt;

&lt;p&gt;Kubernetes offers two types of scaling, horizontal and vertical, both of which are common terms in cloud architecture.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Horizontal scaling refers to adding or removing pods to handle load requirements.&lt;/li&gt;
&lt;li&gt;  Vertical scaling refers to scaling individual resources, i.e. allocating more or less CPU/memory to existing pods. Kubernetes does not handle this type of scaling automatically.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Autoscaling functionality can be defined as a resource. A horizontal pod autoscaler, for example, might look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: example-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: example-deployment
  minReplicas: 1
  maxReplicas: 10
  targetCPUUtilizationPercentage: 50
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can also manually scale a deployment using the command:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;code&gt;kubectl scale deployment/&amp;lt;deployment_name&amp;gt; --replicas=&amp;lt;num&amp;gt;&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;Rolling Updates&lt;/h3&gt;

&lt;p&gt;To maintain uptime, Kubernetes provides the ability to update pods by scheduling the creation of new, updated pods and waiting for those to deploy before destroying the old pods. Rolling updates allow the following actions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Promote an app from one environment to another.&lt;/li&gt;
&lt;li&gt;  Roll back to previous versions.&lt;/li&gt;
&lt;li&gt;  Implement CI/CD with zero downtime.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Updates are triggered by any change to your deployment's pod template, such as changing the application image, which can be done with the following command:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;code&gt;kubectl set image deployment/&amp;lt;deployment_name&amp;gt; &amp;lt;container_name&amp;gt;=&amp;lt;image&amp;gt;&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Once an update is underway, there are several useful commands to check the status of the update, pause it, or even revert to a previous version:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;code&gt;kubectl rollout status deployment/&amp;lt;deployment_name&amp;gt;&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;  &lt;code&gt;kubectl rollout undo deployment/&amp;lt;deployment_name&amp;gt;&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;  &lt;code&gt;kubectl rollout history deployment/&amp;lt;deployment_name&amp;gt;&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;Config&lt;/h3&gt;

&lt;p&gt;There are several ways to set environment variables in Kubernetes: Dockerfile, Kubernetes YAML, Kubernetes ConfigMaps, and Kubernetes Secrets.&lt;/p&gt;

&lt;p&gt;A benefit of using ConfigMaps and Secrets is that they can be reused across containers. Both ConfigMaps and Secrets are API objects that store key-value pairs, but Secrets are used to store confidential/sensitive information.&lt;/p&gt;

&lt;p&gt;ConfigMaps can be configured as either environment variables or volumes. If you make a change to a ConfigMap configured as an environment variable, the change isn't reflected until the relevant pods are restarted, for example with kubectl rollout. If configured as a volume, however, the pod recognizes the changes almost immediately.&lt;/p&gt;

&lt;p&gt;Immutable ConfigMaps are a suitable option for configurations that are expected to be constant and not change over time. Marking a ConfigMap as immutable provides performance benefits, as the kubelet in a node does not need to watch for changes.&lt;/p&gt;
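
&lt;p&gt;A minimal ConfigMap, and a pod consuming it as environment variables, might be sketched as follows (the names and keys are illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: ConfigMap
metadata:
  name: example-config
data:
  APP_MODE: production
  LOG_LEVEL: info
---
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  containers:
    - name: example-container
      image: example-image
      envFrom:
        - configMapRef:
            name: example-config
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;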




&lt;h2&gt;Storage&lt;/h2&gt;

&lt;h3&gt;Volumes&lt;/h3&gt;

&lt;p&gt;A volume is a directory that allows data to be stored beyond the lifecycle of a container, but not beyond a pod: when a pod is destroyed, so is the volume. This is useful if you wish to share data between containers within a pod.&lt;/p&gt;

&lt;h3&gt;Persistent Volumes&lt;/h3&gt;

&lt;p&gt;Persistent Volumes (PV) are pieces of storage in the cluster that have been either manually or dynamically provisioned using Storage Classes, allowing data to outlive the lifecycle of individual pods. PVs are cluster resources, similar to nodes.&lt;/p&gt;

&lt;p&gt;Pods request storage by making a Persistent Volume Claim (PVC). These 'claims' specify size, access modes, and other requirements; Kubernetes then binds the claim to a matching PV, or dynamically provisions one.&lt;/p&gt;

&lt;p&gt;PVs support different access modes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;ReadWriteOnce (RWO)&lt;/strong&gt;: The volume can be mounted as read-write by a single node.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;ReadOnlyMany (ROX)&lt;/strong&gt;: The volume can be mounted as read-only by many nodes.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;ReadWriteMany (RWX)&lt;/strong&gt;: The volume can be mounted as read-write by many nodes.&lt;/li&gt;
&lt;/ul&gt;
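<p>As an illustrative sketch (names and sizes are hypothetical), a claim requesting storage and a pod mounting it might look like this:</p>

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-pvc
spec:
  accessModes:
    - ReadWriteOnce       # single-node read-write, as described above
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: app-pod
spec:
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: data-pvc
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;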




&lt;h2&gt;
  
  
  Security
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Pod Security Standards
&lt;/h3&gt;

&lt;p&gt;There are three tiers of security allowance in Kubernetes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Privileged&lt;/strong&gt;: Unrestricted; pods can run with any privilege, including known privilege escalations.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Baseline&lt;/strong&gt;: Minimally restrictive; blocks known privilege escalations while allowing the default pod configuration.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Restricted&lt;/strong&gt;: Heavily restricted; follows current pod-hardening best practices.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Whether a violation blocks deployment depends on the mode rather than the tier: in &lt;strong&gt;enforce&lt;/strong&gt; mode a violating pod is rejected, while &lt;strong&gt;warn&lt;/strong&gt; and &lt;strong&gt;audit&lt;/strong&gt; modes surface the violation (as a user-facing warning or an audit log entry) without preventing deployment.&lt;/p&gt;

&lt;p&gt;To enforce pod security, the following command is used on the relevant namespace:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl annotate namespace example-namespace
  pod-security.kubernetes.io/enforce=restricted
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command sets the Pod Security Admission controller to enforce the restricted level of security for all pods created in the specified namespace.&lt;/p&gt;
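&lt;p&gt;The same level can also be set declaratively in the namespace manifest (the namespace name is illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Namespace
metadata:
  name: example-namespace
  labels:
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/warn: restricted    # warn without blocking
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;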




&lt;h2&gt;
  
  
  Security at the Namespace Level
&lt;/h2&gt;

&lt;p&gt;The main way security is scoped at the namespace level is through RBAC (Role-Based Access Control). Within a namespace, RBAC is applied with the following (ClusterRoles and ClusterRoleBindings are the cluster-wide equivalents):&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Roles&lt;/strong&gt;: Define a set of permissions for resources in a namespace.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;RoleBindings&lt;/strong&gt;: Associate a user or group with a role.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Example of Roles and RoleBindings
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: example-namespace
  name: example-role
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "create", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: example-rolebinding
  namespace: example-namespace
subjects:
- kind: User
  name: example-user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: example-role
  apiGroup: rbac.authorization.k8s.io
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  Seccomp
&lt;/h2&gt;

&lt;p&gt;Seccomp is built into the Linux kernel and stands for Secure Computing Mode. It provides a mechanism to limit the attack surface of applications by allowing only a predefined set of system calls.&lt;/p&gt;

&lt;h3&gt;
  
  
  What are System Calls?
&lt;/h3&gt;

&lt;p&gt;System calls are the primary means of communication between user-space applications and the Linux kernel. When a process is run with a seccomp profile, it can only execute system calls that are allowed by that profile.&lt;/p&gt;

&lt;p&gt;Some common system calls include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;File Operations&lt;/strong&gt;: open(), read(), write(), close().&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Process Management&lt;/strong&gt;: fork(), exec(), wait().&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Networking&lt;/strong&gt;: socket(), bind(), connect().&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Seccomp Modes
&lt;/h3&gt;

&lt;p&gt;Seccomp offers two modes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Strict Mode&lt;/strong&gt;: Only a minimal set of system calls is allowed (essentially read(), write(), _exit(), and sigreturn()).&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Filtered Mode&lt;/strong&gt;: More flexible, allowing the use of BPF (Berkeley Packet Filter) to define complex rules for system call filtering.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You can run a node with default seccomp profiling by starting the kubelet with the &lt;code&gt;--seccomp-default&lt;/code&gt; flag. The default profile provides a strong set of security defaults while maintaining the functionality of the workload.&lt;/p&gt;
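&lt;p&gt;The default profile can also be requested per workload through the pod's security context, for example:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Pod
metadata:
  name: seccomp-demo
spec:
  securityContext:
    seccompProfile:
      type: RuntimeDefault   # use the container runtime's default seccomp profile
  containers:
  - name: app
    image: nginx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;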




&lt;h2&gt;
  
  
  Policies
&lt;/h2&gt;

&lt;p&gt;Policies are configurations that manage other configurations or runtime behaviors. These can be used to manage network traffic, resource allocation, or consumption.&lt;/p&gt;

&lt;p&gt;They can be applied using one of the following methods:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Admission Controllers&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;  Admission controllers run in the API server and can validate or mutate API requests. They intercept requests to the API after the request is authenticated and authorized but before the object is persisted. They act only on requests that create, modify, or delete objects; reads are not intercepted.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;  &lt;strong&gt;Validating Admission Policies&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;  VAPs allow the creation of configurable validation checks using the declarative language CEL (Common Expression Language). For example, they can be used to deny the use of the 'latest' image tag.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;  &lt;strong&gt;Dynamic Admission Control&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;  Dynamic Admission Controllers run outside the API server as separate applications that receive webhook requests to perform validation or mutation of API requests. They can perform complex checks, including those that require other cluster resources or external data.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;  &lt;strong&gt;Kubelet Configurations&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;  Some kubelet configurations can act as policies, such as PID limits and reservations or Node Resource Managers.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;
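&lt;p&gt;As a hedged sketch of the Validating Admission Policy example mentioned above (all names are illustrative), a CEL rule denying the 'latest' image tag on Deployments might look like this; a ValidatingAdmissionPolicyBinding is additionally required to put the policy into effect:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicy
metadata:
  name: deny-latest-tag
spec:
  matchConstraints:
    resourceRules:
    - apiGroups:   ["apps"]
      apiVersions: ["v1"]
      operations:  ["CREATE", "UPDATE"]
      resources:   ["deployments"]
  validations:
  - expression: "object.spec.template.spec.containers.all(c, !c.image.endsWith(':latest'))"
    message: "Container images must not use the 'latest' tag."
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;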




&lt;h2&gt;
  
  
  Useful Commands
&lt;/h2&gt;

&lt;p&gt;Typing kubectl out in full quickly became repetitive, so I looked for a way to shorten it and came across shell aliases.&lt;/p&gt;

&lt;h3&gt;
  
  
  Creating an alias
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;  PowerShell:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;code&gt;New-Alias &amp;lt;desired-alias&amp;gt; &amp;lt;command&amp;gt;&lt;/code&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Bash:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;code&gt;alias desired-alias="command"&lt;/code&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Useful kubectl Commands
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;code&gt;kubectl cluster-info&lt;/code&gt; Prints info relating to the current or named cluster.&lt;/li&gt;
&lt;li&gt;  &lt;code&gt;kubectl expose deployment &amp;lt;name&amp;gt; --type=&amp;lt;service_type&amp;gt;&lt;/code&gt; Creates a service.&lt;/li&gt;
&lt;li&gt;  &lt;code&gt;kubectl get &amp;lt;resource&amp;gt;&lt;/code&gt; Lists all resources of a given type.&lt;/li&gt;
&lt;li&gt;  &lt;code&gt;kubectl describe &amp;lt;resource&amp;gt;&lt;/code&gt; Shows details of a resource or group of resources.&lt;/li&gt;
&lt;li&gt;  &lt;code&gt;kubectl create deployment &amp;lt;name&amp;gt; --image=&amp;lt;image&amp;gt;&lt;/code&gt; Creates a deployment.&lt;/li&gt;
&lt;li&gt;  &lt;code&gt;kubectl apply -f &amp;lt;path_to_yaml_file&amp;gt;&lt;/code&gt; Applies a manifest file, creating or updating the resources it defines.&lt;/li&gt;
&lt;li&gt;  &lt;code&gt;kubectl edit &amp;lt;resource_type&amp;gt; &amp;lt;resource&amp;gt;&lt;/code&gt; Edits a resource.&lt;/li&gt;
&lt;li&gt;  &lt;code&gt;kubectl port-forward &amp;lt;type/name&amp;gt; &amp;lt;local_port&amp;gt;:&amp;lt;pod_port&amp;gt;&lt;/code&gt; Forwards a local port to a port on the pod.&lt;/li&gt;
&lt;li&gt;  &lt;code&gt;kubectl logs &amp;lt;pod_name&amp;gt;&lt;/code&gt; Gets logs from a container in a pod.&lt;/li&gt;
&lt;li&gt;  &lt;code&gt;kubectl exec -ti &amp;lt;pod_name&amp;gt; -- &amp;lt;command&amp;gt;&lt;/code&gt; Executes a command within a container.

&lt;ul&gt;
&lt;li&gt;  &lt;code&gt;-t (TTY)&lt;/code&gt; Allocates a pseudo-TTY, allowing interaction.&lt;/li&gt;
&lt;li&gt;  &lt;code&gt;-i (interactive)&lt;/code&gt; Keeps the standard input open.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;  &lt;code&gt;kubectl label &amp;lt;resource_type&amp;gt; &amp;lt;resource&amp;gt; &amp;lt;label&amp;gt;&lt;/code&gt; Adds a label to a resource.&lt;/li&gt;

&lt;/ul&gt;

&lt;h2&gt;
  
  
  Final Notes
&lt;/h2&gt;

&lt;p&gt;As of this post, aside from the Minikube examples, I have yet to deploy my own application on Kubernetes. However, based on what I have learned over the past few months, I can see how straightforward the process can be. I now understand why Kubernetes is such a popular tool for modern application deployment, given its automated node management, self-healing capabilities, and robust networking features. Additionally, I see how easily Kubernetes integrates with CI/CD tools and Infrastructure as Code (IaC) practices, and as such I plan to deploy a previous project on Kubernetes.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Static Hosting on S3 w/ Cloudfront Distribution.</title>
      <dc:creator>Nikolai Main</dc:creator>
      <pubDate>Thu, 24 Oct 2024 09:49:24 +0000</pubDate>
      <link>https://dev.to/neakoh/static-hosting-on-s3-w-cloudfront-distribution-j1k</link>
      <guid>https://dev.to/neakoh/static-hosting-on-s3-w-cloudfront-distribution-j1k</guid>
      <description>&lt;p&gt;AWS offers a free and easy solution for hosting static sites. Possible uses for a static site include personal blogs, portfolio sites, or even as a disaster recovery option - If your main service goes down, you can automatically route users to a static site that explains the situation.&lt;/p&gt;

&lt;p&gt;The process of creating a static site is straightforward and can be integrated with CloudFront, a caching service that speeds up retrieval by storing website data at edge locations closer to your users. Additionally, Amazon's free TLS certificate service, ACM, can be implemented to provide HTTPS validation for the site.&lt;/p&gt;

&lt;h2&gt;
  
  
  Components Used
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Route 53&lt;/strong&gt; - DNS service&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Cloudfront&lt;/strong&gt; - Caching service&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Certificate manager&lt;/strong&gt; - Provides HTTPS certificates&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;S3&lt;/strong&gt; - Bucket storage.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Lambda&lt;/strong&gt; (Optional) - Create a Lambda function to invalidate the cloudfront cache each time you make changes to your S3 bucket&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcodahosted.io%2Fdocs%2FFWvqC2buDc%2Fblobs%2Fbl-CDbNPf_4-V%2Ffc0b927d986a0f78dcd7522cd6ebcdcd18b28f04206aae5cd8833f85f7c7a7f6c14404acecb2db8bcac4bf74e60fc6aac59d2b3ae36a61c8f48f0e662b67eb4e6739a48b7d6a47006091351b9772d68706e36b6461f90677dd2d073df873c485e749726a" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcodahosted.io%2Fdocs%2FFWvqC2buDc%2Fblobs%2Fbl-CDbNPf_4-V%2Ffc0b927d986a0f78dcd7522cd6ebcdcd18b28f04206aae5cd8833f85f7c7a7f6c14404acecb2db8bcac4bf74e60fc6aac59d2b3ae36a61c8f48f0e662b67eb4e6739a48b7d6a47006091351b9772d68706e36b6461f90677dd2d073df873c485e749726a" alt="Blank diagram.jpeg" width="800" height="347"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Data flow
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt; User makes a request to &lt;em&gt;yourdomainname.com&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt; Route53 resolves the request and passes it to CloudFront.&lt;/li&gt;
&lt;li&gt; CloudFront redirects any HTTP traffic to HTTPS and returns the TLS certificate for validation.&lt;/li&gt;
&lt;li&gt; If nothing is cached in CloudFront - or your caching settings require it - the S3 bucket is queried and the website content is retrieved.&lt;/li&gt;
&lt;li&gt; Content is sent to user.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Creation Process
&lt;/h2&gt;

&lt;h3&gt;
  
  
  S3
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt; Create new S3 bucket.&lt;/li&gt;
&lt;li&gt; Turn 'block public access' off.&lt;/li&gt;
&lt;li&gt; Under object ownership enable ACLs.&lt;/li&gt;
&lt;li&gt; Go to your bucket’s properties and scroll all the way to the bottom &amp;gt;  Enable static hosting.

&lt;ul&gt;
&lt;li&gt;&lt;em&gt;Make sure you set your index document properly, ensuring it’s named correctly and isn't nested in any folders.&lt;/em&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt; When uploading your website, make sure to enable public access for all necessary files.&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Route 53
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt; Register a new domain with Route53&lt;/li&gt;
&lt;li&gt; Create hosted zone

&lt;ul&gt;
&lt;li&gt; Name the hosted zone with your domain name.&lt;/li&gt;
&lt;li&gt; Ensure the allocated NS (name server) records match the registrar's.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcodahosted.io%2Fdocs%2FFWvqC2buDc%2Fblobs%2Fbl-aLIfVFERBZ%2F637d52c249b6e04cb1b02ecbf350d487dc3b793c895428d831839c40d5a7d4e9e6f986808cde60fd72524234a85929dee827370f91769fd3aaef88e43f64ea109312227ade1ab93f3690b830e795483e15c794e14c8855f43b02b91d77dfab4c4c50179b" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcodahosted.io%2Fdocs%2FFWvqC2buDc%2Fblobs%2Fbl-aLIfVFERBZ%2F637d52c249b6e04cb1b02ecbf350d487dc3b793c895428d831839c40d5a7d4e9e6f986808cde60fd72524234a85929dee827370f91769fd3aaef88e43f64ea109312227ade1ab93f3690b830e795483e15c794e14c8855f43b02b91d77dfab4c4c50179b" alt="image.png" width="800" height="224"&gt;&lt;/a&gt;&lt;em&gt;Domain Registrar&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcodahosted.io%2Fdocs%2FFWvqC2buDc%2Fblobs%2Fbl-astPxd2f-7%2F61307d24cee4d1df9ced75d73f57ed5ad719a9af13b10e1cda376e49a7936d01a1460318705b9191fe53f1cc7abbdb2aa5fddd4230b19b452c63d67659122cc353185559d2af7f565462d682849e4507fbf1c1344bda53178dc8c86cc6d606cf60ff6f80" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcodahosted.io%2Fdocs%2FFWvqC2buDc%2Fblobs%2Fbl-astPxd2f-7%2F61307d24cee4d1df9ced75d73f57ed5ad719a9af13b10e1cda376e49a7936d01a1460318705b9191fe53f1cc7abbdb2aa5fddd4230b19b452c63d67659122cc353185559d2af7f565462d682849e4507fbf1c1344bda53178dc8c86cc6d606cf60ff6f80" alt="image.png" width="800" height="398"&gt;&lt;/a&gt;&lt;em&gt;Hosted Zone&lt;/em&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  ACM
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt; Request certificate from ACM - &lt;em&gt;When requesting a certificate from ACM, make sure your region is set to N. Virginia (us-east-1). This is important because CloudFront is a global service and only accepts certificates issued in us-east-1.&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt; Validate your certificate by creating records in your Route 53 hosted zone.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcodahosted.io%2Fdocs%2FFWvqC2buDc%2Fblobs%2Fbl-r-pldYnGb_%2F54f0b23519bb24019b3a070b2912ba5fc562d4311d4e41dc4390c1c86804598e96c39221273e02e736b49aad19003c4abc4e4eaa63ee8e80cd7d9a00019ea6e874d922776ca4a7c6b980051243928b05ac5d409e82a7dff69d25d84b725a0b7fb6179b7b" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcodahosted.io%2Fdocs%2FFWvqC2buDc%2Fblobs%2Fbl-r-pldYnGb_%2F54f0b23519bb24019b3a070b2912ba5fc562d4311d4e41dc4390c1c86804598e96c39221273e02e736b49aad19003c4abc4e4eaa63ee8e80cd7d9a00019ea6e874d922776ca4a7c6b980051243928b05ac5d409e82a7dff69d25d84b725a0b7fb6179b7b" alt="image.png" width="800" height="165"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Cloudfront
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt; Create CloudFront distribution&lt;/li&gt;
&lt;li&gt; Link S3 bucket - Use as website endpoint&lt;/li&gt;
&lt;li&gt; Redirect HTTP to HTTPS.&lt;/li&gt;
&lt;li&gt; Link the certificate you created in ACM.&lt;/li&gt;
&lt;li&gt; Once created go into settings and add alternate name &lt;em&gt;yourdomainname.com&lt;/em&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcodahosted.io%2Fdocs%2FFWvqC2buDc%2Fblobs%2Fbl-wT5w2QTP26%2Fdf316ef140a48b0f2c9718ecdecdba95b2ff4e0cffbba0c3ce634d8fdab065b0bac4760fd6c32baba0d80700c338c1459f2bb221779f74869c764485d938b12f9db7daa7e71131c7d41343e8a1fa7f79254a528990a7070888a1ddde6ec304b47c0ab03d" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcodahosted.io%2Fdocs%2FFWvqC2buDc%2Fblobs%2Fbl-wT5w2QTP26%2Fdf316ef140a48b0f2c9718ecdecdba95b2ff4e0cffbba0c3ce634d8fdab065b0bac4760fd6c32baba0d80700c338c1459f2bb221779f74869c764485d938b12f9db7daa7e71131c7d41343e8a1fa7f79254a528990a7070888a1ddde6ec304b47c0ab03d" alt="image.png" width="800" height="170"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Finally, go back to Route 53 and create an A record aliased to your CloudFront distribution.&lt;/p&gt;




&lt;h3&gt;
  
  
  Lambda (Optional)
&lt;/h3&gt;

&lt;p&gt;Create a Lambda function to automatically invalidate (refresh) your CloudFront cache whenever there is a change in your S3 bucket. This is useful regardless of what you use the static site for, but particularly so if you're hosting a blog where you're regularly changing the site.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; Create a Lambda function with your S3 bucket as the trigger - specifically 'all object create events' and 'all object delete events'.&lt;/li&gt;
&lt;li&gt; Go to IAM, find the role that was created, and grant it access to S3 and CloudFront.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const AWS = require('aws-sdk');
const cloudfront = new AWS.CloudFront();
const DISTRIBUTION_ID = '&amp;lt;yourcloudfrontdistributionid&amp;gt;';
exports.handler = async (event) =&amp;gt; {
    console.log('Event:', JSON.stringify(event, null, 2));
    const params = {
        DistributionId: DISTRIBUTION_ID,
        InvalidationBatch: {
            Paths: {
                Quantity: 1,
                Items: ['/*'], // Invalidates all objects in CloudFront
            },
            CallerReference: `${Date.now()}`,
        },
    };
    try {
        const data = await cloudfront.createInvalidation(params).promise();
        console.log('Invalidation request sent:', data);
        return {
            statusCode: 200,
            body: JSON.stringify({
                message: 'Invalidation request submitted successfully.',
                invalidationId: data.Invalidation.Id
            }),
        };
    } catch (err) {
        console.error('Error invalidating cache:', err);
        return {
            statusCode: 500,
            body: JSON.stringify({
                message: 'Error invalidating cache.',
                error: err.message
            }),
        };
    }
};
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
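&lt;p&gt;One caveat: the Node.js 18 Lambda runtime bundles only AWS SDK v3, so the v2-style &lt;code&gt;require('aws-sdk')&lt;/code&gt; above requires packaging the v2 SDK with the function. A rough equivalent using SDK v3 (a sketch, not a drop-in replacement; here the distribution ID is assumed to come from an environment variable) might look like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const { CloudFrontClient, CreateInvalidationCommand } = require('@aws-sdk/client-cloudfront');
const cloudfront = new CloudFrontClient({});

exports.handler = async () =&gt; {
    const command = new CreateInvalidationCommand({
        DistributionId: process.env.DISTRIBUTION_ID, // set in the Lambda configuration
        InvalidationBatch: {
            Paths: { Quantity: 1, Items: ['/*'] },   // invalidate everything
            CallerReference: `${Date.now()}`,
        },
    });
    const data = await cloudfront.send(command);
    return { statusCode: 200, invalidationId: data.Invalidation.Id };
};
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;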



&lt;h2&gt;
  
  
  Final Notes
&lt;/h2&gt;

&lt;p&gt;Overall, setting up a static site on AWS using S3 provides a simple, robust, and efficient hosting solution. This configuration ensures high availability and fast content delivery through CloudFront's global network, while ACM secures your site with HTTPS. Automating cache invalidation with a Lambda function keeps your content up-to-date without manual intervention, making it ideal for frequently updated sites. This may not be the most complex of setups in AWS, but it is a useful one nonetheless.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>development</category>
    </item>
    <item>
      <title>Building a Full-Stack Application with Docker Compose</title>
      <dc:creator>Nikolai Main</dc:creator>
      <pubDate>Sat, 19 Oct 2024 09:25:59 +0000</pubDate>
      <link>https://dev.to/neakoh/building-a-full-stack-application-with-docker-compose-h58</link>
      <guid>https://dev.to/neakoh/building-a-full-stack-application-with-docker-compose-h58</guid>
      <description>&lt;p&gt;In this project, I created a full-stack container setup using Docker Compose, an orchestration tool that allows you to define multi-container deployments using YAML files. This not only highlighted Docker Compose's utility but also introduced me to Express and React.&lt;/p&gt;

&lt;p&gt;I defined a frontend application built with React, a backend API with Express.js, and a PostgreSQL database. While I haven't fully grasped how this setup is beneficial in a production environment, it's brilliant for developing an application.&lt;/p&gt;

&lt;p&gt;For context, the application is a company directory that provides create, read, update, and delete functionality for editing personnel, departments, and locations.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffkkccpy1780gxwwd8qzh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffkkccpy1780gxwwd8qzh.png" alt="Docker Compose" width="793" height="295"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h3&gt;
  
  
  Tech Used
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;  Docker&lt;/li&gt;
&lt;li&gt;  Postgres&lt;/li&gt;
&lt;li&gt;  Javascript

&lt;ul&gt;
&lt;li&gt;  Express.js&lt;/li&gt;
&lt;li&gt;  React&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;h3&gt;
  
  
  Useful Commands
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;General:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;code&gt;docker logs &amp;lt;container name&amp;gt;&lt;/code&gt; Display current logs for a given container.&lt;/li&gt;
&lt;li&gt;  &lt;code&gt;-f: "follow"&lt;/code&gt; print logs as they are received.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Compose Specific:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;code&gt;docker compose up/down&lt;/code&gt; Run/Stop the Docker compose file&lt;/li&gt;
&lt;li&gt;  &lt;code&gt;-v:&lt;/code&gt; used with compose down - destroys volume as well&lt;/li&gt;
&lt;li&gt;  &lt;code&gt;--build&lt;/code&gt; used with compose up - rebuilds containers&lt;/li&gt;
&lt;li&gt;  &lt;code&gt;docker compose watch&lt;/code&gt; build containers in 'watch mode'&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Database
&lt;/h2&gt;

&lt;p&gt;I first created a database with PostgreSQL which turned out to be a relatively straightforward process. &lt;/p&gt;

&lt;p&gt;Relevant notes with regards to the volume section:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt; The first line defines a &lt;strong&gt;volume&lt;/strong&gt; - a location to store persistent data. If a volume named &lt;strong&gt;db_data&lt;/strong&gt; doesn't exist, it will be created, and any data written to /var/lib/postgresql/data - the location within the container that stores PostgreSQL data - is then persisted to the db_data volume.&lt;/li&gt;
&lt;li&gt; The second line defines the location of an init.sql file, which contains a set of initial queries to populate the database.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
  db:

    image: postgres:latest

    container_name: my-postgres-db

    environment:

      POSTGRES_USER: myuser

      POSTGRES_PASSWORD: mypassword

      POSTGRES_DB: mydatabase

    ports:

      - "5433:5432"

    volumes:

      - db_data:/var/lib/postgresql/data

      - ./database/init:/docker-entrypoint-initdb.d/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To make sure everything loaded correctly, I used &lt;code&gt;docker logs my-postgres-db&lt;/code&gt; to check the container logs. Once I saw that there were no errors, I logged into the database with &lt;strong&gt;psql&lt;/strong&gt; to verify everything was there and ran a few SELECT queries to confirm.&lt;/p&gt;




&lt;h2&gt;
  
  
  Backend
&lt;/h2&gt;

&lt;p&gt;Next, I created the backend with Express.js. Although it's a relatively new topic for me, it was easy enough to pick up at a basic level. My Compose service looked like this:&lt;/p&gt;

&lt;p&gt;Relevant notes: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  The &lt;strong&gt;context&lt;/strong&gt; section of the service tells Docker where to find the Dockerfile necessary in building the container image.&lt;/li&gt;
&lt;li&gt;  I used Compose's &lt;strong&gt;watch&lt;/strong&gt; feature, which is similar to nodemon for Node.js applications - It automatically rebuilds the container when code changes are detected, which is very useful in development.&lt;/li&gt;
&lt;li&gt;  The two actions defined in the watch section check for any changes to my index.js file which contains all of my http methods and my package.json file to check if any new modules are installed.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
    build:

      context: ./api

    container_name: my-express-app

    ports:

      - "4000:4000"

    depends_on:

      - db

    develop:

      watch:

        - action: rebuild

          path: ./api

          target: index.js

          ignore:

            - node_modules/

        - action: rebuild

          path: package.json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The Dockerfile for the Express.js app looks like this. In essence, it grabs all the relevant content from my directory, installs necessary dependencies, and, once running, executes node index.js to start the Express.js server.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM node:18

WORKDIR /usr/src/app

COPY package*.json ./

RUN npm install

COPY . .

EXPOSE 3000

CMD ["node", "index.js"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With the database and a means to communicate with it set up, I began creating the ~12 methods required to make the application function correctly. To aid in testing the API's functionality, I opened two command lines: one to run &lt;strong&gt;curl&lt;/strong&gt; commands and another to view the &lt;strong&gt;container logs&lt;/strong&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  Frontend
&lt;/h2&gt;

&lt;p&gt;Having defined the backend and the database, all that was left was the frontend. I used React for this, mainly because I had heard of it and was aware of how common it is in modern frontend development, but had never experimented with it. Very quickly I realized how easy the whole process is and how useful it is for development. Running &lt;code&gt;npx create-react-app my-app&lt;/code&gt; gets you up and running with a basic template and a server within minutes.&lt;/p&gt;

&lt;p&gt;Initially, I planned to define the whole environment with Docker, but it was just as convenient to keep running React separately. Had I used Docker, however, I would have defined a Compose service similar to my API's, using watch statements to automate refresh on change.&lt;/p&gt;

&lt;p&gt;Nevertheless, I created the frontend - a very simple UI with a series of controls to switch between several tables in the database, as well as provide CRUD functionality.&lt;/p&gt;

&lt;p&gt;The Dockerfile for my frontend looks like this:&lt;/p&gt;

&lt;p&gt;Relevant notes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  This Dockerfile consists of two stages. First, the build stage, where the application image is compiled with the necessary components. Second, creating the image that will host the application with Nginx.&lt;/li&gt;
&lt;li&gt;  The reason for doing it in two stages is that it results in a smaller final image, as it only contains the server configuration and static files.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM node:18 AS build

WORKDIR /app

COPY package*.json ./

RUN npm install

COPY . .

RUN npm run build

# Stage 2: Serve the application with Nginx

FROM nginx:alpine

COPY --from=build /app/build /usr/share/nginx/html

EXPOSE 80

CMD ["nginx", "-g", "daemon off;"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For completeness' sake, I did add the frontend to my Compose file, allowing the full-stack application to be deployed with a single command. Due to network constraints, I never went as far as making this accessible from the internet, but that is an equally simple process.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
    build:

      context: ./frontend

    container_name: frontend-app

    ports:

      - "80:80"

    depends_on:

      - api
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
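&lt;p&gt;Putting it all together, the overall Compose file follows this shape - note the top-level &lt;strong&gt;volumes&lt;/strong&gt; key, which declares the named db_data volume used by the database service (service bodies abbreviated to the snippets shown above):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;services:
  db:
    # ...as shown in the Database section
  api:
    # ...as shown in the Backend section
  frontend:
    # ...as shown in the Frontend section
volumes:
  db_data:
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;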






&lt;h2&gt;
  
  
  Final Notes
&lt;/h2&gt;

&lt;p&gt;While I can't see this setup being too useful in a production environment, it's certainly useful for development. Having the ability to spin up and tear down your entire development environment with a single command is incredibly convenient. I've yet to really dip my toes into Kubernetes, but from my limited research, I imagine it is a much better option for providing production environments for multi-container deployments.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Understanding Docker</title>
      <dc:creator>Nikolai Main</dc:creator>
      <pubDate>Wed, 09 Oct 2024 19:03:05 +0000</pubDate>
      <link>https://dev.to/neakoh/understanding-docker-28jg</link>
      <guid>https://dev.to/neakoh/understanding-docker-28jg</guid>
      <description>&lt;p&gt;Docker is a powerful containerization service that enables the running of multiple applications on the same operating system, each in logically isolated environments. This isolation allows applications to operate independently without interfering with one another, ensuring that they have their own dependencies, libraries, and configurations. This architecture provides developers with numerous benefits, including:&lt;/p&gt;

&lt;h3&gt;
  
  
  Key Benefits of Docker
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Consistency&lt;/strong&gt;: Applications run in a pre-defined environment specified in your image configuration, maintaining uniformity across development, testing, and production stages. This significantly reduces the "it works on my machine" problem.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Scalability&lt;/strong&gt;: Applications can be easily scaled by adding or removing containers as needed. In modern microservices architecture, this is particularly beneficial, as individual components can scale independently based on demand.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;CI/CD Integration&lt;/strong&gt;: The ease of stopping and starting containers facilitates rapid deployment and seamless integration into Continuous Integration/Continuous Deployment (CI/CD) pipelines, allowing for faster iterations and updates to applications.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Docker provides several features covering image creation, storage options, and container orchestration, all of which will be discussed in this post.&lt;/p&gt;

&lt;h2&gt;
  
  
  Image Creation
&lt;/h2&gt;

&lt;p&gt;A Dockerfile contains the instructions that tell Docker how to build your image. A simple Dockerfile for a Node.js app might look like this (the exact &lt;code&gt;RUN&lt;/code&gt; and &lt;code&gt;CMD&lt;/code&gt; arguments will vary by project):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM node:18-alpine
WORKDIR /app
COPY . .
RUN npm install
CMD ["node", "app.js"]
EXPOSE 3000
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  Breakdown of the Dockerfile
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;FROM node:18-alpine&lt;/strong&gt;: This defines the base image for your application. It uses a Node.js image with Node.js pre-installed, version 18, based on the Alpine Linux distribution, which provides a smaller image size.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;WORKDIR /app&lt;/strong&gt;: This sets the working directory inside the container to /app. If the directory does not exist, Docker will create it.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;COPY . .&lt;/strong&gt;: This command copies all files from your current directory on the host into the /app directory in the container.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;RUN&lt;/strong&gt; : This runs any necessary commands during the container build process, such as installing dependencies or compiling assets.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;CMD&lt;/strong&gt; : This defines the command that will be executed when the container starts.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;EXPOSE 3000&lt;/strong&gt;: This declares that the application will be listening on port 3000, allowing other services or users to connect to it.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  Building the Docker Image
&lt;/h3&gt;

&lt;p&gt;To build the Docker image, use the following command:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;docker build -t &amp;lt;image-name&amp;gt; .&lt;/code&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;-t&lt;/strong&gt;: This tags your image with a name, making it easier to reference later.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;.&lt;/strong&gt;: This tells Docker to look for the Dockerfile in the current directory.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  Running the Docker Container
&lt;/h3&gt;

&lt;p&gt;To run the container, use the following command:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;docker run -dp 127.0.0.1:3000:3000 &amp;lt;image-name&amp;gt;&lt;/code&gt; &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;-d&lt;/strong&gt;: This runs the container in detached mode, allowing it to run in the background.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;-p&lt;/strong&gt;: This creates a port mapping between the host and the container in the format HOST:CONTAINER, publishing the container’s port 3000 to 127.0.0.1:3000 on the host.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  Stopping and Removing Containers
&lt;/h3&gt;

&lt;p&gt;Once you are finished with your application, you can either shut it down from the Docker UI or use the following commands:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;docker ps&lt;/strong&gt;: Lists all running containers (note the ID of the container you wish to stop).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;docker stop &amp;lt;container-id&amp;gt;&lt;/strong&gt;: Stops the specified container.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;docker rm &amp;lt;container-id&amp;gt;&lt;/strong&gt;: Removes the specified container.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  Storage
&lt;/h2&gt;

&lt;p&gt;Due to the ephemeral nature of containers, all data within a container is lost once it is shut down. This can be problematic for applications that require persistent data, such as databases. Docker offers two types of storage options to address this issue:&lt;/p&gt;
&lt;h3&gt;
  
  
  Volumes
&lt;/h3&gt;

&lt;p&gt;Volumes are fully managed by Docker and are isolated from the host filesystem, making them inaccessible from the host unless explicitly mounted. They also provide better performance than bind mounts. Creating and mounting a volume is straightforward and involves just two commands:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;docker volume create &amp;lt;volume-name&amp;gt;&lt;/code&gt;&lt;br&gt;
&lt;code&gt;docker run -dp 127.0.0.1:3000:3000 --mount type=volume,src=&amp;lt;volume-name&amp;gt;,target=&amp;lt;container-path&amp;gt; &amp;lt;image-name&amp;gt;&lt;/code&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  &lt;strong&gt;Breaking Down the Docker Run Command&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;type&lt;/strong&gt;: Can be volume or bind.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;src&lt;/strong&gt;: The name of the volume you created.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;target&lt;/strong&gt;: Where the volume will be mounted. Any data written to this path will be saved in the volume.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
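
&lt;p&gt;For example, a hypothetical todo app (the image, volume, and path names here are purely illustrative) could persist its data like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker volume create todo-db
docker run -dp 127.0.0.1:3000:3000 --mount type=volume,src=todo-db,target=/etc/todos todo-app
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Any data the app writes under /etc/todos now survives container restarts and removals.&lt;/p&gt;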
&lt;h3&gt;
  
  
  Bind Mounts
&lt;/h3&gt;

&lt;p&gt;Creating a bind mount is just as simple. The following command is used:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;docker run -dp 127.0.0.1:3000:3000 --mount type=bind,src=&amp;lt;host-path&amp;gt;,target=&amp;lt;container-path&amp;gt; &amp;lt;image-name&amp;gt;&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;In this command, the mount type is “bind,” and the source is a directory path instead of a volume name. As changes are made at the target path inside the container, they are reflected almost instantly at the source path on the host, and vice versa. This functionality pairs well with development tools like Nodemon, which can listen for changes in the mounted directory and automatically restart the server it’s hosting.&lt;/p&gt;
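
&lt;p&gt;As a rough sketch (the image, paths, and entry point here are illustrative), a live-reload development container might mount the current directory and run the app under Nodemon:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker run -dp 127.0.0.1:3000:3000 \
  --mount type=bind,src="$(pwd)",target=/app \
  -w /app node:18-alpine \
  sh -c "npm install &amp;amp;&amp;amp; npx nodemon app.js"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Editing app.js on the host then restarts the server inside the container automatically.&lt;/p&gt;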
&lt;h2&gt;
  
  
  Orchestration
&lt;/h2&gt;

&lt;p&gt;Another feature Docker offers is container orchestration, which manages multi-container applications. In modern development, it’s common to see applications split into several “microservices,” which helps to decouple an otherwise large and cumbersome application.&lt;/p&gt;
&lt;h3&gt;
  
  
  Benefits of Microservices Architecture
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;No Single Point of Failure&lt;/strong&gt;: If one component of your application fails, the rest can continue running, enhancing the overall reliability of the application.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Better Organization&lt;/strong&gt;: Microservices help organize your infrastructure better, making it easier to manage and scale individual components.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  Docker Compose
&lt;/h3&gt;

&lt;p&gt;Docker provides orchestration capabilities through Docker Compose, which uses YAML files to define your containers and any storage options. A typical Compose file might look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;version: '3.8'
services:
  app:
    image: node:18-alpine
    command: sh -c "yarn install &amp;amp;&amp;amp; yarn run dev"
    ports:
      - 127.0.0.1:3000:3000
    working_dir: /app
    volumes:
      - ./:/app
    environment:
      MYSQL_HOST: mysql
      MYSQL_USER: root
      MYSQL_PASSWORD: secret
      MYSQL_DB: todos

  mysql:
    image: mysql:8.0
    volumes:
      - todo-mysql-data:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: secret
      MYSQL_DATABASE: todos

volumes:
  todo-mysql-data:   
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  &lt;strong&gt;Breaking Down the Compose File&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;services&lt;/strong&gt;: These are the containers you wish to deploy. In the example above, two containers are declared: one for the application and one for the database.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;volumes&lt;/strong&gt;: This defines the persistent storage used to capture database changes.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Once you’ve defined your Compose file, you can start everything with:&lt;br&gt;
&lt;code&gt;docker compose up&lt;/code&gt;&lt;br&gt;
And, similarly, shut everything down with:&lt;br&gt;
&lt;code&gt;docker compose down&lt;/code&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Additional Features of Docker Compose
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt; &lt;strong&gt;Watch&lt;/strong&gt;: Similar to bind mounts, Compose will ‘listen’ for changes in your application and update it as you make those changes.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;services:
  web:
    build: .
    command: npm start
    develop:
      watch:
        - action: sync
          path: ./web
          target: /src/web
          ignore:
            - node_modules/
        - action: rebuild
          path: package.json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the example above, whenever a change is detected in the ./web directory, Compose writes the change to the target directory at /src/web. Once everything has been copied, the application is updated.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt; &lt;strong&gt;Secrets&lt;/strong&gt;: To adhere to security best practices, you should never reveal secrets in plain text. Compose allows you to specify a secrets file and reference it instead of explicitly stating sensitive information in your configuration.&lt;/li&gt;
&lt;/ul&gt;
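
&lt;p&gt;A minimal sketch of Compose secrets (the file and secret names here are illustrative): the secret is read from a local file, mounted into the container at /run/secrets/db_password, and the MySQL image reads it from there via its _FILE environment variable:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;services:
  mysql:
    image: mysql:8.0
    environment:
      MYSQL_ROOT_PASSWORD_FILE: /run/secrets/db_password
    secrets:
      - db_password

secrets:
  db_password:
    file: ./db_password.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;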

&lt;h2&gt;
  
  
  Final Notes
&lt;/h2&gt;

&lt;p&gt;Docker is a powerful tool that simplifies the development, deployment, and management of applications through containerization. By leveraging Docker's features, developers can create consistent environments, scale applications efficiently, and integrate seamlessly into CI/CD workflows. Whether you're working on a small project or a large-scale microservices architecture, Docker provides the tools you need to streamline your development process.&lt;/p&gt;

</description>
      <category>docker</category>
    </item>
    <item>
      <title>Optimizing AWS Infrastructure Deployment: Terraform, Sentinel, and CI/CD Best Practices</title>
      <dc:creator>Nikolai Main</dc:creator>
      <pubDate>Sun, 06 Oct 2024 11:00:39 +0000</pubDate>
      <link>https://dev.to/neakoh/optimizing-aws-infrastructure-deployment-terraform-sentinel-and-cicd-best-practices-3lmi</link>
      <guid>https://dev.to/neakoh/optimizing-aws-infrastructure-deployment-terraform-sentinel-and-cicd-best-practices-3lmi</guid>
      <description>&lt;p&gt;This project follows on from a previous post where I built AWS infrastructure solely in the AWS console. In it i cover the following topics:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Centralized Terraform state management&lt;/li&gt;
&lt;li&gt;  Terraform code validation with Sentinel&lt;/li&gt;
&lt;li&gt;  CI/CD Pipeline deployment&lt;/li&gt;
&lt;li&gt;  AWS Infrastructure&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Less focus is placed on the actual application design, which may be covered in a later post.&lt;/p&gt;

&lt;h2&gt;
  
  
  Project Overview
&lt;/h2&gt;

&lt;p&gt;In my initial project, I spent about an hour building the infrastructure and quickly realized how easy it is to make even minor mistakes that can lead to system failures. This often resulted in spending an additional 10 minutes here and there, sifting through each component to identify the error.&lt;/p&gt;

&lt;p&gt;Recognizing this challenge in a relatively small project made me acutely aware of the potential headaches that could arise when managing larger systems.&lt;/p&gt;

&lt;p&gt;To address this issue, I turned to Terraform. I dedicated a similar amount of time — approximately 1-2 hours — to define my infrastructure. However, the benefits were substantial: instead of spending 1-2 hours each time I needed to deploy, I can now get my entire infrastructure up and running in about 10 minutes, with a comparable teardown time.&lt;/p&gt;

&lt;p&gt;This improvement effectively reduced my deployment time by approximately 50 minutes. Additionally, I can confidently assert that my application and infrastructure are secure, thanks to the comprehensive scans conducted prior to deployment:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Infrastructure Validation: My infrastructure is validated and checked with Sentinel in my cloud workspace. If any misconfigurations—such as poor naming and tagging conventions, overly permissive IAM policies, or insecure VPC designs—are present in my Infrastructure as Code (IaC), the run will fail, and I will be notified of the necessary changes.&lt;/li&gt;
&lt;li&gt;  Application Security Scans: For my application, I utilize GitLab's built-in suite of security tools to scan for code and dependency vulnerabilities, as well as exposed secrets. If GitLab isn’t an option, there are several other security scanning tools available, such as CodeQL, SonarQube, and Trivy. Once the application image is built, it undergoes an additional scan with Trivy to ensure its security.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Infrastructure Overview
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcodahosted.io%2Fdocs%2FFWvqC2buDc%2Fblobs%2Fbl-OgiOHqaP0m%2Fd5ffef614b7e4932ef7ae6ee173fffe874f68a2d101cc966ae451aa7a71893aeb9b07bcdb065f8ce5de24e3329d88cb3e3aed9f0e563b33d265aaeddedafd9cdcb907a2d761d2a61b4118d330db41b5800703d6d1c9544c599c962041095fd5637f28014" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcodahosted.io%2Fdocs%2FFWvqC2buDc%2Fblobs%2Fbl-OgiOHqaP0m%2Fd5ffef614b7e4932ef7ae6ee173fffe874f68a2d101cc966ae451aa7a71893aeb9b07bcdb065f8ce5de24e3329d88cb3e3aed9f0e563b33d265aaeddedafd9cdcb907a2d761d2a61b4118d330db41b5800703d6d1c9544c599c962041095fd5637f28014" alt="Blank diagram (1).jpeg" width="800" height="781"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Frontend Infrastructure (Repo 1)
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;  ECR (Elastic Container Registry)&lt;/li&gt;
&lt;li&gt;  ECS (Elastic Container Service)&lt;/li&gt;
&lt;li&gt;  Application Load Balancer&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Backend Infrastructure (Repo 2)
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;  VPC (Virtual Private Cloud)&lt;/li&gt;
&lt;li&gt;  RDS (Relational Database Service)&lt;/li&gt;
&lt;li&gt;  API Gateway&lt;/li&gt;
&lt;li&gt;  AWS Lambda&lt;/li&gt;
&lt;li&gt;  Secrets Manager&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Security Checks and Scans
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Pipeline Scans
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt; Secret Detection&lt;/li&gt;
&lt;li&gt; SAST (Static Application Security Testing) Scanning&lt;/li&gt;
&lt;li&gt; Dependency Scanning&lt;/li&gt;
&lt;li&gt; SCA (Software Composition Analysis) Scanning&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Sentinel Scans
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt; Appropriate IAM Permissions&lt;/li&gt;
&lt;li&gt; General Configuration Checks&lt;/li&gt;
&lt;li&gt; VPC Traffic Flows&lt;/li&gt;
&lt;/ol&gt;




&lt;h2&gt;
  
  
  Deployment Workflow Overview
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcodahosted.io%2Fdocs%2FFWvqC2buDc%2Fblobs%2Fbl-pvMkObDNss%2F6ce892bc895ee8d6ee8efc755855efa77642f3b26d32ffaae26273c51a9effd390e4aae84a2ae7c2f63af9fefb518a05fe2c14794514601ff48acb2f6b35894e5ff372c30e2c3c292fd3ec6f6a7a26b2efa680867d1f5eb1651d3b6f2c6850a54e8250c1" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcodahosted.io%2Fdocs%2FFWvqC2buDc%2Fblobs%2Fbl-pvMkObDNss%2F6ce892bc895ee8d6ee8efc755855efa77642f3b26d32ffaae26273c51a9effd390e4aae84a2ae7c2f63af9fefb518a05fe2c14794514601ff48acb2f6b35894e5ff372c30e2c3c292fd3ec6f6a7a26b2efa680867d1f5eb1651d3b6f2c6850a54e8250c1" alt="Blank diagram (1).jpeg" width="800" height="333"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Workflow 1 (Backend Configuration)
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt; Backend code is pushed to GitLab.&lt;/li&gt;
&lt;li&gt; Terraform run triggered in cloud workspace.&lt;/li&gt;
&lt;li&gt; Sentinel policies check code for misconfigurations

&lt;ul&gt;
&lt;li&gt; &lt;strong&gt;VPC&lt;/strong&gt;: Naming conventions and private subnet config&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Security Groups&lt;/strong&gt;: Only allowing traffic over necessary ports&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Lambda&lt;/strong&gt;: IAM permissions and general config&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;RDS&lt;/strong&gt;: Check for encryption, Public Accessibility and Default credentials.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Secrets Manager&lt;/strong&gt;: Checks for secret rotation and read replicas&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt; Once validated, the infrastructure can be applied. Note the relevant outputs.

&lt;ul&gt;
&lt;li&gt; RDS Endpoint + Secret Name are needed for Lambda to work in this project. (I later came back to this and retrieved those outputs dynamically from within the Lambda function)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;h4&gt;
  
  
  Example Sentinel Policy - VPC Checks
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import "tfplan/v2" as tfplan
import "tfrun" as run
import "strings"

// Define variables

messages = \[\]
resource = "VPC"

// Define main function
checks = func() {
  if run.is\_destroy == true {
    return true
  }

  // Retrieve resource info
  vpc = filter tfplan.resource\_changes as \_, rc {
    rc.mode is "managed" and
    rc.type is "aws\_vpc"
  }
  subnet = filter tfplan.resource\_changes as \_, rc {
    rc.mode is "managed" and
    rc.type is "aws\_subnet"
  }

  // Checking if resource exists.
  if length(vpc) == 0 {
    append(messages, "No vpc found.")
  }
  if length(subnet) == 0 {
    append(messages, "No subnets found.")
  }

  // Iterate over subnets
  for subnet as address, subnet {
    // Check number of available addresses
    if int(strings.split(subnet.change.after.cidr\_block, "/")\[1\]) &amp;lt; 24{
      append(messages, (subnet.address + " CIDR prefix too large. Must be at least 24."))
    }
    if(strings.has\_prefix(subnet.address, "aws\_subnet.private")){

      // Check subnet CIDR block
      if subnet.change.after.cidr\_block == "0.0.0.0/0"{
        append(messages, "Subnet not private. Edit CIDR block")
      }

      // Check if subnet has a public IP enabled.
      if subnet.change.after.map\_public\_ip\_on\_launch == true{
       append(messages, "Subnet not private. Public IP enabled")
      }
    }
  }

  // Run VPC checks
  for vpc as address, vpc {

    // Check if requires\_compatibilities is set and includes "FARGATE"
    requires\_name = vpc.change.after.tags else \[\]

    // Check VPC name/tags
    if length(requires\_name) == 0 or requires\_name.Name == "main-vpc"{
      append(messages, "VPC must follow proper naming conventions. Current name: " + requires\_name.Name)
    }
  }

  // Checking if any error messages have been produced
  // If messages is empty, the policy returns True and passes.
  if length(messages) != 0 {
    print(resource + " misconfigurations:")
    counter = 1
   for messages as message{
     print(string(counter) + ". " + message)
      counter += 1
    }
    return false
  }
  return true
}

// Main rule
main = rule {
   checks()
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Workflow 2 (Frontend Configuration)
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt; Application code is developed on local machine and pushed to Gitlab.&lt;/li&gt;
&lt;li&gt; Pipeline is triggered (more details below)

&lt;ul&gt;
&lt;li&gt; Scan application code&lt;/li&gt;
&lt;li&gt; Build image, scan and push to ECR&lt;/li&gt;
&lt;li&gt; Retrieve relevant outputs from backend infrastructure&lt;/li&gt;
&lt;li&gt; Create TF_vars file and push back to GitLab&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt; 2nd Terraform workspace triggered by push to repo w/ tag&lt;/li&gt;
&lt;li&gt; Similar plan &amp;gt; sentinel scan &amp;gt; apply process takes place.&lt;/li&gt;
&lt;/ol&gt;

&lt;h4&gt;
  
  
  GitLab Pipeline
&lt;/h4&gt;

&lt;p&gt;&lt;strong&gt;Stage 1: Test - SAST, Dependency, Secrets etc..&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;image: docker:latest
services:
- docker:dind
variables:
  DOCKER\_HOST: tcp://docker:2375/
  DOCKER\_DRIVER: overlay2
  REPO\_NAME: gitlab-cicd

// Declaring the required GitLab scans.
include:
  - template: Jobs/Dependency-Scanning.gitlab-ci.yml
  - template: Jobs/SAST.gitlab-ci.yml
  - template: Jobs/Secret-Detection.gitlab-ci.yml

// All included templates run during 'test' stage.
stages:
  - test
  - build-image
  - fetch-terraform-outputs
  - update-terraform
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Stage 2: Build, Scan, Push&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;build:
  stage: build-image
  before\_script:
  - apk add --no-cache aws-cli
  - apk add --no-cache curl
  script:

  // Building Docker image
  - echo "Building Docker image..."
  - docker build -t $REPO\_NAME:latest .

  // Scanning Docker image with Trivy
  - echo "Running Trivy scan on Docker image"
  - curl -sSL https://raw.githubusercontent.com/aquasecurity/trivy/main/contrib/install.sh
    | sh -
  - export PATH=$PATH:$(pwd)/bin
  - trivy image --exit-code 0 --severity HIGH,CRITICAL $REPO\_NAME:latest || true
  - trivy image --format json --output trivy-results.json $REPO\_NAME:latest

  # Retrieving ECR repo credentials
  - echo "Logging in to Amazon ECR..."
  - aws ecr get-login-password --region $AWS\_DEFAULT\_REGION | docker login --username
    AWS --password-stdin $AWS\_ACCOUNT\_ID.dkr.ecr.$AWS\_DEFAULT\_REGION.amazonaws.com

  # Pushing Docker image to ECR
  - echo "Pushing Docker image to ECR..."
  - TIMESTAMP=$(date +%Y%m%d%H%M%S)
  - IMAGE\_TAG="$REPO\_NAME:$TIMESTAMP"
  - docker tag $REPO\_NAME:latest $AWS\_ACCOUNT\_ID.dkr.ecr.$AWS\_DEFAULT\_REGION.amazonaws.com/$IMAGE\_TAG
  - docker push $AWS\_ACCOUNT\_ID.dkr.ecr.$AWS\_DEFAULT\_REGION.amazonaws.com/$IMAGE\_TAG
  - echo "TF\_VAR\_image\_uri=$AWS\_ACCOUNT\_ID.dkr.ecr.$AWS\_DEFAULT\_REGION.amazonaws.com/$IMAGE\_TAG"
    &amp;gt;&amp;gt; build.env
  artifacts:
    paths:
    - build.env
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Stage 3: Fetch TF outputs&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;fetch-terraform-outputs:
  stage: fetch-terraform-outputs
  image: alpine:latest
  script:
  - apk add --no-cache curl jq
  - echo "Creating variables for specific outputs..."

  // Retrieving outputs via Terraform Cloud API
  - "curl -s -X GET \\\\\\n  \\"https://app.terraform.io/api/v2/workspaces/${HCP\_WORKSPACE\_ID}/current-state-version-outputs\\"
    \\\\\\n -H \\"Authorization: Bearer ${HCP\_TOKEN}\\" \\\\\\n  -H
    'Content-Type: application/vnd.api+json' | \\\\\\njq -r '.data\[\] | select(.attributes.name

    // Saving outputs as environment variables to be passed to the next stage.
    | test(\\"public\_subnet\_ids|alb-sg-id|container-sg-id|vpc\_id\\")) | \\n  if .attributes.name
    == \\"public\_subnet\_ids\\" then\\n    \\"PUBLIC\_SUBNET\_IDS=\\\\(.attributes.value)\\"\\n
    \\ elif .attributes.name == \\"alb-sg-id\\" then\\n    \\"ALB\_SG\_ID=\\\\(.attributes.value)\\"\\n
    \\ elif .attributes.name == \\"container-sg-id\\" then\\n    \\"CONTAINER\_SG\_ID=\\\\(.attributes.value)\\"\\n
    \\ elif .attributes.name == \\"vpc\_id\\" then\\n    \\"VPC\_ID=\\\\(.attributes.value)\\"\\n
    \\ else\\n    empty\\n  end' &amp;gt; terraform\_outputs.env\\n"
  artifacts:
    reports:
      dotenv: terraform\_outputs.env
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Stage 4: Update main.tf&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;update-terraform:
  stage: update-terraform
  image: alpine:latest
  dependencies:
  - build
  - fetch-terraform-outputs
  before\_script:
  - apk add --no-cache git
  - git config --global user.email "${USER\_EMAIL}"
  - git config --global user.name "${USER\_NAME}"
  script:
  - echo "Contents of current directory:"
  - ls -la
  - echo "Contents of build.env:"
  - cat build.env || echo "build.env not found"
  - echo "Contents of terraform\_outputs.env:"
  - cat terraform\_outputs.env || echo "terraform\_outputs.env not found"
  - export $(cat build.env | xargs)
  - export $(cat terraform\_outputs.env | xargs)
  - echo "Cloning repository..."
  - git clone https://&amp;lt;username&amp;gt;:${GITLAB\_PAT}@gitlab.com/&amp;lt;project_id&amp;gt;/&amp;lt;repo.git&amp;gt; || exit
    1
  - cd Test

  // Create TF\_vars file
  - echo "Creating/Updating TF\_vars file..."
  - |
    cat &amp;lt;&amp;lt; EOF &amp;gt; terraform.tfvars
    image\_uri = "${TF\_VAR\_image\_uri}"
    public\_subnet\_ids = ${PUBLIC\_SUBNET\_IDS}
    alb\_sg\_id = "${ALB\_SG\_ID}"
    container\_sg\_id = "${CONTAINER\_SG\_ID}"
    vpc\_id = "${VPC\_ID}"
    EOF

  // Commit and push TF\_vars to repo
  - git add terraform.tfvars
  - git commit -m "Update image URI and Terraform outputs in TF\_vars \[ci skip\]" ||
    echo "No changes to commit"
  - TAG\_NAME="$(date +%Y.%m.%d-%H%M%S)"
  - echo "Creating a new tag $TAG\_NAME"

  // Creating a tag to trigger TF cloud only from pushes from this pipeline.
  // 'ci skip' tells the repo not to run the pipeline again on this push. 
  - git tag -a $TAG\_NAME -m "Release version $TAG\_NAME \[ci skip\]"
  - git push origin HEAD:main --tags || exit 1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Final Notes
&lt;/h2&gt;

&lt;p&gt;In conclusion, I now have an end-to-end deployment solution that ensures my application is both secure and robust. This streamlined process has significantly reduced my mean time to deployment, allowing me to reallocate time and resources to other areas.&lt;/p&gt;

&lt;p&gt;By identifying potential issues much earlier in the deployment process, I can mitigate risks that previously led to delays and unnecessary costs. This proactive approach not only enhances the overall efficiency of my development cycle but also improves the quality of my releases.&lt;/p&gt;

&lt;p&gt;Looking ahead, I plan to introduce additional security testing, incorporate dedicated testing and production environments, and integrate a monitoring tool such as Grafana.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>terraform</category>
      <category>gitlab</category>
      <category>policy</category>
    </item>
    <item>
      <title>Connecting to an RDS instance via Bastion Host</title>
      <dc:creator>Nikolai Main</dc:creator>
      <pubDate>Fri, 27 Sep 2024 12:11:14 +0000</pubDate>
      <link>https://dev.to/neakoh/connecting-to-an-rds-instance-via-bastion-host-20mo</link>
      <guid>https://dev.to/neakoh/connecting-to-an-rds-instance-via-bastion-host-20mo</guid>
      <description>&lt;p&gt;In keeping with security best practices, databases should always remain private and isolated from the internet. However, for development purposes, accessing your database - such as an RDS instance - can be necessary. Connecting through a Bastion Host provides a secure way to establish this connection.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq6hxqvhaojp3pht39c3k.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq6hxqvhaojp3pht39c3k.jpeg" alt="Network diagram of Bastion connection" width="800" height="352"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;The tools required for this are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;An AWS account.&lt;/li&gt;
&lt;li&gt;Powershell (or other command line shell)&lt;/li&gt;
&lt;li&gt;MySQL Workbench (or other visual database interaction software)&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Create a VPC
&lt;/h2&gt;

&lt;p&gt;Create a new VPC with both a private and public subnet and while in the VPC dashboard create 2 security groups:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;One for your EC2 instance: Allow inbound on port 22 (SSH) from your IP.&lt;/li&gt;
&lt;li&gt;One for your RDS instance: Allow 3306 (MySQL) from the EC2 security group.
Go back into the EC2 group and allow outbound on 3306 to the RDS security group.&lt;/li&gt;
&lt;/ul&gt;
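
&lt;p&gt;As a rough sketch, the same two groups could also be created with the AWS CLI (the IDs and CIDR below are placeholders for your own values):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Bastion group: SSH in from your IP only
aws ec2 create-security-group --group-name bastion-sg --description "Bastion SSH" --vpc-id &amp;lt;vpc-id&amp;gt;
aws ec2 authorize-security-group-ingress --group-id &amp;lt;bastion-sg-id&amp;gt; --protocol tcp --port 22 --cidr &amp;lt;your-ip&amp;gt;/32

# RDS group: MySQL in from the bastion group only
aws ec2 create-security-group --group-name rds-sg --description "RDS MySQL" --vpc-id &amp;lt;vpc-id&amp;gt;
aws ec2 authorize-security-group-ingress --group-id &amp;lt;rds-sg-id&amp;gt; --protocol tcp --port 3306 --source-group &amp;lt;bastion-sg-id&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;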

&lt;h2&gt;
  
  
  Create an EC2 instance
&lt;/h2&gt;

&lt;p&gt;Free tier options are sufficient here. &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Make sure you create a new key pair if you don't already have one, and download the .pem file. &lt;/li&gt;
&lt;li&gt;Place it in your public subnet, assign the security group you created earlier, and ensure "assign public IP" is ticked.&lt;/li&gt;
&lt;li&gt;Take note of the public DNS that's created.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Create a new RDS instance
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Place it in your private subnet and assign it the RDS security group you created earlier. &lt;/li&gt;
&lt;li&gt;Once created, take note of the DB endpoint and the provided credentials (once dismissed, you won't be able to view them again).&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Connect to your Database
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Open a new PowerShell window and use the following command to create a port-forwarding session, replacing each value with your own. Once logged in, keep this shell open.&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ssh -i &amp;lt;path-to-your-key.pem&amp;gt; -L 3306:&amp;lt;rds-endpoint&amp;gt;:3306 &amp;lt;ec2-user@&amp;lt;ec2-public-dns&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Open MySQL Workbench and create a new connection. Enter localhost as the hostname and 3306 as the port. Enter the database credentials and click OK. Click on your newly created connection and you should be in.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
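
&lt;p&gt;Alternatively, if you prefer the command line, the mysql client can connect through the same tunnel (the username here is a placeholder for your own):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mysql -h 127.0.0.1 -P 3306 -u admin -p
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;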




&lt;p&gt;By following these steps you have securely connected to your private RDS instance through a Bastion Host. This method keeps your database isolated from the internet, aligning with security best practices, while still providing the access needed for development purposes.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>ec2</category>
      <category>rds</category>
      <category>devops</category>
    </item>
    <item>
      <title>Modern Directory Architecture: AWS Serverless with Containerized Frontend</title>
      <dc:creator>Nikolai Main</dc:creator>
      <pubDate>Thu, 26 Sep 2024 10:39:20 +0000</pubDate>
      <link>https://dev.to/neakoh/modern-directory-architecture-aws-serverless-with-containerized-frontend-573k</link>
      <guid>https://dev.to/neakoh/modern-directory-architecture-aws-serverless-with-containerized-frontend-573k</guid>
      <description>&lt;p&gt;The purpose of this project was to create a solution for a hypothetical company looking to host their directory application in the cloud. If needed to be private Cognito could be implemented to provide authentication flow (Covered in another post). The solution consists of the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Frontend hosted in &lt;strong&gt;ECS containers&lt;/strong&gt; behind an &lt;strong&gt;Application Load Balancer&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Backend hosted in &lt;strong&gt;RDS (MySQL)&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;API Gateway&lt;/strong&gt; + &lt;strong&gt;Lambda&lt;/strong&gt; to facilitate communication between the frontend and backend&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AWS Secrets Manager&lt;/strong&gt; to store DB credentials&lt;/li&gt;
&lt;li&gt;Optionally connect to &lt;strong&gt;Route53&lt;/strong&gt; &amp;amp; &lt;strong&gt;AWS Certificate Manager&lt;/strong&gt; (DNS &amp;amp; HTTPS flow covered in another post)&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0g8sjp17rm89da2ykagt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0g8sjp17rm89da2ykagt.png" alt="HLD of proposed architecture." width="800" height="712"&gt;&lt;/a&gt;&lt;em&gt;HLD of proposed architecture.&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Application Flow
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;The user makes a request to the Application Load Balancer, which routes it to one of the containers.&lt;/li&gt;
&lt;li&gt;The user sends query requests through the application, which are passed to API Gateway and then on to Lambda.&lt;/li&gt;
&lt;li&gt;Lambda retrieves the database credentials from Secrets Manager, then queries the database to complete the user request.&lt;/li&gt;
&lt;li&gt;Data is sent back to the container/user.&lt;/li&gt;
&lt;/ol&gt;




&lt;h2&gt;
  
  
  &lt;u&gt;Components&lt;/u&gt;
&lt;/h2&gt;

&lt;h3&gt;
  
  
  VPC
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Create a VPC with 2 private subnets. &lt;/li&gt;
&lt;li&gt;Create an Interface Endpoint for Secrets Manager.&lt;/li&gt;
&lt;/ol&gt;
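&lt;p&gt;For reference, the interface endpoint from step 2 maps onto a single EC2 API call. A minimal sketch of the parameters, assuming Fargate-style private subnets; every ID, and the region in the service name, is a placeholder you would substitute:&lt;/p&gt;

```python
# Sketch: parameters for the Secrets Manager interface endpoint (step 2).
# All IDs below are hypothetical placeholders - substitute your own.
endpoint_params = {
    "VpcId": "vpc-0123456789abcdef0",
    "ServiceName": "com.amazonaws.eu-west-2.secretsmanager",  # use your region
    "VpcEndpointType": "Interface",
    "SubnetIds": ["subnet-aaaa1111", "subnet-bbbb2222"],  # the 2 private subnets
    "SecurityGroupIds": ["sg-0endpoint0001"],  # the Interface Endpoint SG
    "PrivateDnsEnabled": True,  # lets boto3 resolve the normal service hostname
}

# With AWS credentials configured, the endpoint would be created with:
# import boto3
# boto3.client("ec2").create_vpc_endpoint(**endpoint_params)
```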

&lt;h3&gt;
  
  
  Security Groups
&lt;/h3&gt;

&lt;p&gt;There's likely an optimal order for creating these, but the simplest approach is to create them all first and add the rules afterwards.&lt;/p&gt;

&lt;h4&gt;
  
  
  Lambda
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Inbound: 443 from the Secrets Manager Interface Endpoint SG&lt;/li&gt;
&lt;li&gt;Outbound: 443 to Secrets Interface Endpoint SG, 3306 to RDS SG&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Containers
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Inbound: 443 and/or 80 from ALB SG&lt;/li&gt;
&lt;li&gt;Outbound: All (or restrict to your needs)&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  ALB
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Inbound: 443 &amp;amp; 80 from 0.0.0.0 or defined range&lt;/li&gt;
&lt;li&gt;Outbound: 443 &amp;amp; 80 to containers SG&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Interface Endpoint
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Inbound: 443 from Lambda SG&lt;/li&gt;
&lt;li&gt;Outbound: None required&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  RDS
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Inbound: 3306 from Lambda SG&lt;/li&gt;
&lt;li&gt;Outbound: None&lt;/li&gt;
&lt;/ul&gt;
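&lt;p&gt;Since most connectivity problems in this setup come down to a mismatched rule pair, the matrix above can be sketched as plain data with a quick consistency check (every outbound rule should have a matching inbound rule on its target). The names are illustrative labels, not real SG IDs:&lt;/p&gt;

```python
# Sketch: the security-group rules above as data. Labels are illustrative.
rules = {
    "lambda":     {"in": [("endpoint", 443)],
                   "out": [("endpoint", 443), ("rds", 3306)]},
    "alb":        {"in": [("internet", 443), ("internet", 80)],
                   "out": [("containers", 443), ("containers", 80)]},
    "containers": {"in": [("alb", 443), ("alb", 80)], "out": [("any", 0)]},
    "endpoint":   {"in": [("lambda", 443)], "out": []},
    "rds":        {"in": [("lambda", 3306)], "out": []},
}

def unmatched_outbound(rules):
    """Return outbound rules whose target SG lacks the matching inbound rule."""
    missing = []
    for source, sg in rules.items():
        for target, port in sg["out"]:
            if target in rules and (source, port) not in rules[target]["in"]:
                missing.append((source, target, port))
    return missing
```

An empty result from `unmatched_outbound(rules)` means every SG-to-SG rule in the table lines up.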

&lt;h3&gt;
  
  
  Lambda
&lt;/h3&gt;

&lt;p&gt;Due to the number of requests my application required, I created a single Lambda function (one lambda_handler plus several imported sub-functions). As such, my lambda_function folder looked something like this:&lt;/p&gt;

&lt;p&gt;company_requests/&lt;br&gt;
├── pymysql/&lt;br&gt;
├── PyMySQL-1.1.1.dist-info/&lt;br&gt;
├── lambda_function.py&lt;br&gt;
├── deleteEmployee.py&lt;br&gt;
├── getAll.py&lt;br&gt;
├── getAllDepartments.py&lt;br&gt;
├── getAllLocations.py&lt;br&gt;
...&lt;/p&gt;

&lt;p&gt;Within your development environment (or within Lambda), create a folder containing all of the Python functions required for their associated requests, and be sure to include any dependencies. Zip the folder (I used 7-Zip) and upload it to Lambda.&lt;br&gt;
Here's a brief snippet of my lambda_function.py:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def lambda_handler(event, context):

  execution_start_time = time.time()

  # Retrieving secrets from the function defined above.
  secret = get_secret()
  username = secret["username"]
  password = secret["password"]
  db_host = "&amp;lt;your-rds-endpoint"
  db_name = "&amp;lt;your-table-name"

  creds = {"user":username, "pass":password, "db_host":db_host, "db_name":db_name}

  # Determine which function to call based on the HTTP method and path
  http_method = event.get('httpMethod')
  resource = event.get('resource')

  # Personnel operations
  if resource == "/all" and http_method =="GET":
      return get_all(event, creds, execution_start_time)
  elif resource == "/personnel" and http_method == "GET":
      return get_all_personnel(event, creds, execution_start_time)
  ...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
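&lt;p&gt;As the number of paths grows, an if/elif chain like the one above gets unwieldy. One alternative is a lookup table keyed on (resource, method), which keeps all routing in one place. A sketch, with hypothetical stub handlers standing in for the real imported sub-functions:&lt;/p&gt;

```python
# Sketch: the dispatch expressed as a route table instead of an if/elif chain.
# The two handlers are hypothetical stubs standing in for the imported ones.
def get_all(event, creds, start):
    return {"statusCode": 200, "body": "all records"}

def get_all_personnel(event, creds, start):
    return {"statusCode": 200, "body": "personnel"}

ROUTES = {
    ("/all", "GET"): get_all,
    ("/personnel", "GET"): get_all_personnel,
}

def dispatch(event, creds, start):
    # Look up the handler for this (resource, method) pair, 404 if unknown.
    handler = ROUTES.get((event.get("resource"), event.get("httpMethod")))
    if handler is None:
        return {"statusCode": 404, "body": "unknown route"}
    return handler(event, creds, start)
```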



&lt;p&gt;Function to retrieve secrets from &lt;strong&gt;Secrets Manager&lt;/strong&gt; (also in lambda_function.py):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import boto
import json

def get_secret():
     secret_name = &amp;lt;your_secret_name&amp;gt;
     region_name = &amp;lt;your_region&amp;gt;

     # Create a Secrets Manager client
     session = boto3.session.Session()
     client = session.client(
         service_name='secretsmanager',
         region_name=region_name
     )

     try:
         get_secret_value_response = client.get_secret_value(
             SecretId=secret_name
         )
     except ClientError as e:
         # Handle exceptions
         raise e

     # Decrypts secret using the associated KMS key.
     secret = get_secret_value_response['SecretString']
     return json.loads(secret)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
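&lt;p&gt;One optional refinement: Lambda execution environments are reused between invocations, so caching the parsed secret at module level avoids a Secrets Manager call on every request. A minimal sketch; the fetch function is injectable here purely so the pattern can be shown (and tested) without real AWS calls:&lt;/p&gt;

```python
# Sketch: cache the secret across warm invocations of the same Lambda
# environment. Pass get_secret (above) as fetch in the real function.
_secret_cache = None

def get_secret_cached(fetch):
    """fetch: zero-argument callable returning the parsed secret dict."""
    global _secret_cache
    if _secret_cache is None:
        _secret_cache = fetch()  # only hit Secrets Manager on a cold start
    return _secret_cache
```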



&lt;ol&gt;
&lt;li&gt;Once you've created your Lambda function, go to IAM and assign its execution role the following policies:

&lt;ol&gt;
&lt;li&gt;RDS read access - Creating an inline policy can allow for stronger enforcement of &lt;strong&gt;least privilege&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;EC2 Create Network Interface - Needed for placing Lambda inside your VPC.&lt;/li&gt;
&lt;li&gt;Secrets Manager read secret - Again create an inline policy only allowing access to the necessary secret.&lt;/li&gt;
&lt;/ol&gt;


&lt;/li&gt;

&lt;li&gt;Go into your Lambda function’s network configuration and place it inside the same subnet(s) of your VPC that hold your RDS instances. Attach the security group you created earlier.&lt;/li&gt;

&lt;li&gt;You may need to increase the function's timeout to ensure it has time to complete requests. This can be done in the general configuration settings.&lt;/li&gt;

&lt;/ol&gt;
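&lt;p&gt;For step 1.3, the inline policy can be scoped to the one secret. A sketch of what that least-privilege document might look like; the ARN, role name, and policy name are all placeholders (Secrets Manager ARNs end in a random suffix, hence the trailing wildcard):&lt;/p&gt;

```python
import json

# Sketch: inline least-privilege policy for reading a single secret.
# The ARN is a placeholder; Secrets Manager appends a random suffix,
# so a trailing * keeps the policy valid after rotation/recreation.
secret_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "secretsmanager:GetSecretValue",
            "Resource": "arn:aws:secretsmanager:eu-west-2:123456789012:secret:your-secret-name-*",
        }
    ],
}

policy_json = json.dumps(secret_policy)

# With AWS credentials configured, attach it to the Lambda role with:
# import boto3
# boto3.client("iam").put_role_policy(
#     RoleName="your-lambda-role",
#     PolicyName="read-db-secret",
#     PolicyDocument=policy_json,
# )
```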

&lt;h3&gt;
  
  
  API Gateway
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Create a new API Gateway with all of the required resource paths and methods. My resource tree looked something like:&lt;br&gt;
/&lt;br&gt;
/all&lt;br&gt;
/departments&lt;br&gt;
/locations&lt;br&gt;
/personnel&lt;br&gt;
- GET&lt;br&gt;
- POST&lt;br&gt;
...&lt;/li&gt;
&lt;li&gt;When creating each resource, ensure Lambda integration is ticked and link your Lambda function.&lt;/li&gt;
&lt;li&gt;Make sure to enable CORS on each resource path too.&lt;/li&gt;
&lt;li&gt;Once done, deploy your API.&lt;/li&gt;
&lt;/ol&gt;
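&lt;p&gt;One caveat on CORS: enabling it in the console sets up the OPTIONS preflight, but if you tick Lambda &lt;em&gt;proxy&lt;/em&gt; integration, the actual GET/POST responses must carry the CORS headers themselves, returned by the function. A minimal sketch of a helper for that; the allowed origin is a placeholder:&lt;/p&gt;

```python
import json

# Sketch: with Lambda proxy integration, the function's own responses need
# CORS headers; console CORS only covers the OPTIONS preflight.
# The default origin below is a placeholder for your frontend's URL.
def cors_response(status, body, origin="https://your-app.example.com"):
    return {
        "statusCode": status,
        "headers": {
            "Access-Control-Allow-Origin": origin,
            "Access-Control-Allow-Headers": "Content-Type",
            "Access-Control-Allow-Methods": "GET,POST,OPTIONS",
        },
        "body": json.dumps(body),
    }
```

Each sub-function (get_all, etc.) would then return via `cors_response(200, rows)` instead of a bare dict.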

&lt;h3&gt;
  
  
  ECR
&lt;/h3&gt;

&lt;p&gt;This step assumes you have already containerised your application (Steps on how to do so are covered in a different post).&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create a new repository and follow the provided push steps.&lt;/li&gt;
&lt;li&gt;Once uploaded take note of the image URI.&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  ECS
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Create a task definition using the URI of the image you uploaded

&lt;ul&gt;
&lt;li&gt;Ensure you select the same ports used in your container.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Create a service

&lt;ol&gt;
&lt;li&gt;Select the task definition you just made, give the service a name, and specify &lt;strong&gt;2 tasks&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Open the networking tab and select your VPC. Select your private subnets and assign the appropriate security group.&lt;/li&gt;
&lt;li&gt;Open the load balancing tab and create a new ALB with the containers as targets. &lt;/li&gt;
&lt;li&gt;After creating it, go to the load balancer settings in EC2 and assign the appropriate security group.&lt;/li&gt;
&lt;li&gt;If implementing ACM for HTTPS, specify the certificate here and allow HTTPS traffic in the container and ALB security groups.&lt;/li&gt;
&lt;/ol&gt;


&lt;/li&gt;

&lt;/ol&gt;
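&lt;p&gt;To make step 1 concrete, here is a sketch of the shape such a task definition takes as it would be passed to ecs.register_task_definition. This assumes Fargate (the post doesn't specify the launch type), and the family name, image URI, and port are all placeholders; the portMappings entry is the "same ports used in your container" detail:&lt;/p&gt;

```python
# Sketch: a Fargate-shaped task definition. Names, image URI, and sizes are
# hypothetical placeholders; containerPort must match your Dockerfile's port.
task_definition = {
    "family": "directory-frontend",
    "networkMode": "awsvpc",
    "requiresCompatibilities": ["FARGATE"],
    "cpu": "256",
    "memory": "512",
    "containerDefinitions": [
        {
            "name": "frontend",
            "image": "123456789012.dkr.ecr.eu-west-2.amazonaws.com/frontend:latest",
            "portMappings": [{"containerPort": 80, "protocol": "tcp"}],
            "essential": True,
        }
    ],
}

# With AWS credentials configured:
# import boto3
# boto3.client("ecs").register_task_definition(**task_definition)
```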

&lt;h3&gt;
  
  
  RDS
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Create a new DB with whichever engine you require and place it in your private subnets.&lt;/li&gt;
&lt;li&gt;Assign it the appropriate security group.&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Secrets Manager
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Create a new secret store and save your RDS DB credentials there.&lt;/li&gt;
&lt;/ol&gt;
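&lt;p&gt;One detail worth noting: the get_secret() function above parses SecretString as JSON and reads the "username" and "password" keys, so the secret must be stored in that shape (a key/value secret in the console produces exactly this). Placeholder values, of course:&lt;/p&gt;

```python
import json

# Sketch: the JSON shape get_secret() expects in SecretString.
# Both values are placeholders.
secret_string = json.dumps({"username": "dbadmin", "password": "change-me"})

# This is what json.loads(secret['SecretString']) yields in the Lambda.
parsed = json.loads(secret_string)
```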




&lt;p&gt;After having to create around 12 API resource paths, not to mention all of the other components, I realised how useful infrastructure as code can be in situations like this, so in later posts I will cover defining this project (and likely others) in Terraform.&lt;/p&gt;

&lt;p&gt;Although this project achieves the intended outcome, I'm sure there are several tweaks that could make it more streamlined and improve security, so if this post happens to catch any attention and you have a suggestion, please do share.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>devsecops</category>
      <category>cloud</category>
    </item>
  </channel>
</rss>
