<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: GilHope</title>
    <description>The latest articles on DEV Community by GilHope (@gilhope).</description>
    <link>https://dev.to/gilhope</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1051701%2F93effa3b-cbab-4ee0-bf7f-f44ad552051c.jpeg</url>
      <title>DEV Community: GilHope</title>
      <link>https://dev.to/gilhope</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/gilhope"/>
    <language>en</language>
    <item>
      <title>DynamoDB and Lambda Triggers</title>
      <dc:creator>GilHope</dc:creator>
      <pubDate>Wed, 22 Nov 2023 18:04:34 +0000</pubDate>
      <link>https://dev.to/gilhope/dynamodb-and-lambda-triggers-3k7c</link>
      <guid>https://dev.to/gilhope/dynamodb-and-lambda-triggers-3k7c</guid>
      <description>&lt;h1&gt;
  
  
  Understanding DynamoDB Streams and Lambda Triggers
&lt;/h1&gt;

&lt;h2&gt;
  
  
  Introduction to DynamoDB Streams
&lt;/h2&gt;

&lt;p&gt;DynamoDB Streams are a feature of AWS DynamoDB that provide a 24-hour rolling window of time-ordered changes to items in a DynamoDB table. These streams are essential for capturing and acting upon data modifications like inserts, updates, and deletes. They are configurable on a per-table basis and come with four distinct view types:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Keys Only&lt;/strong&gt;: Captures only the partition key and, if applicable, the sort key value of the changed item.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;New Image&lt;/strong&gt;: Stores the complete state of the item as it appears after the change.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Old Image&lt;/strong&gt;: Retains the item's state before the change, enabling a comparison with its current state.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;New and Old Images&lt;/strong&gt;: Provides a comprehensive view of the item's state both before and after the modification.&lt;/li&gt;
&lt;/ol&gt;
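&lt;p&gt;As a rough illustration (not from the original article), here is how a hypothetical record from a stream configured with &lt;code&gt;NEW_AND_OLD_IMAGES&lt;/code&gt; exposes the pieces that each view type would carry:&lt;/p&gt;

```python
# Illustrative sketch: pull apart a DynamoDB stream record to show which
# fields correspond to each view type. The record below is hypothetical,
# but its shape follows what Lambda receives from a DynamoDB stream.
def summarize_record(record):
    """Return the parts of a stream record relevant to each view type."""
    dynamodb = record["dynamodb"]
    return {
        "keys": dynamodb.get("Keys"),           # present for every view type
        "new_image": dynamodb.get("NewImage"),  # NEW_IMAGE / NEW_AND_OLD_IMAGES
        "old_image": dynamodb.get("OldImage"),  # OLD_IMAGE / NEW_AND_OLD_IMAGES
    }

# Hypothetical MODIFY record, as it would appear in a Lambda event.
sample = {
    "eventName": "MODIFY",
    "dynamodb": {
        "Keys": {"pk": {"S": "user#1"}},
        "OldImage": {"pk": {"S": "user#1"}, "status": {"S": "pending"}},
        "NewImage": {"pk": {"S": "user#1"}, "status": {"S": "active"}},
    },
}
parts = summarize_record(sample)
```

With a &lt;code&gt;KEYS_ONLY&lt;/code&gt; stream, only the &lt;code&gt;Keys&lt;/code&gt; field would be populated; the image fields simply come back as &lt;code&gt;None&lt;/code&gt;.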

&lt;h2&gt;
  
  
  Integration with Lambda for Trigger Functionality
&lt;/h2&gt;

&lt;p&gt;Lambda functions can be integrated with DynamoDB Streams to create a powerful, event-driven architecture. This setup allows for automated actions in response to data changes in DynamoDB tables, a concept known as database triggers. This approach is particularly effective in serverless architectures, where it can be utilized for various purposes, including reporting, analytics, aggregation, messaging, or notifications.&lt;/p&gt;

&lt;h2&gt;
  
  
  How the Trigger Architecture Works
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Item Change&lt;/strong&gt;: An item change occurs in a DynamoDB table with streams enabled.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Stream Record&lt;/strong&gt;: This change generates a stream record that is added to the DynamoDB Stream.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Lambda Invocation&lt;/strong&gt;: The Lambda function is then automatically invoked in response to the stream event. The function receives the data change as an event input, which can be based on any of the configured stream view types.&lt;/li&gt;
&lt;/ol&gt;
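&lt;p&gt;A minimal, illustrative sketch of what such a trigger function might look like in Python (assuming the &lt;code&gt;NEW_AND_OLD_IMAGES&lt;/code&gt; view type; all names are hypothetical):&lt;/p&gt;

```python
# Sketch of a Lambda handler invoked by a DynamoDB stream. Each invocation
# receives a batch of records; eventName tells us what kind of change occurred.
def lambda_handler(event, context):
    processed = []
    for record in event.get("Records", []):
        change = record.get("dynamodb", {})
        if record.get("eventName") == "INSERT":
            processed.append(("insert", change.get("NewImage")))
        elif record.get("eventName") == "MODIFY":
            # Both images are available, so before/after comparison is possible
            processed.append(("modify", change.get("OldImage"), change.get("NewImage")))
        elif record.get("eventName") == "REMOVE":
            processed.append(("remove", change.get("OldImage")))
    return {"batchSize": len(processed)}

# Hypothetical event with a single INSERT record.
result = lambda_handler(
    {"Records": [{"eventName": "INSERT",
                  "dynamodb": {"NewImage": {"pk": {"S": "order#42"}}}}]},
    None,
)
```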

&lt;h2&gt;
  
  
  The Power of DynamoDB Streams and Lambda Triggers
&lt;/h2&gt;

&lt;p&gt;The combination of DynamoDB Streams and AWS Lambda forms the backbone of a robust trigger architecture in DynamoDB. This setup exemplifies the event-driven paradigm, allowing for real-time, automated responses to data changes. It's particularly valuable for applications that require immediate action based on data modifications, such as real-time analytics, user notifications, or data synchronization across systems.&lt;/p&gt;

&lt;h2&gt;
  
  
  Exam Perspective
&lt;/h2&gt;

&lt;p&gt;From an examination standpoint, it's crucial to understand the relationship between DynamoDB Streams and Lambda triggers. These components work together to implement a dynamic and responsive trigger architecture in DynamoDB, offering a versatile solution for various real-world scenarios.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;DynamoDB Streams, combined with Lambda triggers, offer a powerful way to build responsive, event-driven architectures. This technology allows for efficient monitoring and handling of data changes, making it an invaluable tool in the AWS ecosystem. Whether for analytics, reporting, or real-time data processing, understanding and implementing this setup can significantly enhance the capabilities of your DynamoDB-based applications.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Introduction to Lambda</title>
      <dc:creator>GilHope</dc:creator>
      <pubDate>Wed, 22 Nov 2023 16:38:09 +0000</pubDate>
      <link>https://dev.to/gilhope/introduction-to-lambda-io1</link>
      <guid>https://dev.to/gilhope/introduction-to-lambda-io1</guid>
      <description>&lt;h1&gt;
  
  
  Introduction to AWS Lambda
&lt;/h1&gt;

&lt;p&gt;AWS Lambda is a powerful and versatile service offered by Amazon Web Services (AWS). In this comprehensive guide, we will explore various aspects of AWS Lambda, from its core concepts to advanced features.&lt;/p&gt;

&lt;h2&gt;
  
  
  Understanding AWS Lambda
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Lambda as a FaaS Product
&lt;/h3&gt;

&lt;p&gt;AWS Lambda falls under the category of Function-as-a-Service (FaaS) products. In simple terms, it allows you to provide specialized, short-running, and focused code to Lambda, and it takes care of running it while billing you only for the resources you consume.&lt;/p&gt;

&lt;h3&gt;
  
  
  Lambda Functions and Runtimes
&lt;/h3&gt;

&lt;p&gt;A Lambda Function is a piece of code that Lambda runs. Every Lambda function is associated with a specific runtime. For example, you might use Python 3.8 as the runtime for your Lambda function. It's essential to define the runtime when creating a Lambda function.&lt;/p&gt;
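&lt;p&gt;For example, a minimal function for a Python runtime is just a handler that accepts an event and a context object (an illustrative sketch, not code from any specific application):&lt;/p&gt;

```python
# Minimal Lambda handler sketch for a Python runtime. Lambda calls this
# function with the invocation payload (`event`) and runtime metadata
# (`context`); whatever it returns goes back to a synchronous caller.
import json

def lambda_handler(event, context):
    return {"statusCode": 200, "body": json.dumps({"received": event})}

# Simulated invocation with a hypothetical payload.
response = lambda_handler({"name": "world"}, None)
```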

&lt;h3&gt;
  
  
  Resource Allocation
&lt;/h3&gt;

&lt;p&gt;When you provide code to Lambda, it's loaded into and executed within a runtime environment. You define the amount of memory allocated to this environment, and CPU is allocated in proportion to that memory. AWS Lambda charges you based on the duration that a function runs, making it a cost-effective choice for many use cases.&lt;/p&gt;
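&lt;p&gt;As a back-of-envelope illustration of duration-based billing (the per-GB-second rate below is an assumed example; check current AWS pricing for real figures):&lt;/p&gt;

```python
# Rough Lambda compute-cost sketch: billing is proportional to allocated
# memory multiplied by execution time (GB-seconds). The rate is illustrative.
RATE_PER_GB_SECOND = 0.0000166667  # assumed example rate in USD

def invocation_cost(memory_mb, duration_ms, invocations):
    gb_seconds = (memory_mb / 1024) * (duration_ms / 1000) * invocations
    return gb_seconds * RATE_PER_GB_SECOND

# e.g. one million invocations of a 128 MB function running 100 ms each
cost = invocation_cost(128, 100, 1_000_000)
```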

&lt;h2&gt;
  
  
  Common Uses of AWS Lambda
&lt;/h2&gt;

&lt;p&gt;AWS Lambda is a versatile service that can be applied in various scenarios:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Serverless Applications&lt;/strong&gt;: Lambda forms a core part of serverless applications in AWS, enabling you to build scalable and cost-effective solutions.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;File Processing&lt;/strong&gt;: Lambda can be used for processing files, making it suitable for tasks like data transformation or image processing.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Database Triggers&lt;/strong&gt;: You can use Lambda functions as triggers for your databases, responding to changes in data.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Serverless CRON Jobs&lt;/strong&gt;: Lambda can execute scheduled tasks, providing serverless alternatives to traditional CRON jobs.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Real-time Stream Data Processing&lt;/strong&gt;: AWS Lambda is ideal for processing real-time streaming data from sources like Amazon Kinesis.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  AWS Lambda Part 2: Networking
&lt;/h2&gt;

&lt;p&gt;Lambda offers two networking modes: public and VPC (Virtual Private Cloud) Networking. Public networking allows Lambda to access public AWS services and internet-based services, such as IMDb. It offers optimal performance but lacks access to VPC-based services without specific configurations.&lt;/p&gt;

&lt;h3&gt;
  
  
  Private Lambda
&lt;/h3&gt;

&lt;p&gt;Lambda VPC Networking enables access to resources within your VPC but requires networking configuration for external access. Lambda functions running in a VPC adhere to VPC networking rules and cannot access public-space services or the public internet directly. VPC endpoints and NAT gateways can be used to provide the necessary connectivity.&lt;/p&gt;

&lt;h2&gt;
  
  
  Security
&lt;/h2&gt;

&lt;p&gt;Lambda function security is managed through execution roles, which are IAM (Identity and Access Management) roles assumed by Lambda functions. Execution roles grant permissions to interact with other AWS services. Additionally, resource policies can be used to allow external accounts or services to invoke Lambda functions securely.&lt;/p&gt;

&lt;h2&gt;
  
  
  Logging and Monitoring
&lt;/h2&gt;

&lt;p&gt;AWS Lambda leverages CloudWatch, CloudWatch Logs, and X-Ray for logging and monitoring:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;CloudWatch Logs&lt;/strong&gt;: Logs generated during Lambda executions, including messages, errors, and execution duration, are stored here.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;CloudWatch Metrics&lt;/strong&gt;: Metrics like invocations, success/failure counts, retries, and latency-related data are available in CloudWatch.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;X-Ray&lt;/strong&gt;: Lambda can use X-Ray to add distributed tracing capabilities for improved monitoring.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Access to CloudWatch Logs requires appropriate permissions via the execution role.&lt;/p&gt;
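&lt;p&gt;For reference, a minimal execution-role policy granting those log permissions looks roughly like the following (modeled on the standard basic-execution permissions; treat it as a sketch and scope the resource down in practice):&lt;/p&gt;

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": "*"
    }
  ]
}
```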

&lt;h2&gt;
  
  
  AWS Lambda Part 3: Invocation
&lt;/h2&gt;

&lt;p&gt;AWS Lambda functions can be invoked in three ways:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Synchronous Invocation&lt;/strong&gt;: Triggered directly through CLI commands or APIs, with results returned during the request. This method requires client-side handling of errors and retries.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Asynchronous Invocation&lt;/strong&gt;: Typically used when AWS services invoke Lambda functions on your behalf. Events are sent to Lambda for processing, and retries are managed by AWS services.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Event Source Mappings&lt;/strong&gt;: Applied to streams or queues that don't generate events (e.g., Kinesis, DynamoDB, SQS). Lambda polls these sources for new data and processes batches of events.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
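&lt;p&gt;The difference between the two direct invocation types shows up as a single parameter in the Invoke API. The sketch below builds the parameter dictionary in the shape boto3's Lambda client expects, without actually calling AWS (the function name is hypothetical):&lt;/p&gt;

```python
# Sketch of the Invoke API parameters for synchronous vs. asynchronous calls.
# "RequestResponse" waits for the function's result; "Event" queues the
# invocation and returns immediately, with retries handled by Lambda.
import json

def build_invoke_params(function_name, payload, synchronous=True):
    return {
        "FunctionName": function_name,
        "InvocationType": "RequestResponse" if synchronous else "Event",
        "Payload": json.dumps(payload),
    }

async_params = build_invoke_params("my-func", {"id": 7}, synchronous=False)
```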

&lt;h2&gt;
  
  
  Versions and Aliases
&lt;/h2&gt;

&lt;p&gt;Lambda functions have versions that encompass both code and configuration. Versions are immutable and have their own Amazon Resource Names (ARNs). You can create aliases that point to specific function versions, providing flexibility and control over which version is invoked.&lt;/p&gt;

&lt;h2&gt;
  
  
  In-Depth Insights
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Execution Context&lt;/strong&gt;: Lambda functions run within an execution context. A "cold start" occurs during the initial creation and configuration of this context, which may take time. Subsequent invocations within a short time benefit from a "warm start," significantly reducing launch time.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Provisioned Concurrency&lt;/strong&gt;: To address the cold start issue when multiple invocations are needed simultaneously, AWS offers Provisioned Concurrency. It allows you to keep execution contexts "warm" for faster starts.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;AWS Lambda is a fundamental component of serverless computing in AWS. Understanding its core concepts, networking modes, security, invocation methods, and advanced features is essential for building efficient and cost-effective applications.&lt;/p&gt;

&lt;p&gt;Stay tuned for more in-depth articles on specific Lambda topics! AWS Lambda offers a world of possibilities for developers and organizations looking to leverage the power of serverless computing.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>The Orchestration of The Ring</title>
      <dc:creator>GilHope</dc:creator>
      <pubDate>Thu, 29 Jun 2023 15:07:08 +0000</pubDate>
      <link>https://dev.to/gilhope/the-orchestration-of-the-ring-31p8</link>
      <guid>https://dev.to/gilhope/the-orchestration-of-the-ring-31p8</guid>
      <description>&lt;center&gt;
In the land of Mordor,
&lt;br&gt;
&lt;br&gt;
In the fires of Mount Volume... 
&lt;br&gt;
&lt;br&gt;
The Dark Lord Sau..-Cron.. forged in secret a... &lt;br&gt;
&lt;br&gt;
uh... 
&lt;br&gt;
&lt;br&gt;
A container orchestration platform!!! 
&lt;br&gt;
&lt;br&gt;
Is he really going to do this again? 
&lt;/center&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--yPD6UX5z--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dgq2i4mqbdum9vymslgp.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--yPD6UX5z--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dgq2i4mqbdum9vymslgp.gif" alt="Image description" width="500" height="210"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;center&gt;
Yes.
&lt;br&gt;
&lt;br&gt; 
&lt;/center&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;In this blog, I'm going to go over the basics of Kubernetes. If you haven't seen my previous Lord of the Rings flavored post on Docker, you can check that out&lt;/em&gt;&lt;/strong&gt; &lt;a href="https://dev.to/gilhope/the-containerization-of-the-ring-115b"&gt;HERE&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Table of Contents:
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;What is Kubernetes?&lt;/li&gt;
&lt;li&gt;What are K8s key features?&lt;/li&gt;
&lt;li&gt;How is K8s architected from a bird’s eye view?&lt;/li&gt;
&lt;li&gt;The K8s architecture in more detail.&lt;/li&gt;
&lt;li&gt;Let's get started using K8s&lt;/li&gt;
&lt;li&gt;Conclusion&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  &lt;a id="1"&gt;&lt;/a&gt;What is Kubernetes?
&lt;/h1&gt;

&lt;p&gt;Kubernetes is an open-source container orchestration platform and is currently the leading tool in its category, so it is an important one to know. In essence, Kubernetes is designed to efficiently automate the deployment, scaling, and operation of containerized applications. It is also vendor-neutral, so it can run on any cloud provider.&lt;/p&gt;

&lt;p&gt;Just as the One Ring has the ability to wield power and influence over other ring bearers for Sauron, Kubernetes has the power to wield and orchestrate container systems, such as Docker, for you!&lt;/p&gt;

&lt;p&gt;Kubernetes, also known as K8s because of the number of letters between ‘K’ and ‘s’, is derived from the Greek word κυβερνήτης (kubernḗtēs), which means “pilot”. You can see this clearly illustrated in their logo design; and that is the essence of what Kubernetes does. &lt;/p&gt;

&lt;h1&gt;
  
  
  &lt;a id="2"&gt;&lt;/a&gt;What are K8s key features?
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Service Discovery:&lt;/strong&gt; Kubernetes offers a built-in service discovery mechanism through which different services within a cluster can discover each other dynamically via assigned DNS names.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Storage Orchestration:&lt;/strong&gt; Kubernetes provides robust storage orchestration capabilities that let you manage and mount storage regardless of whether it is local, network-attached, or cloud-specific.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Automated Rollouts and Rollbacks:&lt;/strong&gt; With K8s, you can define the desired state of your deployments and roll out updates seamlessly, without downtime. In the case of issues or failures, you can easily roll back to the previous desired state.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Automated Bin Packing:&lt;/strong&gt; K8s uses resources efficiently by maximizing CPU utilization and optimizing resource allocation across the nodes in your clusters.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Self-Healing:&lt;/strong&gt; To maintain your desired state, K8s automatically detects failed containers that don't respond to health checks and then replaces, kills, or restarts them.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Secret and Configuration Management:&lt;/strong&gt; K8s lets you store secrets and configuration securely while retaining the ability to update them without rebuilding your container images.&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  &lt;a id="3"&gt;&lt;/a&gt;How is K8s architected from a bird’s eye view?
&lt;/h1&gt;

&lt;p&gt;Kubernetes utilizes what is known as a ‘cluster architecture’. For K8s, this is a highly available ‘cluster’ of compute resources which can be organized to work as one single unit. Within these clusters run nodes and within the nodes are pods.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Control Plane:&lt;/strong&gt; The cluster is captained by the Control Plane, aka the Master Node, which is responsible for maintaining the desired state of the cluster. It manages scheduling, scaling, and deployment of the applications within the cluster.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Worker Nodes:&lt;/strong&gt; The Control Plane’s crew are the Worker Nodes, aka the Data Plane. Worker nodes are the virtual machines or physical servers that run your applications and workflows. Each worker node contains the services necessary to manage and run what are known as ‘Pods’.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pods:&lt;/strong&gt; Pods are the smallest and simplest units within the K8s architecture. A pod encapsulates an application container(s), storage resources, a unique network IP, and the options that govern how the container should run.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--AH7SZBjA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/f91ha1274u5dcavbn6wb.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--AH7SZBjA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/f91ha1274u5dcavbn6wb.jpeg" alt="Image description" width="296" height="170"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Deployments:&lt;/strong&gt; Deployments in K8s allow users to define the desired state of their applications, including the number of replicas and container images, and provide features such as rolling updates, rollbacks, scaling, and self-healing. Deployments are part of the Control Plane and are managed by the Deployment Controller, which monitors deployments to ensure the desired and current states actually match.&lt;/p&gt;

&lt;h1&gt;
  
  
  &lt;a id="4"&gt;&lt;/a&gt;The K8s architecture in more detail.
&lt;/h1&gt;

&lt;h2&gt;
  
  
  Control Plane
&lt;/h2&gt;

&lt;p&gt;The control plane is made up of a few key components, including: the API Server, Etcd, Scheduler, and Controllers.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;API Server (kube-apiserver):&lt;/strong&gt; This is the front-end for the control plane and is what the user, management devices, and command-line interfaces all communicate with.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Etcd:&lt;/strong&gt; This provides a highly available key-value store within the cluster for storing the cluster state and configuration.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Scheduler (kube-scheduler):&lt;/strong&gt; The scheduler identifies any newly created pods without an assigned node and then selects a node for them to run on based on resource requirements, deadlines, affinity/anti-affinity, data locality, and any constraints.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Controller Manager (kube-controller-manager):&lt;/strong&gt; This manages the cluster controller processes which include the Node Controller, Replication Controller, Endpoints Controller, and Service Account &amp;amp; Token Controllers.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Node controller:&lt;/strong&gt; Monitors and responds to node outages. If a node fails to respond, it is marked ‘unhealthy’ and no further work is scheduled on it.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Job/Cronjob controller:&lt;/strong&gt; The job controller tracks Job objects (one-off tasks/jobs) and the cronjob controller manages jobs set to run at specific times or intervals.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Endpoint controller:&lt;/strong&gt; Populates the endpoint object used for network communications between services within the cluster.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Service Account and Token Controller:&lt;/strong&gt; These create default accounts and API access tokens for new namespaces.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Replication Controller:&lt;/strong&gt; Ensures the desired number of pod replicas are running at any given time.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Deployment Controller:&lt;/strong&gt; Manages the lifecycle of deployments.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Cloud Controller Manager (cloud-controller-manager):&lt;/strong&gt; This manages the cloud-specific control logic which enables the communication with the underlying cloud services through their APIs.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--LOtfX_JK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dlqunsjn7xaux5lvyt5o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--LOtfX_JK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dlqunsjn7xaux5lvyt5o.png" alt="Image description" width="616" height="471"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Worker Nodes
&lt;/h2&gt;

&lt;p&gt;Within the Worker Nodes are components, such as the Kubelet, Kube Proxy, and Container Runtime.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Kubelet:&lt;/strong&gt; Is an agent that runs on each worker node in the cluster and ensures that containers are running within a pod. It uses ‘PodSpecs’, YAML or JSON objects that describe a pod, and makes sure the containers described in those PodSpecs are running and healthy.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Kube Proxy (kube-proxy):&lt;/strong&gt; Is a network proxy that runs on each worker node and maintains network rules which allow communication to pods from inside or outside the cluster.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Container Runtime:&lt;/strong&gt; This is the software responsible for running the containers. K8s supports several runtimes, such as Docker, containerd, CRI-O, and any other implementation of the Kubernetes CRI.&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--P93sgKUj--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/em172bzt120vw7cc1evk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--P93sgKUj--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/em172bzt120vw7cc1evk.png" alt="K8s meme" width="457" height="574"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  &lt;a id="5"&gt;&lt;/a&gt;Let's get started using K8s
&lt;/h1&gt;

&lt;h2&gt;
  
  
  1) Install prerequisites
&lt;/h2&gt;

&lt;p&gt;We will be running through a quick tutorial for how we can work with K8s locally. Before we can get started doing anything you will need to have a few things installed. &lt;/p&gt;

&lt;p&gt;Be sure to have the latest versions of minikube, kubectl, and docker.&lt;/p&gt;

&lt;h2&gt;
  
  
  2) Creating nodes with Minikube
&lt;/h2&gt;

&lt;p&gt;Let's first try creating nodes using Minikube. For this, try the following command:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;minikube start --nodes=2&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;In this command, we tell minikube to start up and to create 2 nodes with the trailing flag.&lt;/p&gt;

&lt;p&gt;Note that if this is your first time using minikube then this process will take about 5 minutes or so to complete. &lt;/p&gt;

&lt;p&gt;Now, let's check the status of our nodes by running:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;minikube status&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--OmNyS_W7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/t4auukxoysaeyq0qngtu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--OmNyS_W7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/t4auukxoysaeyq0qngtu.png" alt="Image description" width="390" height="212"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can see the control plane at the top and the worker node at the bottom.&lt;/p&gt;

&lt;p&gt;If you were to run a &lt;code&gt;docker ps&lt;/code&gt; then you should see them running as 2 containers there as well. &lt;/p&gt;

&lt;p&gt;We can run &lt;code&gt;kubectl get nodes&lt;/code&gt; to view our nodes.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--oOOuxew2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jodna8hyac5gtjr4l0fw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--oOOuxew2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jodna8hyac5gtjr4l0fw.png" alt="Image description" width="519" height="60"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Try running &lt;code&gt;kubectl get pods -A&lt;/code&gt; and we can view all pods in all namespaces.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--tJH7dkQ1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mauuwgbpr91iepncuwdd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--tJH7dkQ1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mauuwgbpr91iepncuwdd.png" alt="Image description" width="707" height="402"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As you may notice, the pods above are what make up the control plane.&lt;/p&gt;

&lt;h2&gt;
  
  
  3) Deployment
&lt;/h2&gt;

&lt;p&gt;To create a K8s deployment, the command syntax is: &lt;code&gt;kubectl create deployment &amp;lt;name&amp;gt; --image=&amp;lt;image&amp;gt;&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;We will use &lt;code&gt;kubectl create deployment nginx-depl --image=nginx&lt;/code&gt; for this tutorial.&lt;/p&gt;

&lt;p&gt;After you have run that then run &lt;code&gt;kubectl get deployment&lt;/code&gt; and you should see something like the following indicating that we now have a ready deployment:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ihdvSRhs--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mgkajjg3iymul9q6slek.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ihdvSRhs--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mgkajjg3iymul9q6slek.png" alt="Image description" width="495" height="41"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then try running &lt;code&gt;kubectl get pod&lt;/code&gt;, which should look similar to the output below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--C60JAN1X--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hibh0knjh6zgw85hhycw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--C60JAN1X--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hibh0knjh6zgw85hhycw.png" alt="Image description" width="573" height="41"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you would like to check whether the desired state matches the actual current state, you can view the ReplicaSet with &lt;code&gt;kubectl get replicaset&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--7RbtMw33--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ptx5d32nmxcajv7hdksw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--7RbtMw33--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ptx5d32nmxcajv7hdksw.png" alt="Image description" width="573" height="41"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  &lt;a id="6"&gt;&lt;/a&gt;Conclusion
&lt;/h1&gt;

&lt;p&gt;I hope you all enjoyed this article. I had quite a bit of fun writing it, and I hope the illustrations help some of the information stick more effectively. Understanding Kubernetes is important knowledge for anybody working in the cloud, and it would serve you well to get accustomed to working with it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;PS - I am actively looking for new work in the Cloud so please feel free to start a conversation if you are looking for a new addition to your team.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Check me out here:&lt;br&gt;
&lt;a href="https://ghope.cloud/"&gt;Cloud Resume&lt;/a&gt;&lt;br&gt;
&lt;a href="https://github.com/GilHope/Cloud-Resume-Challenge-AWS"&gt;GitHub&lt;/a&gt;&lt;br&gt;
&lt;a href="https://www.linkedin.com/in/gil-hope/"&gt;LinkedIn&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--782Q0t6Y--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/111xp0vzn894eop05n33.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--782Q0t6Y--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/111xp0vzn894eop05n33.gif" alt="Frodo" width="660" height="372"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>docker</category>
      <category>containerapps</category>
      <category>beginners</category>
    </item>
    <item>
      <title>The Containerization of The Ring</title>
      <dc:creator>GilHope</dc:creator>
      <pubDate>Mon, 19 Jun 2023 13:39:23 +0000</pubDate>
      <link>https://dev.to/gilhope/the-containerization-of-the-ring-115b</link>
      <guid>https://dev.to/gilhope/the-containerization-of-the-ring-115b</guid>
      <description>&lt;p&gt;The world is changing. I feel it in my local host. I feel it in my operating system. I smell it in the virtual machine.&lt;/p&gt;

&lt;p&gt;It began with the forging of the Great Docker Images. &lt;/p&gt;

&lt;p&gt;Three were given to the DevOps Engineers, immortal, wisest, and most difficult to explain to their friends and relatives. &lt;/p&gt;

&lt;p&gt;Seven to the Developers, great coders and application craftsmen. &lt;/p&gt;

&lt;p&gt;And nine, nine rings were gifted to the race of Operations, who, above all else, sought control and orchestration.&lt;/p&gt;

&lt;p&gt;But they were, all of them, deceived, for a new image update was made.&lt;/p&gt;

&lt;p&gt;In the land of DockerHub, in the fires of Mt Volume, the Dark Lord Sauron pushed in secret a new image, to create all new containers. And into its Dockerfile he set his base image, his run instructions, and his &lt;code&gt;ENTRYPOINT&lt;/code&gt; commands to be executed when the containers were created. One Docker Image to rule them all!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdgq2i4mqbdum9vymslgp.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdgq2i4mqbdum9vymslgp.gif" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;One by one, the admins of Middle Earth pulled the new image&lt;br&gt;
And there were none of them who said “it works on my machine”! &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;In this blog I am going to attempt to force as many LOTR references as possible. I can't promise that they will all be good. If you have any better ones then please comment below as I would love to hear them! I hope you all enjoy! PS I am actively looking for new work in the Cloud so please feel free to start a conversation if you are looking for a new addition to your team.&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Check me out here:&lt;br&gt;
&lt;a href="https://ghope.cloud/" rel="noopener noreferrer"&gt;Cloud Resume Challenge Site&lt;/a&gt;&lt;br&gt;
&lt;a href="https://github.com/GilHope/Cloud-Resume-Challenge-AWS" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;&lt;br&gt;
&lt;a href="https://www.linkedin.com/in/gil-hope/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Table of Contents:
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;Why should you use Docker?&lt;/li&gt;
&lt;li&gt;Virtual Machines vs Containers&lt;/li&gt;
&lt;li&gt;How does Docker work?&lt;/li&gt;
&lt;li&gt;Dockerfiles&lt;/li&gt;
&lt;li&gt;Images and Containers&lt;/li&gt;
&lt;li&gt;DockerHub and Registries&lt;/li&gt;
&lt;li&gt;How does Docker streamline the development pipeline?&lt;/li&gt;
&lt;li&gt;How do you manage multiple containers?&lt;/li&gt;
&lt;li&gt;DIY Docker Image&lt;/li&gt;
&lt;li&gt;Conclusion&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  &lt;a id="1"&gt;&lt;/a&gt;Why should you use Docker?
&lt;/h2&gt;

&lt;p&gt;Middle Earth is a fantastical world full of many peoples, beasts, and fell creatures, and the local environments between them are not always the same. This is where Docker comes in. Docker allows your Fellowship to package an application and its dependencies into a self-contained, isolated unit (called a Docker Container) that can run consistently across different environments, be it the forests of Fangorn, the rolling plains of Rohan, or the fiery pits of Mordor. &lt;/p&gt;

&lt;p&gt;But the power of Docker doesn't stop at portability and consistent environments. Docker is an amazing tool that encompasses features for version control, CI/CD integration, security, isolation, and an extensive ecosystem which I will touch on as we move along. &lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a id="2"&gt;&lt;/a&gt;Virtual Machines vs Containers
&lt;/h2&gt;

&lt;p&gt;Imagine if the one ring were actually a virtual machine and Mr Frodo was tasked to carry it all the way across Middle Earth to chuck it in another developer's local machine instead of Mount Doom. That thang would be heeeeavy! Weighed down by the immense crushing weight of the operating system, Mr Frodo would probably never make it out of the Shire!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6xf2evqsiz80bnetbtor.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6xf2evqsiz80bnetbtor.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;With the help of Docker containers, we can make Mr Frodo's burden so much less arduous! Containers, unlike VMs, are lightweight and isolated units that package an application and its dependencies, without the need for an entire OS. Especially if using the powers of Docker Compose, Frodo and Sam could carry many containers to Mordor's local host with ease, but we'll get to that later.&lt;/p&gt;

&lt;p&gt;Containers provide strong isolation between applications and their dependencies, ensuring you can "keep it secret and keep it safe," per Gandalf's security instructions, from any malicious Nazgûl that might seek to directly access your container or exploit its vulnerabilities. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftkqjm6oeswrn2c5g50lt.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftkqjm6oeswrn2c5g50lt.gif" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a id="3"&gt;&lt;/a&gt;How does Docker work?
&lt;/h2&gt;

&lt;p&gt;At its core, Docker's architecture is a client-server model made up of the Docker Client, the Docker Daemon, and the Docker Engine. The rest of its key pieces include images, containers, and registries, which I will get into later.&lt;/p&gt;

&lt;p&gt;The Docker Client is where your interaction as a user primarily takes place, be it through the CLI or Docker Desktop. When a command is executed, the Client translates this command into a REST API call for the Daemon.&lt;/p&gt;

&lt;p&gt;The Docker Daemon is not a Balrog. Full stop. Disappointing, I know, but it is comparably powerful. The Daemon, as the server component of this architecture, processes requests sent by the Client. It builds, runs, and manages Docker containers, coordinating their lifecycles based on the commands received. &lt;/p&gt;

&lt;p&gt;Sitting above the Client and Daemon, and encapsulating both, is the Docker Engine. The Docker Engine is an application installed on the host machine that provides the runtime capabilities to build and manage your Docker containers. It is also what allows the Daemon to interact with the operating system's underlying resources, acting as the intermediary between your commands (via the Docker Client) and their execution (by the Docker Daemon).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3xtqdw2c42y6bdiovgfc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3xtqdw2c42y6bdiovgfc.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;a id="4"&gt;&lt;/a&gt;Dockerfiles
&lt;/h3&gt;

&lt;p&gt;Think of Dockerfiles as the DNA of Docker Images... Or maybe in this context, how the cruelty, hate, and malice form the essence of the One Ring. When we issue the &lt;code&gt;docker build&lt;/code&gt; command, Docker reads these instructions, step by step, from the Dockerfile and builds your Docker Image. Each instruction represents a specific action, such as installing packages, copying files, or running commands.&lt;/p&gt;

&lt;p&gt;The syntax of Dockerfiles is quite simple and straightforward. Conventionally, instructions are written in &lt;em&gt;ALL CAPS&lt;/em&gt; so as to be distinguished from arguments. While the specific instructions will depend on your application's requirements, here are some essential and commonly used ones, several of which I'll be using in a quick tutorial later in this blog:&lt;br&gt;
&lt;strong&gt;FROM:&lt;/strong&gt; Sets the base image on which your Docker Image will be built; typically the first instruction in a Dockerfile.&lt;br&gt;
&lt;strong&gt;WORKDIR:&lt;/strong&gt; Sets the working directory inside the container where subsequent instructions will be executed.&lt;br&gt;
&lt;strong&gt;RUN:&lt;/strong&gt; Executes commands at build time, creating a new image layer. Often used for installing dependencies or configuring the application.&lt;br&gt;
&lt;strong&gt;COPY:&lt;/strong&gt; Copies files/folders from the build context on the client machine into the container's file system. Preferred for simple file copying.&lt;br&gt;
&lt;strong&gt;ADD:&lt;/strong&gt; Similar to COPY, but adds features like URL downloads and automatic extraction of compressed files.&lt;br&gt;
&lt;strong&gt;CMD:&lt;/strong&gt; Sets the container's default executable and arguments. Can be overridden at runtime by &lt;code&gt;docker run&lt;/code&gt; arguments.&lt;br&gt;
&lt;strong&gt;ENTRYPOINT:&lt;/strong&gt; Similar to CMD, but &lt;code&gt;docker run&lt;/code&gt; arguments are appended to it rather than replacing it; it can only be swapped out with the &lt;code&gt;--entrypoint&lt;/code&gt; flag.&lt;br&gt;
&lt;strong&gt;EXPOSE:&lt;/strong&gt; Informs Docker which ports the container application listens on. It should be noted that this does &lt;em&gt;not&lt;/em&gt; publish the ports but instead only documents them as metadata.&lt;/p&gt;
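&lt;p&gt;Pulling these instructions together, here is what a hypothetical Dockerfile for a small Python web app might look like (the file names, port, and app itself are invented purely for illustration):&lt;/p&gt;

```dockerfile
# Base image: a minimal Debian-based Python image
FROM python:3.12-slim

# All subsequent instructions run relative to /app
WORKDIR /app

# Copy the dependency list first so this layer caches well between builds
COPY requirements.txt .
RUN pip install -r requirements.txt

# Copy the rest of the application source into the image
COPY . .

# Documents the port the app listens on; does not actually publish it
EXPOSE 8000

# Default command; can be overridden by arguments to docker run
CMD ["python", "app.py"]
```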

&lt;h3&gt;
  
  
  &lt;a id="5"&gt;&lt;/a&gt;Images and Containers
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F73oshoseweausiq6mcuw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F73oshoseweausiq6mcuw.png" alt="containers"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Docker Images, or container images, are the templates used to run containers. It is important to think of them as immutable, read-only templates: essentially snapshots of a Docker container's file system. Once created, a Docker Image does not change. If you think you "changed" one, what you have actually done is created a completely new image entirely.&lt;/p&gt;

&lt;p&gt;When you launch a container using an image, a writable layer (sometimes referred to as the "container layer") is added. This writable layer is attached to the container and allows for data writes to occur within the container. Instead of altering the image itself, any changes or data writes happen within this writable layer. This is what makes the container mutable - changes are isolated to the running container and do not affect the underlying image. It also enables containers to have their own local storage and system data.&lt;/p&gt;
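&lt;p&gt;A loose Python analogy for this layering (this is purely conceptual; it is not how Docker's storage drivers are actually implemented):&lt;/p&gt;

```python
from types import MappingProxyType
from collections import ChainMap

# The image: a read-only snapshot of files (path -> contents).
image = MappingProxyType({
    "/app/gollum.py": "print('my precious')",
    "/etc/os-release": "debian",
})

# Launching a container adds an empty writable layer on top of the image.
container_layer = {}
container_view = ChainMap(container_layer, image)

# Reads fall through to the image; writes land only in the container layer.
container_view["/tmp/journal.txt"] = "day 1: still walking to Mordor"
```

Deleting the container throws away `container_layer`; the underlying image is never touched, which is why many containers can share one image.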

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F84pkswn4edx609kbesxq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F84pkswn4edx609kbesxq.png" alt="containers"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a id="6"&gt;&lt;/a&gt;DockerHub and Registries
&lt;/h2&gt;

&lt;p&gt;DockerHub is a cloud-based registry provided by Docker which serves as a centralized storage, sharing, and management platform for Docker Images. It's kind of like GitHub, but for Docker. There are other container registries, such as Amazon's Elastic Container Registry (ECR), Google's Container Registry (GCR), Azure's Container Registry (ACR), and open-source projects like Harbor. Fundamentally, they all do the same thing; give or take some additional features and pricing structures.&lt;/p&gt;

&lt;p&gt;Like GitHub, registries are divided into repositories, and Docker provides several commands to interact with them. You can use the &lt;code&gt;docker pull&lt;/code&gt; command to retrieve images from a repo and the &lt;code&gt;docker push&lt;/code&gt; command to upload your own. The naming convention for specifying a particular image within a repo is &lt;code&gt;&amp;lt;username&amp;gt;/&amp;lt;repository&amp;gt;:&amp;lt;tag&amp;gt;&lt;/code&gt;, for example, &lt;code&gt;lsauron/containerofrings:latest&lt;/code&gt;. &lt;/p&gt;
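&lt;p&gt;To make the naming convention concrete, here is a toy Python helper that splits such a reference apart (it deliberately ignores registry hosts, ports, and digests that full image references can also carry):&lt;/p&gt;

```python
def parse_image_ref(ref):
    """Split a simple image reference of the form username/repository:tag.

    Toy illustration of the naming convention only; real references may
    also include a registry host, port, or digest.
    """
    name, _, tag = ref.partition(":")
    user, _, repo = name.partition("/")
    # Like Docker, fall back to the "latest" tag when none is given.
    return {"user": user, "repo": repo, "tag": tag or "latest"}

print(parse_image_ref("lsauron/containerofrings:latest"))
```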

&lt;h2&gt;
  
  
  &lt;a id="7"&gt;&lt;/a&gt;How does Docker streamline the development pipeline?
&lt;/h2&gt;

&lt;p&gt;In the realm of Middle-earth, Docker proves to be an invaluable tool in streamlining the development pipeline for Frodo and his companions. Here's how Docker facilitates a smooth and efficient development process throughout their quest:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Continuous Integration and Deployment (CI/CD):&lt;/strong&gt; Docker integrates smoothly with popular CI/CD tools, enabling your Fellowship team to automate the build, test, and deployment processes. Leveraging Docker's containerization to package their applications and dependencies, your team can ensure consistency across the different stages of the pipeline. Automating these processes can save time, improve efficiency, and achieve fast iteration cycles.&lt;/p&gt;
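&lt;p&gt;As a sketch of what that integration can look like, here is a hypothetical GitHub Actions workflow that builds an image and pushes it to DockerHub on every commit (the repository name and secrets are illustrative, not from a real project):&lt;/p&gt;

```yaml
# Hypothetical CI job: build the image and push it on each commit
name: build-and-push
on: [push]
jobs:
  docker:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: docker/login-action@v3
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      - uses: docker/build-push-action@v6
        with:
          push: true
          tags: lsauron/containerofrings:${{ github.sha }}
```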

&lt;p&gt;&lt;strong&gt;Versioning and Rollbacks:&lt;/strong&gt; Docker's versioning capabilities enable your Fellowship to keep track of your application's changes. That way, when you all get stuck by an avalanche along the Path of Caradhras, you can take a step back, realize that you may need to rollback to a previous state, and consider that perhaps taking the shortcut through Moria will be better &lt;em&gt;(probably not)&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Scalability and Orchestration:&lt;/strong&gt; As the darkness of Sauron grows in power and complexity, so too might your applications. Docker allows you to scale your applications horizontally, adding more containers as your load increases. Docker works seamlessly with orchestration tools like Kubernetes and Docker Swarm, making it easier to manage multiple containers and services, which can be a significant advantage for large-scale projects.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a id="8"&gt;&lt;/a&gt;How do you manage multiple containers?
&lt;/h2&gt;

&lt;p&gt;So what if Sam and Frodo needed to bring many Containers of Power for the local hosts of Mordor? Well, that's where Docker Compose comes in! Docker Compose is used for creating, managing, and cleaning up multi-container applications, particularly in development. It uses a special YAML file called &lt;code&gt;compose.yml&lt;/code&gt; (or the legacy name &lt;code&gt;docker-compose.yml&lt;/code&gt;) to specify the configuration and dependencies of the multiple services making up your unified application. &lt;/p&gt;
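&lt;p&gt;A minimal, hypothetical &lt;code&gt;compose.yml&lt;/code&gt; for a web app plus a database might look like this (the service names, image, and port are invented for illustration):&lt;/p&gt;

```yaml
# Two services that start, network, and shut down together
services:
  web:
    build: .            # build from the Dockerfile in this directory
    ports:
      - "8000:8000"     # publish the app port to the host
    depends_on:
      - db              # start the database first
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: example
```

With this in place, `docker compose up` brings both containers up and `docker compose down` cleans them up again.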

&lt;p&gt;However, for larger production deployments spanning multiple hosts, or for cases where you need high availability, tools like Docker Swarm or Kubernetes are more commonly used. These tools provide additional capabilities like load balancing, service discovery, and scaling services across multiple Docker hosts.&lt;/p&gt;

&lt;h1&gt;
  
  
  &lt;a id="9"&gt;&lt;/a&gt;DIY Docker Image
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fimix3dyrjoz9sc6ixqe6.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fimix3dyrjoz9sc6ixqe6.gif" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Prerequisites&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Docker installed on your machine&lt;/li&gt;
&lt;li&gt;DockerHub account &lt;/li&gt;
&lt;li&gt;Python3 installed on your machine&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you would just like to try running a custom made image I created for this blog then all you have to do is open your terminal and run &lt;code&gt;docker pull ghope94/repo-of-the-ring:latest&lt;/code&gt;. Here's what you should see returned:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9lpravq3tz2dfamccum7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9lpravq3tz2dfamccum7.png" alt="Image description"&gt;&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;What just happened is that Docker first checked your local host and saw that the image was not present. By default, in that case it will then try to pull the image from DockerHub.&lt;/p&gt;

&lt;p&gt;Moving on, now try entering &lt;code&gt;docker run ghope94/repo-of-the-ring:latest "Hello, Middle Earth"&lt;/code&gt; and you should see a cool ASCII art depiction of Gollum!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F18sf7ggm84c8y1vnh2en.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F18sf7ggm84c8y1vnh2en.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;If you would like to know how I was able to create this Docker Image from scratch then keep reading (don't worry, it's easy)...&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1.&lt;/strong&gt; Create a repo in DockerHub&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2.&lt;/strong&gt; Open your favorite code editor and create a new folder. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3.&lt;/strong&gt; For this tutorial, we are going to have to create three files. A text file for our custom ASCII art, our Dockerfile, and a Python script which will read our ASCII file, format it with a custom message, and then print to the console.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4.&lt;/strong&gt; Source or create your own ASCII image and add it to your text file and save.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frd7hqm8vm8a1xo7w1byk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frd7hqm8vm8a1xo7w1byk.png" alt="ASCII art"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5.&lt;/strong&gt; Copy this python code and replace everything with your own corresponding information. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzdoit2vtle8ylwhydbkp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzdoit2vtle8ylwhydbkp.png" alt="Python code"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here's what is happening in this script:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;The smeagolsay function reads the ASCII art from smeagol.txt and formats it with the input text to display.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;In the main execution block, the script first imports the sys module to access command-line arguments.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;If arguments were passed to the script, they are combined into a single string and passed to the smeagolsay function. If no arguments were provided, a default string is used instead.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The return value of smeagolsay, which includes the ASCII art and the input text, is printed to the console.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
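&lt;p&gt;For readers who prefer text to screenshots, a minimal sketch of such a script might look like this (the fallback art and exact bubble formatting are illustrative, not the exact code shown above):&lt;/p&gt;

```python
import sys

def smeagolsay(text, art_path="smeagol.txt"):
    """Read the ASCII art file and print the message above it in a bubble."""
    try:
        with open(art_path) as f:
            art = f.read()
    except FileNotFoundError:
        # Placeholder so the sketch runs even without the art file.
        art = "  (o_o)\n  /| |\\   placeholder Smeagol"
    border = "-" * (len(text) + 4)
    bubble = f"{border}\n| {text} |\n{border}"
    return f"{bubble}\n{art}"

if __name__ == "__main__":
    # Join any command-line arguments into one message, with a default.
    message = " ".join(sys.argv[1:]) or "My precious..."
    print(smeagolsay(message))
```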

&lt;p&gt;&lt;strong&gt;6.&lt;/strong&gt; Create your first Dockerfile! &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp2ebm12xq7cwvv6zq9r2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp2ebm12xq7cwvv6zq9r2.png" alt="Dockerfile code"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here's what's going on in this Dockerfile:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;FROM python:3.8-slim-buster&lt;/code&gt;: This line is telling Docker to use a pre-made Python image from Docker Hub. The version of Python is 3.8 and it is based on a "slim-buster" image, which is a minimal Debian-based image. This is the base image for your Docker container.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;WORKDIR /app&lt;/code&gt;: This line is setting the working directory in your Docker container to &lt;code&gt;/app&lt;/code&gt;. This means that all subsequent actions (such as copying files or running commands) will be performed in this directory.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;COPY ./gollum.py /app/gollum.py&lt;/code&gt;: This line is copying the local file &lt;code&gt;gollum.py&lt;/code&gt; into the Docker container, placing it in the &lt;code&gt;/app&lt;/code&gt; directory and keeping its name as &lt;code&gt;gollum.py&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;COPY ./smeagol.txt /app/smeagol.txt&lt;/code&gt;: This line is doing the same thing as the previous line, but for the smeagol.txt file.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;ENTRYPOINT ["python", "/app/gollum.py"]&lt;/code&gt;: This line is specifying the command to be executed when the Docker container is run. In this case, it's running your Python script.&lt;/p&gt;

&lt;p&gt;You've now finished this phase of the setup. We'll move over to the terminal next.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;7.&lt;/strong&gt; Open up your terminal and &lt;code&gt;cd&lt;/code&gt; into the directory where you are storing the files we just created.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;8.&lt;/strong&gt; On the command line, run &lt;code&gt;docker build -t &amp;lt;your-dockerhub-user&amp;gt;/&amp;lt;your-repo&amp;gt;:latest .&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;9.&lt;/strong&gt; Then &lt;code&gt;docker run &amp;lt;your-dockerhub-user&amp;gt;/&amp;lt;your-repo&amp;gt;:latest "Hello, Middle Earth"&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;10.&lt;/strong&gt; If you would like you can push your new image to your DockerHub repo using &lt;code&gt;docker push &amp;lt;your-dockerhub-user&amp;gt;/&amp;lt;your-repo&amp;gt;:latest&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;And that's it! You now have created your very own personal Gollum! Good luck with that..&lt;/p&gt;

&lt;h1&gt;
  
  
  &lt;a id="10"&gt;&lt;/a&gt;Conclusion
&lt;/h1&gt;

&lt;p&gt;Docker has emerged as a powerful tool in the software development landscape, offering portability, consistency, and efficient resource utilization. It streamlines the development pipeline, enabling continuous integration, version control, and scalable deployments. With Docker, developers can easily manage multiple containers, leverage container orchestration platforms, and benefit from a thriving ecosystem. Embracing Docker allows us to navigate the complexities of software development while harnessing the magic of containerization to create robust and resilient applications.  &lt;/p&gt;

&lt;p&gt;&lt;em&gt;I hope you found this at least as mildly amusing to read as it was for me to write. There were a lot of even more forced LOTR references - I decided to spare you all.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foh66qctgqbe9n4umm4mb.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foh66qctgqbe9n4umm4mb.gif" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>docker</category>
      <category>devops</category>
      <category>beginners</category>
      <category>lordoftherings</category>
    </item>
    <item>
      <title>Conquering the Cloud: My Adventure with the Cloud Resume Challenge</title>
      <dc:creator>GilHope</dc:creator>
      <pubDate>Wed, 07 Jun 2023 19:56:30 +0000</pubDate>
      <link>https://dev.to/gilhope/conquering-the-cloud-my-adventure-with-the-cloud-resume-challenge-5ak6</link>
      <guid>https://dev.to/gilhope/conquering-the-cloud-my-adventure-with-the-cloud-resume-challenge-5ak6</guid>
      <description>&lt;center&gt;
Late last night...&lt;br&gt;
&lt;br&gt;
... At approximately 02:00 hours, while many slept, a colossal breakthrough was made ...&lt;br&gt;
&lt;br&gt;
... After many long nights and countless eons ...&lt;br&gt;
&lt;br&gt;
I finally completed my Cloud Resume Challenge Project !!!!!!!1!!!1!1!
&lt;/center&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.gifer.com%2F7RWA.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.gifer.com%2F7RWA.gif" alt="GIF"&gt;&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;You can find my Cloud Resume Project &lt;a href="https://ghope.cloud/" rel="noopener noreferrer"&gt;HERE&lt;/a&gt;!! &lt;br&gt;
&lt;a href="https://github.com/GilHope/Cloud-Resume-Challenge-AWS" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;&lt;br&gt;
&lt;a href="https://www.linkedin.com/in/gil-hope/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Also, I am actively looking for an opportunity to prove myself and develop my skills with my first job in the cloud! If you could please like and share I would immensely appreciate any and all help!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Contents&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The Challenge&lt;/li&gt;
&lt;li&gt;Certification&lt;/li&gt;
&lt;li&gt;Front End&lt;/li&gt;
&lt;li&gt;Back End&lt;/li&gt;
&lt;li&gt;IaC / CICD / Version Control&lt;/li&gt;
&lt;li&gt;Testing&lt;/li&gt;
&lt;li&gt;Challenges and Future Improvements&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  The Challenge
&lt;/h2&gt;

&lt;p&gt;The Cloud Resume Challenge is a 16-step challenge designed for beginners to get hands-on experience with some of the tools and applications that DevOps Engineers and Cloud Architects use in their day-to-day tasks.&lt;/p&gt;

&lt;p&gt;The steps outlined in the CRC were intentionally high-level and incredibly vague, providing a fun and sometimes challenging learning opportunity.&lt;/p&gt;
&lt;h2&gt;
  
  
  Certification
&lt;/h2&gt;

&lt;p&gt;The first steps of the CRC recommended preparing to take the AWS Cloud Practitioner Exam. However, fancying myself an overachiever, I decided to pursue the AWS Solutions Architect Associate Course (SAA-C03) instead.&lt;/p&gt;

&lt;p&gt;So, I signed up for &lt;a href="https://cantrill.io/" rel="noopener noreferrer"&gt;Adrian Cantrill's AWS Certified Solutions Architect - Associate Course&lt;/a&gt; and &lt;a href="https://www.udemy.com/course/aws-certified-solutions-architect-associate-amazon-practice-exams-saa-c03/" rel="noopener noreferrer"&gt;Jon Bonzo’s AWS Certified Solutions Architect Associate Practice Exams&lt;/a&gt; and I am glad I did. &lt;/p&gt;

&lt;p&gt;In March of '23, I passed the &lt;a href="https://www.credly.com/badges/e9ba9344-0339-4895-9df8-19dadc4057ce" rel="noopener noreferrer"&gt;Cloud Practitioner&lt;/a&gt; exam, and finally, the &lt;a href="https://www.credly.com/badges/e108acb8-64be-4013-90b2-d055dbc69d5c" rel="noopener noreferrer"&gt;Solutions Architect Associate&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;I was quite nervous leading up to my exam day for SAA-C03, but when it finally came time to sit down for it... I immediately realized how well Cantrill's and Bonzo's services had prepared me.&lt;/p&gt;
&lt;h2&gt;
  
  
  &lt;a id="front-end"&gt;&lt;/a&gt;Front End
&lt;/h2&gt;

&lt;p&gt;Alright, so getting into the real meat ‘n potatoes, this is what I did…&lt;/p&gt;

&lt;p&gt;The challenge required writing up my resume in HTML and CSS… which I had some light experience with, but I did some speedruns through a few free courses, found an example of a resume I liked, mixed it all up, and got that working locally in VSCode…&lt;/p&gt;

&lt;p&gt;Next, I needed to create a domain using Route 53 and set up a CloudFront distribution with an S3 bucket as the origin… I found these sections to be pretty simple and straightforward… Again, I believe Adrian Cantrill’s course exceptionally prepared me for this area with the practical knowledge covered in his lectures and tutorials.&lt;/p&gt;
&lt;h2&gt;
  
  
  &lt;a id="back-end"&gt;&lt;/a&gt;Back End
&lt;/h2&gt;

&lt;p&gt;For this section, I needed to create a visitor counter for my site using Lambda, DynamoDB, API Gateway, and some JavaScript…&lt;/p&gt;

&lt;p&gt;Figuring out the JavaScript was by far the most difficult part of this challenge for me… I especially struggled with working out how to fetch my API URL dynamically.&lt;/p&gt;
&lt;h2&gt;
  
  
  &lt;a id="iac--cicd--version-control"&gt;&lt;/a&gt;IaC / CICD / Version Control
&lt;/h2&gt;

&lt;p&gt;When first approaching these chunks, I stalled for some time to do as much reading up as I could. I had heard these terms in passing, but it was always quite vague to me what exactly they were and how they worked.&lt;/p&gt;

&lt;p&gt;Setting up Version Control and my CICD pipelines seemed like a pretty daunting order at first, but after skimming through the GitHub documentation I found these to be some of the easier steps to complete. &lt;/p&gt;

&lt;p&gt;Getting to the Infrastructure-as-Code, I initially debated between using AWS SAM or Terraform, since I would be learning either one completely fresh anyway. Ultimately, I went with SAM for its simplicity of use with serverless. &lt;/p&gt;

&lt;p&gt;As I started to figure it out I had a surprising amount of fun working with SAM… And it was without a doubt my favorite part of the challenge.&lt;/p&gt;

&lt;p&gt;This article, &lt;a href="https://arcadian.cloud/delivery/2023/05/03/one-artifact-one-pipeline-one-path-to-production/" rel="noopener noreferrer"&gt;One Artifact, One Pipeline, One Path to Production&lt;/a&gt;, was particularly beneficial for helping get me started in this section.&lt;/p&gt;
&lt;h2&gt;
  
  
  &lt;a id="testing"&gt;&lt;/a&gt;Testing
&lt;/h2&gt;

&lt;p&gt;This was another especially challenging portion for me, and it took a significant amount of time to get this hammered out, but boy did it feel good when I got those 'PASSED' outputs.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmedia.tenor.com%2F29eE-n-_4xYAAAAd%2Fatomic-nuke.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmedia.tenor.com%2F29eE-n-_4xYAAAAd%2Fatomic-nuke.gif" alt="GIF"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  &lt;a id="challenges-and-future-improvements"&gt;&lt;/a&gt;Challenges and Future Improvements
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Improve the visitor counter API so it distinguishes repeat visitors more efficiently.&lt;/li&gt;
&lt;li&gt;Add more test coverage for my Lambda code.&lt;/li&gt;
&lt;li&gt;Consider reworking the IaC with Terraform. I initially explored the idea of using Terraform but decided I was better off going forward with SAM, as it seemed more intuitive to me… Also, Terraform just has a cooler sci-fi sounding name!&lt;/li&gt;
&lt;/ol&gt;


&lt;center&gt;

&lt;p&gt;Ok, now...&lt;/p&gt;

&lt;p&gt;Crab rave&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmedia3.giphy.com%2Fmedia%2F2dK0W3oUksQk0Xz8OK%2Fgiphy.gif%3Fcid%3Decf05e47gad2s9j2n0xynqte3xogceai0va6q1mcsouwdqdh%26ep%3Dv1_gifs_search%26rid%3Dgiphy.gif%26ct%3Dg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmedia3.giphy.com%2Fmedia%2F2dK0W3oUksQk0Xz8OK%2Fgiphy.gif%3Fcid%3Decf05e47gad2s9j2n0xynqte3xogceai0va6q1mcsouwdqdh%26ep%3Dv1_gifs_search%26rid%3Dgiphy.gif%26ct%3Dg" alt="GIF"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;/center&gt;

</description>
      <category>cloudskills</category>
      <category>aws</category>
    </item>
    <item>
      <title>Amazon Macie: A Comprehensive Data Security and Privacy Solution</title>
      <dc:creator>GilHope</dc:creator>
      <pubDate>Sat, 01 Apr 2023 03:03:31 +0000</pubDate>
      <link>https://dev.to/gilhope/amazon-macie-blog-5ci0</link>
      <guid>https://dev.to/gilhope/amazon-macie-blog-5ci0</guid>
      <description>&lt;p&gt;Wrote a quick blog post on Amazon Macie tonight. You can find it at &lt;a href="https://arcadian.cloud"&gt;Arcadian Cloud&lt;/a&gt;&lt;/p&gt;

</description>
      <category>machinelearning</category>
    </item>
  </channel>
</rss>
