<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Ashirwad Pradhan</title>
    <description>The latest articles on DEV Community by Ashirwad Pradhan (@ashirwadpradhan).</description>
    <link>https://dev.to/ashirwadpradhan</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F303550%2F1f39719d-6f72-4fa8-8327-cc2c722ef72e.jpeg</url>
      <title>DEV Community: Ashirwad Pradhan</title>
      <link>https://dev.to/ashirwadpradhan</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/ashirwadpradhan"/>
    <language>en</language>
    <item>
      <title>Getting started with Kubernetes on Minikube</title>
      <dc:creator>Ashirwad Pradhan</dc:creator>
      <pubDate>Thu, 12 May 2022 19:32:38 +0000</pubDate>
      <link>https://dev.to/ashirwadpradhan/getting-started-with-kubernetes-on-minikube-5el6</link>
      <guid>https://dev.to/ashirwadpradhan/getting-started-with-kubernetes-on-minikube-5el6</guid>
      <description>&lt;p&gt;Kubernetes is a container orchestrator which helps to manage and scale application running on containers.&lt;br&gt;
Setting up a production grade Kubernetes cluster is a hassle when it comes to provision different kind of nodes which manages a Kubernetes workload. In a Kubernetes cluster, there are two kinds of nodes &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Master Nodes (control plane):&lt;/strong&gt; These nodes manage and control how the applications are "orchestrated". This simply means the master nodes keep track of the current application state, check whether the current state of each deployment matches the desired state, and so on. To sum it up, this is the &lt;strong&gt;brain&lt;/strong&gt; of the Kubernetes cluster.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Worker Nodes:&lt;/strong&gt; These are the nodes where the containers are provisioned and run. These are the nodes that actually handle the workload.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;Note: There are other components in a Kubernetes cluster, such as the Controller Manager, etcd etc., but those are beyond the scope of this post&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;For starters, the information above is enough to get hands-on with Kubernetes. While we could set up our own VMs as control-plane and worker nodes, it is too much hassle to install all the components required to run a Kubernetes cluster.&lt;/p&gt;

&lt;p&gt;Therefore, we will use Minikube, which comes with all the bells and whistles required to run a Kubernetes cluster with just a few commands. Let us get hands-on now. Fire up the terminal of your choice and follow along.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Assumption: You must have Docker Desktop installed on your machine to follow along. Check whether Docker is installed on your system by running the following command in your terminal&lt;br&gt;
&lt;/p&gt;


&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker -v
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you see something like this, then you are good to go 😄&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;~ ➜ docker -v
Docker version 20.10.12, build e91ed57
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Installing Minikube
&lt;/h3&gt;

&lt;p&gt;For macOS,&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;brew install minikube
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For Windows,&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;choco install minikube
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For Linux,&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
sudo install minikube-linux-amd64 /usr/local/bin/minikube
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you do not have &lt;code&gt;brew&lt;/code&gt; installed on macOS or &lt;code&gt;chocolatey&lt;/code&gt; installed on Windows, you can alternatively run:&lt;/p&gt;

&lt;p&gt;For macOS,&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-darwin-amd64
sudo install minikube-darwin-amd64 /usr/local/bin/minikube
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For Windows,&lt;br&gt;
Follow instructions on &lt;a href="https://minikube.sigs.k8s.io/docs/start/"&gt;Minikube for Windows&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Start Minikube cluster
&lt;/h3&gt;

&lt;p&gt;On your terminal, run the following command,&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;minikube start --vm-driver=docker
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should now see Minikube spinning up a Kubernetes cluster for you. If you get output like this, you are good to go.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;~ ➜ minikube start --vm-driver=docker
😄  minikube v1.25.2 on Darwin 12.3.1
🆕  Kubernetes 1.23.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.23.3
✨  Using the docker driver based on existing profile
👍  Starting control plane node minikube in cluster minikube
🚜  Pulling base image ...
🐳  Preparing Kubernetes v1.23.1 on Docker 20.10.12 ...
    ▪ kubelet.housekeeping-interval=5m
🔎  Verifying Kubernetes components...
    ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟  Enabled addons: storage-provisioner, default-storageclass
🏄  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can now view the nodes created by Minikube by running the &lt;code&gt;kubectl get nodes&lt;/code&gt; command&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;~ ➜ kubectl get nodes                                                                                                                                                                                                
NAME       STATUS   ROLES                  AGE   VERSION
minikube   Ready    control-plane,master   23d   v1.23.1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As you can see, you now have a single-node Minikube cluster running on your local machine. Notice something interesting in the output: there is only one node, and it is the master node.&lt;/p&gt;

&lt;p&gt;Now, you might be wondering where the worker node is that I mentioned at the beginning of this post. Well, Minikube creates a single-node cluster by default, meaning a single node acts as both master and worker.&lt;/p&gt;

&lt;p&gt;This setup is fine for local development. An actual production-grade setup has multiple master and multiple worker nodes, for high availability and fault tolerance.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Note: Minikube also supports a multi-node setup, but that is a discussion for another post. We will work with a single-node cluster for this tutorial.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Now that we have the setup ready, we can run our applications on it.&lt;br&gt;
For the purpose of this tutorial we will use an &lt;code&gt;echo-server&lt;/code&gt; by Google Cloud that, when sent an HTTP request, echoes back details about the request and the client that sent it.&lt;/p&gt;
&lt;h3&gt;
  
  
  Key Concepts
&lt;/h3&gt;

&lt;p&gt;Before getting into the deployment, let us understand three basic concepts in Kubernetes. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Pods:&lt;/strong&gt; The smallest deployable compute unit in a Kubernetes setup. Pods are where containers run (where the application code runs). Although a pod can run multiple containers, we usually see pods running one or two. For horizontal scaling, Kubernetes spawns multiple pods on a node so that the underlying application can scale.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Deployment:&lt;/strong&gt; The way we define a pod configuration: which image to run, the number of replicas, resource configuration etc. In short, a deployment is a way to run pods at scale. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Service:&lt;/strong&gt; A service exposes the container ports that we want to send network requests to. Services are tied to pods, making them the means of communicating with our application: no matter how many pods are running in the background, we connect to the service, and the service routes our requests to the pods. A Kubernetes service also acts as a load balancer across the pods.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now that we have enough background, let us deploy our first pod in the Kubernetes cluster on Minikube.&lt;/p&gt;
&lt;h3&gt;
  
  
  Your First Deployment
&lt;/h3&gt;

&lt;p&gt;Run the following command in your terminal&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl create deployment hello-minikube --image=k8s.gcr.io/echoserver:1.4
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We have now created a deployment with a bare-minimum specification: a &lt;code&gt;name&lt;/code&gt; for the deployment and the container &lt;code&gt;image&lt;/code&gt; to run.&lt;br&gt;
Here &lt;code&gt;hello-minikube&lt;/code&gt; is the name of the deployment, and the image is specified using the &lt;code&gt;--image&lt;/code&gt; argument.&lt;/p&gt;
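&lt;p&gt;For reference, the same deployment could also be written declaratively as a manifest and applied with &lt;code&gt;kubectl apply&lt;/code&gt;. This is only a sketch: the labels and replica count are assumptions, since &lt;code&gt;kubectl create deployment&lt;/code&gt; generates its own defaults.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f - &amp;lt;&amp;lt;EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-minikube
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-minikube
  template:
    metadata:
      labels:
        app: hello-minikube
    spec:
      containers:
      - name: echoserver
        image: k8s.gcr.io/echoserver:1.4
EOF
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;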

&lt;p&gt;Now, let us verify that the pod is actually running. To see the running pods, run &lt;code&gt;kubectl get pods&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;~ ➜ kubectl get pods                                                                                                                                                                                                 
NAME                              READY   STATUS    RESTARTS   AGE
hello-minikube-7bc9d7884c-cb2fs   1/1     Running   0          78m
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The pod is in Running status. The 1/1 means that 1 replica is ready out of 1 desired replica.&lt;/p&gt;

&lt;h3&gt;
  
  
  Exposing through a Service
&lt;/h3&gt;

&lt;p&gt;Now that our echo server is running, how can we access it? If you try to access &lt;a href="http://localhost:8080"&gt;localhost:8080&lt;/a&gt; now, you will see an &lt;em&gt;Unable to connect&lt;/em&gt; error.&lt;/p&gt;

&lt;p&gt;As I already mentioned, to access/communicate with pods, we need to expose the pods with a service. Let us create a service and expose port &lt;code&gt;8080&lt;/code&gt; of the &lt;code&gt;hello-minikube&lt;/code&gt; deployment. To create a service, run the following command&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl expose deployment hello-minikube --type=NodePort --port=8080
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command tells Kubernetes to expose port &lt;code&gt;8080&lt;/code&gt; of the pods of the &lt;code&gt;hello-minikube&lt;/code&gt; deployment through a service of type &lt;em&gt;NodePort&lt;/em&gt;. Kubernetes has different types of services, namely NodePort, ClusterIP, LoadBalancer etc. &lt;br&gt;
We will dive deeper into the types of services in some other post; for now, a NodePort service lets us direct external traffic (HTTP requests from the host machine) into the Kubernetes cluster.&lt;/p&gt;
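&lt;p&gt;The equivalent declarative manifest for this service would look like the following (a sketch; the &lt;code&gt;app: hello-minikube&lt;/code&gt; selector label is an assumption based on the label that &lt;code&gt;kubectl create deployment&lt;/code&gt; generates).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Service
metadata:
  name: hello-minikube
spec:
  type: NodePort
  selector:
    app: hello-minikube
  ports:
  - port: 8080
    targetPort: 8080
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;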

&lt;p&gt;Now let us run &lt;code&gt;kubectl get service&lt;/code&gt; command to see the details of the service that is created&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;~ ➜ kubectl get service                                                                                                                                                                                              
NAME             TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
hello-minikube   NodePort    10.103.34.250   &amp;lt;none&amp;gt;        8080:32262/TCP   78m
kubernetes       ClusterIP   10.96.0.1       &amp;lt;none&amp;gt;        443/TCP          21d
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;From the output, we see that a NodePort service named &lt;code&gt;hello-minikube&lt;/code&gt; has been created. So now let us hit &lt;a href="http://localhost:8080"&gt;localhost:8080&lt;/a&gt;. Uh oh! We still cannot reach the &lt;code&gt;echo-server&lt;/code&gt;. &lt;/p&gt;

&lt;h3&gt;
  
  
  Port Forwarding
&lt;/h3&gt;

&lt;p&gt;There is still one missing piece of the puzzle before we can access the echo-server. Why is this not working even after exposing the deployment through the &lt;code&gt;hello-minikube&lt;/code&gt; service?&lt;br&gt;
Minikube runs the application pods inside a VM (or a container, when using the docker driver). The service port is exposed on that Minikube node, and we are trying to reach it from our host machine. The host network simply does not know how to route traffic to the Minikube node. &lt;/p&gt;

&lt;p&gt;To enable the routing of traffic from host machine, run &lt;code&gt;kubectl port-forward service/hello-minikube 8080:8080&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;~ ➜ kubectl port-forward service/hello-minikube 8080:8080                                                                                                                                                            
Forwarding from 127.0.0.1:8080 -&amp;gt; 8080
Forwarding from [::1]:8080 -&amp;gt; 8080
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, let us try to access our &lt;code&gt;echo-server&lt;/code&gt; at &lt;a href="http://localhost:8080"&gt;localhost:8080&lt;/a&gt;. Voilà! We can reach our &lt;code&gt;echo-server&lt;/code&gt;, and it gives us the following output with the details of the request we sent.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CLIENT VALUES:
client_address=127.0.0.1
command=GET
real path=/
query=nil
request_version=1.1
request_uri=http://localhost:8080/

SERVER VALUES:
server_version=nginx: 1.10.0 - lua: 10001

HEADERS RECEIVED:
accept=text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,*/*;q=0.8
accept-encoding=gzip, deflate, br
accept-language=en-US,en;q=0.5
connection=keep-alive
host=localhost:8080
sec-fetch-dest=document
sec-fetch-mode=navigate
sec-fetch-site=none
sec-fetch-user=?1
upgrade-insecure-requests=1
user-agent=Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:100.0) Gecko/20100101 Firefox/100.0
BODY:
-no body in request-
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The command &lt;code&gt;kubectl port-forward service/hello-minikube 8080:8080&lt;/code&gt; forwards requests from port 8080 on our host machine to port 8080 of the service, and in turn the pod.&lt;/p&gt;
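&lt;p&gt;As an alternative to &lt;code&gt;kubectl port-forward&lt;/code&gt;, Minikube itself can open a route from the host to a NodePort service. This is a Minikube-specific convenience; it prints a URL (and opens a tunnel if needed) that you can hit from your host machine.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;minikube service hello-minikube --url
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;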

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;So to wrap up, we have successfully: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Set up a Kubernetes cluster&lt;/li&gt;
&lt;li&gt;Ran a pod via a deployment&lt;/li&gt;
&lt;li&gt;Exposed the deployment using a Kubernetes service&lt;/li&gt;
&lt;li&gt;Port-forwarded traffic from the host machine to the Minikube cluster&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;I hope now you are a little more comfortable in starting your own journey in Kubernetes. Feel free to share your journey with me!!&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;If you liked what you read, please share and leave a like!&lt;/em&gt; 😄&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>minikube</category>
      <category>docker</category>
      <category>container</category>
    </item>
    <item>
      <title>All about Serverless vs Containers</title>
      <dc:creator>Ashirwad Pradhan</dc:creator>
      <pubDate>Thu, 12 May 2022 08:18:59 +0000</pubDate>
      <link>https://dev.to/ashirwadpradhan/all-about-serverless-vs-containers-49ch</link>
      <guid>https://dev.to/ashirwadpradhan/all-about-serverless-vs-containers-49ch</guid>
      <description>&lt;p&gt;Let us try to understand what the term &lt;code&gt;Serverless&lt;/code&gt; and &lt;code&gt;Containers&lt;/code&gt; mean:&lt;/p&gt;

&lt;h3&gt;
  
  
  Serverless
&lt;/h3&gt;

&lt;p&gt;For an application to be termed serverless, the application and all of its components should satisfy these four characteristics:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;There are no servers to provision or manage&lt;/li&gt;
&lt;li&gt;The serverless environment should be able to scale out or scale in the application as the load (traffic) increases or decreases&lt;/li&gt;
&lt;li&gt;The environment should enable our application to be highly available &lt;/li&gt;
&lt;li&gt;Highly cost optimized, i.e. you do not pay when your application is idle&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If a service fulfills all the above criteria, then it is essentially serverless. Some well-known serverless offerings from AWS are Lambda (compute environment), DynamoDB (NoSQL database) and API Gateway (reverse proxy). &lt;/p&gt;

&lt;p&gt;But for the sake of this discussion, let us just focus on compute service, which is AWS Lambda.&lt;/p&gt;

&lt;h4&gt;
  
  
  AWS Lambda
&lt;/h4&gt;

&lt;p&gt;To explain briefly, AWS Lambda is a compute service offered by AWS which enables you to run code without provisioning or managing any servers. We will learn about other characteristics of Lambda later in this post.&lt;/p&gt;

&lt;p&gt;Now let us see what containers are:&lt;/p&gt;

&lt;h3&gt;
  
  
  Containers
&lt;/h3&gt;

&lt;p&gt;A container is a single cohesive unit that packages code and all of its dependencies together. Given an appropriate container runtime environment, every instance of the container behaves exactly the same, so applications run reliably when deployed across multiple computing environments. &lt;/p&gt;

&lt;p&gt;&lt;em&gt;For example: A container image built on one machine will run exactly the same way on any other machine with the appropriate container runtime. This lets us deploy and run our application in any environment, as long as it supports that runtime.&lt;/em&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Container runtime: the set of programs and processes required to run a packaged unit of code and its dependencies (a container image).&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;But simply running a single container is not enough. Most of the time, our applications are too complex to run in a single container. A typical application may have a frontend container talking to a backend container, which might store data in a database container. With these kinds of applications, there are multiple considerations to make. &lt;/p&gt;

&lt;p&gt;&lt;em&gt;For example: The group of containers should be able to talk to each other over a network. They should be able to scale up and down depending on traffic to each.&lt;/em&gt; &lt;/p&gt;

&lt;p&gt;So to manage these aspects, we use &lt;em&gt;container orchestrators&lt;/em&gt;, which manage the applications running in the containers for us.&lt;br&gt;
Some popular container orchestrators are: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Docker Swarm&lt;/li&gt;
&lt;li&gt;Kubernetes&lt;/li&gt;
&lt;li&gt;AWS EKS&lt;/li&gt;
&lt;li&gt;AWS ECS&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now that we have some idea of what each term means, let us do some comparisons between Serverless and Containers based on some factors:&lt;/p&gt;

&lt;h3&gt;
  
  
  Runtime Environments
&lt;/h3&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;For Serverless (AWS Lambda),&lt;/strong&gt;
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Infrastructure is completely managed by the cloud provider&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Scales in and out automatically&lt;/li&gt;
&lt;li&gt;We do not need to worry about OS patching or software upgrades of the underlying infrastructure&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;

&lt;p&gt;Cannot install any custom application (e.g. web server like &lt;code&gt;Apache&lt;/code&gt; or reverse proxy like &lt;code&gt;Nginx&lt;/code&gt;)&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Can install libraries/code dependencies&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Memory in AWS Lambda is limited to a max of 10 GB&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Compute time for AWS Lambda is limited to 15 minutes (900 seconds). If the workload exceeds this limit, we get an exception and the Lambda instance stops the computation immediately.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
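&lt;p&gt;These limits show up directly as flags when creating a function with the AWS CLI. The sketch below is illustrative only: the function name, role ARN and deployment package are hypothetical.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# --memory-size is capped at 10240 MB (10 GB); --timeout at 900 seconds
aws lambda create-function \
    --function-name echo-fn \
    --runtime python3.9 \
    --handler app.handler \
    --role arn:aws:iam::123456789012:role/echo-fn-role \
    --zip-file fileb://function.zip \
    --memory-size 10240 \
    --timeout 900
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;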

&lt;blockquote&gt;
&lt;p&gt;Serverless aims to solve a particular problem without the hassle of installing software or managing infrastructure: just start writing business logic and focus on solving the problem.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;For Containers (AWS EKS),&lt;/strong&gt;
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Infrastructure is managed by the user&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;With EKS, AWS manages the Kubernetes control plane itself, but the worker nodes run on EC2 and are the user's responsibility. All the EC2-related concerns, such as AMI updates, scaling and high availability, are up to the user to configure&lt;/li&gt;
&lt;li&gt;The user decides the instance size, memory and other aspects of each node&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Can use container to package any custom/third-party application (e.g. MongoDB, MySQL, Nginx etc.)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Memory is configured as part of the EC2 instance selection&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Can choose between different classes of EC2 instance (t, c, m etc.)&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;Gives users a plethora of choices, from customizing the hardware to running any software or application&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  Where do they fit?
&lt;/h3&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;For Serverless (AWS Lambda),&lt;/strong&gt;
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;The main use case where AWS Lambda shows its true power is event-driven architectures. It has built-in integrations with many AWS services, e.g. AWS S3, AWS SNS, AWS SQS etc.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;For example: If we want to process a file as soon as it is put into S3, we can set up a PutObject event notification from AWS S3 to an SNS topic, with an AWS Lambda triggered by that topic. As soon as a file is uploaded to S3, it notifies the SNS topic, which in turn triggers the Lambda. The Lambda can then pick up the file path from the notification and process it&lt;/em&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;It is better suited when traffic is &lt;strong&gt;sporadic and unpredictable&lt;/strong&gt;. Since AWS Lambda automatically scales in and out based on traffic, it is cost-effective: when there is no traffic, we pay almost nothing because the Lambda is not invoked at all&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;AWS Lambda is suited for microservices as long as it does not depend on third party software. However, code dependencies can be installed&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;For Containers (AWS EKS),&lt;/strong&gt;
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;The main use case where containers are better than AWS Lambda is when we want a faster migration to the cloud. Since containers can run any third-party application, we can easily spin up our choice of web server or database in the cloud. This is not possible with AWS Lambda&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;It is better suited for &lt;strong&gt;continuous and predictable workloads&lt;/strong&gt;. Since some minimum number of pods is always running on the worker nodes, along with the control plane (master), we incur costs even when there is no traffic&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Containers are very well suited for microservices, as they can package and run any kind of third-party application or software&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now let us look at how they scale in presence of traffic.&lt;/p&gt;

&lt;h3&gt;
  
  
  Scaling
&lt;/h3&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;For Serverless (AWS Lambda),&lt;/strong&gt;
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;AWS Lambda spawns a new instance for each concurrent request. After an instance has finished processing one request, it becomes available to process the next one&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A major disadvantage of AWS Lambda is the cold-start time incurred when an instance is spawned for the first time. This can be significant for high-performance or critical applications&lt;/li&gt;
&lt;li&gt;To mitigate this, we can configure provisioned concurrency for AWS Lambda so that some instances are always kept warm&lt;/li&gt;
&lt;li&gt;Lambda scaling is limited to 1,000 concurrent instances per region per account by default, so it can theoretically support only up to 1,000 concurrent requests. This can be increased to several thousand by requesting a quota increase&lt;/li&gt;
&lt;li&gt;AWS Lambda is not suited for long-running workloads because of its execution time limit of 15 minutes (900 seconds)&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;
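&lt;p&gt;Keeping instances warm, as described above, is configured through provisioned concurrency. A sketch with the AWS CLI; the function name, version qualifier and instance count are hypothetical.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# keep 10 pre-initialized instances of version 1 of the function warm
aws lambda put-provisioned-concurrency-config \
    --function-name echo-fn \
    --qualifier 1 \
    --provisioned-concurrent-executions 10
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;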

&lt;h4&gt;
  
  
  &lt;strong&gt;For Containers (AWS EKS),&lt;/strong&gt;
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;A pod can handle multiple requests before another pod needs to be spun up to service more requests

&lt;ul&gt;
&lt;li&gt;When traffic increases further, a new worker node is spawned and a new pod is deployed on it&lt;/li&gt;
&lt;li&gt;Here too, whenever a new pod or worker node is spun up, we experience a cold-start delay, but it is still better than incurring a cold start for every concurrent request&lt;/li&gt;
&lt;li&gt;Here we can pay for under-utilized resources. Suppose a worker node can run 3 pods and the traffic requires 4 pods to satisfy the SLA. We then need one more worker node running a single pod, but since a worker node is essentially an EC2 instance, we pay for the whole instance even though we are using only one pod's worth of resources.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;After learning all these points about serverless and containers, we can conclude that one is not better than the other; it depends on the use case.&lt;br&gt;
Think Serverless &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;When traffic is sporadic and unpredictable, or when we have short-lived jobs. It is cost-optimized but compromises on flexibility in the kinds of applications it can run.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Think Containers&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;When traffic is steady and predictable. It may cost more in some cases, but it provides complete flexibility in the hardware and software/applications we want to run.&lt;/p&gt;
&lt;/blockquote&gt;

</description>
      <category>serverless</category>
      <category>container</category>
      <category>aws</category>
      <category>cloud</category>
    </item>
    <item>
      <title>AWS EKS vs AWS ECS... Confused ?</title>
      <dc:creator>Ashirwad Pradhan</dc:creator>
      <pubDate>Thu, 12 May 2022 06:51:24 +0000</pubDate>
      <link>https://dev.to/ashirwadpradhan/aws-eks-vs-aws-ecs-confused--3jji</link>
      <guid>https://dev.to/ashirwadpradhan/aws-eks-vs-aws-ecs-confused--3jji</guid>
      <description>&lt;p&gt;Let us start with a simple introduction of all the three offerings from AWS. &lt;/p&gt;

&lt;h3&gt;
  
  
  AWS EKS (Elastic Kubernetes Service)
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;This is the standard Kubernetes offering from AWS, which helps you run and manage your Kubernetes applications in the cloud. Using this service, you can deploy and scale Kubernetes workloads just as you would with an on-premises cluster.&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;On-premises: a term used for services (web servers, databases, Kubernetes clusters etc.) that are self-hosted and self-managed on your own private hardware.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  AWS ECS (Elastic Container Service)
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;This is a fully managed container orchestration service created and managed by AWS. You can think of it as &lt;code&gt;AWS' own implementation of a Kubernetes-like service&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now that we have seen what each service means at a high level, let's dive deeper. &lt;/p&gt;

&lt;h3&gt;
  
  
  How are AWS EKS and AWS ECS similar?
&lt;/h3&gt;

&lt;p&gt;The goal of both these services is to run and scale your application containers.&lt;br&gt;
In EKS a container runs within a &lt;code&gt;pod&lt;/code&gt;, while in ECS a container runs within a &lt;code&gt;task&lt;/code&gt;. In both services, the worker nodes where pods/tasks run are essentially EC2 instances managed by you; you are responsible for creating and managing them.&lt;/p&gt;

&lt;h3&gt;
  
  
  How do EKS and ECS differ? When should you prefer one over the other?
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;EKS runs upstream, open-source Kubernetes on AWS, while ECS is AWS-proprietary technology. So if you want multi-cloud support, or you already have a Kubernetes cluster running elsewhere (on-premises or with another cloud vendor), choosing EKS over ECS makes more sense: you can move your existing application to EKS with very minimal spec changes.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;If you want to optimize cost and your application was not previously deployed on any container orchestration service, you may prefer ECS. With ECS, you do not pay for control plane (master) nodes; you only pay for the worker nodes in your cluster. In contrast, with EKS you pay for the control plane as well as the worker nodes.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;If your application uses a lot of AWS services, then ECS might be the better option because it provides out-of-the-box integration with many AWS services such as IAM (Identity and Access Management) and ELB (Elastic Load Balancing). With EKS the integration is not as rich, but it is not bad either; you may have to manage your own middleware (integration with other AWS services) in some cases.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;If you have a very specific use case where you want to manage the control plane nodes yourself, note that EKS's control plane is managed by AWS; in that case you would run your own Kubernetes control plane on EC2 instances instead.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
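&lt;p&gt;The difference in what you manage and pay for is visible even at cluster-creation time. A sketch; the cluster name and node count are hypothetical.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# EKS: eksctl provisions a managed control plane plus a worker
# node group that you own and pay for
eksctl create cluster --name demo-cluster --nodes 2

# ECS: only a logical cluster is created; there is no control
# plane for you to pay for
aws ecs create-cluster --cluster-name demo-cluster
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;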

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;Choose ECS,&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;If you want to run at a lower cost and your application was not using any container orchestration service before. You will get better integration with other AWS services. The downside is that once you move your application to ECS, you lose the flexibility to move to another cloud vendor, or to an on-premises solution, if need be.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Choose EKS,&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;If you already have a Kubernetes-based application running elsewhere (on-premises or with another cloud vendor) and you want to move to AWS. Though it is more costly than ECS, your workloads remain portable and you keep full control over how they are configured. That said, you do have to manage some of the middleware to other AWS services with your own code.&lt;/p&gt;
&lt;/blockquote&gt;

</description>
      <category>aws</category>
      <category>kubernetes</category>
      <category>containers</category>
      <category>cloud</category>
    </item>
  </channel>
</rss>
