<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Vladimir Romanov</title>
    <description>The latest articles on DEV Community by Vladimir Romanov (@vromanov).</description>
    <link>https://dev.to/vromanov</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1263580%2Fa49ab91c-7a00-4381-9c44-db7fb09cd4f6.jpg</url>
      <title>DEV Community: Vladimir Romanov</title>
      <link>https://dev.to/vromanov</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/vromanov"/>
    <language>en</language>
    <item>
      <title>Serverless Architectures &amp; AWS Lambda Introduction</title>
      <dc:creator>Vladimir Romanov</dc:creator>
      <pubDate>Wed, 13 Mar 2024 13:07:00 +0000</pubDate>
      <link>https://dev.to/vromanov/serverless-architectures-aws-lambda-introduction-480</link>
      <guid>https://dev.to/vromanov/serverless-architectures-aws-lambda-introduction-480</guid>
      <description>&lt;h2&gt;
  
  
  Introduction to AWS Lambda and Serverless Architectures
&lt;/h2&gt;

&lt;p&gt;Before we talk about AWS Lambda, it’s essential to understand the general principles of server-based and serverless architectures. In short, a serverless architecture will leverage short bursts of processing power as required by the load. In other words, the foundational difference between the two is that a server-based architecture will have a “permanent” infrastructure in place that will wait for requests to come in. In contrast, a serverless architecture will only spin up a “server” when it has to process a request.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpdktgo4mx4occljcij6q.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpdktgo4mx4occljcij6q.gif" alt="Figure 1 — Serverless Architectures &amp;amp; AWS Lambda Introduction | Fundamental Diagram of Lambda Functions in AWS" width="800" height="452"&gt;&lt;/a&gt;&lt;em&gt;Figure 1 — Serverless Architectures &amp;amp; AWS Lambda Introduction | Fundamental Diagram of Lambda Functions in AWS&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;As shown above, AWS services integrate well with what we’ll be covering in this tutorial — Lambda Functions. Lambda Functions run on a server, but they don’t require developers and software engineers to maintain infrastructure. In other words, they can focus on building applications without the hassle of maintaining what their code runs on.&lt;/p&gt;

&lt;p&gt;AWS started off as a service for engineers to easily deploy applications via various services that abstracted away the hardware. Engineers could allocate hardware resources as needed. The serverless movement was pioneered by AWS as an effort to keep making it easier to deploy code.&lt;/p&gt;

&lt;h2&gt;
  
  
  Serverless Services on AWS
&lt;/h2&gt;

&lt;p&gt;Here are the “serverless” services available on AWS:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AWS Lambda&lt;/li&gt;
&lt;li&gt;DynamoDB&lt;/li&gt;
&lt;li&gt;AWS Cognito&lt;/li&gt;
&lt;li&gt;Amazon S3&lt;/li&gt;
&lt;li&gt;AWS API Gateway&lt;/li&gt;
&lt;li&gt;Fargate&lt;/li&gt;
&lt;li&gt;Step Functions&lt;/li&gt;
&lt;li&gt;AWS Kinesis Data Firehose&lt;/li&gt;
&lt;li&gt;AWS SNS&lt;/li&gt;
&lt;li&gt;AWS SQS&lt;/li&gt;
&lt;li&gt;Aurora Serverless&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Understanding AWS Lambda
&lt;/h2&gt;

&lt;p&gt;In traditional server architectures, including AWS, you’re almost always paying for hardware upkeep. You’re either continuously running on-prem servers that consume power and depreciate over time, or you’re paying a fee to have EC2 instances running in the cloud. AWS Lambda offers a way to run functions on demand. In other words, you save the cost of continuously running a server and instead pay only when a request is issued within your system.&lt;/p&gt;

&lt;p&gt;Although you may assume that the choice between server-based and serverless architectures is black and white, it’s a bit more nuanced than that. As a user of AWS or any cloud provider, you’re paying a premium for the ability to run your workloads at a moment’s notice. In other words, serverless architectures do have some drawbacks. We’ll examine some advantages and disadvantages in the sections below.&lt;/p&gt;

&lt;h3&gt;
  
  
  Benefits of AWS Lambda
&lt;/h3&gt;

&lt;p&gt;Straightforward pricing — Lambda functions are priced per request and compute time. Simpler queries will result in lower pricing than complex computations. The free tier, useful for learning purposes, includes 1,000,000 Lambda requests and 400,000 GB-seconds of compute time per month.&lt;/p&gt;
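&lt;p&gt;As a quick back-of-the-envelope check, the sketch below estimates whether a monthly workload fits inside that free tier. The free-tier figures come from the paragraph above; the helper names are our own:&lt;/p&gt;

```python
# Rough free-tier check for a monthly Lambda workload, using the
# free-tier figures quoted above: 1,000,000 requests and
# 400,000 GB-seconds of compute per month.

def monthly_usage(invocations, memory_mb, avg_duration_sec):
    """Return (requests, gb_seconds) consumed in a month."""
    gb_seconds = invocations * (memory_mb / 1024) * avg_duration_sec
    return invocations, gb_seconds

def fits_free_tier(invocations, memory_mb, avg_duration_sec,
                   free_requests=1_000_000, free_gb_seconds=400_000):
    reqs, gbs = monthly_usage(invocations, memory_mb, avg_duration_sec)
    return reqs <= free_requests and gbs <= free_gb_seconds

# 500k invocations of a 128 MB function averaging 200 ms each:
# 500_000 * 0.125 GB * 0.2 s = 12,500 GB-seconds -> well within the tier.
print(fits_free_tier(500_000, 128, 0.2))
```

For example, doubling the invocation count past one million per month would fall outside the tier on the request limit alone, even though the compute usage would still be small.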

&lt;p&gt;Lambda functions are compatible and integrate with most AWS services. They’re also compatible with most modern programming languages.&lt;/p&gt;

&lt;p&gt;Lambda functions can be easily monitored and fine-tuned via various external and AWS-provided services, such as AWS CloudWatch.&lt;/p&gt;

&lt;p&gt;Lambda functions scale easily — As mentioned above, the goal is to have an API that will compute anything thrown at it — higher amounts and higher complexity. Keep in mind that pricing will scale accordingly.&lt;/p&gt;

&lt;h3&gt;
  
  
  AWS Lambda Supported Languages
&lt;/h3&gt;

&lt;p&gt;Lambda supports various languages and platforms; here’s the list:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Python&lt;/li&gt;
&lt;li&gt;Golang&lt;/li&gt;
&lt;li&gt;Java&lt;/li&gt;
&lt;li&gt;C#&lt;/li&gt;
&lt;li&gt;Ruby&lt;/li&gt;
&lt;li&gt;Rust&lt;/li&gt;
&lt;li&gt;Node.js (JavaScript)&lt;/li&gt;
&lt;li&gt;Container images (any runtime packaged as a Lambda container image)&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  AWS Lambda Hands-On Creation Walkthrough
&lt;/h2&gt;

&lt;p&gt;It’s time for us to get practical when it comes to AWS Lambda. In this section, you’ll find a walkthrough of getting started with AWS Lambda on an actual AWS account.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 1 — Navigate to the Lambda Console and Create a New Function
&lt;/h3&gt;

&lt;p&gt;1.1 — From the AWS Console, search for “lambda.”&lt;br&gt;
1.2 — From the drop-down menu, click on “Lambda.”&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foign69shhkmswtjhy0qo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foign69shhkmswtjhy0qo.png" alt="Figure 2 — Serverless Architectures &amp;amp; AWS Lambda Introduction | Navigate to the Lambda Console and Create a New Function" width="800" height="400"&gt;&lt;/a&gt;&lt;em&gt;Figure 2 — Serverless Architectures &amp;amp; AWS Lambda Introduction | Navigate to the Lambda Console and Create a New Function&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;1.3 — From the main page, click on “Create a function.”&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqlptisr74hnv70nbcv7s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqlptisr74hnv70nbcv7s.png" alt="Figure 3 — Serverless Architectures &amp;amp; AWS Lambda Introduction | Creating a New Lambda Function on AWS" width="800" height="400"&gt;&lt;/a&gt;&lt;em&gt;Figure 3 — Serverless Architectures &amp;amp; AWS Lambda Introduction | Creating a New Lambda Function on AWS&lt;/em&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 2 — Specify the Parameters of the Lambda Function
&lt;/h3&gt;

&lt;p&gt;2.1 — We will be creating a “Hello World” function. For that purpose, we can select from a list of blueprints pre-created in AWS. Click on “Use a blueprint.”&lt;/p&gt;

&lt;p&gt;2.2 — You can choose from a variety of blueprints; in this demo, we’re going to use the Python 3.10 blueprint called “Hello world function.”&lt;/p&gt;

&lt;p&gt;2.3 — Give your function a unique name; we’ve called ours “kerno-lambda-ex1.”&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7z55gi6e8hp7gvkaolvf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7z55gi6e8hp7gvkaolvf.png" alt="Figure 4 — Serverless Architectures &amp;amp; AWS Lambda Introduction | Specifying the Parameters of the Lambda Function" width="800" height="400"&gt;&lt;/a&gt;&lt;em&gt;Figure 4 — Serverless Architectures &amp;amp; AWS Lambda Introduction | Specifying the Parameters of the Lambda Function&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;2.4 — At the bottom of the specifications page, you’ll be presented with the function you’re going to be running in AWS Lambda. You’ll notice that predefined functions can’t be changed in this section. However, if you specify your own Lambda function, you’ll be able to preview and change the code here.&lt;/p&gt;

&lt;p&gt;2.5 — Click on “Create function.”&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frflcpszlujnefptx3ojf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frflcpszlujnefptx3ojf.png" alt="Figure 5 — Serverless Architectures &amp;amp; AWS Lambda Introduction | Viewing the Function Implementation" width="800" height="400"&gt;&lt;/a&gt;&lt;em&gt;Figure 5 — Serverless Architectures &amp;amp; AWS Lambda Introduction | Viewing the Function Implementation&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  AWS Lambda Hands-On Testing Walkthrough
&lt;/h2&gt;

&lt;p&gt;At this point, we have the demo function deployed in our AWS environment. However, a Lambda function will only execute based on a trigger. We can simulate a trigger via the “Test” function to see what the response of our function is going to be based on the inputs provided. In this section, we’re going to run a test to see what happens.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 1 — Locate the Code Source and Start Testing
&lt;/h3&gt;

&lt;p&gt;1.1 — From the AWS Lambda Dashboard, go to “Functions.”&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3eba8aspdqr0jptknz8b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3eba8aspdqr0jptknz8b.png" alt="Figure 6 — Serverless Architectures &amp;amp; AWS Lambda Introduction | Locate the Code Source and Start Testing" width="800" height="400"&gt;&lt;/a&gt;&lt;em&gt;Figure 6 — Serverless Architectures &amp;amp; AWS Lambda Introduction | Locate the Code Source and Start Testing&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;1.2 — From the list of functions, select the one you’d like to work with. In this case, we’re selecting the function we created in the previous section.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frwwx925lkqbmhz91vzxj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frwwx925lkqbmhz91vzxj.png" alt="Figure 7 — Serverless Architectures &amp;amp; AWS Lambda Introduction | Locate the Code Source and Start Testing" width="800" height="400"&gt;&lt;/a&gt;&lt;em&gt;Figure 7 — Serverless Architectures &amp;amp; AWS Lambda Introduction | Locate the Code Source and Start Testing&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;1.3 — Scroll down until you see the “Code” section, which displays the entire Lambda function.&lt;/p&gt;

&lt;p&gt;1.4 — Click on the “Test” button.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F61m3jjdtiaxvxwa1e93f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F61m3jjdtiaxvxwa1e93f.png" alt="Figure 8 — Serverless Architectures &amp;amp; AWS Lambda Introduction | Locate the Code Source and Start Testing" width="800" height="400"&gt;&lt;/a&gt;&lt;em&gt;Figure 8 — Serverless Architectures &amp;amp; AWS Lambda Introduction | Locate the Code Source and Start Testing&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;At this point, we need to specify the parameters of the test event.&lt;/p&gt;

&lt;p&gt;1.5 — We don’t have any events in the current environment, so the only option is to select “Create new event.”&lt;/p&gt;

&lt;p&gt;1.6 — Give your event a unique name; we’re using “Kerno-test-ev1.”&lt;/p&gt;

&lt;p&gt;1.7 — Notice that the event will specify JSON that will be used as input into the Lambda function. In our specific case, we’re feeding three key-value pairs into the hello function executable; you don’t need to change the default values, but it’s something you need to pay attention to for custom functions.&lt;/p&gt;

&lt;p&gt;1.8 — Press on “Save.”&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6l6arhv66dwpkf63z5oy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6l6arhv66dwpkf63z5oy.png" alt="Figure 9 — Serverless Architectures &amp;amp; AWS Lambda Introduction | Configuring the Parameters of the Lambda Function" width="800" height="400"&gt;&lt;/a&gt;&lt;em&gt;Figure 9 — Serverless Architectures &amp;amp; AWS Lambda Introduction | Configuring the Parameters of the Lambda Function&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Now that we have a trigger, all we have to do is click on “Test” to see the results.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6cexkahychll401k7xs6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6cexkahychll401k7xs6.png" alt="Figure 10 — Serverless Architectures &amp;amp; AWS Lambda Introduction | Execution Results for the Test Function" width="800" height="400"&gt;&lt;/a&gt;&lt;em&gt;Figure 10 — Serverless Architectures &amp;amp; AWS Lambda Introduction | Execution Results for the Test Function&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;You’ll notice that the “Execution results” tab presents a variety of indicators of how the function performed. In addition to technical details needed to run your code in a production environment, you’ll be presented with billing details. In other words, you can review what it will cost you to run this function in your environment.&lt;/p&gt;
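&lt;p&gt;To make the execution results less of a black box, here’s a rough paraphrase of what the Python “Hello world function” blueprint does (the exact code AWS ships may differ slightly), along with a local simulation of the console’s “Test” button using the default three key-value pairs:&lt;/p&gt;

```python
# Rough paraphrase of the Python "Hello world function" blueprint
# (the exact blueprint code may differ): it reads three key-value
# pairs from the incoming event and returns the first value.

def lambda_handler(event, context):
    print("value1 = " + event["key1"])
    print("value2 = " + event["key2"])
    print("value3 = " + event["key3"])
    return event["key1"]

# Simulate the console's "Test" button with the default test event;
# the real runtime passes a context object, which this handler ignores.
test_event = {"key1": "value1", "key2": "value2", "key3": "value3"}
print(lambda_handler(test_event, None))  # value1
```

The returned value is what appears in the “Execution results” response field, and the `print` output lands in the function log (and in CloudWatch Logs).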

&lt;h2&gt;
  
  
  Configuring AWS Lambda Parameters Walkthrough
&lt;/h2&gt;

&lt;p&gt;Although our test Lambda function ran without any issues, it’s good practice to create guardrails that would prevent the function from taking up more resources than anticipated and costing the business more than it should. In this section, we’ll take a look at a few options that will help you limit the potential usage of your Lambda functions.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 1 — Access AWS Lambda Function Settings
&lt;/h3&gt;

&lt;p&gt;1.1 — Open the “Configuration” tab under a specific function.&lt;/p&gt;

&lt;p&gt;1.2 — Press on the “Edit” button.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy9lo9iauyy5lqsnrsjpp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy9lo9iauyy5lqsnrsjpp.png" alt="Figure 11 — Serverless Architectures &amp;amp; AWS Lambda Introduction | Configuration of Guardrail Parameters for a Lambda Function" width="800" height="400"&gt;&lt;/a&gt;&lt;em&gt;Figure 11 — Serverless Architectures &amp;amp; AWS Lambda Introduction | Configuration of Guardrail Parameters for a Lambda Function&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzzvtktcq3o1xqgaq40uc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzzvtktcq3o1xqgaq40uc.png" alt="Figure 12 — Serverless Architectures &amp;amp; AWS Lambda Introduction | Configuration of Memory, Ephemeral Storage, and Timeout Parameters" width="800" height="400"&gt;&lt;/a&gt;&lt;em&gt;Figure 12 — Serverless Architectures &amp;amp; AWS Lambda Introduction | Configuration of Memory, Ephemeral Storage, and Timeout Parameters&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;You’ll find three key parameters you should be familiar with and configure for every function:&lt;/p&gt;

&lt;h4&gt;
  
  
  Memory
&lt;/h4&gt;

&lt;p&gt;The key factor influencing the performance of Lambda functions is memory allocation, which ranges from 128 MB to 10,240 MB, with the default set to the smallest capacity. While 128 MB is suitable for basic functions like event transformation and routing, more complex tasks involving libraries, Lambda layers, or data from S3 or EFS benefit from higher memory settings. Notably, memory allocation also dictates the virtual CPU available to a function, offering increased computational power as memory grows. Adjusting memory becomes particularly impactful when a function is constrained by CPU, network, or memory. Although the Lambda service charges based on gigabyte-seconds, where cost equals memory (in gigabytes) multiplied by duration (in seconds), the overall expense is shaped by the interplay of these factors: increasing memory may decrease duration, potentially offsetting or even reducing the overall cost. This makes memory allocation a strategic lever for optimizing Lambda function performance and cost efficiency.&lt;/p&gt;

&lt;p&gt;Here’s a pricing sample from the AWS documentation for 1,000 invocations of a function that computes prime numbers (note how the cost stays roughly flat while the duration drops as memory increases):&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Memory 128 MB | Duration 11.722 sec | Cost $0.024628&lt;/li&gt;
&lt;li&gt;Memory 256 MB | Duration 6.678 sec | Cost $0.028035&lt;/li&gt;
&lt;li&gt;Memory 512 MB | Duration 3.194 sec | Cost $0.026830&lt;/li&gt;
&lt;li&gt;Memory 1024 MB | Duration 1.465 sec | Cost $0.024638&lt;/li&gt;
&lt;/ul&gt;
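&lt;p&gt;Those figures can be approximately reproduced from the gigabyte-second model described above. In the sketch below, the unit prices are assumptions based on AWS’s published us-east-1 rates at the time of writing (roughly $0.20 per million requests and $0.0000166667 per GB-second), and the memory/duration pairs follow the AWS documentation sample; AWS bills duration at millisecond granularity, so the results differ from the table by fractions of a cent:&lt;/p&gt;

```python
# Reproduce the prime-number pricing sample: total cost is the request
# charge plus (memory in GB) x (duration in seconds) x (GB-second price).
# Assumed unit prices (us-east-1 at the time of writing; subject to change):
PRICE_PER_GB_SECOND = 0.0000166667
PRICE_PER_REQUEST = 0.0000002

def lambda_cost(invocations, memory_mb, duration_sec):
    gb_seconds = invocations * (memory_mb / 1024) * duration_sec
    return invocations * PRICE_PER_REQUEST + gb_seconds * PRICE_PER_GB_SECOND

# Memory/duration pairs from the AWS documentation sample (1,000 invocations):
for memory_mb, duration_sec in [(128, 11.722965), (256, 6.678945),
                                (512, 3.194954), (1024, 1.465984)]:
    print(f"{memory_mb:>4} MB: ${lambda_cost(1000, memory_mb, duration_sec):.6f}")
```

Running the loop shows the non-monotonic cost curve: the fastest configuration here is not the cheapest, and the cheapest is not the slowest.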

&lt;h4&gt;
  
  
  Ephemeral Storage
&lt;/h4&gt;

&lt;p&gt;Ephemeral storage in Lambda functions provides a temporary, local storage solution that is available only during the execution of a function, with no persistence between invocations. Lambda functions are designed to be stateless, relying on external services for data storage. The ephemeral storage proves beneficial for scenarios requiring temporary file storage, caching of intermediate results, or as a scratch space for complex computations within a function. For instance, it allows Lambda functions to efficiently process and manipulate files or store frequently accessed data during their execution. However, it’s crucial to recognize that data stored in ephemeral storage is specific to the current invocation and does not endure across different function runs. For persistent data storage, external services like databases or cloud storage are more appropriate in the serverless paradigm.&lt;/p&gt;

&lt;h4&gt;
  
  
  Timeout
&lt;/h4&gt;

&lt;p&gt;As we’ve mentioned multiple times, you’ll be charged based on the execution time of your function. It’s not uncommon for deployed functions to run into problems, fail to recover, and hang. Therefore, it’s important to set a reasonable timeout for your functions, with 15 minutes being the maximum. Keep in mind that you can monitor your Lambda functions using a variety of tools; you can always increase the timeout if you see function executions fail for this reason.&lt;/p&gt;
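&lt;p&gt;One defensive pattern is to check how much execution time is left and stop early rather than get cut off mid-flight. The &lt;code&gt;get_remaining_time_in_millis()&lt;/code&gt; method is part of the context object the Python runtime passes to every handler; the fake context below is only a stand-in so the sketch runs locally:&lt;/p&gt;

```python
# Sketch: bail out of a long-running loop before Lambda's hard timeout
# kills the invocation. context.get_remaining_time_in_millis() exists on
# the real Lambda context object; FakeContext is a local stand-in only.

SAFETY_MARGIN_MS = 5_000  # stop while there is still time to clean up

def process_items(items, context):
    done = []
    for item in items:
        if context.get_remaining_time_in_millis() < SAFETY_MARGIN_MS:
            break  # persist progress and let a retry pick up the rest
        done.append(item * 2)  # placeholder for the real per-item work
    return done

class FakeContext:
    def __init__(self, remaining_ms):
        self.remaining_ms = remaining_ms
    def get_remaining_time_in_millis(self):
        self.remaining_ms -= 1_000  # pretend each item takes ~1 second
        return self.remaining_ms

print(process_items([1, 2, 3, 4, 5], FakeContext(9_000)))  # [2, 4, 6, 8]
```

With only nine simulated seconds available, the loop processes four items and stops, instead of being killed mid-item by the timeout.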

&lt;h3&gt;
  
  
  Monitoring AWS Lambda Functions
&lt;/h3&gt;

&lt;p&gt;AWS provides a convenient way to see what is happening with a specific Lambda function. After deploying a function, you can navigate to the “Monitor” tab and see the number of times the function has been run, how long each invocation took, any exceptions raised, and so on.&lt;/p&gt;

&lt;p&gt;Note that you can choose to view this information in CloudWatch, including logs.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnb2sffye9aqc9obec907.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnb2sffye9aqc9obec907.png" alt="Figure 13 — Serverless Architectures &amp;amp; AWS Lambda Introduction | Monitoring Lambda Function Execution via Dashboard" width="800" height="400"&gt;&lt;/a&gt;&lt;em&gt;Figure 13 — Serverless Architectures &amp;amp; AWS Lambda Introduction | Monitoring Lambda Function Execution via Dashboard&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion on AWS Lambda Functions
&lt;/h2&gt;

&lt;p&gt;In this tutorial, we’ve covered topics ranging from serverless architectures to deploying, configuring, and monitoring AWS Lambda functions. Starting with an introduction to AWS Lambda and the fundamental concepts of serverless architectures, we examined the underlying principles and mechanisms. The hands-on creation walkthrough offered a step-by-step guide to setting up a Lambda function, the testing walkthrough showed how to simulate a trigger and read the execution results, and the configuration section covered the guardrail parameters (memory, ephemeral storage, and timeout) that keep usage and cost under control. Whether you’re in the initial stages of serverless exploration or refining existing expertise, these fundamentals should equip you to make full use of AWS Lambda in modern application development.&lt;/p&gt;

</description>
      <category>tutorial</category>
      <category>devops</category>
      <category>aws</category>
      <category>cloud</category>
    </item>
    <item>
      <title>Understanding and Creating ReplicaSets in Kubernetes</title>
      <dc:creator>Vladimir Romanov</dc:creator>
      <pubDate>Thu, 22 Feb 2024 02:23:21 +0000</pubDate>
      <link>https://dev.to/vromanov/understanding-and-creating-replicasets-in-kubernetes-550g</link>
      <guid>https://dev.to/vromanov/understanding-and-creating-replicasets-in-kubernetes-550g</guid>
      <description>&lt;p&gt;Pods are the fundamental building blocks of Kubernetes. They’re used to deploy applications and services that interact with each other and the outside world / users. In the last two tutorials, we’ve deployed a pod and a service. The pod runs an Angular application with a basic UI. The service, using NodePort, allows outside users to be directed to the pod and view the application via a web browser.&lt;/p&gt;

&lt;p&gt;Sounds simple enough… why do we need ReplicaSets?&lt;/p&gt;

&lt;p&gt;As we’ve mentioned multiple times, pods should be considered to have a “finite life.” They’re meant to be deployed, used, and discarded when a problem occurs. These issues can range from deadlocked code to malicious attacks from the outside world.&lt;/p&gt;

&lt;p&gt;Regardless of the nature of the issue, Pods should be considered impermanent.&lt;/p&gt;

&lt;p&gt;ReplicaSets are used to manage groups of Pods. A ReplicaSet can contain anywhere from one Pod to, in principle, an unlimited number; in practice, the upper limit will be dictated by the resources available to your Kubernetes cluster.&lt;/p&gt;

&lt;h2&gt;
  
  
  Key References
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.kerno.io/learn/implementing-and-working-with-kubernetes-services"&gt;Implementing and Working With Kubernetes Services&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.kerno.io/learn/deploying-your-first-pod-in-kubernetes"&gt;Deploying Your First Pod in Kubernetes&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;A local &lt;a href="https://www.kerno.io/learn/how-to-install-kubernetes"&gt;installation of Kubernetes&lt;/a&gt;. You may certainly use a cloud deployment, but you may run into additional challenges when it comes to accessing those pods, services, etc. Refer to the cloud provider documentation if you’re going that route.&lt;/li&gt;
&lt;li&gt;An installation of a terminal software. On Linux and MacOS, you’ll find these out of the box; for Windows, you can use PowerShell or the Terminal feature in &lt;a href="https://code.visualstudio.com/"&gt;VSCode&lt;/a&gt;, which we will be using here.&lt;/li&gt;
&lt;li&gt;A basic understanding of command line / Terminal instructions. We’ll cover what you need, but it’s important for you to understand how to navigate files, how to see what’s running in your cluster (for troubleshooting purposes), etc.&lt;/li&gt;
&lt;li&gt;A basic understanding of YAML Files. We've written a tutorial on this topic in case you need to get up to speed - &lt;a href="https://www.kerno.io/learn/yaml-file-format-complete-guide"&gt;YAML File Format&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;A basic understanding of &lt;a href="https://www.kerno.io/learn/kubernetes-services"&gt;Kubernetes Services&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Understanding ReplicaSets in Kubernetes
&lt;/h2&gt;

&lt;p&gt;If you’ve deployed a pod directly into your cluster (as we previously discussed), you’re doing it wrong! You should never create and deploy single pods into a Kubernetes environment. Instead, you should be using ReplicaSets and Deployments. We’ll discuss Deployments in a future tutorial.&lt;/p&gt;

&lt;p&gt;What’s the problem with deploying individual pods?&lt;/p&gt;

&lt;p&gt;As mentioned above, Pods tend to fail. If you were to create a single pod that runs your application and it encountered an issue, your application would go down. For obvious reasons, we don’t want that. Kubernetes thus provides a way to “manage” pods and their availability via ReplicaSets.&lt;/p&gt;

&lt;p&gt;You may also run into an issue by having a single Pod within a ReplicaSet. When the pod fails, your users won’t be able to access the application while the ReplicaSet recreates the pod. The time to create a new pod will vary based on the software you’re running, the services that depend on it, and the time it takes for the applications within the Pod to become ready to serve end users. The solution to this problem is to have multiple pods running at the same time via a ReplicaSet. Should one of the pods fail, traffic will be redirected to a “Running” instance.&lt;/p&gt;
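&lt;p&gt;The failover behaviour described above is a control loop: the controller repeatedly compares the desired replica count with the number of healthy pods and creates replacements as needed. The toy simulation below (with made-up pod names) illustrates the reconciliation idea only; it is not the real Kubernetes controller code:&lt;/p&gt;

```python
# Toy simulation of ReplicaSet reconciliation (illustrative only, not
# the real Kubernetes controller): keep the number of running pods
# equal to the desired replica count.
import itertools

def reconcile(pods, desired, name_counter):
    """Drop failed pods and top the set back up to `desired` replicas."""
    pods = [p for p in pods if p["status"] == "Running"]
    while len(pods) < desired:
        pods.append({"name": f"myfirstapp-{next(name_counter)}",
                     "status": "Running"})
    return pods

counter = itertools.count(1)
pods = reconcile([], desired=2, name_counter=counter)
print([p["name"] for p in pods])   # two fresh pods created

pods[0]["status"] = "Failed"       # simulate one pod crashing
pods = reconcile(pods, desired=2, name_counter=counter)
print([p["name"] for p in pods])   # the failed pod has been replaced
```

The real controller runs this comparison continuously against the cluster state, which is why a crashed pod reappears (under a new name) without any manual intervention.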

&lt;h2&gt;
  
  
  Creating Our First ReplicaSet in Kubernetes
&lt;/h2&gt;

&lt;p&gt;In the tutorial on Pods, we created and deployed a Pod into our cluster. We will use that template and “wrap” it into what a ReplicaSet is. Note that as you get more comfortable using Kubernetes, you’ll simply deploy everything via ReplicaSets or Deployments; there’s no need to go through the first step and specify a single Pod.&lt;/p&gt;

&lt;p&gt;Below is the code of the new ReplicaSet.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apps/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ReplicaSet&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;  
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;myfirstset&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;  
  &lt;span class="na"&gt;replicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;2&lt;/span&gt;  
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;    
    &lt;span class="na"&gt;matchLabels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;      
      &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;myfirstapp&lt;/span&gt;  
  &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;    
    &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;      
      &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;        
        &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;myfirstapp&lt;/span&gt;    
    &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;      
      &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;      
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;myfirstapp&lt;/span&gt;        
        &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;richardchesterwood/k8s-fleetman-webapp-angular:release0&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You may notice a few familiar lines! Under the “template” section, we’re reusing the code we specified for the Pod. Per the Kubernetes documentation, this section is reserved for the Pod specification, so we can re-use the code we had previously written.&lt;/p&gt;

&lt;p&gt;You may have also noticed that we changed the “kind: Pod” key-value pair to “kind: ReplicaSet” since we’re now deploying a ReplicaSet rather than a single Pod.&lt;/p&gt;

&lt;p&gt;Lastly, we’ve set the number of “replicas” to 2. You could leave it at 1, but, as explained above, we want a failover pod should the first one fail.&lt;/p&gt;

&lt;h2&gt;
  
  
  Deploying Our First ReplicaSet in Kubernetes
&lt;/h2&gt;

&lt;p&gt;Before we deploy our new file, let’s do some basic clean-up on our cluster. Chances are, you still have a pod running from the previous tutorial. You can certainly deploy the ReplicaSet at this point, but it may become confusing to understand what’s happening inside the cluster as we start removing and resetting some of the features.&lt;/p&gt;

&lt;p&gt;First, run the following command to see what’s currently running on your cluster:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get all
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can delete a single pod in Kubernetes by issuing the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl delete pod &lt;span class="s2"&gt;"pod_name"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you’ve created more than one pod, or if you’d like to clear all of them from your cluster, you can issue the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl delete pods &lt;span class="nt"&gt;--all&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;At this point, you should be left with the service that we deployed last time. Here’s a snapshot of where we are with our cluster:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F98lnvo7vz8q22dssqo09.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F98lnvo7vz8q22dssqo09.PNG" alt="Figure 1 - Kubernetes ReplicaSet | Viewing Assets in the Kubernetes Cluster" width="778" height="89"&gt;&lt;/a&gt;&lt;em&gt;Figure 1 - Kubernetes ReplicaSet | Viewing Assets in the Kubernetes Cluster&lt;/em&gt;&lt;br&gt;
To deploy the ReplicaSet, you’ll need to issue the exact same command as before!&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; .&lt;span class="se"&gt;\m&lt;/span&gt;ypods.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Notice that we’ve changed the name to reflect that this will deploy multiple pods instead of one.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6w1ijbarjevyus6632ua.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6w1ijbarjevyus6632ua.gif" alt="Figure 2 - Kubernetes ReplicaSet | Adding a ReplicaSet into the Cluster" width="1910" height="983"&gt;&lt;/a&gt;&lt;em&gt;Figure 2 - Kubernetes ReplicaSet | Adding a ReplicaSet into the Cluster&lt;/em&gt;&lt;br&gt;
When we run the “kubectl get all” command after deploying the ReplicaSet, you’ll notice that we have 2 pods in our cluster - “myfirstset-2l5ss” and “myfirstset-mmc26.” This is what we’d expect to see, as our ReplicaSet calls for 2 replicas of pods named “myfirstset.” The ReplicaSet automatically appends a suffix to the name so that you can differentiate between the pods.&lt;/p&gt;
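The suffix behavior described above can be sketched in a few lines. This is an illustration only, not Kubernetes’ actual implementation; the real suffixes are produced by the API server’s name generator, which uses a restricted character set.

```python
import random
import string

def make_pod_name(replicaset_name: str, length: int = 5) -> str:
    """Append a random lowercase alphanumeric suffix, similar in spirit
    to the generated pod names such as "myfirstset-2l5ss"."""
    suffix = "".join(random.choices(string.ascii_lowercase + string.digits, k=length))
    return f"{replicaset_name}-{suffix}"

print(make_pod_name("myfirstset"))  # e.g. myfirstset-a3k9x
```

Because the suffix is effectively random, a replacement pod created after a deletion gets a new name, which is exactly what we observe later in this tutorial.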

&lt;p&gt;If you attempt to access the application at this point, you’ll get an error message from your browser. We’ve changed the path our traffic flows through; the service we previously created must now point to the ReplicaSet’s pods rather than the original pod. To fix this issue, we need to make two changes:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;ReplicaSet - We’ve already added a selector with the key-value pair “app: myfirstapp.”&lt;/li&gt;
&lt;li&gt;Service - We need to point the service to the ReplicaSet; just as before, this is done via the label key-value pair - “app: myfirstapp.”&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Make sure that the two labels match; redeploy the files if necessary. At this point, the application should run as before.&lt;/p&gt;
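Conceptually, the link between the service and the pods is a subset check on labels: a matchLabels selector matches a pod when every selector key-value pair appears among the pod’s labels. A minimal, illustrative sketch:

```python
def selector_matches(selector: dict, labels: dict) -> bool:
    """A matchLabels selector matches a pod when every selector
    key-value pair is present in the pod's labels (subset check)."""
    return all(labels.get(k) == v for k, v in selector.items())

service_selector = {"app": "myfirstapp"}
print(selector_matches(service_selector, {"app": "myfirstapp"}))  # True
print(selector_matches(service_selector, {"app": "otherapp"}))    # False
```

This is why the service finds the ReplicaSet’s pods even though their names change: it matches on labels, never on pod names.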
&lt;h2&gt;
  
  
  Testing Our ReplicaSet in Kubernetes
&lt;/h2&gt;

&lt;p&gt;We now have a ReplicaSet with 2 pods running on the cluster. What will happen when we delete one or both of those pods? Here’s a set of instructions you can run to see the ReplicaSet in action within your cluster:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get all
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl delete pod “pod_name”
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get all
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcsicavdnszz2gtncn022.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcsicavdnszz2gtncn022.gif" alt="Figure 3 - Kubernetes ReplicaSet | Deleting Pods and Testing ReplicaSets in K8S" width="1910" height="983"&gt;&lt;/a&gt;&lt;em&gt;Figure 3 - Kubernetes ReplicaSet | Deleting Pods and Testing ReplicaSets in K8S&lt;/em&gt;&lt;br&gt;
As shown above, we start with two pods. We then delete one of them. When we list the assets once again, we can see that there are 2 pods once more; the ReplicaSet has re-created the deleted pod. Notice that the new pod received a new suffix.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion on ReplicaSets in Kubernetes
&lt;/h2&gt;

&lt;p&gt;We’ve converted the single pod specification we had written in the previous tutorial into a ReplicaSet. We’ve scaled the number of replicas to 2 and deployed the new file into Kubernetes. We’ve verified that once a pod is deleted, it’s swiftly re-created by the ReplicaSet controller.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>devops</category>
      <category>bash</category>
      <category>yaml</category>
    </item>
    <item>
      <title>Implementing and Working With Kubernetes Services</title>
      <dc:creator>Vladimir Romanov</dc:creator>
      <pubDate>Fri, 16 Feb 2024 00:14:34 +0000</pubDate>
      <link>https://dev.to/vromanov/implementing-and-working-with-kubernetes-services-1nc6</link>
      <guid>https://dev.to/vromanov/implementing-and-working-with-kubernetes-services-1nc6</guid>
      <description>&lt;p&gt;In the previous tutorial on Kubernetes, we’ve learned what it takes to deploy a pod that runs a simple Angular application. We’ve also learned how to access this pod via command line / Terminal and to understand the structure / layout of those services.&lt;br&gt;
In this tutorial, our goal is to build on the previous concepts by implementing the Kubernetes services that expose the application to external users. Remember that we were able to see the contents of the pod via the “exec” command, but we couldn’t see anything displayed when we navigated to the IP address of the service in our browser.&lt;/p&gt;
&lt;h2&gt;
  
  
  Key References
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.kerno.io/learn/deploying-your-first-pod-in-kubernetes"&gt;Deploying Your First Pod in Kubernetes&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;A Local &lt;a href="https://www.kerno.io/learn/how-to-install-kubernetes"&gt;installation of Kubernetes&lt;/a&gt;. You may certainly use a cloud deployment, but you may run into additional challenges when it comes to accessing those pods, services, etc. Refer to the cloud provider documentation if you’re going that route.&lt;/li&gt;
&lt;li&gt;An installation of a terminal software. On Linux and MacOS, you’ll find these out of the box; for Windows, you can use PowerShell or the Terminal feature in &lt;a href="https://code.visualstudio.com/"&gt;VSCode&lt;/a&gt;, which we will be using here.&lt;/li&gt;
&lt;li&gt;A basic understanding of command line / Terminal instructions. We’ll cover what you need, but it’s important for you to understand how to navigate files, how to see what’s running in your cluster (for troubleshooting purposes), etc.&lt;/li&gt;
&lt;li&gt;A basic understanding of YAML Files. We've written a tutorial on this topic in case you need to get up to speed - &lt;a href="https://www.kerno.io/learn/yaml-file-format-complete-guide"&gt;YAML File Format&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;A basic understanding of Kubernetes Services.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  Kubernetes Services and Pods
&lt;/h2&gt;

&lt;p&gt;In Kubernetes, pods are the building blocks that run applications. The architecture of Kubernetes is such that pods can be scaled up and down without impacting the overall performance of the cluster. In other words, the Kubernetes engine manages the lifecycle of a pod and ensures that it is properly created and removed. With that in mind, we can appreciate that it becomes challenging to direct traffic to and from specific pods - they’re simply going to get different IP addresses as they’re removed and created by the engine. There must be a better approach! Kubernetes Services are what make that possible - they’re essentially gatekeepers for pods, ensuring that traffic is directed to the right destination.&lt;/p&gt;
&lt;h2&gt;
  
  
  Kubernetes Services Implementation
&lt;/h2&gt;

&lt;p&gt;Our main objective is to access the application that we’ve deployed in the previous tutorial from the web browser. To accomplish this task, we need to create a service that will expose the port of the application to a port of the minikube instance. Here’s the code we’re going to create in a new YAML file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Service&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;  
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;myappservice&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;  
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;    
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;myfirstapp&lt;/span&gt;  
  &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;    
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;80&lt;/span&gt;      
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;http&lt;/span&gt;      
    &lt;span class="na"&gt;nodePort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;30100&lt;/span&gt;  
  &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;NodePort&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Line 1 - apiVersion: v1
&lt;/h3&gt;

&lt;p&gt;Kubernetes uses a specification of OpenAPI and currently has 3 major revisions. For our simple application, the older revision is going to be just fine; you can see what has been changed and which features are available in the latest versions on the official Kubernetes specification.&lt;/p&gt;

&lt;h3&gt;
  
  
  Line 2 - kind: Service
&lt;/h3&gt;

&lt;p&gt;As we’ve discussed in the Pod deployment tutorial, you must specify the kind of deployment you’re looking to do in Kubernetes. Based on the kind keyword, the Kubernetes engine will know what element to create.&lt;/p&gt;

&lt;h3&gt;
  
  
  Line 3, 4 - metadata:, name: myappservice
&lt;/h3&gt;

&lt;p&gt;Every object in Kubernetes can be given a name. The goal, as we’re going to see shortly, is to create references to other objects based on their names. In this tutorial, the service will reference the pod by a label, which we will add at the Pod level.&lt;/p&gt;

&lt;h3&gt;
  
  
  Line 5, 6, 7 - spec:, selector:, app: myfirstapp
&lt;/h3&gt;

&lt;p&gt;This is where the unique ID of the element comes into play. We need to point the service to the pod we’re looking to impact. By adding the “app: myfirstapp” key-value pair to the selector element, the Kubernetes engine will make the link between the service and the associated pod.&lt;/p&gt;

&lt;h3&gt;
  
  
  Line 8, 9, 10, 11 - ports:, - port: 80, name: http, nodePort: 30100
&lt;/h3&gt;

&lt;p&gt;The service will open up a port on minikube and “link” it back to the pod we have specified. Notice that we’re also mentioning port 80, which is the port on which the pod is serving the Angular application. If you’re confused about that, refer to the previous tutorial in which we deployed this pod.&lt;/p&gt;

&lt;h3&gt;
  
  
  Line 12 - type: NodePort
&lt;/h3&gt;

&lt;p&gt;There are different types of services in Kubernetes. In this case, the simplest approach is to use NodePort. We’ve thus specified it in the type key-value pair.&lt;/p&gt;
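One detail worth knowing about NodePort services: by default, Kubernetes allocates node ports from the range 30000–32767 (configurable via the API server’s `--service-node-port-range` flag), which is why 30100 is a valid choice above. A quick sketch of that range check:

```python
def valid_node_port(port: int, low: int = 30000, high: int = 32767) -> bool:
    """Check a port against the default NodePort allocation range.
    The bounds are configurable on the API server, so treat these
    values as the usual defaults rather than a hard rule."""
    return low <= port <= high

print(valid_node_port(30100))  # True
print(valid_node_port(80))     # False
```

If you pick a port outside this range, the API server rejects the service manifest at apply time.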

&lt;h2&gt;
  
  
  Specifying the Kubernetes Pod Name
&lt;/h2&gt;

&lt;p&gt;At this point, we’re ready to deploy the service YAML file. However, even if we were to deploy this implementation, we wouldn’t be able to reach the Pod.&lt;/p&gt;

&lt;p&gt;Why?&lt;/p&gt;

&lt;p&gt;Because we haven’t specified the name within the implementation of the Pod. Here’s the revised file of our previous Pod:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Pod&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;  
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;myfirstapp&lt;/span&gt;  
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;    
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;myfirstapp&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;  
  &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;  
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;myfirstapp&lt;/span&gt;    
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;richardchesterwood/k8s-fleetman-webapp-angular:release0&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Notice that we have not only the name but also the label key-value pair which is used as a reference. In Kubernetes, every element must have a name, but the labels aren’t required.&lt;/p&gt;

&lt;h2&gt;
  
  
  Kubernetes Services Deployment
&lt;/h2&gt;

&lt;p&gt;In the previous tutorial, we deployed the Pod. The first step here is to re-deploy the pod to Kubernetes so that the label we’ve added is applied. When you re-issue the following command, Kubernetes will update the existing Pod instead of deploying it again:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f .\myfirstpod.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Notice that you’ll receive a confirmation message that will confirm that the changes have been deployed. If you make no changes to the same file and re-issue the command above, you’ll receive a message stating that no changes have been made.&lt;br&gt;
&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuzs5d2tozhvrz84n4gg1.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuzs5d2tozhvrz84n4gg1.PNG" alt="Figure 1 - Deploying Kubernetes Services | Deploying a Service YAML Configuration File onto Kubernetes" width="800" height="88"&gt;&lt;/a&gt;&lt;em&gt;Figure 1 - Deploying Kubernetes Services | Deploying a Service YAML Configuration File onto Kubernetes&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;We deploy the service we’ve specified with the same command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f .\myfirstapp-serv.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As we’ve seen before, Kubernetes will deploy this service in a few moments; issue the following command to see it listed in your cluster:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get all
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Viewing the Pod Application Through the Kubernetes Service
&lt;/h2&gt;

&lt;p&gt;At this point, we’ve deployed everything we need to view the application through the browser. However, we still need to understand where the application is being served. As previously configured, we have a NodePort specified as 30100. What’s the IP address? Issue the following command to see the IP address of minikube:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;minikube ip
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You’ll notice that this IP is different from what we’ve previously seen under the items listed by running the “kubectl get all” command. Those IP addresses are internal to the cluster.&lt;/p&gt;
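Putting the two pieces together, the externally reachable address is simply the node’s IP plus the NodePort. A trivial sketch (192.168.49.2 is a common minikube IP used here purely as an example; substitute the output of `minikube ip`):

```python
def service_url(minikube_ip: str, node_port: int) -> str:
    """Build the browser URL for a NodePort service:
    the node's IP address plus the allocated node port."""
    return f"http://{minikube_ip}:{node_port}"

print(service_url("192.168.49.2", 30100))  # http://192.168.49.2:30100
```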

&lt;p&gt;If you’re running on Linux without Docker Desktop, you can navigate to that IP address in your browser, add the port, and you should be able to see the application. I’m running Docker Desktop with minikube, which requires me to issue an additional command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;minikube service myappservice
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once the command executes, minikube will open the appropriate port / IP address to service the application in a browser as shown below.&lt;br&gt;
&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frelide7pq5l7pfpzmvsn.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frelide7pq5l7pfpzmvsn.gif" alt="Figure 2 - Deploying Kubernetes Services | Viewing the Application through a Browser" width="1910" height="983"&gt;&lt;/a&gt;&lt;em&gt;Figure 2 - Deploying Kubernetes Services | Viewing the Application through a Browser&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion on Deploying Kubernetes Services
&lt;/h2&gt;

&lt;p&gt;At this point, we’ve created a pod that is running a container with an application using Angular. We’ve also created a service that exposes the user traffic from a web browser into our web app inside the cluster. We can view the application running within our pod as “a regular user.”&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>devops</category>
      <category>tutorial</category>
      <category>beginners</category>
    </item>
    <item>
      <title>Deploying Your First Pod in Kubernetes</title>
      <dc:creator>Vladimir Romanov</dc:creator>
      <pubDate>Tue, 13 Feb 2024 23:38:36 +0000</pubDate>
      <link>https://dev.to/vromanov/deploying-your-first-pod-in-kubernetes-2j3j</link>
      <guid>https://dev.to/vromanov/deploying-your-first-pod-in-kubernetes-2j3j</guid>
      <description>&lt;p&gt;Our goal for this tutorial is to deploy our first pod within Kubernetes. We’re going to cover every step on how to write up the YAML file for deployment, which commands to issue to see the current state of our environment and the pod, and how to see the contents of what’s inside of our pod.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;A Local &lt;a href="https://www.kerno.io/learn/how-to-install-kubernetes"&gt;installation of Kubernetes&lt;/a&gt;. You may certainly use a cloud deployment, but you may run into additional challenges when it comes to accessing those pods, services, etc. Refer to the cloud provider documentation if you’re going that route.&lt;/li&gt;
&lt;li&gt;An installation of a terminal software. On Linux and MacOS, you’ll find these out of the box; for Windows, you can use PowerShell or the Terminal feature in &lt;a href="https://code.visualstudio.com/"&gt;VSCode&lt;/a&gt;, which we will be using here.&lt;/li&gt;
&lt;li&gt;A basic understanding of command line / Terminal instructions. We’ll cover what you need, but it’s important for you to understand how to navigate files, how to see what’s running in your cluster (for troubleshooting purposes), etc.&lt;/li&gt;
&lt;li&gt;A basic understanding of YAML Files. We’ve written a tutorial on this topic in case you need to get up to speed — &lt;a href="https://www.kerno.io/learn/yaml-file-format-complete-guide"&gt;YAML File Format&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Getting Started — Kubernetes Pod Fundamentals
&lt;/h2&gt;

&lt;p&gt;In Kubernetes, pods can run a variety of applications or microservices. In this tutorial, our focus isn’t going to be on what goes inside of the pod, as it will be on how to deploy a basic application into a pod. Based on that, we can utilize an example application that has been created by someone else and made public on DockerHub. If you’re unfamiliar with DockerHub, it’s a service allowing you and other users to upload their images and use them directly in your deployments. This is a separate topic as DockerHub comes with a lot of functionality beyond what we’ll be covering here. That being said, let’s take a look at the page of the service we’ll be deploying. To follow along, navigate to &lt;a href="https://hub.docker.com/r/richardchesterwood/k8s-fleetman-webapp-angular"&gt;https://hub.docker.com/r/richardchesterwood/k8s-fleetman-webapp-angular&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4sivdj9t1kktx353kc8r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4sivdj9t1kktx353kc8r.png" alt="Figure 1 — Deploying Your First Pod in Kubernetes | Kubernetes Pod Fundamentals&amp;lt;br&amp;gt;
" width="800" height="400"&gt;&lt;/a&gt;&lt;em&gt;Figure 1 — Deploying Your First Pod in Kubernetes | Kubernetes Pod Fundamentals&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;As you may have noticed in the Overview section of the page, it’s an Angular-based application that serves a basic front-end to the user via nginx. Without getting too deep into the details, what we’re trying to accomplish here is deploying this code and seeing the HTML that is delivered from the services running on the pod in Kubernetes.&lt;/p&gt;

&lt;p&gt;Before we deploy anything, let’s ensure that we’re all on the same page.&lt;/p&gt;

&lt;p&gt;Step 1 — Make sure that your minikube service is running by issuing the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;minikube status
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8jecrogcv01squzzzmxu.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8jecrogcv01squzzzmxu.PNG" alt="Figure 2 — Deploying Your First Pod in Kubernetes | Kubernetes Minikube Status&amp;lt;br&amp;gt;
" width="800" height="279"&gt;&lt;/a&gt;&lt;em&gt;Figure 2 — Deploying Your First Pod in Kubernetes | Kubernetes Minikube Status&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Note that the response you’re likely to receive from your Minikube instance differs from what you see above. We’ve got a deployment of 1 Control Plane container with 2 Worker containers running. That being said, you should be all set if you have a variation of “Running” indicators.&lt;/p&gt;
&lt;h2&gt;
  
  
  Getting Started — Setting Up a YAML File for a Kubernetes Pod
&lt;/h2&gt;

&lt;p&gt;We’re now ready to build the file to allow us to deploy a pod onto the cluster. Assets that run within Kubernetes are specified via YAML files. We’ve released a long guide on how to work with YAML if you’re unfamiliar with it — Writing and Using YAML Files. If you’re using VSCode, open a folder in which you’d like to store these files and issue the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;code myfirstpod.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should have a new blank file in your directory.&lt;/p&gt;

&lt;h3&gt;
  
  
  How to Specify Pods for Kubernetes?
&lt;/h3&gt;

&lt;p&gt;An empty file called “myfirstpod” isn’t going to tell our cluster to deploy a pod. We need to write out the specifications as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Pod&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;myfirstapp&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;myfirstapp&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;richardchesterwood/k8s-fleetman-webapp-angular:release0&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let’s understand what the code above does!&lt;/p&gt;

&lt;h4&gt;
  
  
  Line 1 — apiVersion: v1
&lt;/h4&gt;

&lt;p&gt;Kubernetes uses a specification of OpenAPI and currently has 3 major revisions. For our simple application, the older revision is going to be just fine; you can see what has been changed and which features are available in the latest versions on the official Kubernetes specification.&lt;/p&gt;

&lt;h4&gt;
  
  
  Line 2 — kind: Pod
&lt;/h4&gt;

&lt;p&gt;You can deploy a variety of different services and components on Kubernetes; in this case, we need to specify that the kind of deployment we’re looking to make is a Pod.&lt;/p&gt;

&lt;h4&gt;
  
  
  Line 3, 4 — metadata:, name: myfirstapp
&lt;/h4&gt;

&lt;p&gt;Each pod and component running on Kubernetes must be given a unique identifier. In this case, we’re going to call our pod “myfirstapp.” Note that you’ll need to refer to it as such in the following sections; ensure that you’re using the reference you create at this stage.&lt;/p&gt;

&lt;h4&gt;
  
  
  Line 5, 6, 7 — spec:, containers:, — name: myfirstapp
&lt;/h4&gt;

&lt;p&gt;We’re specifying that our app will live inside of a container under the same name as above.&lt;/p&gt;

&lt;h4&gt;
  
  
  Line 8 — image: richardchesterwood/k8s-fleetman-webapp-angular:release0
&lt;/h4&gt;

&lt;p&gt;On this line, we’re specifying where to pull the image for our pod from. Notice that we’ve also specified the “release0” tag, which is the first version of this image. You can view all the releases, including the first one, by navigating to the “Tags” section as shown in the figure above. As briefly mentioned previously, DockerHub keeps track of different image versions and allows users to deploy whichever version they specify in their files. It’s common to see either a specific version or “latest” as the tag; “latest” pulls whichever version is the most recent. Remember that this could break other services if substantial changes have been made and no precautions have been taken to accommodate those changes before deployment.&lt;/p&gt;
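The repository/tag split in an image reference can be sketched as follows. This is a simplified illustration of the convention, not the full OCI reference grammar (which also covers registries and digests); when no tag is given, container tooling assumes “latest”:

```python
def parse_image(ref: str) -> tuple:
    """Split an image reference into (repository, tag); default
    to "latest" when no tag is present. Simplified: ignores digests."""
    repo, sep, tag = ref.rpartition(":")
    if not sep or "/" in tag:  # no tag; any ':' belonged to a registry host
        return ref, "latest"
    return repo, tag

print(parse_image("richardchesterwood/k8s-fleetman-webapp-angular:release0"))
# ('richardchesterwood/k8s-fleetman-webapp-angular', 'release0')
print(parse_image("nginx"))  # ('nginx', 'latest')
```

Pinning an explicit tag like “release0”, as we do here, is what protects the deployment from surprise upgrades that “latest” would pull in.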

&lt;h2&gt;
  
  
  Running a Pod in Kubernetes
&lt;/h2&gt;

&lt;p&gt;Let’s start by verifying what’s currently running in our cluster by issuing the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get all
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you’re on a new machine, you should only see the “service/kubernetes” service running. It’s responsible for the Kubernetes engine functions.&lt;br&gt;
To deploy our pod via the YAML file, we’ll issue the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; myfirstpod.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Notice that the command isn’t “create a pod.” In Kubernetes, you issue commands to apply a configuration file; the engine parses it, and the appropriate services deploy what has been specified. Therefore, the command just “applies” the “-f” (file) that we’ve created!&lt;br&gt;
At this point, you can once again issue the “get all” command to see the pod’s status. As mentioned above, it’s not going to happen instantaneously; a few steps need to complete before the pod status is set to “Running.” In this case, the “longest” step will be downloading the image from the DockerHub location we’ve specified.&lt;br&gt;
&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3vufavd99odos5iyf4ac.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3vufavd99odos5iyf4ac.gif" alt="Figure 3 — Deploying Your First Pod in Kubernetes | Deploying YAML File into Kubernetes&amp;lt;br&amp;gt;
" width="1549" height="550"&gt;&lt;/a&gt;&lt;em&gt;Figure 3 — Deploying Your First Pod in Kubernetes | Deploying YAML File into Kubernetes&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;You’ll notice that we have a new entry after we’ve added the YAML file to Kubernetes — pod/myfirstapp with a state of “Running.”&lt;br&gt;
Note — if you’re getting errors at this stage, it’s due to one of two things: 1. There’s a problem with your minikube instance — it’s either not running, doesn’t have enough resources, or (unlikely) isn’t compatible with the commands you’re using. 2. There’s a syntax problem with your YAML file — ensure that you’ve indented the text as we’ve specified and that the items are properly capitalized. For example, Kubernetes will throw an error if “Pod” is typed as “pod” or “pods.”&lt;/p&gt;
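&lt;p&gt;If you suspect a YAML syntax problem, you can ask kubectl to validate the file without deploying anything by using the “--dry-run=client” flag:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl apply --dry-run=client -f myfirstpod.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;If the file is valid, the command reports what would have been created; otherwise it prints the parsing error without changing anything in the cluster.&lt;/p&gt;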
&lt;h2&gt;
  
  
  Connecting to a Pod in Kubernetes
&lt;/h2&gt;

&lt;p&gt;Now that we’ve deployed the services for a web application into a pod, how can we view the contents? It turns out that this isn’t as easy a process as one might expect. For security reasons, pods aren’t natively exposed to the outside world; the networking is done within the cluster.&lt;br&gt;
To get the IP address of your minikube cluster, you can issue the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;minikube ip
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you open your browser and navigate to the IP address of your cluster, you’ll notice that there’s nothing there. Although we can’t directly access the contents of our pod, we can take steps to figure out how to do so. Let’s first issue a command that will provide us with additional information about the pod:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl describe pod myfirstapp
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb7eltabod8uyrizr0p3o.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb7eltabod8uyrizr0p3o.PNG" alt="Figure 4 — Deploying Your First Pod in Kubernetes | K8S Pod Parameters and Events&amp;lt;br&amp;gt;
" width="800" height="442"&gt;&lt;/a&gt;&lt;em&gt;Figure 4 — Deploying Your First Pod in Kubernetes | K8S Pod Parameters and Events&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;You’ll find a few interesting items in the response you receive from running this command. At this point, what’s most interesting is the set of states outlined at the bottom, under the Events section. Notice that the pod has been successfully scheduled onto the cluster (in this example it was deployed onto “minikube-m01”) and was started soon afterward.&lt;/p&gt;
&lt;h2&gt;
  
  
  Accessing a Pod in Kubernetes Via Terminal
&lt;/h2&gt;

&lt;p&gt;A container / pod behaves much like a lightweight “virtual machine.” It isn’t quite a virtual machine, but it shares many of the same characteristics. In either case, it’s possible to issue Linux commands inside our pod by using the exec modifier. Here’s how we can view the directory of files inside the pod from the Terminal:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl &lt;span class="nb"&gt;exec &lt;/span&gt;myfirstapp &lt;span class="nt"&gt;--&lt;/span&gt; &lt;span class="nb"&gt;ls&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp2txz3hxepdus34wkndn.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp2txz3hxepdus34wkndn.PNG" alt="Figure 5 — Deploying Your First Pod in Kubernetes | K8S Pod File Structure via Terminal&amp;lt;br&amp;gt;
" width="800" height="241"&gt;&lt;/a&gt;&lt;em&gt;Figure 5 — Deploying Your First Pod in Kubernetes | K8S Pod File Structure via Terminal&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;We can also open an interactive shell inside the pod (similar to SSH) to see the contents, modify them, or simply better understand what’s going on inside our pod. Here’s the command for that purpose:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl &lt;span class="nt"&gt;-it&lt;/span&gt; &lt;span class="nb"&gt;exec &lt;/span&gt;myfirstapp &lt;span class="nt"&gt;--&lt;/span&gt; sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now that we’re inside our pod, we can run a set of commands to view what is being served on localhost:80. Keep in mind that if you’ve used a different application for your deployment, these commands may not yield the same results. These are specific to the application running on port 80.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;wget http://localhost:80
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command will download the HTML file being served at that specific IP/port and save it locally as index.html.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cat &lt;/span&gt;index.html
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command will output the contents of the index.html file we just downloaded.&lt;br&gt;
&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffvmf294sba06stosvmqw.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffvmf294sba06stosvmqw.gif" alt="Figure 6 — Deploying Your First Pod in Kubernetes | Reading HTML Contents Services Via LocalServer Port 80&amp;lt;br&amp;gt;
" width="1549" height="550"&gt;&lt;/a&gt;&lt;em&gt;Figure 6 — Deploying Your First Pod in Kubernetes | Reading HTML Contents Services Via LocalServer Port 80&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;At this point, we haven’t quite accessed the application via a browser. However, we did query localhost:80 from inside the pod via a text-based command and retrieved what was, in fact, being served by the application. We’ve displayed the contents of the HTML code on our terminal, which gives us confidence that the application is serving the right information to a user who connects to the pod.&lt;/p&gt;
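&lt;p&gt;If you do want to see the page in a browser without creating a Service, kubectl can temporarily forward a local port to the pod. This is a standard kubectl feature, though it wasn’t part of the walkthrough above:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl port-forward pod/myfirstapp 8080:80
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;While the command is running, navigating to http://localhost:8080 will display the same index.html we retrieved from inside the pod.&lt;/p&gt;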

&lt;h2&gt;
  
  
  Conclusion on Running Pods in Kubernetes
&lt;/h2&gt;

&lt;p&gt;As you’ve experienced in this tutorial, it’s quite easy to get started running pods on your local Kubernetes cluster. The key is to verify the state of the cluster, create a valid YAML file that specifies the contents of the pod, deploy the YAML file to the Kubernetes engine, and navigate the contents of a pod within the cluster using a few different commands. If you have any questions about the process or believe that there’s a better way to get this done, don’t hesitate to reach out!&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>tutorial</category>
      <category>docker</category>
      <category>devops</category>
    </item>
    <item>
      <title>Kubernetes Services</title>
      <dc:creator>Vladimir Romanov</dc:creator>
      <pubDate>Fri, 09 Feb 2024 14:21:16 +0000</pubDate>
      <link>https://dev.to/vromanov/kubernetes-services-1bj</link>
      <guid>https://dev.to/vromanov/kubernetes-services-1bj</guid>
      <description>&lt;h2&gt;
  
  
  Introduction to Kubernetes Services
&lt;/h2&gt;

&lt;p&gt;A Kubernetes Service ties pods and nodes together by providing the “plumbing” within a cluster. Consider a set of nodes that contain different pods with applications running within every pod. The nature of a pod is such that it can be deleted at any time; the term commonly used to describe this behavior is ephemeral. In traditional systems, you’d typically assign an IP address to every machine that is part of a larger system to send inputs and collect outputs. The pod is a virtual system with similar capabilities — it is assigned an IP address through which other hosts can exchange information. As you may have already guessed, the challenge is that a pod can “die” at any time. The solution to this problem is a Service that sits in front of a pod, or multiple pods, maintains a stable IP address, and redirects traffic to the pods that are working and away from the ones that are shut down. In other words, Kubernetes Services provide stable IP assignment and load balancing for applications that are scaled horizontally or vertically by the cluster.&lt;br&gt;
&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F39uxakjov5rvnw4el0g3.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F39uxakjov5rvnw4el0g3.gif" alt="Figure 1 — Kubernetes Services | K8S Services Fundamentals" width="1353" height="767"&gt;&lt;/a&gt;&lt;em&gt;Figure 1 — Kubernetes Services | K8S Services Fundamentals&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;As briefly stated above, the second challenge is balancing the traffic between redundant pods. In Kubernetes, having multiple pods running the same application is common; this is typically done to ensure high availability of a service. In other words, depending on the application’s needs, you may choose to run the application on ten different pods. In this case, you’ll have a load-balancing service that will split the traffic between those pods as you specify. The simplest way is to send each request to a newly available pod. However, that’s not the extent of a load balancer. In many scenarios, it’s important to maintain the connection of a client or service to the same endpoint; the load balancer will thus ensure that the traffic from the same source goes to the correct pod. Furthermore, a load balancer will also track the health status of the underlying pods and allocate traffic accordingly; if a pod is terminated, the load balancer will redirect its traffic to the healthy pods. Note that these concepts are similar to the load balancers we’ve covered on the AWS side! — Ex: &lt;a href="https://www.kerno.io/learn/how-to-deploy-an-application-load-balancer-alb"&gt;AWS Application Load Balancer&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foso94o00ib3o44249nz0.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foso94o00ib3o44249nz0.gif" alt="Figure 2 — Kubernetes Services | K8S Services Load Balancer Concepts" width="1354" height="768"&gt;&lt;/a&gt;&lt;em&gt;Figure 2 — Kubernetes Services | K8S Services Load Balancer Concepts&lt;/em&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Kubernetes Services — ClusterIP
&lt;/h2&gt;

&lt;p&gt;When you don’t specify a service within your YAML file in Kubernetes, ClusterIP will be the default one to be used. Here’s a walkthrough of how the ClusterIP service works:&lt;br&gt;
You’re deploying two pods within a node — one pod is running an application that has port 4000 open, while the other has port 8000 open. You’re going to need a second node with a replica of those pods. Note that the second node is going to be assigned a different IP address, as we had previously discussed.&lt;br&gt;
On a side note, you can find the IP address of your pods by using the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pod &lt;span class="nt"&gt;-o&lt;/span&gt; wide
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you’re looking to capture all namespaces, you can add the following modifier to the command above:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pod &lt;span class="nt"&gt;-o&lt;/span&gt; wide –all-namespaces
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6dv2kjzzurlxlxp5msoe.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6dv2kjzzurlxlxp5msoe.png" alt="Figure 3 — Kubernetes Services | ClusterIP pod IP addresses" width="800" height="214"&gt;&lt;/a&gt;&lt;em&gt;Figure 3 — Kubernetes Services | ClusterIP pod IP addresses&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Getting back to ClusterIP… Our microservice is deployed across two nodes with each node containing 4 pods that expose 2 ports — 4000 and 8000. When a client sends a request to the K8S cluster, it’s handled by what’s called an Ingress service. This service will forward the request to the appropriate service that will then distribute the packet to the node / pod. Here’s a simplified diagram of how this is handled in this particular example:&lt;br&gt;
&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F10xnt94ugefs5qgyhglf.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F10xnt94ugefs5qgyhglf.gif" alt="Figure 4 — Kubernetes Services | ClusterIP K8S Service" width="800" height="453"&gt;&lt;/a&gt;&lt;em&gt;Figure 4 — Kubernetes Services | ClusterIP K8S Service&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;It’s important to note that a service is not a pod; it’s not something you need to run an application on to handle the requests between Ingress and the nodes. Kubernetes has pre-built services, including ClusterIP, which handles the requests out of the box as outlined above. Note that you’ll specify the services that need to be accessed via the YAML file configuration by giving them a unique name and port.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apps/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deployment&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kernoApp-prod&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;replicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;4&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;matchLabels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kernoApp&lt;/span&gt;
  &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kernoApp&lt;/span&gt;
    &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kernoApp-container&lt;/span&gt;
        &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kernoApp-image&lt;/span&gt;
        &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;containerPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;8080&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  How Does the ClusterIP Kubernetes Service Know Where to Forward the Request?
&lt;/h3&gt;

&lt;p&gt;You’ll notice in the figure above a set of key-value pairs specified for our application and pods. The specific line we’re looking for falls under the “selector” specifier. In this case, we’ve set the “app” value to “kernoApp.” On the service side, we’ll need to specify a matching configuration key-value pair that will be used to identify which pods this service is tied to. In other words, in this example, we’ll need to create a service that calls out the same key-value pair — “app: kernoApp.” As mentioned above, if you don’t specify a service for your application, the Kubernetes engine will create a default one.&lt;/p&gt;
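&lt;p&gt;As a sketch (the Service name and port numbers here are assumptions, chosen to match the Deployment above), such a ClusterIP Service could look like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;apiVersion: v1
kind: Service
metadata:
  name: kernoapp-service
spec:
  # ClusterIP is the default; no "type" field is required
  selector:
    app: kernoApp        # must match the pod labels in the Deployment
  ports:
  - port: 80             # port the Service listens on inside the cluster
    targetPort: 8080     # containerPort of the pods
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;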

&lt;p&gt;The second and last item involved in forwarding a request is the port number. When the request comes into the ClusterIP Service, a request port is specified; it is matched against the ports exposed by the set of pods connected to the service.&lt;/p&gt;

&lt;h3&gt;
  
  
  Other Nodes &amp;amp; the ClusterIP Kubernetes Service
&lt;/h3&gt;

&lt;p&gt;Kubernetes clusters can become quite complex, with a variety of services. Data storage is a service that is required in most modern software applications. When it comes to ClusterIP, the same idea applies to a database. For example, your node / pod may need to access a database to store or retrieve information. In this case, a ClusterIP service will be created for the node(s) in front of the MongoDB service. When a request to store or retrieve data is made via the unique key-value pairs of the database service, the ClusterIP service will direct the traffic accordingly. It’s important to note that databases operate differently from applications — there’s an extra layer of complexity that ensures the read and write operations don’t happen concurrently, that the data is stored in a single authoritative instance, that the backup services create images that are reliable and accessible, etc. In other words, the image below is an oversimplification of what happens to the data versus an application.&lt;br&gt;
&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff7kd677n1sws6om8m9pn.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff7kd677n1sws6om8m9pn.gif" alt="Figure 5 — Kubernetes Services | ClusterIP MongoDB Interaction&amp;lt;br&amp;gt;
ClusterIP MultiPort Services" width="1354" height="768"&gt;&lt;/a&gt;&lt;em&gt;Figure 5 — Kubernetes Services | ClusterIP MongoDB Interaction&lt;br&gt;
ClusterIP MultiPort Services&lt;/em&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  ClusterIP MultiPort Services
&lt;/h3&gt;

&lt;p&gt;We’ve discussed an example in which ClusterIP receives a single stream of requests that is distributed across multiple nodes and pods. The MultiPort service will intake different streams of data that specify the port. Let’s take a look at an example!&lt;br&gt;
&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyqqukhnm8qqova7uk5g4.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyqqukhnm8qqova7uk5g4.gif" alt="Figure 6 — Kubernetes Services | ClusterIP MultiPort Services" width="1353" height="765"&gt;&lt;/a&gt;&lt;em&gt;Figure 6 — Kubernetes Services | ClusterIP MultiPort Services&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Notice that a different service sends information to the MongoDB service nodes. Although we’ve illustrated a single pod within each node, examples of services that would access these nodes include a data transformation application or a metrics service.&lt;br&gt;
The important point here is that you’ll need to specify two ports on the ClusterIP service, hence the name MultiPort. Each port will service requests coming into that specific address — the service will redirect the packets accordingly.&lt;/p&gt;
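&lt;p&gt;A multi-port Service of this kind could be sketched as follows (the names and port numbers are illustrative; note that Kubernetes requires each port to be named when a Service exposes more than one):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;apiVersion: v1
kind: Service
metadata:
  name: mongodb-service
spec:
  selector:
    app: mongodb
  ports:
  - name: mongodb          # main database traffic
    port: 27017
    targetPort: 27017
  - name: metrics          # e.g., a metrics exporter sidecar
    port: 9216
    targetPort: 9216
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;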

&lt;h2&gt;
  
  
  Kubernetes Services — Headless Services
&lt;/h2&gt;

&lt;p&gt;A ClusterIP service can be considered the “head” of a cluster of nodes. A headless service is thus one where the request is made directly to a specific pod.&lt;br&gt;
Why Would an Application Need to Communicate with a Specific Pod?&lt;br&gt;
We’ve used a database service as an example above, explaining that there’s “more complexity than discussed here.” The reality of databases is that their clusters of services are structured differently. The basic premise is that a database is going to have a master replica into which users (applications) can write. For availability purposes, other database application nodes will be “copies” that can’t be written to but can be read. This is done to provide access to the database without compromising the integrity of the data within. This schema allows applications to write to a single designated pod and ignore the others. Internally, the master node will propagate copies of the database to the worker nodes, from which other services and applications can read the data.&lt;br&gt;
&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzw11qr8ebmf8nsk1ginp.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzw11qr8ebmf8nsk1ginp.gif" alt="Figure 7 — Kubernetes Services | Headless Services&amp;lt;br&amp;gt;
Kubernetes Services — NodePort" width="800" height="453"&gt;&lt;/a&gt;&lt;em&gt;Figure 7 — Kubernetes Services | Headless Services&lt;br&gt;
Kubernetes Services — NodePort&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Kubernetes Services — NodePort
&lt;/h2&gt;

&lt;p&gt;You’ll often see a specification of a nodePort within a &lt;a href="https://www.kerno.io/learn/yaml-file-format-complete-guide"&gt;YAML file&lt;/a&gt; defined for Kubernetes applications. The NodePort Service is a wrapper that opens up a port on the node. It’s important to note that this value must fall between 30000 and 32767; any value specified for NodePort outside of that range won’t be accepted.&lt;br&gt;
The NodePort service will span across nodes and thus open them up to access via a specified IP and node configuration. It’s important to note that this isn’t the most secure way of specifying your infrastructure, as you’re opening up direct ports to talk to the nodes of the underlying applications / services.&lt;/p&gt;
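&lt;p&gt;Declaring a NodePort Service is a matter of setting the type and, optionally, the nodePort value within the allowed 30000-32767 range (the name and port numbers below are illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;apiVersion: v1
kind: Service
metadata:
  name: kernoapp-nodeport
spec:
  type: NodePort
  selector:
    app: kernoApp
  ports:
  - port: 80               # internal ClusterIP port
    targetPort: 8080       # containerPort of the pods
    nodePort: 30080        # must be 30000-32767; omit to auto-assign
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;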

&lt;h2&gt;
  
  
  Kubernetes Services — LoadBalancer
&lt;/h2&gt;

&lt;p&gt;Every cloud provider (e.g., AWS, GCP, Azure, etc.) has its own load balancer service. Depending on which provider you’re deploying your K8S application to, the LoadBalancer Service will access the native implementation of that cloud service. In other words, the K8S engine will utilize the APIs of the provider’s Load Balancer upon which it is deployed.&lt;br&gt;
The LoadBalancer service is an extension of the NodePort service which is an extension of the ClusterIP service.&lt;/p&gt;
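&lt;p&gt;The declaration is nearly identical to a NodePort Service; swapping the type to LoadBalancer is what triggers the cloud provider to provision its native load balancer (the name below is illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;apiVersion: v1
kind: Service
metadata:
  name: kernoapp-lb
spec:
  type: LoadBalancer       # provisions the provider's load balancer, e.g., an AWS ELB
  selector:
    app: kernoApp
  ports:
  - port: 80
    targetPort: 8080
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;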

&lt;h2&gt;
  
  
  Conclusion on Kubernetes Services
&lt;/h2&gt;

&lt;p&gt;There are four types of Kubernetes Services you should be familiar with — ClusterIP, Headless, NodePort, and LoadBalancer. They aren’t necessarily disjoint services; you’ll likely have load-balancing functionality in every application. The headless service is a specific type of service used in edge cases, such as databases, where you need to access a single pod within a cluster. The last note is that you shouldn’t use NodePort outside of basic testing scenarios, as it exposes your application to vulnerabilities.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>devops</category>
      <category>beginners</category>
      <category>aws</category>
    </item>
  </channel>
</rss>
