<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Yash Kumar Shah</title>
    <description>The latest articles on DEV Community by Yash Kumar Shah (@yashdevops).</description>
    <link>https://dev.to/yashdevops</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F308336%2F1dbfab38-8d0e-48b3-9209-c26c73a88ddf.jpeg</url>
      <title>DEV Community: Yash Kumar Shah</title>
      <link>https://dev.to/yashdevops</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/yashdevops"/>
    <language>en</language>
    <item>
      <title>Building Multi-Arch Images Inside Kubernetes Using BuildKit</title>
      <dc:creator>Yash Kumar Shah</dc:creator>
      <pubDate>Wed, 10 Jan 2024 12:09:27 +0000</pubDate>
      <link>https://dev.to/yashdevops/building-multi-arch-images-inside-kubernetes-using-buildkit-4gne</link>
      <guid>https://dev.to/yashdevops/building-multi-arch-images-inside-kubernetes-using-buildkit-4gne</guid>
      <description>&lt;h2&gt;
  
  
  &lt;strong&gt;Problem Statement&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;A SaaS company rolling out applications in customers' environments often deals with a mix of CPU architectures. To support all these variations, we build Docker images in a multi-architecture format. In the current landscape, many companies run CI/CD in Kubernetes and use tools like Kaniko for their Docker builds. It is important to note that Kaniko (at the time of writing) does not support multi-architecture Docker image builds.&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;Prerequisite&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Before implementing multi-architecture builds in Kubernetes, you need a Kubernetes cluster with Jenkins set up, and Helm installed locally to deploy the BuildKit Helm chart. At Aurva, we streamline our Jenkins configuration using a shared library, coupled with Jenkins Configuration as Code (JCasC), to set up and manage Jenkins jobs efficiently.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Introduction to BuildKit&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;BuildKit is a toolkit for building container images that improves the efficiency, security, and speed of the image-building process. It has shipped with Docker since v18.06 and is an integral component of the Moby project.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Setting Up BuildKit&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Assuming you already have Jenkins running in a Kubernetes namespace named jenkins, the next step is to install the BuildKit service within that namespace.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Step 1: Helm Chart Magic&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Start by deploying the BuildKit service using Helm. Execute the following command in your terminal:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;helm repo add andrcuns https://andrcuns.github.io/charts&lt;br&gt;
helm install buildkit andrcuns/buildkit-service --namespace jenkins&lt;/code&gt;&lt;/p&gt;
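&lt;p&gt;As a quick sanity check (a minimal sketch, not part of the original pipeline), the in-cluster address of the BuildKit service follows directly from the release name and namespace used in the helm install command above:&lt;/p&gt;

```shell
# The Helm release above creates a Service named "RELEASE-buildkit-service";
# its in-cluster DNS name is "SERVICE.NAMESPACE", and this post later
# connects to it on port 1234.
release="buildkit"
namespace="jenkins"
endpoint="tcp://${release}-buildkit-service.${namespace}:1234"
echo "${endpoint}"
```

&lt;p&gt;This is the address we will hand to buildx when creating the remote driver in Step 3.&lt;/p&gt;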

&lt;h2&gt;
  
  
  Step 2: Setting Up Docker Agent
&lt;/h2&gt;

&lt;p&gt;Now, let's configure the Kubernetes agent pod in your Jenkins Groovy script. Use the Dockerfile below to create the Docker image used by the build agents.&lt;/p&gt;

&lt;p&gt;Dockerfile example:&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM --platform=${BUILDPLATFORM} docker:rc-dind-rootless as build
ARG TARGETARCH
WORKDIR /home/rootless
RUN wget -O docker-credential-ecr-login https://amazon-ecr-credential-helper-releases.s3.us-east-2.amazonaws.com/0.7.1/linux-${TARGETARCH}/docker-credential-ecr-login

FROM --platform=${TARGETPLATFORM} docker:rc-dind-rootless
COPY --from=build /home/rootless/docker-credential-ecr-login /home/rootless/.local/bin/docker-credential-ecr-login
RUN chmod +x /home/rootless/.local/bin/docker-credential-ecr-login
ENV PATH="/home/rootless/.local/bin:${PATH}"
USER rootless
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Jenkins agent definition:&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def buildkit_agent() {
  template = """
apiVersion: v1
kind: Pod
metadata:
  name: buildkit-agent-pod
  namespace: build
spec:
  serviceAccount: "aurva-jenkins-worker-sa"
  containers:
  - name: buildkit-agent
    image: docker:rc-dind-rootless
    command:
    - /usr/bin/bash
    tty: true
"""
  template
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;blockquote&gt;
&lt;p&gt;Note: "In this blog, we won't delve into an ECR repo configuration. However, if you're keen on setting up an ECR repo, you can leverage AWS ECR Credential Helper to streamline pushing images to ECR."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Step 3: Create a Remote Driver &amp;amp; Execute the Docker Build
&lt;/h2&gt;

&lt;h2&gt;
  
  
  1. Remote Driver Setup:
&lt;/h2&gt;

&lt;p&gt;BuildKit requires a remote driver that points to the BuildKit service. Run the following command from the Docker agent's shell to create it:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;docker buildx create --use --driver=remote tcp://buildkit-buildkit-service.jenkins:1234&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;The above command creates a remote driver pointing to the BuildKit service deployed in the jenkins namespace.&lt;/p&gt;

&lt;h2&gt;
  
  
  2. Build Docker Images:
&lt;/h2&gt;

&lt;p&gt;It's time for the final step: run the following command to build Docker images and push them to your Amazon ECR repository.&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker buildx build --push -f `pwd`/${dockerFilePath} -t ${imageFullName} --platform linux/arm64/v8,linux/amd64 `pwd`/${buildContextPath}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;The output will be:&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker buildx build --push -f /home/jenkins/agent/workspace/aurva-gateway/docker/dockerfiles/gateway.Dockerfile --build-arg ENV_TAG= -t .dkr.ecr.ap-south-1.amazonaws.com/: --platform linux/arm64/v8,linux/amd64 /home/jenkins/agent/workspace/aurva-gateway/
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
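&lt;p&gt;Once the push finishes, you can verify that the tag really is multi-arch with docker buildx imagetools inspect. A minimal sketch (the image name below is a placeholder, not a real registry):&lt;/p&gt;

```shell
# Compose the verification command (image name is a placeholder); running it
# against a pushed tag lists one "Platform:" entry per architecture,
# e.g. linux/amd64 and linux/arm64/v8.
image="123456789012.dkr.ecr.ap-south-1.amazonaws.com/aurva-gateway:latest"
cmd="docker buildx imagetools inspect ${image}"
echo "${cmd}"
```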

&lt;p&gt;In the docker buildx command above, the --platform flag is where you define all the platforms you want to build the image for.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In a nutshell, this guide explored implementing multi-architecture Docker builds in Kubernetes using BuildKit. We addressed the challenge of supporting diverse CPU architectures, introduced BuildKit as a solution seamlessly integrated into Docker, and outlined the key setup steps. By adopting these practices, you'll boost your CI/CD pipelines for flexible and efficient deployment across architectures, staying ahead in containerized application development.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>AWS Anycast Service Global Accelerator</title>
      <dc:creator>Yash Kumar Shah</dc:creator>
      <pubDate>Sat, 16 May 2020 14:34:58 +0000</pubDate>
      <link>https://dev.to/yashdevops/aws-anycast-service-global-accelerator-2cp2</link>
      <guid>https://dev.to/yashdevops/aws-anycast-service-global-accelerator-2cp2</guid>
      <description>&lt;p&gt;Yash Shah&lt;/p&gt;

&lt;h5&gt;
  
  
  Problem Statement
&lt;/h5&gt;

&lt;p&gt;Recently, we were facing issues whitelisting our application with third parties because it sits behind an ALB, whose IP addresses change frequently. We first came up with a solution using an NLB with an Elastic IP assigned, but we were still missing many ALB features.&lt;/p&gt;

&lt;p&gt;We kept looking for a solution and came across the (then) latest AWS service, Global Accelerator. This service not only solves our problem but also bakes in many fascinating use cases:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Setting up Regional DR&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Easily move endpoints between Availability Zones or AWS Regions without needing to update your DNS configuration or change client-facing applications&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Dial traffic up or down for a specific AWS Region&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  About AWS Global Accelerator
&lt;/h4&gt;

&lt;p&gt;Global Accelerator became publicly available in late 2018.&lt;/p&gt;

&lt;p&gt;The main use case for users is the ability to get a static IPv4 address that isn’t tied to a region.&lt;/p&gt;

&lt;p&gt;With Global Accelerator, customers get two globally anycast IPv4 addresses that can be used to load balance across 14 unique AWS Regions.&lt;/p&gt;

&lt;p&gt;A user request will get routed to the closest AWS edge POP based on BGP routing. From there, you can load balance requests to the AWS Regions where your applications are deployed.&lt;/p&gt;

&lt;p&gt;Global Accelerator comes with traffic dials that allow you to control how much traffic goes to which region, as well as to instances in that region. It also has built-in health checking to make sure traffic is only routed to healthy instances.&lt;/p&gt;

&lt;p&gt;AWS Global Accelerator always routes user traffic to the optimal endpoint based on performance, reacting instantly to changes in application health, your user’s location, and policies that you configure.&lt;/p&gt;

&lt;p&gt;Some primitives in Global Accelerator:&lt;/p&gt;

&lt;p&gt;Accelerator: think of this as your anycast endpoint. It’s a global primitive and comes with two IPv4 addresses.&lt;/p&gt;

&lt;p&gt;Listener: defines what port and protocol (TCP or UDP) to listen on. An accelerator can have multiple listeners.&lt;/p&gt;

&lt;p&gt;Endpoint Group: for each listener, you create one or more endpoint groups. An endpoint group lets you group endpoints together by Region; for example, the Region ap-south-1 can be an endpoint group. You can even control the percentage of traffic that you want to send to that Region.&lt;/p&gt;

&lt;p&gt;Endpoint: every endpoint group has one or more endpoints, each of which can be an Elastic IP address, a Network Load Balancer, or an Application Load Balancer. For each endpoint, you can configure a weight that controls how traffic is load-balanced over the various endpoints within an endpoint group.&lt;/p&gt;

&lt;p&gt;Hands-On&lt;/p&gt;

&lt;p&gt;Prerequisite:&lt;/p&gt;

&lt;p&gt;Set up two small apps behind ALBs in different Regions, so that we can attach them as endpoints in Global Accelerator.&lt;/p&gt;

&lt;p&gt;Whenever you open Global Accelerator in the console, you are redirected to the Oregon Region, as it is a global service.&lt;/p&gt;

&lt;p&gt;Steps:&lt;br&gt;
Creating a Global Accelerator is a four-step process that we will go through step by step.&lt;/p&gt;

&lt;p&gt;Step 1:&lt;/p&gt;

&lt;p&gt;Create an Accelerator. &lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--xaCeAWpP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/kl0l25pn846ngjq7rwc1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--xaCeAWpP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/kl0l25pn846ngjq7rwc1.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Provide the accelerator name. If you bring your own IP addresses (BYOIP), you can choose those for the IPv4 option in the accelerator. Otherwise, AWS will assign two static IPs from Amazon's pool of IP addresses. Finally, tag your accelerator with the required key-value pairs accordingly.&lt;/p&gt;

&lt;p&gt;Step 2: &lt;/p&gt;

&lt;p&gt;Add Listener to the accelerator. Specify a port, port range, or multiple port ranges that you want the listener to listen on. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--BYlmJyz1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/pzci7rld7iuazn6qneby.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--BYlmJyz1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/pzci7rld7iuazn6qneby.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Note :- If you have stateful applications, Global Accelerator can direct all requests from a user at a specific client IP address to the same endpoint resource, to maintain client affinity.&lt;/p&gt;

&lt;p&gt;By default, client affinity is None and Global Accelerator distributes traffic equally between the endpoints in the endpoint groups for the listener. &lt;/p&gt;

&lt;p&gt;Step 3&lt;/p&gt;

&lt;p&gt;Add an endpoint group for each AWS Region that you want to direct traffic to.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--V92Qe6xy--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/lwkik40xrmtozz6drvf3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--V92Qe6xy--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/lwkik40xrmtozz6drvf3.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;An endpoint group is associated with a specific AWS Region. Endpoint groups include one or more endpoints in the Region. &lt;/p&gt;

&lt;p&gt;For each AWS Region that you want to direct traffic to, add one endpoint group. You can't have more than one endpoint group per Region. &lt;/p&gt;

&lt;p&gt;You can increase or reduce the percentage of traffic that would be otherwise directed to an endpoint group by adjusting a setting called a traffic dial.&lt;/p&gt;

&lt;p&gt;Step 4&lt;/p&gt;

&lt;p&gt;Last Step is the creation of the endpoint.&lt;/p&gt;

&lt;p&gt;Note :- If you choose an EC2 instance, you have to specify the health check for your application. If you choose a load balancer, Global Accelerator will use the load balancer's own health check to decide where to send traffic.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--vDAvULU4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/aaq8y3wv8bmdtyhp202f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--vDAvULU4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/aaq8y3wv8bmdtyhp202f.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Time to start testing.&lt;/p&gt;

&lt;p&gt;Now that we have our anycast global load balancer up and running, it’s time to start testing. To check whether load balancing works as expected, I’m using the following test:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;for i in {1..100}; do curl -s -q http://13.248.138.197 ;done | sort -n | uniq -c | sort -rn
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Global Accelerator Routing vs Normal Routing&lt;/p&gt;

&lt;p&gt;An interesting difference between Global Accelerator and regular public EC2 IP addresses is how traffic is routed to AWS. For Global Accelerator, AWS tries to get traffic onto its own network as soon as possible, whereas for regular AWS public IP addresses, Amazon only announces those prefixes out of the region where the IP addresses are used. That means that for Global Accelerator IP addresses your traffic is handed off to AWS at its closest Global Accelerator POP and then rides the AWS backbone to the origin. For regular AWS public IP addresses, AWS relies on public transit and peering to get the traffic to the region it needs to reach.&lt;/p&gt;

&lt;p&gt;Let’s look at a traceroute to illustrate this. One of my origin servers in the ap-south-1 Region is 13.232.169.234; here is a traceroute to that EC2 instance in Mumbai from my local machine:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--PCMPhZOG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/smlsawf2yxhv05uogpge.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--PCMPhZOG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/smlsawf2yxhv05uogpge.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In this case the handoff is from static-delhi.vsnl.net.in to static-mumbai.vsnl.net.in via public routing. &lt;/p&gt;

&lt;p&gt;Now, as a comparison, we’ll look at a traceroute from the same server in Mumbai to the anycast Global Accelerator IP&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--1pppTOYL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/9xuh8w4vzqdjart972du.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--1pppTOYL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/9xuh8w4vzqdjart972du.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The Global Accelerator path takes fewer hops than the previous one. In this case, AWS announces the prefix locally via the Internet Exchange, and traffic is handed off to AWS on the second hop. Quite a latency difference.&lt;/p&gt;

&lt;p&gt;Conclusion&lt;/p&gt;

&lt;p&gt;I think Global Accelerator is a powerful service. Having the ability to flexibly steer traffic to particular regions and even endpoints gives you a lot of control, which will be useful for high traffic applications. It also makes it easier to make applications highly available even on a per IP address basis (as compared to using DNS based load-balancing). A potentially useful feature for Global Accelerator would be to combine it with the Bring Your Own IP feature.&lt;/p&gt;

&lt;p&gt;I personally used Global Accelerator to provide two static IPs in front of my ALB for third-party whitelisting.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>aws</category>
    </item>
    <item>
      <title>Centralized Monitoring Stack Setup Dilemma</title>
      <dc:creator>Yash Kumar Shah</dc:creator>
      <pubDate>Mon, 10 Feb 2020 12:55:14 +0000</pubDate>
      <link>https://dev.to/yashdevops/centralized-monitoring-stack-setup-dilemma-hgn</link>
      <guid>https://dev.to/yashdevops/centralized-monitoring-stack-setup-dilemma-hgn</guid>
      <description>&lt;p&gt;Yash Shah&lt;/p&gt;

&lt;p&gt;When I started setting up centralized monitoring at an organization level, I came across a few stacks that attracted me. This post is for those who want to start setting up their own monitoring: it gives a clear view of what differs between the stacks and how to approach setting up a perfect "chowkidar 😊😊😊😊" (watchman) to report anomalies happening across the entire infra.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Tick Stack&lt;/li&gt;
&lt;li&gt;Prometheus and Grafana&lt;/li&gt;
&lt;li&gt;Sensu&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Tick Stack
&lt;/h2&gt;

&lt;p&gt;T (Telegraf): Telegraf is an open-source agent that helps collect metrics from servers, sensors, and systems.&lt;/p&gt;

&lt;p&gt;I (InfluxDB): InfluxDB is a time-series database designed to handle high write and query loads.&lt;/p&gt;

&lt;p&gt;C (Chronograf): Chronograf is the user interface and administrative component of the InfluxDB 1.x platform.&lt;/p&gt;

&lt;p&gt;K (Kapacitor): Kapacitor is a native data processing engine for InfluxDB 1.x and is an integrated component in the InfluxDB 2.0 platform.&lt;br&gt;
Kapacitor can process both stream and batch data from InfluxDB, acting on this data in real-time via its programming language TICKscript.&lt;/p&gt;

&lt;p&gt;TICK uses the more traditional method where an agent connects to a central monitoring system. Here the agent (Telegraf) is a pluggable piece of software that supports multiple input and output plugins for specific infrastructure monitoring. Its plugins also let it speak other protocols, such as StatsD and Nagios plugins, and it offers two-way integration with Prometheus.&lt;/p&gt;
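&lt;p&gt;As an illustration of that pluggability, a minimal telegraf.conf pairs an input plugin with an output plugin (the InfluxDB URL below is a placeholder):&lt;/p&gt;

```toml
# Collect CPU metrics and ship them to an InfluxDB 1.x instance
[[inputs.cpu]]
  percpu = true
  totalcpu = true

[[outputs.influxdb]]
  urls = ["http://127.0.0.1:8086"]
  database = "telegraf"
```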

&lt;p&gt;TICK is free, easy to install, and based on one server/DB combo as the monitoring engine. Since it is backed by a company, it also provides enterprise support and database clustering to those who don’t mind shelling out some cash for a complete solution.&lt;/p&gt;

&lt;p&gt;TICK is both easy to deploy and based on an official DB, and it is free and open source.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s---PV5nVJE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.influxdata.com/wp-content/uploads/InfluxDB_Diagram.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s---PV5nVJE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.influxdata.com/wp-content/uploads/InfluxDB_Diagram.png" alt="Tick Stack"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Prometheus
&lt;/h2&gt;

&lt;p&gt;Prometheus is based on pull-based metrics: the Prometheus server scrapes an open port on the agent server for the metrics, rather than the agent connecting to the Prometheus server.&lt;/p&gt;
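&lt;p&gt;Concretely, the pull model is configured on the server side. A minimal prometheus.yml sketch (the target address is a placeholder) tells Prometheus which open ports to scrape:&lt;/p&gt;

```yaml
# Scrape a node_exporter every 15s; Prometheus pulls, the agent never dials out
scrape_configs:
  - job_name: node
    scrape_interval: 15s
    static_configs:
      - targets: ['10.0.0.5:9100']
```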

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ElLZnIOD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://prometheus.io/assets/architecture.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ElLZnIOD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://prometheus.io/assets/architecture.png" alt="Prometheus Version"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h5&gt;
  
  
  Prometheus can also use the TICK stack agent, Telegraf, as an exporter.
&lt;/h5&gt;

&lt;p&gt;Prometheus is easy to install, uses one server as the central monitoring system and storage, and has a built-in DB created just for saving monitoring data.&lt;/p&gt;

&lt;p&gt;For high availability, the recommendation is to use two different servers, both monitoring the same exporters. Here is where the pull method comes in handy: since the exporters don’t know the servers’ addresses, any number of servers can connect to them and pull the data. The alerting component is able to de-duplicate alerts when connected to two servers.&lt;/p&gt;

&lt;p&gt;Prometheus is also 100% free which makes it the easiest &amp;amp; cheapest HA monitoring solution in the market.&lt;/p&gt;

&lt;h3&gt;
  
  
  Sensu
&lt;/h3&gt;

&lt;p&gt;Sensu has been around since 2011 and has a noticeable market share of around 7%. Its architecture is geared towards massive amounts of data, so it uses RabbitMQ to pipe and buffer the monitoring information between its collectors and its main server.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--RD1RlwoN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://docs.sensu.io/images/sensu-diagram.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--RD1RlwoN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://docs.sensu.io/images/sensu-diagram.gif" alt="Sensu"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Sensu has its own collection plugins and supports Nagios plugins as well.&lt;br&gt;
The Sensu servers are built for high availability out of the box, but they don’t include a DB for storing the data. Sensu uses Redis by default, but for storing more than several hours of data you will need to include either InfluxDB or ElasticSearch in your installation, both of which require an enterprise license if you want enterprise features (InfluxDB charges for clustering, ElasticSearch charges for security).&lt;/p&gt;

&lt;p&gt;In Sensu, alerts are called checks, and creating one involves creating a cron job and configuring a JSON file which tells it what Ruby script to run, what statistic to check (including some calculations), and whom to notify. It doesn’t seem as straightforward as the other solutions.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--HIfxAxdA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/qtzfroytn341ze5c528o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--HIfxAxdA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/qtzfroytn341ze5c528o.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For a free solution I would definitely go with Prometheus; if DB clustering is a specific requirement, TICK is your solution. Both of them are easy to install with a Helm chart and some tinkering. For creating nice dashboards there is no dilemma: Grafana is the absolute best, and while TICK still develops its own UI (Chronograf), all of them integrate with it.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>tutorial</category>
      <category>aws</category>
    </item>
    <item>
      <title>Puzzled Around ARG ENV and .env in Dockerfile</title>
      <dc:creator>Yash Kumar Shah</dc:creator>
      <pubDate>Sun, 19 Jan 2020 07:58:39 +0000</pubDate>
      <link>https://dev.to/yashdevops/puzzled-around-arg-env-and-env-37o5</link>
      <guid>https://dev.to/yashdevops/puzzled-around-arg-env-and-env-37o5</guid>
      <description>&lt;p&gt;Yash Shah&lt;/p&gt;

&lt;h5&gt;
  
  
  Prerequisite: a basic understanding of Dockerfiles
&lt;/h5&gt;

&lt;p&gt;Recently I came across a use case where I had to build an image based on params that depend on the environment I am building the container for. Hence I use ARG to pass arguments while building the container, and set ENV variables on the basis of the ARG values I provide.&lt;/p&gt;

&lt;h3&gt;
  
  
  The .env file
&lt;/h3&gt;

&lt;p&gt;It works with docker-compose.yml. You can simply use $ notation in your Compose file to pick values from .env, provided docker-compose.yml and .env are in the same directory.&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;.env&lt;/strong&gt; looks something like this&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;first_variable=a
second_variable=b
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The key-value pairs are used in docker-compose.yml by simply using $ notation within the docker-compose.yml file. Let's look at an example.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;services:
  demo:
    image: linuxserver/demo
      environment:
        - env_var_name=${first_variable}    # here it is


&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h6&gt;
  
  
  1. To check things are working fine in docker-compose.yml, you can simply run docker-compose config; this way you can see how your docker-compose.yml will look after substituting values from .env
&lt;/h6&gt;

&lt;h6&gt;
  
  
  2. There is a gotcha: an environment variable set in the host machine can override the variable in your .env file
&lt;/h6&gt;
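&lt;p&gt;You can see this precedence with a plain-shell sketch that mimics how Compose resolves a variable (this is an emulation for illustration, not Compose itself; the file path is arbitrary):&lt;/p&gt;

```shell
# Mimic docker-compose's lookup order: the host environment wins,
# the .env file is only the fallback.
printf 'first_variable=a\n' > /tmp/demo.env

from_env_file() { grep "^$1=" /tmp/demo.env | cut -d= -f2-; }

value="${first_variable:-$(from_env_file first_variable)}"
echo "$value"    # a (nothing set in the host environment)

first_variable="from_host"
value="${first_variable:-$(from_env_file first_variable)}"
echo "$value"    # from_host (the host variable overrides .env)
```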

&lt;h3&gt;
  
  
  Different types of variables in a Dockerfile
&lt;/h3&gt;

&lt;p&gt;ARG: ARG is also known as a build-time variable; it is available from the moment it is declared in the Dockerfile until the Docker image is built, and it is not available in the running container (unlike CMD and ENTRYPOINT, which tell the container what to run by default). If you tell a Dockerfile to expect various ARG variables (without a default value) but none are provided when running the build command, there will be an error message.&lt;/p&gt;

&lt;p&gt;ENV: ENV variables are available during run time, and you can also override them at run time.&lt;/p&gt;

&lt;h3&gt;
  
  
  How to use ARG variables
&lt;/h3&gt;

&lt;p&gt;So you have a Dockerfile; how do you set them? You can declare them without a value or give them a default value, and you can override them during the docker build.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM ubuntu:18.04
ARG any_default_variable
# or with a hard-coded default:
#ARG any_default_variable=any_default_value

RUN echo "Hi $any_default_variable"
# you could also use braces - ${any_default_variable}


&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;You can use the docker build command to send the build args&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker build -t demo --build-arg any_default_variable=yash  .

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The above command will produce the following output&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Sending build context to Docker daemon  2.048kB
Step 1/3 : FROM ubuntu:18.04
 ---&amp;gt; ccc6e87d482b
Step 2/3 : ARG any_default_variable
 ---&amp;gt; Running in ec62ad011b88
Removing intermediate container ec62ad011b88
 ---&amp;gt; 3d8df126a04c
Step 3/3 : RUN echo "Hi $any_default_variable"
 ---&amp;gt; Running in e9f5fb20897e
Hi yash
Removing intermediate container e9f5fb20897e
 ---&amp;gt; d6fe8d6e0f9a
Successfully built d6fe8d6e0f9a
Successfully tagged demo:latest

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h3&gt;
  
  
  How to use ENV variables
&lt;/h3&gt;

&lt;p&gt;ENV variables are also available during the build, as soon as you introduce them with an ENV instruction. However, unlike ARG, they are also accessible by containers started from the final image. ENV values can be overridden when starting a container&lt;/p&gt;

&lt;h4&gt;
  
  
  Setting up ENV variables
&lt;/h4&gt;

&lt;p&gt;You can do it when starting your containers, but you can also provide default ENV values directly in your Dockerfile by hard-coding them. Also, you can set dynamic default values for environment variables!&lt;/p&gt;

&lt;p&gt;When building an image, the only thing you can provide are ARG values, as described above. You can’t provide values for ENV variables directly. However, both ARG and ENV can work together. You can use ARG to set the default values of ENV vars. Here is a basic Dockerfile, using hard-coded default values:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# no default value
ENV hey
# a default value
ENV foo /bar
# or ENV foo=/bar

# ENV values can be used during the build
ADD . $foo
# or ADD . ${foo}
# translates to: ADD . /bar

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h4&gt;
  
  
  And here is a snippet for a Dockerfile, using dynamic on-build env values:
&lt;/h4&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# expect a build-time variable
ARG A_VARIABLE
# use the value to set the ENV var default
ENV an_env_var=$A_VARIABLE
# if not overridden, that value of an_env_var will be available to your containers!

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Once the image is built, you can launch containers and provide values for ENV variables in three different ways, either from the command line or using a docker-compose.yml file. All of these override any default ENV values in the Dockerfile. Unlike with ARG, you can pass all kinds of environment variables to the container, even ones not explicitly defined in the Dockerfile. Whether that does anything, however, depends on your application.&lt;/p&gt;

&lt;h4&gt;
  
  
  1. Provide values one by one
&lt;/h4&gt;

&lt;p&gt;From the command line, use the -e flag:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker run -e "env_var_name=another_value" alpine env

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h4&gt;
  
  
  2. Pass environment variable values from your host
&lt;/h4&gt;

&lt;p&gt;It’s the same as the above method. The only difference is, you don’t provide a value, but just name the variable. This will make Docker access the current value in the host environment and pass it on to the container.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker run -e env_var_name alpine env

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h4&gt;
  
  
  3. Take values from a file (env_file)
&lt;/h4&gt;

&lt;p&gt;The file is called env_file_name (name arbitrary) and it’s located in the current directory. You can reference the filename, which is parsed to extract the environment variables to set:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker run --env-file=env_file_name alpine env

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;If you’re new to Docker and not used to thinking about images and containers: if you try to set the value of an environment variable from inside a RUN statement like RUN export VARI=5 &amp;amp;&amp;amp; ..., you won’t have access to it in any of the next RUN statements. The reason for this is that for each RUN statement, a new container is launched from an intermediate image. An image is saved at the end of each command, but environment variables do not persist that way.&lt;/p&gt;
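&lt;p&gt;You can reproduce the effect in plain shell, since each RUN statement behaves like a fresh shell in a fresh container (a simple emulation for illustration, not Docker itself):&lt;/p&gt;

```shell
# Two separate shells stand in for two RUN statements:
# the export in the first one is gone by the time the second runs.
sh -c 'export VARI=5 && echo "first RUN sees: $VARI"'
sh -c 'echo "second RUN sees: ${VARI:-unset}"'
```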

&lt;p&gt;If you’re curious about an image, and would like to know if it provides default ENV variable values before the container is started, you can inspect images, and see which ENV entries are set by default:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# first, get the images on your system and their ids
$ docker images
# use one of those ids to take a closer look
$ docker inspect image-id

# look out for the "Env" entries

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;By now, you should have a really good overview of build-time arguments, environment variables, env_files, and docker-compose templating with .env files. I hope you got a lot of value out of it and can use this knowledge to save yourself lots of bugs in the future. You have to see these techniques in action and apply them to your own work to truly make them part of your tool belt. The best way to make sure you can make use of this information is to learn by doing: go ahead and try some of these techniques.&lt;/p&gt;

</description>
      <category>docker</category>
      <category>tutorial</category>
      <category>devops</category>
    </item>
  </channel>
</rss>
