<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Nancy Chauhan</title>
    <description>The latest articles on DEV Community by Nancy Chauhan (@_nancychauhan).</description>
    <link>https://dev.to/_nancychauhan</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F367384%2F8bf4549f-f931-41ea-a09f-cb8bff8248e2.jpg</url>
      <title>DEV Community: Nancy Chauhan</title>
      <link>https://dev.to/_nancychauhan</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/_nancychauhan"/>
    <language>en</language>
    <item>
      <title>Load Balancing</title>
      <dc:creator>Nancy Chauhan</dc:creator>
      <pubDate>Tue, 28 Sep 2021 05:58:15 +0000</pubDate>
      <link>https://dev.to/ladiesindevops/load-balancing-461l</link>
      <guid>https://dev.to/ladiesindevops/load-balancing-461l</guid>
      <description>&lt;p&gt;We encounter load balancers every day. Even as you read this article, your requests flow through multiple load balancers before this content reaches your browser.&lt;/p&gt;

&lt;p&gt;Load balancing is one of the most important and basic concepts we encounter every single day. It is the process of distributing incoming requests across multiple servers/processes/machines at the backend.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why do we need load balancing?
&lt;/h2&gt;

&lt;p&gt;Usually, when we build an application, clients route their requests to a single backend server, but as soon as traffic grows, that server will reach its limits. To overcome this, we can spin up another server to share the traffic. But how do clients know to connect to the new machine?&lt;br&gt;
Load balancing is the technique used for discovery and decision-making in this routing. There are two ways of achieving this: server-side load balancing or client-side load balancing.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ds-PEW5c--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://miro.medium.com/max/4800/1%2AYxgXygvKUmCpYjKfXEeCzw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ds-PEW5c--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://miro.medium.com/max/4800/1%2AYxgXygvKUmCpYjKfXEeCzw.png"&gt;&lt;/a&gt;&lt;br&gt;Single application server gets overloaded with request
  &lt;/p&gt;

&lt;h1&gt;
  
  
  Server-Side Load Balancing
&lt;/h1&gt;

&lt;p&gt;A middle layer, the load balancer, forwards incoming requests to different servers, hiding that complexity from clients. All backend servers register with the load balancer, which then routes each request to one of the server instances using various algorithms. AWS ELB, Nginx, and Envoy are some examples of server-side load balancers.&lt;/p&gt;
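&lt;p&gt;As a rough sketch of the routing step (the class name and backend addresses are illustrative, not a real load balancer), one of the simplest algorithms, round-robin, just cycles through the registered backends:&lt;/p&gt;

```python
import itertools

class RoundRobinBalancer:
    """Minimal server-side balancer sketch: backends register, requests rotate."""

    def __init__(self):
        self._backends = []
        self._cycle = None

    def register(self, backend):
        # Backend servers register themselves with the load balancer.
        self._backends.append(backend)
        self._cycle = itertools.cycle(self._backends)

    def route(self):
        # Pick the next backend in rotation; a real balancer would
        # forward the request and also health-check its backends.
        return next(self._cycle)

lb = RoundRobinBalancer()
for host in ["10.0.0.1", "10.0.0.2", "10.0.0.3"]:
    lb.register(host)

targets = [lb.route() for _ in range(6)]  # requests spread evenly across backends
```

Production balancers add health checks, weights, and connection counting on top of this basic rotation.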

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--rxAx41Pt--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://miro.medium.com/max/1400/1%2AM013EIjXPW81qIWWKbwU_w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--rxAx41Pt--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://miro.medium.com/max/1400/1%2AM013EIjXPW81qIWWKbwU_w.png"&gt;&lt;/a&gt;&lt;br&gt;Server-side load balancing
  &lt;/p&gt;

&lt;h2&gt;
  
  
  Advantages
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;No need for client-side changes.&lt;/li&gt;
&lt;li&gt;Easy to make changes to load balancing algorithms and backend servers.&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  Client-Side Load Balancing
&lt;/h1&gt;

&lt;p&gt;In client-side load balancing, the client itself handles the load balancing. Let’s take an abstract look at how this can be achieved. To perform load balancing on the client side:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The client must be aware of all available web servers&lt;/li&gt;
&lt;li&gt;The client needs a library that implements a load balancing algorithm&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The client routes the requests to one of the servers using client-side load balancing libraries like Ribbon. Client-side load balancing is also used for service discovery.&lt;/p&gt;

&lt;p&gt;Suppose Service A (the client side) wants to access Service B (the server side). Service B has three instances, all registered with the discovery server (X). Service A has the Ribbon client enabled, which performs client-side load balancing: it fetches the available Service B instances from the discovery server, routes traffic from the client side, and constantly listens for any changes.&lt;/p&gt;

&lt;p&gt;Here I have implemented client-side load balancing using consul service discovery: &lt;a href="https://github.com/Nancy-Chauhan/consul-service-discovery"&gt;https://github.com/Nancy-Chauhan/consul-service-discovery&lt;/a&gt;&lt;/p&gt;
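&lt;p&gt;The flow above can be sketched in a few lines of Python. The &lt;code&gt;REGISTRY&lt;/code&gt; dictionary is only a stand-in for a real discovery server such as Consul or Eureka, and all names and addresses are hypothetical:&lt;/p&gt;

```python
import random

# Toy discovery data: service name -> currently registered instances.
# A real client would query Consul/Eureka and watch for changes.
REGISTRY = {"service-b": ["10.0.1.1:8080", "10.0.1.2:8080", "10.0.1.3:8080"]}

def discover(service):
    """Requirement 1: the client knows all available servers."""
    return REGISTRY.get(service, [])

def pick_instance(service):
    """Requirement 2: a client-side library applies a balancing algorithm."""
    instances = discover(service)
    if not instances:
        raise RuntimeError(f"no instances of {service} registered")
    # Random choice here; libraries like Ribbon also offer round-robin,
    # weighted response time, and zone-aware strategies.
    return random.choice(instances)
```

The client then opens a connection directly to the chosen instance, with no intermediary hop.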

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ZQZpHQ6k--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://miro.medium.com/max/1400/1%2ACtMtKBTIpfiKTNdRHD-ccQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ZQZpHQ6k--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://miro.medium.com/max/1400/1%2ACtMtKBTIpfiKTNdRHD-ccQ.png"&gt;&lt;/a&gt;&lt;br&gt;Server-side load balancing
  &lt;/p&gt;

&lt;h2&gt;
  
  
  Advantages
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;No need for additional infrastructure.&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  Benefits of Load Balancing
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--yBZvsiGh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://miro.medium.com/max/1400/1%2AVtCPP8DOJX7XUhwr4Gp7jg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--yBZvsiGh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://miro.medium.com/max/1400/1%2AVtCPP8DOJX7XUhwr4Gp7jg.png"&gt;&lt;/a&gt;&lt;br&gt;Reference: &lt;a href="https://www.nginx.com/resources/glossary/load-balancing/"&gt;https://www.nginx.com/resources/glossary/load-balancing/&lt;/a&gt;
  &lt;/p&gt;

&lt;p&gt;Load balancers are a foundation of modern cloud-native applications. The concept of load balancing, together with the ability to configure it dynamically, has enabled innovations such as the service mesh.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Enforcing Coding Best Practices using CI</title>
      <dc:creator>Nancy Chauhan</dc:creator>
      <pubDate>Sun, 30 May 2021 11:37:02 +0000</pubDate>
      <link>https://dev.to/ladiesindevops/enforcing-coding-best-practices-using-ci-44n5</link>
      <guid>https://dev.to/ladiesindevops/enforcing-coding-best-practices-using-ci-44n5</guid>
      <description>&lt;p&gt;High-performing teams usually ship faster, better, and more often! Organizations, irrespective of their size, that focus on stability and continuous delivery deploy frequently. Hundreds of continuous integration builds run for every organization on a typical day, which shows how CI has become an integral part of our development process. Hence, to ensure that we are shipping quality code, we should integrate code quality checks into our CI. &lt;/p&gt;

&lt;p&gt;&lt;iframe class="tweet-embed" id="tweet-1368692809436303360-796" src="https://platform.twitter.com/embed/Tweet.html?id=1368692809436303360"&gt;
&lt;/iframe&gt;

  // Detect dark theme
  var iframe = document.getElementById('tweet-1368692809436303360-796');
  if (document.body.className.includes('dark-theme')) {
    iframe.src = "https://platform.twitter.com/embed/Tweet.html?id=1368692809436303360&amp;amp;theme=dark"
  }



 &lt;/p&gt;

&lt;p&gt;&lt;iframe class="tweet-embed" id="tweet-1281102326929874944-545" src="https://platform.twitter.com/embed/Tweet.html?id=1281102326929874944"&gt;
&lt;/iframe&gt;

  // Detect dark theme
  var iframe = document.getElementById('tweet-1281102326929874944-545');
  if (document.body.className.includes('dark-theme')) {
    iframe.src = "https://platform.twitter.com/embed/Tweet.html?id=1281102326929874944&amp;amp;theme=dark"
  }



&lt;/p&gt;

&lt;p&gt;Continuous integration ensures easier bug fixes, improves software quality, and reduces project risk. This blog will show what steps we should integrate into our CI pipelines to ensure that we ship better code.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmiro.medium.com%2Fmax%2F1400%2F1%2A3cmnfnMsSS8u4kfcP1v2wg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmiro.medium.com%2Fmax%2F1400%2F1%2A3cmnfnMsSS8u4kfcP1v2wg.png"&gt;&lt;/a&gt;&lt;br&gt;CI Pipeline integrating code quality checks
  &lt;/p&gt;

&lt;p&gt;Traditionally, code reviews were used to enforce code quality. However, checking for things like missing spaces or missing parameters becomes a burden for code reviewers. It would be great to have tools that automate these checks. We can add mandatory steps to our CI that run static analysis on the code for every push. This creates a better development lifecycle by providing early feedback without human intervention.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmiro.medium.com%2Fmax%2F1202%2F1%2AGnbnoXtaOeYZx5ijucW2mA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmiro.medium.com%2Fmax%2F1202%2F1%2AGnbnoXtaOeYZx5ijucW2mA.png"&gt;&lt;/a&gt;&lt;br&gt;Source: &lt;a href="https://xkcd.com/1285/" rel="noopener noreferrer"&gt;https://xkcd.com/1285/&lt;/a&gt;
  &lt;/p&gt;

&lt;h1&gt;
  
  
  Unit Testing
&lt;/h1&gt;

&lt;p&gt;Unit testing is the process of testing discrete functions at the source-code level. It is ubiquitous for a CI pipeline to contain a test job that verifies your code. If the tests fail, the pipeline fails and users get notified, which allows the code to be fixed earlier. Unit tests should be fast and should aim to cover 100% of the codebase. This gives enough confidence that the application is functioning correctly at this point. If unit tests are not automated, the feedback cycle will be slow.&lt;/p&gt;
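&lt;p&gt;As a minimal illustration (the function and test names here are made up for the example), a unit test exercises one discrete function, and a failing assertion fails the CI pipeline:&lt;/p&gt;

```python
import unittest

def apply_discount(price, percent):
    """The discrete unit under test: apply a percentage discount to a price."""
    return round(price * (100 - percent) / 100, 2)

class TestApplyDiscount(unittest.TestCase):
    def test_quarter_off(self):
        self.assertEqual(apply_discount(200.0, 25), 150.0)

    def test_zero_discount(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)

# In CI this would run via `python -m unittest`; a non-zero exit
# code from the test runner fails the pipeline and notifies the team.
```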

&lt;h1&gt;
  
  
  Code coverage
&lt;/h1&gt;

&lt;p&gt;Code coverage is a metric that can help you understand how comprehensive your unit tests are. It’s a handy metric for assessing the quality of your test suite. Code coverage reporting is how we know that all lines of code written have been exercised through testing.&lt;/p&gt;

&lt;h1&gt;
  
  
  Static code analysis
&lt;/h1&gt;

&lt;p&gt;Static code analysis parses and checks the source code and gives feedback about potential issues in code. It acts as a powerful tool to detect common security vulnerabilities, possible runtime errors, and other general coding errors. It can also enforce your coding guidelines or naming conventions along with your maintainability requirements.&lt;/p&gt;

&lt;p&gt;Static code analysis accelerates the feedback cycle in the development process. It gives feedback on new coding issues specific to the branch or commits containing them. It quickly exposes the block of code that we can optimize in terms of quality. By integrating these checks into the CI workflow, we can tackle these code quality issues in the early stages of the delivery.&lt;/p&gt;

&lt;h2&gt;
  
  
  Linting
&lt;/h2&gt;

&lt;p&gt;A linter is a tool that analyzes source code to flag programming errors, bugs, stylistic errors, and suspicious constructs. It helps enforce a standard code style.&lt;/p&gt;

&lt;p&gt;We can introduce linter checks in our CI pipelines according to our project setup. There is a vast number of linters out there; depending on the programming language, there is often more than one linter for the job.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmiro.medium.com%2Fmax%2F1400%2F1%2A-WosNzXumx9wbyGbgpcIlA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmiro.medium.com%2Fmax%2F1400%2F1%2A-WosNzXumx9wbyGbgpcIlA.png"&gt;&lt;/a&gt;&lt;br&gt;Source: &lt;a href="https://xkcd.com/1513/" rel="noopener noreferrer"&gt;https://xkcd.com/1513/&lt;/a&gt;
  &lt;/p&gt;

&lt;h3&gt;
  
  
  Linters for Static Analysis
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://docs.sourcelevel.io/engines/pep8/" rel="noopener noreferrer"&gt;pep8&lt;/a&gt; for Python&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://docs.sourcelevel.io/engines/pmd/" rel="noopener noreferrer"&gt;PMD&lt;/a&gt; for Java&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://eslint.org/" rel="noopener noreferrer"&gt;ESLint&lt;/a&gt; for Javascript&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Linters focused on Security
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://docs.sourcelevel.io/engines/bandit/" rel="noopener noreferrer"&gt;Bandit&lt;/a&gt; for Python&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://docs.sourcelevel.io/engines/nodesecurity/" rel="noopener noreferrer"&gt;Node Security&lt;/a&gt; for JavaScript&lt;/li&gt;
&lt;li&gt;SpotBugs with &lt;a href="https://find-sec-bugs.github.io/" rel="noopener noreferrer"&gt;Find sec bugs&lt;/a&gt; for Java&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Docker lint check
&lt;/h2&gt;

&lt;p&gt;Considering that dockerizing applications is the norm, it is evident how important it is to introduce Docker lint checks in our CI pipelines. We should make sure that the Docker image generated for our application is optimized and secure.&lt;/p&gt;

&lt;p&gt;There are many open source Docker linters available:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/hadolint/hadolint" rel="noopener noreferrer"&gt;hadolint&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/RedCoolBeans/dockerlint" rel="noopener noreferrer"&gt;dockerlint&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Secrets checks
&lt;/h2&gt;

&lt;p&gt;Sometimes developers leak GitHub tokens and various other secrets in codebases, which should be avoided. We should prevent secrets from leaking when committing code. We can integrate Yelp’s &lt;a href="https://github.com/Yelp/detect-secrets" rel="noopener noreferrer"&gt;detect-secrets&lt;/a&gt; into our workflow to scan files for secrets and whitelist false positives to reduce the noise.&lt;/p&gt;
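&lt;p&gt;At its core, a secret check is pattern matching over the files being committed. The toy sketch below is only an illustration of the idea; detect-secrets itself ships many more detectors, entropy-based heuristics, and a baseline file for whitelisting false positives:&lt;/p&gt;

```python
import re

# Illustrative patterns only: real scanners cover far more credential formats.
PATTERNS = {
    "aws-key-id": re.compile(r"AKIA[0-9A-Z]{16}"),
    "github-token": re.compile(r"ghp_[A-Za-z0-9]{36}"),
}

def scan(text):
    """Return the sorted names of secret patterns found in a blob of text."""
    return sorted(name for name, pattern in PATTERNS.items() if pattern.search(text))

# A CI step would run this over the diff and fail the build on any hit.
hits = scan("aws_access_key_id = AKIAABCDEFGHIJKLMNOP")
```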

&lt;h2&gt;
  
  
  Dependency Checks
&lt;/h2&gt;

&lt;p&gt;Our code often uses many open source dependencies from public repositories such as Maven, PyPI, or npm. These dependencies are maintained by third-party developers who regularly discover security vulnerabilities in their code. Such vulnerabilities are usually assigned a CVE number and disclosed publicly so that developers using the packages know to update them.&lt;/p&gt;

&lt;p&gt;Dependency checkers use information from CVE databases to check for vulnerable dependencies used in our codebase. There are different tools for this:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://snyk.io/" rel="noopener noreferrer"&gt;Snyk&lt;/a&gt; for many languages&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://owasp.org/www-project-dependency-check/" rel="noopener noreferrer"&gt;OWASP Dependency-Check&lt;/a&gt; for Java and Python&lt;/li&gt;
&lt;li&gt;npm comes with a built-in dependency check (&lt;code&gt;npm audit&lt;/code&gt;)&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  All-in-one tools
&lt;/h1&gt;

&lt;p&gt;Some tools aggregate these different static code analysis tools into a single, easy-to-use package, such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://www.sonarqube.org/" rel="noopener noreferrer"&gt;SonarQube&lt;/a&gt;: a broad analysis tool&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/returntocorp/semgrep" rel="noopener noreferrer"&gt;Semgrep&lt;/a&gt;: used for Go, Java, Python, and more&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These tools provide an easy-to-use GUI to find, track and assign issues to developers.&lt;/p&gt;

&lt;h1&gt;
  
  
  Conclusion
&lt;/h1&gt;

&lt;p&gt;Having the above-discussed steps as part of your CI/CD pipeline will allow you to monitor your code, rectify issues quickly, and grow your codebase with much higher quality.&lt;/p&gt;

&lt;p&gt;Originally Posted at &lt;a href="https://medium.com/@_nancychauhan/enforcing-coding-best-practices-using-ci-b3287e362202" rel="noopener noreferrer"&gt;Medium&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Designing Idempotent APIs</title>
      <dc:creator>Nancy Chauhan</dc:creator>
      <pubDate>Fri, 14 May 2021 07:09:11 +0000</pubDate>
      <link>https://dev.to/ladiesindevops/designing-idempotent-apis-17o2</link>
      <guid>https://dev.to/ladiesindevops/designing-idempotent-apis-17o2</guid>
      <description>&lt;p&gt;Networks fail! Timeouts, outages, and routing problems are bound to happen at any time. This challenges us to design APIs and clients that are robust in handling failures and that ensure consistency.&lt;/p&gt;

&lt;p&gt;We can design our APIs and systems to be idempotent, which means that they can be called any number of times while guaranteeing that side effects only occur once. Let’s take a deeper dive into why incorporating idempotency is essential, how it works, and how to implement it.&lt;/p&gt;

&lt;h1&gt;
  
  
  Why is idempotency critical in backend applications?
&lt;/h1&gt;

&lt;p&gt;Consider the design of a social networking site like Instagram, where a user can share a post with all their followers. Let’s assume that we are hosting the app server and database server on two different machines for better performance and scalability, and that we are using PostgreSQL to store the data. A post will have the following model:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CREATE TABLE public.posts (
   id integer PRIMARY KEY,
   user_id integer REFERENCES users,
   image_id integer REFERENCES images NULL,
   content character varying(2048) COLLATE pg_catalog."default",
   create_timestamp timestamp with time zone NOT NULL DEFAULT CURRENT_TIMESTAMP
);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Failures and Retries
&lt;/h2&gt;

&lt;p&gt;If we have our database on a separate server from our application server, creating a post will sometimes fail because of network issues. The following failures are possible:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The initial connection could fail as the application server tries to connect to a database server.&lt;/li&gt;
&lt;li&gt;The call could fail midway while the app server is fulfilling the operation, leaving the work in limbo.&lt;/li&gt;
&lt;li&gt;The call could succeed, but the connection breaks before the database server can tell the application server about it.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmiro.medium.com%2Fmax%2F1400%2F1%2AgtfaDxb3P6Ut-rznIwy78g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmiro.medium.com%2Fmax%2F1400%2F1%2AgtfaDxb3P6Ut-rznIwy78g.png" alt="Retry"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We can fix this with retry logic, but it is very hard to determine the real cause of a network failure. This could lead to a scenario where the post has been inserted into the database, but the database could not send the ACK to the app server. The app server then unknowingly keeps retrying and creates duplicate posts, which would eventually lead to business loss. There are many other critical systems, such as payments and shopping sites, where idempotency is just as important.&lt;/p&gt;

&lt;h2&gt;
  
  
  Solution
&lt;/h2&gt;

&lt;p&gt;The solution to this is to retry, but make the operation idempotent. If an operation is idempotent, the app server can make that same call repeatedly while producing the same result.&lt;/p&gt;

&lt;p&gt;In our design, we can use universally unique identifiers. Each post will be given its own UUID by our application server. We can change our models to have a unique key constraint.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
CREATE TABLE public.posts (
   id uuid PRIMARY KEY,
   user_id uuid REFERENCES users,
   image_id uuid REFERENCES images NULL,
   content character varying(2048) COLLATE pg_catalog."default",
   create_timestamp timestamp with time zone NOT NULL DEFAULT CURRENT_TIMESTAMP
);
INSERT INTO posts (id, user_id, image_id, content)
VALUES ('DC2FB40E-058F-4208-B9A3-EB1790C532C8', '20C5ADC5-D1A5-4A1F-800F-1AADD1E4E954', '3CC32CAE-B6AC-4C53-97EC-25EB49F2E7F3', 'Hello-world')
ON CONFLICT (id) DO NOTHING
RETURNING id;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Our application server generates the UUID when it wants to create a post and retries the INSERT statement until it gets a successful response from the database server. We need to change our system to handle constraint violations and return the existing post. Hence, there will always be exactly one post created.&lt;/p&gt;
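&lt;p&gt;The retry flow can be sketched in Python. The in-memory &lt;code&gt;posts&lt;/code&gt; dict below is only a stand-in for the database table (its keys play the role of the unique-key constraint), and all names are illustrative:&lt;/p&gt;

```python
import uuid

posts = {}  # stands in for the posts table; dict keys act as the unique key

def insert_post(post_id, content):
    """Idempotent insert: a replay with the same id leaves the existing row
    untouched and returns it, mirroring ON CONFLICT (id) DO NOTHING."""
    if post_id not in posts:
        posts[post_id] = content
    return post_id

def create_post(content, attempts=3):
    post_id = uuid.uuid4().hex       # generated ONCE, before the first attempt
    for _ in range(attempts):
        try:
            # A real database client may raise here on network failure.
            return insert_post(post_id, content)
        except ConnectionError:
            continue                 # retry with the SAME id, never a new one
    raise RuntimeError("could not reach the database")

pid = create_post("Hello-world")
```

Because the id is fixed before the first attempt, any number of retries (or duplicate deliveries) still yields exactly one post.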

&lt;h1&gt;
  
  
  Idempotency in HTTP
&lt;/h1&gt;

&lt;p&gt;One important aspect of HTTP is the concept that some methods are idempotent. Take GET, for example: no matter how many times you call it, it results in the same outcome. On the other hand, POST is not expected to be idempotent; calling it multiple times may result in incorrect updates.&lt;/p&gt;

&lt;p&gt;Safe methods don’t change the representation of the resource on the server; e.g., a GET request should not change the content of the page you’re accessing. They are read-only methods, while the PUT method updates the page but is still idempotent in nature. For idempotency, only the actual back-end state of the server is considered; the status code returned by each request may differ: the first call of a DELETE will likely return a 200, while successive ones will likely return a 404.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;DELETE /idX/delete HTTP/1.1   -&amp;gt; Returns 200 if idX exists
DELETE /idX/delete HTTP/1.1   -&amp;gt; Returns 404 as it just got deleted
DELETE /idX/delete HTTP/1.1   -&amp;gt; Returns 404
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;GET is both safe and idempotent.&lt;/li&gt;
&lt;li&gt;HEAD is also both safe and idempotent.&lt;/li&gt;
&lt;li&gt;OPTIONS is also safe and idempotent.&lt;/li&gt;
&lt;li&gt;PUT is not safe but idempotent.&lt;/li&gt;
&lt;li&gt;DELETE is not safe but idempotent.&lt;/li&gt;
&lt;li&gt;POST is neither safe nor idempotent.&lt;/li&gt;
&lt;li&gt;PATCH is also neither safe nor idempotent.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The HTTP specification defines certain methods as idempotent, but it is up to the server to actually implement them that way. For example, the client can send a request-id header containing a UUID, which the server uses to deduplicate retried PUT requests. Likewise, when serving a GET request, we should not change server-side data.&lt;/p&gt;
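&lt;p&gt;A server-side sketch of such request-id deduplication follows. The &lt;code&gt;handle_put&lt;/code&gt; function and in-memory stores are hypothetical; a real service might keep the seen-request cache in Redis with a TTL:&lt;/p&gt;

```python
seen = {}  # request-id -> cached response from the first successful handling

def handle_put(request_id, resource, payload, store):
    """Deduplicate retried PUTs: a replay of the same request-id
    returns the original response instead of re-applying the write."""
    if request_id in seen:
        return seen[request_id]      # retry detected: replay the first response
    store[resource] = payload        # apply the write exactly once
    response = ("200 OK", payload)
    seen[request_id] = response
    return response

store = {}
r1 = handle_put("req-123", "/page/1", "v1", store)
r2 = handle_put("req-123", "/page/1", "v1", store)  # network retry, same id
```

However many times the client retries with the same request-id, the back-end state changes only once.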

&lt;h1&gt;
  
  
  Conclusion
&lt;/h1&gt;

&lt;p&gt;Designing idempotent systems is important for building a resilient microservice-based architecture, and it solves many problems caused by the network, which is inherently lossy. Leveraging a durable message queue such as Kafka ensures that your operations can be retried in case of a long outage. This helps you design systems that never lose data: any missing data can be recovered by replaying the message queue, and if all operations are idempotent, the system ends up in the same state regardless of how many times messages are processed.&lt;/p&gt;

&lt;p&gt;Originally published at &lt;a href="https://medium.com/@_nancychauhan/idempotency-in-api-design-bc4ea812a881" rel="noopener noreferrer"&gt;https://medium.com/@_nancychauhan/idempotency-in-api-design-bc4ea812a881&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Top 10 Productive Hacks for Software Developers</title>
      <dc:creator>Nancy Chauhan</dc:creator>
      <pubDate>Tue, 04 May 2021 13:44:03 +0000</pubDate>
      <link>https://dev.to/_nancychauhan/top-10-productive-hacks-for-software-developers-4f5g</link>
      <guid>https://dev.to/_nancychauhan/top-10-productive-hacks-for-software-developers-4f5g</guid>
      <description>&lt;p&gt;Recently, I posted a tweet asking all the amazing software developers in my network to share the hacks they use to stay productive. I have compiled the wonderful and helpful suggestions I received. I hope you find them useful as well.&lt;/p&gt;


&lt;blockquote class="ltag__twitter-tweet"&gt;

  &lt;div class="ltag__twitter-tweet__main"&gt;
    &lt;div class="ltag__twitter-tweet__header"&gt;
      &lt;img class="ltag__twitter-tweet__profile-image" src="https://res.cloudinary.com/practicaldev/image/fetch/s--RCIm7f4l--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://pbs.twimg.com/profile_images/1383714961264766982/77mniC97_normal.jpg" alt="Nancy Chauhan profile image"&gt;
      &lt;div class="ltag__twitter-tweet__full-name"&gt;
        Nancy Chauhan
      &lt;/div&gt;
      &lt;div class="ltag__twitter-tweet__username"&gt;
        &lt;a class="mentioned-user" href="https://dev.to/_nancychauhan"&gt;@_nancychauhan&lt;/a&gt;

      &lt;/div&gt;
      &lt;div class="ltag__twitter-tweet__twitter-logo"&gt;
        &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ir1kO05j--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev.to/assets/twitter-f95605061196010f91e64806688390eb1a4dbc9e913682e043eb8b1e06ca484f.svg" alt="twitter logo"&gt;
      &lt;/div&gt;
    &lt;/div&gt;
    &lt;div class="ltag__twitter-tweet__body"&gt;
      What hacks do you follow as a software developer that enhance your productivity or make your life easier? For instance, Bash aliases for frequently used commands works wonders for me.
    &lt;/div&gt;
    &lt;div class="ltag__twitter-tweet__date"&gt;
      12:18 PM - 14 Apr 2021
    &lt;/div&gt;


    &lt;div class="ltag__twitter-tweet__actions"&gt;
      &lt;a href="https://twitter.com/intent/tweet?in_reply_to=1382307259854716931" class="ltag__twitter-tweet__actions__button"&gt;
        &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--fFnoeFxk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev.to/assets/twitter-reply-action-238fe0a37991706a6880ed13941c3efd6b371e4aefe288fe8e0db85250708bc4.svg" alt="Twitter reply action"&gt;
      &lt;/a&gt;
      &lt;a href="https://twitter.com/intent/retweet?tweet_id=1382307259854716931" class="ltag__twitter-tweet__actions__button"&gt;
        &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--k6dcrOn8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev.to/assets/twitter-retweet-action-632c83532a4e7de573c5c08dbb090ee18b348b13e2793175fea914827bc42046.svg" alt="Twitter retweet action"&gt;
      &lt;/a&gt;
      &lt;a href="https://twitter.com/intent/like?tweet_id=1382307259854716931" class="ltag__twitter-tweet__actions__button"&gt;
        &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--SRQc9lOp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev.to/assets/twitter-like-action-1ea89f4b87c7d37465b0eb78d51fcb7fe6c03a089805d7ea014ba71365be5171.svg" alt="Twitter like action"&gt;
      &lt;/a&gt;
    &lt;/div&gt;
  &lt;/div&gt;
&lt;/blockquote&gt;


&lt;p&gt;When I started my tech career, I found that I often had to do a lot of repetitive, manual work, and I wanted to improve my skills and boost my productivity.&lt;br&gt;
When I met other developers, I discovered various hacks, and I was always amazed at how easy things can become.&lt;/p&gt;
&lt;h2&gt;
  
  
  Automation
&lt;/h2&gt;

&lt;p&gt;We usually repeat a set of time-consuming tasks every day, and we can automate them. Tasks such as compiling code after minute changes or migrating data into databases after minor modifications can be automated. Following are a few takeaways:&lt;/p&gt;


&lt;blockquote class="ltag__twitter-tweet"&gt;

  &lt;div class="ltag__twitter-tweet__main"&gt;
    &lt;div class="ltag__twitter-tweet__header"&gt;
      &lt;img class="ltag__twitter-tweet__profile-image" src="https://res.cloudinary.com/practicaldev/image/fetch/s--jJulHDDg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://pbs.twimg.com/profile_images/1209893961608900610/Me9V-5qL_normal.jpg" alt="Jakub Pomykała 👋 profile image"&gt;
      &lt;div class="ltag__twitter-tweet__full-name"&gt;
        Jakub Pomykała 👋
      &lt;/div&gt;
      &lt;div class="ltag__twitter-tweet__username"&gt;
        @jakub_pomykala
      &lt;/div&gt;
      &lt;div class="ltag__twitter-tweet__twitter-logo"&gt;
        &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ir1kO05j--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev.to/assets/twitter-f95605061196010f91e64806688390eb1a4dbc9e913682e043eb8b1e06ca484f.svg" alt="twitter logo"&gt;
      &lt;/div&gt;
    &lt;/div&gt;
    &lt;div class="ltag__twitter-tweet__body"&gt;
      &lt;a href="https://twitter.com/_nancychauhan"&gt;@_nancychauhan&lt;/a&gt; I use a bash script to reload test data on the local SQL database or stop all containers and set up a new stack. But the thing that really makes a trick is that I’m using the touch bar on my MacBook to run those scripts.
    &lt;/div&gt;
    &lt;div class="ltag__twitter-tweet__date"&gt;
      01:57 AM - 15 Apr 2021
    &lt;/div&gt;


    &lt;div class="ltag__twitter-tweet__actions"&gt;
      &lt;a href="https://twitter.com/intent/tweet?in_reply_to=1382513318699618304" class="ltag__twitter-tweet__actions__button"&gt;
        &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--fFnoeFxk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev.to/assets/twitter-reply-action-238fe0a37991706a6880ed13941c3efd6b371e4aefe288fe8e0db85250708bc4.svg" alt="Twitter reply action"&gt;
      &lt;/a&gt;
      &lt;a href="https://twitter.com/intent/retweet?tweet_id=1382513318699618304" class="ltag__twitter-tweet__actions__button"&gt;
        &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--k6dcrOn8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev.to/assets/twitter-retweet-action-632c83532a4e7de573c5c08dbb090ee18b348b13e2793175fea914827bc42046.svg" alt="Twitter retweet action"&gt;
      &lt;/a&gt;
      &lt;a href="https://twitter.com/intent/like?tweet_id=1382513318699618304" class="ltag__twitter-tweet__actions__button"&gt;
        &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--SRQc9lOp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev.to/assets/twitter-like-action-1ea89f4b87c7d37465b0eb78d51fcb7fe6c03a089805d7ea014ba71365be5171.svg" alt="Twitter like action"&gt;
      &lt;/a&gt;
    &lt;/div&gt;
  &lt;/div&gt;
&lt;/blockquote&gt;


&lt;ul&gt;
&lt;li&gt;Bespoke Python or Bash scripts for automating jobs at work&lt;/li&gt;
&lt;li&gt;Use cookie-cutter templates to automate project creation. Example: &lt;a href="https://github.com/cookiecutter-flask/cookiecutter-flask"&gt;https://github.com/cookiecutter-flask/cookiecutter-flask&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;This &lt;a href="https://suraj.io/post/framework-for-scripts-and-binaries/"&gt;blog&lt;/a&gt; is an amazing piece about managing the scripts and binaries downloaded randomly from the internet.&lt;/li&gt;
&lt;li&gt;Tools like &lt;a href="https://espanso.org/"&gt;espanso&lt;/a&gt; allow substituting strings like date to present date, etc.&lt;/li&gt;
&lt;/ul&gt;
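As a sketch of the last idea: espanso is configured with small YAML "match" files. Assuming the default match-file layout from its documentation, a match that expands a `:date` trigger into today's date (the trigger name and date format here are illustrative) looks roughly like this:

```yaml
# espanso match file (e.g. match/base.yml)
matches:
  - trigger: ":date"
    replace: "{{today}}"
    vars:
      - name: today
        type: date
        params:
          format: "%d/%m/%Y"
```

Typing `:date` anywhere then expands to the current date in the given format.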

&lt;h2&gt;
  
  
  Use Fish or Zsh with a terminal multiplexer
&lt;/h2&gt;

&lt;p&gt;A terminal multiplexer is a program that lets you run several virtual sessions inside a single terminal; tmux, iTerm, and Terminator are popular examples. Working with a terminal multiplexer gives you far greater control over your shells, so it is a must-have in your toolkit.&lt;br&gt;
There are many widely available shells, such as &lt;a href="https://www.gnu.org/software/bash/"&gt;Bash&lt;/a&gt;, &lt;a href="https://www.zsh.org/"&gt;Zsh&lt;/a&gt;, and &lt;a href="https://fishshell.com/"&gt;fish&lt;/a&gt;. Switching to fish, or even Zsh, is one of the best things you can do to make your programming experience more pleasant: both are faster to work with and much more customizable than Bash.&lt;/p&gt;
&lt;h3&gt;
  
  
  Zsh &amp;amp; Fish
&lt;/h3&gt;

&lt;p&gt;Zsh has many useful features, including spelling correction, sharing your command history across multiple terminals, named directory shortcuts, etc.&lt;br&gt;
For productivity, I use iTerm + Zsh + Oh My Zsh. You can enrich Zsh with the &lt;a href="https://ohmyz.sh/"&gt;Oh My Zsh&lt;/a&gt; framework, which provides functionality that will boost your efficiency. A few of my favorite plugins are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://github.com/zsh-users/zsh-autosuggestions"&gt;zsh-autosuggestions&lt;/a&gt;: suggests commands as you type, based on your history and completions&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/zsh-users/zsh-syntax-highlighting"&gt;zsh-syntax-highlighting&lt;/a&gt;: provides syntax highlighting in the shell, showing invalid commands in red and valid commands in green&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/agkozak/zsh-z"&gt;zsh-z&lt;/a&gt;: lets you jump quickly to directories you have visited frequently or recently&lt;/li&gt;
&lt;/ul&gt;
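With Oh My Zsh, enabling plugins is a one-line change in `~/.zshrc` (a sketch; the community plugins above must first be cloned into `$ZSH_CUSTOM/plugins`, as each plugin's README describes):

```shell
# ~/.zshrc (Oh My Zsh)
plugins=(git zsh-autosuggestions zsh-syntax-highlighting zsh-z)
source $ZSH/oh-my-zsh.sh
```

Open a new terminal (or run `source ~/.zshrc`) for the plugins to take effect.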


&lt;blockquote class="ltag__twitter-tweet"&gt;

  &lt;div class="ltag__twitter-tweet__main"&gt;
    &lt;div class="ltag__twitter-tweet__header"&gt;
      &lt;img class="ltag__twitter-tweet__profile-image" src="https://res.cloudinary.com/practicaldev/image/fetch/s--KbboGvoH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://pbs.twimg.com/profile_images/1186955851770187776/JwU2yLit_normal.jpg" alt="Varun profile image"&gt;
      &lt;div class="ltag__twitter-tweet__full-name"&gt;
        Varun
      &lt;/div&gt;
      &lt;div class="ltag__twitter-tweet__username"&gt;
        @asdeyquote
      &lt;/div&gt;
      &lt;div class="ltag__twitter-tweet__twitter-logo"&gt;
        &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ir1kO05j--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev.to/assets/twitter-f95605061196010f91e64806688390eb1a4dbc9e913682e043eb8b1e06ca484f.svg" alt="twitter logo"&gt;
      &lt;/div&gt;
    &lt;/div&gt;
    &lt;div class="ltag__twitter-tweet__body"&gt;
      &lt;a href="https://twitter.com/_nancychauhan"&gt;@_nancychauhan&lt;/a&gt; fish shell. startup time for the prompt to show up in zshell is enough to lose my chain of thoughts. plus most of the plugins that you have to install with zsh comes out of the box with fish.
    &lt;/div&gt;
    &lt;div class="ltag__twitter-tweet__date"&gt;
      15:04 - 15 Apr 2021
    &lt;/div&gt;


    &lt;div class="ltag__twitter-tweet__actions"&gt;
      &lt;a href="https://twitter.com/intent/tweet?in_reply_to=1382711454483116032" class="ltag__twitter-tweet__actions__button"&gt;
        &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--fFnoeFxk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev.to/assets/twitter-reply-action-238fe0a37991706a6880ed13941c3efd6b371e4aefe288fe8e0db85250708bc4.svg" alt="Twitter reply action"&gt;
      &lt;/a&gt;
      &lt;a href="https://twitter.com/intent/retweet?tweet_id=1382711454483116032" class="ltag__twitter-tweet__actions__button"&gt;
        &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--k6dcrOn8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev.to/assets/twitter-retweet-action-632c83532a4e7de573c5c08dbb090ee18b348b13e2793175fea914827bc42046.svg" alt="Twitter retweet action"&gt;
      &lt;/a&gt;
      &lt;a href="https://twitter.com/intent/like?tweet_id=1382711454483116032" class="ltag__twitter-tweet__actions__button"&gt;
        &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--SRQc9lOp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev.to/assets/twitter-like-action-1ea89f4b87c7d37465b0eb78d51fcb7fe6c03a089805d7ea014ba71365be5171.svg" alt="Twitter like action"&gt;
      &lt;/a&gt;
    &lt;/div&gt;
  &lt;/div&gt;
&lt;/blockquote&gt;


&lt;p&gt;Fish, on the other hand, is full of awesome features that take your productivity to another level, and most of them work out of the box. It is &lt;a href="https://fishshell.com/docs/current/index.html"&gt;extremely well documented&lt;/a&gt; and &lt;a href="https://fishshell.com/"&gt;easy to install&lt;/a&gt; too. I want to try it as well.&lt;/p&gt;

&lt;h2&gt;
  
  
  Use and know your IDE very well
&lt;/h2&gt;


&lt;blockquote class="ltag__twitter-tweet"&gt;

  &lt;div class="ltag__twitter-tweet__main"&gt;
    &lt;div class="ltag__twitter-tweet__header"&gt;
      &lt;img class="ltag__twitter-tweet__profile-image" src="https://res.cloudinary.com/practicaldev/image/fetch/s--EqPt1hs3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://pbs.twimg.com/profile_images/1356609641694683136/AiPih3mS_normal.png" alt="Kushal Das profile image"&gt;
      &lt;div class="ltag__twitter-tweet__full-name"&gt;
        Kushal Das
      &lt;/div&gt;
      &lt;div class="ltag__twitter-tweet__username"&gt;
        @kushaldas
      &lt;/div&gt;
      &lt;div class="ltag__twitter-tweet__twitter-logo"&gt;
        &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ir1kO05j--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev.to/assets/twitter-f95605061196010f91e64806688390eb1a4dbc9e913682e043eb8b1e06ca484f.svg" alt="twitter logo"&gt;
      &lt;/div&gt;
    &lt;/div&gt;
    &lt;div class="ltag__twitter-tweet__body"&gt;
      &lt;a href="https://twitter.com/_nancychauhan"&gt;@_nancychauhan&lt;/a&gt; [0] Touch typing. [1] Learn any proper text editor.
    &lt;/div&gt;
    &lt;div class="ltag__twitter-tweet__date"&gt;
      07:19 AM - 15 Apr 2021
    &lt;/div&gt;


    &lt;div class="ltag__twitter-tweet__actions"&gt;
      &lt;a href="https://twitter.com/intent/tweet?in_reply_to=1382594330305646593" class="ltag__twitter-tweet__actions__button"&gt;
        &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--fFnoeFxk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev.to/assets/twitter-reply-action-238fe0a37991706a6880ed13941c3efd6b371e4aefe288fe8e0db85250708bc4.svg" alt="Twitter reply action"&gt;
      &lt;/a&gt;
      &lt;a href="https://twitter.com/intent/retweet?tweet_id=1382594330305646593" class="ltag__twitter-tweet__actions__button"&gt;
        &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--k6dcrOn8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev.to/assets/twitter-retweet-action-632c83532a4e7de573c5c08dbb090ee18b348b13e2793175fea914827bc42046.svg" alt="Twitter retweet action"&gt;
      &lt;/a&gt;
      &lt;a href="https://twitter.com/intent/like?tweet_id=1382594330305646593" class="ltag__twitter-tweet__actions__button"&gt;
        &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--SRQc9lOp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev.to/assets/twitter-like-action-1ea89f4b87c7d37465b0eb78d51fcb7fe6c03a089805d7ea014ba71365be5171.svg" alt="Twitter like action"&gt;
      &lt;/a&gt;
    &lt;/div&gt;
  &lt;/div&gt;
&lt;/blockquote&gt;


&lt;p&gt;You should learn to use your favourite IDE efficiently. If you know your IDE’s features and capabilities well, you can significantly improve your productivity. For me, that mainly involves becoming familiar with the most commonly used commands and learning their keyboard shortcuts.&lt;/p&gt;

&lt;h2&gt;
  
  
  Clipboard manager
&lt;/h2&gt;


&lt;blockquote class="ltag__twitter-tweet"&gt;
      &lt;div class="ltag__twitter-tweet__media"&gt;
        &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ilYdWvoj--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://pbs.twimg.com/media/Ey_33ebVgAEUB6D.jpg" alt="unknown tweet media content"&gt;
      &lt;/div&gt;

  &lt;div class="ltag__twitter-tweet__main"&gt;
    &lt;div class="ltag__twitter-tweet__header"&gt;
      &lt;img class="ltag__twitter-tweet__profile-image" src="https://res.cloudinary.com/practicaldev/image/fetch/s--tnUKGEJi--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://pbs.twimg.com/profile_images/799664820848992256/QX3Pjg3V_normal.jpg" alt="Arnav Gupta profile image"&gt;
      &lt;div class="ltag__twitter-tweet__full-name"&gt;
        Arnav Gupta
      &lt;/div&gt;
      &lt;div class="ltag__twitter-tweet__username"&gt;
        @championswimmer
      &lt;/div&gt;
      &lt;div class="ltag__twitter-tweet__twitter-logo"&gt;
        &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ir1kO05j--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev.to/assets/twitter-f95605061196010f91e64806688390eb1a4dbc9e913682e043eb8b1e06ca484f.svg" alt="twitter logo"&gt;
      &lt;/div&gt;
    &lt;/div&gt;
    &lt;div class="ltag__twitter-tweet__body"&gt;
      &lt;a href="https://twitter.com/_nancychauhan"&gt;@_nancychauhan&lt;/a&gt; I use this tool called Clipy (there's a similar called Clipmenu too) which keeps your last 30 cut/copies items. Not just text but images too. &lt;br&gt;&lt;br&gt;When copying 4-5 items from one place to another I can first copy them all, then paste in one go. 
    &lt;/div&gt;
    &lt;div class="ltag__twitter-tweet__date"&gt;
      07:26 AM - 15 Apr 2021
    &lt;/div&gt;


    &lt;div class="ltag__twitter-tweet__actions"&gt;
      &lt;a href="https://twitter.com/intent/tweet?in_reply_to=1382596147705905153" class="ltag__twitter-tweet__actions__button"&gt;
        &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--fFnoeFxk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev.to/assets/twitter-reply-action-238fe0a37991706a6880ed13941c3efd6b371e4aefe288fe8e0db85250708bc4.svg" alt="Twitter reply action"&gt;
      &lt;/a&gt;
      &lt;a href="https://twitter.com/intent/retweet?tweet_id=1382596147705905153" class="ltag__twitter-tweet__actions__button"&gt;
        &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--k6dcrOn8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev.to/assets/twitter-retweet-action-632c83532a4e7de573c5c08dbb090ee18b348b13e2793175fea914827bc42046.svg" alt="Twitter retweet action"&gt;
      &lt;/a&gt;
      &lt;a href="https://twitter.com/intent/like?tweet_id=1382596147705905153" class="ltag__twitter-tweet__actions__button"&gt;
        &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--SRQc9lOp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev.to/assets/twitter-like-action-1ea89f4b87c7d37465b0eb78d51fcb7fe6c03a089805d7ea014ba71365be5171.svg" alt="Twitter like action"&gt;
      &lt;/a&gt;
    &lt;/div&gt;
  &lt;/div&gt;
&lt;/blockquote&gt;


&lt;p&gt;Clipboard managers can be extremely useful and increase your productivity. A clipboard manager lets you recall text you have copied earlier, such as web service endpoints, test usernames and passwords, and code snippets. Tools such as Clipy, Klipper, and Clipman are good options.&lt;/p&gt;

&lt;h2&gt;
  
  
  Gmail labels and filters to keep annoying mails out of your life
&lt;/h2&gt;


&lt;blockquote class="ltag__twitter-tweet"&gt;

  &lt;div class="ltag__twitter-tweet__main"&gt;
    &lt;div class="ltag__twitter-tweet__header"&gt;
      &lt;img class="ltag__twitter-tweet__profile-image" src="https://res.cloudinary.com/practicaldev/image/fetch/s--Qe0Tu2f6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://pbs.twimg.com/profile_images/1053355623612481536/JsU2oWOK_normal.jpg" alt="Amitosh Swain Mahapatra profile image"&gt;
      &lt;div class="ltag__twitter-tweet__full-name"&gt;
        Amitosh Swain Mahapatra
      &lt;/div&gt;
      &lt;div class="ltag__twitter-tweet__username"&gt;
        @recrsn
      &lt;/div&gt;
      &lt;div class="ltag__twitter-tweet__twitter-logo"&gt;
        &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ir1kO05j--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev.to/assets/twitter-f95605061196010f91e64806688390eb1a4dbc9e913682e043eb8b1e06ca484f.svg" alt="twitter logo"&gt;
      &lt;/div&gt;
    &lt;/div&gt;
    &lt;div class="ltag__twitter-tweet__body"&gt;
      &lt;a href="https://twitter.com/_nancychauhan"&gt;@_nancychauhan&lt;/a&gt; fzf&lt;br&gt;bespoke python scripts for automating jobs at my work&lt;br&gt;dig to internal DNS server to find a VM by IP/name&lt;br&gt;cookie-cutter templates for some projects (shared with team as well)&lt;br&gt;Clipboard manager&lt;br&gt;Gmail labels, and filters to keep annoying mails out of life
    &lt;/div&gt;
    &lt;div class="ltag__twitter-tweet__date"&gt;
      16:29 - 14 Apr 2021
    &lt;/div&gt;


    &lt;div class="ltag__twitter-tweet__actions"&gt;
      &lt;a href="https://twitter.com/intent/tweet?in_reply_to=1382370358313897984" class="ltag__twitter-tweet__actions__button"&gt;
        &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--fFnoeFxk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev.to/assets/twitter-reply-action-238fe0a37991706a6880ed13941c3efd6b371e4aefe288fe8e0db85250708bc4.svg" alt="Twitter reply action"&gt;
      &lt;/a&gt;
      &lt;a href="https://twitter.com/intent/retweet?tweet_id=1382370358313897984" class="ltag__twitter-tweet__actions__button"&gt;
        &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--k6dcrOn8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev.to/assets/twitter-retweet-action-632c83532a4e7de573c5c08dbb090ee18b348b13e2793175fea914827bc42046.svg" alt="Twitter retweet action"&gt;
      &lt;/a&gt;
      &lt;a href="https://twitter.com/intent/like?tweet_id=1382370358313897984" class="ltag__twitter-tweet__actions__button"&gt;
        &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--SRQc9lOp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev.to/assets/twitter-like-action-1ea89f4b87c7d37465b0eb78d51fcb7fe6c03a089805d7ea014ba71365be5171.svg" alt="Twitter like action"&gt;
      &lt;/a&gt;
    &lt;/div&gt;
  &lt;/div&gt;
&lt;/blockquote&gt;


&lt;p&gt;Simple filters can save you time and space, rid your inbox of unwanted emails, and turn Gmail into a multi-functional tool.&lt;br&gt;
This way, you can filter out spam and low-priority emails, and keep the emails you wouldn’t want to miss. For instance, you can have Gmail filter and label all emails from a specific address, so they are conveniently organized under one label. Gmail explains how to create, edit and delete labels &lt;a href="https://support.google.com/mail/answer/118708?co=GENIE.Platform%3DAndroid&amp;amp;hl=en"&gt;here&lt;/a&gt; and how to use filters &lt;a href="https://support.google.com/mail/answer/6579?hl=en"&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;h2&gt;
  
  
  Naps and Breaks
&lt;/h2&gt;


&lt;blockquote class="ltag__twitter-tweet"&gt;

  &lt;div class="ltag__twitter-tweet__main"&gt;
    &lt;div class="ltag__twitter-tweet__header"&gt;
      &lt;img class="ltag__twitter-tweet__profile-image" src="https://res.cloudinary.com/practicaldev/image/fetch/s--3v-fcsxM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://pbs.twimg.com/profile_images/1183674647465754624/GREHN6Ch_normal.jpg" alt="Arihant Verma profile image"&gt;
      &lt;div class="ltag__twitter-tweet__full-name"&gt;
        Arihant Verma
      &lt;/div&gt;
      &lt;div class="ltag__twitter-tweet__username"&gt;
        &lt;a class="mentioned-user" href="https://dev.to/gdadsriver"&gt;@gdadsriver&lt;/a&gt;

      &lt;/div&gt;
      &lt;div class="ltag__twitter-tweet__twitter-logo"&gt;
        &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ir1kO05j--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev.to/assets/twitter-f95605061196010f91e64806688390eb1a4dbc9e913682e043eb8b1e06ca484f.svg" alt="twitter logo"&gt;
      &lt;/div&gt;
    &lt;/div&gt;
    &lt;div class="ltag__twitter-tweet__body"&gt;
      &lt;a href="https://twitter.com/_nancychauhan"&gt;@_nancychauhan&lt;/a&gt; Taking a nap
    &lt;/div&gt;
    &lt;div class="ltag__twitter-tweet__date"&gt;
      12:24 PM - 14 Apr 2021
    &lt;/div&gt;


    &lt;div class="ltag__twitter-tweet__actions"&gt;
      &lt;a href="https://twitter.com/intent/tweet?in_reply_to=1382308737092780033" class="ltag__twitter-tweet__actions__button"&gt;
        &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--fFnoeFxk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev.to/assets/twitter-reply-action-238fe0a37991706a6880ed13941c3efd6b371e4aefe288fe8e0db85250708bc4.svg" alt="Twitter reply action"&gt;
      &lt;/a&gt;
      &lt;a href="https://twitter.com/intent/retweet?tweet_id=1382308737092780033" class="ltag__twitter-tweet__actions__button"&gt;
        &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--k6dcrOn8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev.to/assets/twitter-retweet-action-632c83532a4e7de573c5c08dbb090ee18b348b13e2793175fea914827bc42046.svg" alt="Twitter retweet action"&gt;
      &lt;/a&gt;
      &lt;a href="https://twitter.com/intent/like?tweet_id=1382308737092780033" class="ltag__twitter-tweet__actions__button"&gt;
        &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--SRQc9lOp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev.to/assets/twitter-like-action-1ea89f4b87c7d37465b0eb78d51fcb7fe6c03a089805d7ea014ba71365be5171.svg" alt="Twitter like action"&gt;
      &lt;/a&gt;
    &lt;/div&gt;
  &lt;/div&gt;
&lt;/blockquote&gt;


&lt;p&gt;While it’s important to focus on work, it’s just as important to focus on your rest, so take frequent breaks. The Pomodoro technique is something I have wanted to try for a long time: every focus session is followed by at least a 5 to 10-minute break. Exercise is quite important as well; it keeps your day productive.&lt;/p&gt;

&lt;h2&gt;
  
  
  A place to put all tasks: GitHub issues, Jira tickets, private stuff
&lt;/h2&gt;


&lt;blockquote class="ltag__twitter-tweet"&gt;

  &lt;div class="ltag__twitter-tweet__main"&gt;
    &lt;div class="ltag__twitter-tweet__header"&gt;
      &lt;img class="ltag__twitter-tweet__profile-image" src="https://res.cloudinary.com/practicaldev/image/fetch/s--FM28lKhI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://pbs.twimg.com/profile_images/1224987243556888578/EkXISGKi_normal.jpg" alt="Maciej Walkowiak 🍃 profile image"&gt;
      &lt;div class="ltag__twitter-tweet__full-name"&gt;
        Maciej Walkowiak 🍃
      &lt;/div&gt;
      &lt;div class="ltag__twitter-tweet__username"&gt;
        &lt;a class="mentioned-user" href="https://dev.to/maciejwalkowiak"&gt;@maciejwalkowiak&lt;/a&gt;

      &lt;/div&gt;
      &lt;div class="ltag__twitter-tweet__twitter-logo"&gt;
        &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ir1kO05j--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev.to/assets/twitter-f95605061196010f91e64806688390eb1a4dbc9e913682e043eb8b1e06ca484f.svg" alt="twitter logo"&gt;
      &lt;/div&gt;
    &lt;/div&gt;
    &lt;div class="ltag__twitter-tweet__body"&gt;
      &lt;a href="https://twitter.com/_nancychauhan"&gt;@_nancychauhan&lt;/a&gt; I put all my tasks, github issues, jira tickets, private stuff to &lt;a href="https://twitter.com/MicrosoftToDo"&gt;@MicrosoftToDo&lt;/a&gt; so that i can quickly pick things to do without thinking and searching for them. Works well when waiting for build etc
    &lt;/div&gt;
    &lt;div class="ltag__twitter-tweet__date"&gt;
      15:39 - 14 Apr 2021
    &lt;/div&gt;


    &lt;div class="ltag__twitter-tweet__actions"&gt;
      &lt;a href="https://twitter.com/intent/tweet?in_reply_to=1382357853005037571" class="ltag__twitter-tweet__actions__button"&gt;
        &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--fFnoeFxk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev.to/assets/twitter-reply-action-238fe0a37991706a6880ed13941c3efd6b371e4aefe288fe8e0db85250708bc4.svg" alt="Twitter reply action"&gt;
      &lt;/a&gt;
      &lt;a href="https://twitter.com/intent/retweet?tweet_id=1382357853005037571" class="ltag__twitter-tweet__actions__button"&gt;
        &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--k6dcrOn8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev.to/assets/twitter-retweet-action-632c83532a4e7de573c5c08dbb090ee18b348b13e2793175fea914827bc42046.svg" alt="Twitter retweet action"&gt;
      &lt;/a&gt;
      &lt;a href="https://twitter.com/intent/like?tweet_id=1382357853005037571" class="ltag__twitter-tweet__actions__button"&gt;
        &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--SRQc9lOp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev.to/assets/twitter-like-action-1ea89f4b87c7d37465b0eb78d51fcb7fe6c03a089805d7ea014ba71365be5171.svg" alt="Twitter like action"&gt;
      &lt;/a&gt;
    &lt;/div&gt;
  &lt;/div&gt;
&lt;/blockquote&gt;


&lt;p&gt;You can use various tools to accumulate and organize all your tasks and learnings in one place. When you are blocked on one task, you can quickly jump to another. You can also maintain short notes of things you learn, which you might need for future reference.&lt;br&gt;
Some of the tools you can use are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Taskwarrior&lt;/li&gt;
&lt;li&gt;Notion&lt;/li&gt;
&lt;li&gt;Microsoft To Do&lt;/li&gt;
&lt;li&gt;Short notes and learnings (I maintain short notes of my learnings on &lt;a href="https://github.com/Nancy-Chauhan/Today-I-Learnt"&gt;GitHub&lt;/a&gt;)&lt;/li&gt;
&lt;/ul&gt;


&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--i3JOwpme--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev.to/assets/github-logo-ba8488d21cd8ee1fee097b8410db9deaa41d0ca30b004c0c63de0a479114156f.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/Nancy-Chauhan"&gt;
        Nancy-Chauhan
      &lt;/a&gt; / &lt;a href="https://github.com/Nancy-Chauhan/Today-I-Learnt"&gt;
        Today-I-Learnt
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      
    &lt;/h3&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="md"&gt;
&lt;h2&gt;
TIL: 26-02-2021&lt;/h2&gt;
&lt;p&gt;The DHCP protocol is used to assign IP addresses.
It is present in routers and is also used in Docker, Kubernetes, AWS, etc.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Docker:  By default, the container is assigned an IP address for every Docker network it connects to. The IP address is assigned from the pool assigned to the network, so the Docker daemon effectively acts as a DHCP server for each container.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
TIL: 07-03-2021&lt;/h2&gt;
&lt;p&gt;Idempotency: Idempotence is talked about a lot in the context of "RESTful" web services. Basically, an operation is idempotent if making multiple identical requests has the same effect as making a single request.&lt;/p&gt;
&lt;h2&gt;
TIL: 08-03-2021&lt;/h2&gt;
&lt;p&gt;Interface vs abstract class in Java&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;When to use what?&lt;/li&gt;
&lt;li&gt;Use interfaces when you only want to specify behaviour that implementing classes must provide, and use abstract classes when you want to share a partial implementation in your design.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
TIL: 10-03-2021&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Problem : You are calling resource even when it is not needed…&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
  &lt;/div&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/Nancy-Chauhan/Today-I-Learnt"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;/div&gt;


&lt;h2&gt;
  
  
  Touch typing
&lt;/h2&gt;

&lt;p&gt;It lets you work more ergonomically and helps keep your typing in sync with your thoughts. Also, as developers, we are not only focused on writing actual code: code reviews, documentation, and Slack messages all depend on typing to contribute to a product, team or discussion. For programmers, the main benefit of touch typing is being able to spot typing errors as soon as they occur.&lt;/p&gt;

&lt;h2&gt;
  
  
  Linux Commands and Shortcuts
&lt;/h2&gt;

&lt;p&gt;Learning Linux commands and shortcuts helps a lot. Linux command tricks will save you a lot of time and, in some cases, from plenty of frustration. Some of my favourite tricks are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Using grep when dealing with large chunks of data&lt;/li&gt;
&lt;li&gt;Using alias to fix typos or set up shortcuts for frequent commands&lt;/li&gt;
&lt;li&gt;Using Tab for autocompletion&lt;/li&gt;
&lt;li&gt;Quickly searching for and reusing a command from your history with &lt;code&gt;Ctrl + r&lt;/code&gt; followed by a search term&lt;/li&gt;
&lt;/ul&gt;
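A quick sketch of the first two tricks (the file name, log contents, and alias names here are illustrative):

```shell
# grep: pull the lines you care about out of a large chunk of data
printf 'ok: started\nERROR: disk full\nok: retrying\n' > /tmp/app.log
grep -i 'error' /tmp/app.log    # prints only the ERROR line

# alias: paper over a typo you keep making, or shorten a long command
alias gti='git'
alias k='kubectl'
```

Put the aliases in your shell's startup file (`~/.bashrc` or `~/.zshrc`) so they survive new sessions.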

&lt;h2&gt;
  
  
  Focus
&lt;/h2&gt;


&lt;blockquote class="ltag__twitter-tweet"&gt;

  &lt;div class="ltag__twitter-tweet__main"&gt;
    &lt;div class="ltag__twitter-tweet__header"&gt;
      &lt;img class="ltag__twitter-tweet__profile-image" src="https://res.cloudinary.com/practicaldev/image/fetch/s--gXQKVTRR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://pbs.twimg.com/profile_images/1271529790223179776/AfyGc4qj_normal.jpg" alt="Avinash Jain profile image"&gt;
      &lt;div class="ltag__twitter-tweet__full-name"&gt;
        Avinash Jain
      &lt;/div&gt;
      &lt;div class="ltag__twitter-tweet__username"&gt;
        &lt;a class="mentioned-user" href="https://dev.to/logicbomb_1"&gt;@logicbomb_1&lt;/a&gt;

      &lt;/div&gt;
      &lt;div class="ltag__twitter-tweet__twitter-logo"&gt;
        &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ir1kO05j--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev.to/assets/twitter-f95605061196010f91e64806688390eb1a4dbc9e913682e043eb8b1e06ca484f.svg" alt="twitter logo"&gt;
      &lt;/div&gt;
    &lt;/div&gt;
    &lt;div class="ltag__twitter-tweet__body"&gt;
      &lt;a href="https://twitter.com/_nancychauhan"&gt;@_nancychauhan&lt;/a&gt; - using google addon to block social networking sites during office hours.&lt;br&gt;- using &lt;a href="https://t.co/dMNmZ3oEEX"&gt;github.com/httpie/httpie&lt;/a&gt; instead of curl.&lt;br&gt;- play science games &lt;a href="https://t.co/1f7AiPGZtC"&gt;testtubegames.com&lt;/a&gt; when I get bored working. &lt;br&gt;- using Gmail multiple inbox feature to keep mails which I need to respond separate.
    &lt;/div&gt;
    &lt;div class="ltag__twitter-tweet__date"&gt;
      17:13 - 14 Apr 2021
    &lt;/div&gt;


    &lt;div class="ltag__twitter-tweet__actions"&gt;
      &lt;a href="https://twitter.com/intent/tweet?in_reply_to=1382381540408139776" class="ltag__twitter-tweet__actions__button"&gt;
        &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--fFnoeFxk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev.to/assets/twitter-reply-action-238fe0a37991706a6880ed13941c3efd6b371e4aefe288fe8e0db85250708bc4.svg" alt="Twitter reply action"&gt;
      &lt;/a&gt;
      &lt;a href="https://twitter.com/intent/retweet?tweet_id=1382381540408139776" class="ltag__twitter-tweet__actions__button"&gt;
        &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--k6dcrOn8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev.to/assets/twitter-retweet-action-632c83532a4e7de573c5c08dbb090ee18b348b13e2793175fea914827bc42046.svg" alt="Twitter retweet action"&gt;
      &lt;/a&gt;
      &lt;a href="https://twitter.com/intent/like?tweet_id=1382381540408139776" class="ltag__twitter-tweet__actions__button"&gt;
        &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--SRQc9lOp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev.to/assets/twitter-like-action-1ea89f4b87c7d37465b0eb78d51fcb7fe6c03a089805d7ea014ba71365be5171.svg" alt="Twitter like action"&gt;
      &lt;/a&gt;
    &lt;/div&gt;
  &lt;/div&gt;
&lt;/blockquote&gt;


&lt;p&gt;We can use various browser add-ons to block social networking sites during work hours and keep distractions away.&lt;/p&gt;

&lt;p&gt;In the end, I am thankful to all the developers for their valuable inputs.&lt;/p&gt;

&lt;p&gt;Originally published on &lt;a href="https://medium.com/@_nancychauhan/top-10-productive-hacks-for-software-developers-c0feb8ca8dab"&gt;Medium&lt;/a&gt;&lt;/p&gt;

</description>
      <category>productivity</category>
      <category>programming</category>
    </item>
    <item>
      <title>Building a Prometheus Exporter</title>
      <dc:creator>Nancy Chauhan</dc:creator>
      <pubDate>Mon, 03 May 2021 11:34:39 +0000</pubDate>
      <link>https://dev.to/ladiesindevops/building-a-prometheus-exporter-1cb9</link>
      <guid>https://dev.to/ladiesindevops/building-a-prometheus-exporter-1cb9</guid>
      <description>&lt;p&gt;&lt;a href="https://prometheus.io/docs/introduction/overview/"&gt;Prometheus&lt;/a&gt; is an open-source monitoring tool for collecting metrics from your application and infrastructure. As one of the foundations of the cloud-native environment, Prometheus has become the de-facto standard for visibility in the cloud-native landscape.&lt;/p&gt;

&lt;h1&gt;
  
  
  How Prometheus Works
&lt;/h1&gt;

&lt;p&gt;Prometheus is a &lt;a href="https://www.influxdata.com/time-series-database/"&gt;time-series database&lt;/a&gt; and a pull-based monitoring system. It periodically scrapes HTTP endpoints (targets) to retrieve metrics. It can monitor targets such as servers, databases, standalone virtual machines, etc.&lt;br&gt;
Prometheus reads metrics exposed by targets using a simple &lt;a href="https://prometheus.io/docs/instrumenting/exposition_formats/#text-based-format"&gt;text-based&lt;/a&gt; exposition format. There are client libraries that help your application expose metrics in the Prometheus format.&lt;/p&gt;
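To make the exposition format concrete, here is a minimal sketch of a scrape target written with only the Python standard library. The metric names and values are made up for illustration; a real application would use a Prometheus client library instead of a hand-written handler:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# Illustrative metrics in the Prometheus text exposition format.
METRICS = (
    "# HELP app_requests_total Total HTTP requests handled.\n"
    "# TYPE app_requests_total counter\n"
    'app_requests_total{method="get"} 1027\n'
    "# HELP app_memory_bytes Resident memory in bytes.\n"
    "# TYPE app_memory_bytes gauge\n"
    "app_memory_bytes 52428800\n"
)

class MetricsHandler(BaseHTTPRequestHandler):
    """Serves the metrics text at /metrics, as a scrape target would."""

    def do_GET(self):
        if self.path == "/metrics":
            body = METRICS.encode()
            self.send_response(200)
            self.send_header("Content-Type", "text/plain; version=0.0.4")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

# To expose the endpoint:
# HTTPServer(("localhost", 8000), MetricsHandler).serve_forever()
```

Each metric line is just `name{labels} value`, preceded by optional `# HELP` and `# TYPE` comments, which is what Prometheus parses on every scrape.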

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--K2vVJzzp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://miro.medium.com/max/1400/1%2AUhSRulXaVEDoQQRL4nPu6g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--K2vVJzzp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://miro.medium.com/max/1400/1%2AUhSRulXaVEDoQQRL4nPu6g.png" alt="How Prometheus Works?"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h1&gt;
  
  
  Prometheus Metrics
&lt;/h1&gt;

&lt;p&gt;While working with Prometheus it is important to know about Prometheus metrics. These are the four types of metrics that will help in instrumenting your application:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Counter (the only way is up): use counters for counting events, jobs, money, HTTP requests, etc., where a cumulative value is useful.&lt;/li&gt;
&lt;li&gt;Gauges (the current picture): use where the current value is important — CPU, RAM, JVM memory usage, queue levels, etc.&lt;/li&gt;
&lt;li&gt;Histograms (sampling observations): generally used for timings, where an overall picture over a time frame is required — query times, HTTP response times.&lt;/li&gt;
&lt;li&gt;Summaries (client-side quantiles): similar in spirit to the histogram, with the difference that quantiles are calculated on the client side. Use when you need quantile values frequently for one or more metrics.&lt;/li&gt;
&lt;/ul&gt;
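&lt;p&gt;To make the distinction concrete, here is a toy Python model of how counters, gauges, and histograms behave. This is an illustration of the semantics only, not the real &lt;code&gt;prometheus_client&lt;/code&gt; API:&lt;/p&gt;

```python
# Toy model of Prometheus metric semantics (illustration only,
# not the real prometheus_client API).
class Counter:
    """A counter only ever goes up."""
    def __init__(self):
        self.value = 0
    def inc(self, amount=1):
        if amount < 0:
            raise ValueError("counters can only increase")
        self.value += amount

class Gauge:
    """A gauge is a current value that can go up or down."""
    def __init__(self):
        self.value = 0
    def set(self, value):
        self.value = value

class Histogram:
    """A histogram samples observations into cumulative buckets."""
    def __init__(self, buckets=(0.1, 0.5, 1.0, float("inf"))):
        self.buckets = buckets
        self.counts = [0] * len(buckets)
        self.total = 0.0
    def observe(self, value):
        self.total += value
        for i, upper_bound in enumerate(self.buckets):
            if value <= upper_bound:
                self.counts[i] += 1  # buckets are cumulative ("le" semantics)

requests = Counter()
requests.inc()
latency = Histogram()
latency.observe(0.3)  # lands in the 0.5, 1.0 and +Inf buckets
```

A summary behaves like a histogram except that the quantiles are computed in the client process itself rather than from bucket counts at query time.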
&lt;h1&gt;
  
  
  Using Prometheus
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;Prometheus provides client libraries that you can use to add instrumentation to your applications.&lt;/li&gt;
&lt;li&gt;The client library exposes your metrics at a URL such as &lt;a href="http://localhost:8000/metrics"&gt;http://localhost:8000/metrics&lt;/a&gt;.
&lt;/li&gt;
&lt;li&gt;Configure the URL as one of the targets in Prometheus. Prometheus will then scrape the metrics at periodic intervals. You can use visualization tools such as Grafana to view your metrics, or configure alerts with Alertmanager via custom rules defined in configuration files.&lt;/li&gt;
&lt;/ul&gt;
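&lt;p&gt;For instance, a minimal &lt;code&gt;prometheus.yml&lt;/code&gt; scrape configuration might look like this (the job name and target address are placeholders for your own app):&lt;/p&gt;

```yaml
# Minimal sketch of a Prometheus scrape configuration.
# "my-app" and the target address are placeholders.
global:
  scrape_interval: 15s   # how often Prometheus scrapes targets

scrape_configs:
  - job_name: "my-app"
    static_configs:
      - targets: ["localhost:8000"]   # host:port serving /metrics
```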
&lt;h1&gt;
  
  
  Prometheus Exporters
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--rfWelX0B--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://miro.medium.com/max/1400/0%2A1OpRRb67QvRVg4nx" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--rfWelX0B--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://miro.medium.com/max/1400/0%2A1OpRRb67QvRVg4nx" alt="Exporter"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Prometheus has a huge ecosystem of &lt;a href="https://awesomeopensource.com/projects/prometheus-exporter"&gt;exporters&lt;/a&gt;. Prometheus exporters bridge the gap between Prometheus and applications that don’t export metrics in the Prometheus format. For example, Linux does not expose Prometheus-formatted metrics. That’s why Prometheus exporters, like &lt;a href="https://github.com/prometheus/node_exporter"&gt;the node exporter&lt;/a&gt;, exist.&lt;/p&gt;

&lt;p&gt;Some applications, like Spring Boot and Kubernetes, expose Prometheus metrics out of the box. For everything else, exporters consume metrics from an existing source and use a Prometheus client library to expose them in the Prometheus format.&lt;/p&gt;

&lt;p&gt;Prometheus exporters can be stateful or stateless. A stateful exporter gathers data itself and exports it using the standard metric types such as counters, gauges, etc. Stateless exporters translate metrics from another format into the Prometheus format using the counter, gauge, and other metric families. They do not maintain any local state; instead, they expose a view derived from another metric source such as JMX. For example, Jenkins Jobmon is a Prometheus exporter for Jenkins that calls the Jenkins API to fetch metrics on every scrape.&lt;/p&gt;


&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--i3JOwpme--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev.to/assets/github-logo-ba8488d21cd8ee1fee097b8410db9deaa41d0ca30b004c0c63de0a479114156f.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/grofers"&gt;
        grofers
      &lt;/a&gt; / &lt;a href="https://github.com/grofers/jenkins-jobmon"&gt;
        jenkins-jobmon
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      Prometheus exporter to monitor Jenkins jobs
    &lt;/h3&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="md"&gt;
&lt;h1&gt;
Jenkins Jobmon&lt;/h1&gt;
&lt;p&gt;&lt;a href="https://github.com/grofers/jenkins-jobmon/actions"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--cRWrnscq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://github.com/grofers/jenkins-jobmon/workflows/ci/badge.svg" alt="CI Actions Status"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Jenkins exporter for Prometheus in python.&lt;/p&gt;
&lt;p&gt;It uses &lt;a href="https://github.com/prometheus/client_python#custom-collectors"&gt;Prometheus custom collector API&lt;/a&gt;, which allows making custom
collectors by proxying metrics from other systems.&lt;/p&gt;
&lt;p&gt;Currently we fetch following metrics:&lt;/p&gt;
&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Metric&lt;/th&gt;
&lt;th&gt;Type&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;th&gt;Labels&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;jenkins_job_monitor_total_duration_seconds_sum&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Gauge&lt;/td&gt;
&lt;td&gt;Jenkins build total duration in millis&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;jobname&lt;/code&gt;, &lt;code&gt;group&lt;/code&gt;, &lt;code&gt;repository&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;jenkins_job_monitor_fail_count&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Gauge&lt;/td&gt;
&lt;td&gt;Jenkins build fail counts&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;jobname&lt;/code&gt;, &lt;code&gt;group&lt;/code&gt;, &lt;code&gt;repository&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;jenkins_job_monitor_total_count&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Gauge&lt;/td&gt;
&lt;td&gt;Jenkins build total counts&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;jobname&lt;/code&gt;, &lt;code&gt;group&lt;/code&gt;, &lt;code&gt;repository&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;jenkins_job_monitor_pass_count&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Gauge&lt;/td&gt;
&lt;td&gt;Jenkins build pass counts&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;jobname&lt;/code&gt;, &lt;code&gt;group&lt;/code&gt;, &lt;code&gt;repository&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;jenkins_job_monitor_pending_count&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Gauge&lt;/td&gt;
&lt;td&gt;Jenkins build pending counts&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;jobname&lt;/code&gt;, &lt;code&gt;group&lt;/code&gt;, &lt;code&gt;repository&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;jenkins_job_monitor_stage_duration&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Gauge&lt;/td&gt;
&lt;td&gt;Jenkins build stage duration in ms&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;jobname&lt;/code&gt;, &lt;code&gt;group&lt;/code&gt;, &lt;code&gt;repository&lt;/code&gt;, &lt;code&gt;stagename&lt;/code&gt;, &lt;code&gt;build&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;jenkins_job_monitor_stage_pass_count&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Counter&lt;/td&gt;
&lt;td&gt;Jenkins build stage pass count&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;jobname&lt;/code&gt;, &lt;code&gt;group&lt;/code&gt;, &lt;code&gt;repository&lt;/code&gt;, &lt;code&gt;stagename&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;jenkins_job_monitor_stage_fail_count&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Counter&lt;/td&gt;
&lt;td&gt;Jenkins build stage fail count&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;jobname&lt;/code&gt;, &lt;code&gt;group&lt;/code&gt;, &lt;code&gt;repository&lt;/code&gt;, &lt;code&gt;stagename&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
&lt;h2&gt;
Usage&lt;/h2&gt;
&lt;h3&gt;
Configuration&lt;/h3&gt;
&lt;p&gt;Create a file &lt;code&gt;config.yml&lt;/code&gt; using this template:&lt;/p&gt;
&lt;div class="highlight highlight-source-yaml position-relative js-code-highlight"&gt;
&lt;pre&gt;&lt;span class="pl-ent"&gt;jobs&lt;/span&gt;
  &lt;span class="pl-ent"&gt;example&lt;/span&gt;:          &lt;/pre&gt;…
&lt;/div&gt;
&lt;/div&gt;
  &lt;/div&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/grofers/jenkins-jobmon"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;/div&gt;


&lt;h1&gt;
  
  
  Let’s build a generic HTTP server metrics exporter!
&lt;/h1&gt;

&lt;p&gt;We will build a Prometheus exporter for monitoring HTTP servers from logs. It extracts data from HTTP logs and exports it to Prometheus. We will be using a &lt;a href="https://github.com/prometheus/client_python"&gt;python client library&lt;/a&gt;, &lt;code&gt;prometheus_client&lt;/code&gt;, to define and expose metrics via an HTTP endpoint.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--RBqc8oLO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://miro.medium.com/max/1400/1%2AtnVyecPLcTgwQY0LbChBxw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--RBqc8oLO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://miro.medium.com/max/1400/1%2AtnVyecPLcTgwQY0LbChBxw.png" alt="One of the metrics from httpd_exporter"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Our HTTP exporter will continuously follow server logs to extract useful information such as HTTP requests, status codes, bytes transferred, and request timing information. HTTP logs are structured and standardized across different servers such as Apache, Nginx, etc. You can read more about them &lt;a href="https://publib.boulder.ibm.com/tividd/td/ITWSA/ITWSA_info45/en_US/HTML/guide/c-logs.html"&gt;here&lt;/a&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;127.0.0.1 user-identifier frank [10/Oct/2000:13:55:36 -0700] "GET /apache_pb.gif HTTP/1.0" 200 2326
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ul&gt;
&lt;li&gt;We will use a counter metric to store the HTTP requests using status code as a label.&lt;/li&gt;
&lt;li&gt;We will use a counter metric to store bytes transferred.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Here is the script, which collects data from Apache logs indefinitely and exposes metrics to Prometheus:&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
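&lt;p&gt;If the embedded gist does not render for you, here is a minimal sketch of the core logic. Plain dicts stand in for &lt;code&gt;prometheus_client&lt;/code&gt; counters, and the regular expression is an illustrative take on the Common Log Format, not the exact one from the repository:&lt;/p&gt;

```python
import re
import time

# Illustrative Common Log Format parser; the real httpd_exporter uses
# prometheus_client Counters instead of these plain dicts.
LOG_PATTERN = re.compile(r'"\S+ [^"]*" (?P<status>\d{3}) (?P<bytes>\d+|-)')

def gather_metrics(line, requests_total, bytes_total):
    """Parse one access-log line and update the counters."""
    match = LOG_PATTERN.search(line)
    if not match:
        return
    status = match.group("status")  # label the request counter by status code
    requests_total[status] = requests_total.get(status, 0) + 1
    if match.group("bytes") != "-":
        bytes_total["total"] = bytes_total.get("total", 0) + int(match.group("bytes"))

def follow_log(path):
    """Yield lines appended to the log file, like `tail -f`."""
    with open(path) as log:
        log.seek(0, 2)  # jump to the end of the file
        while True:
            line = log.readline()
            if not line:
                time.sleep(0.5)
                continue
            yield line

requests_total, bytes_total = {}, {}
sample = ('127.0.0.1 user-identifier frank [10/Oct/2000:13:55:36 -0700] '
          '"GET /apache_pb.gif HTTP/1.0" 200 2326')
gather_metrics(sample, requests_total, bytes_total)
```

In the full exporter, `follow_log` feeds each new line into `gather_metrics`, and the counters are served to Prometheus over HTTP.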



&lt;p&gt;The &lt;code&gt;follow_log&lt;/code&gt; function indefinitely tails the Apache logs stored at &lt;code&gt;/var/log/apache&lt;/code&gt; on your system. &lt;code&gt;gather_metrics()&lt;/code&gt; uses a regular expression to extract useful information from the logs, such as &lt;code&gt;status_code&lt;/code&gt; and &lt;code&gt;total_bytes_sent&lt;/code&gt;, and increments the counters accordingly.&lt;/p&gt;

&lt;p&gt;If you run the script, it will start a server at &lt;a href="http://localhost:8000"&gt;http://localhost:8000&lt;/a&gt;, where the collected metrics will show up. Set up &lt;a href="https://github.com/Nancy-Chauhan/httpd_exporter/blob/master/prometheus/prometheus.yml"&gt;Prometheus&lt;/a&gt; to scrape the endpoint; over time, Prometheus will build the time series for the collected metrics. Set up &lt;a href="https://github.com/Nancy-Chauhan/httpd_exporter/blob/master/docker-compose.yml"&gt;Grafana&lt;/a&gt; to visualize the data in Prometheus.&lt;/p&gt;

&lt;p&gt;You can find the code here and run the exporter:&lt;/p&gt;


&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--i3JOwpme--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev.to/assets/github-logo-ba8488d21cd8ee1fee097b8410db9deaa41d0ca30b004c0c63de0a479114156f.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/Nancy-Chauhan"&gt;
        Nancy-Chauhan
      &lt;/a&gt; / &lt;a href="https://github.com/Nancy-Chauhan/httpd_exporter"&gt;
        httpd_exporter
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      Prometheus exporter for monitoring apache
    &lt;/h3&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="md"&gt;
&lt;h1&gt;
httpd_exporter&lt;/h1&gt;
&lt;p&gt;Prometheus exporter for monitoring http servers from logs.&lt;/p&gt;
&lt;p&gt;It extracts data from http logs and export to prometheus.&lt;/p&gt;
&lt;h3&gt;
Requirements&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;python 3.6 +&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
Usage&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Clone the repo&lt;/li&gt;
&lt;li&gt;Run &lt;code&gt;docker-compose up&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
Grafana Dashboard&lt;/h2&gt;
&lt;p&gt;&lt;a rel="noopener noreferrer" href="https://raw.githubusercontent.com/Nancy-Chauhan/httpd_exporter/master/docs/grafana_dashboard.png"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--W-VP-KO2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://raw.githubusercontent.com/Nancy-Chauhan/httpd_exporter/master/docs/grafana_dashboard.png" alt="Grafana Dashboard"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/div&gt;



&lt;/div&gt;
&lt;br&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/Nancy-Chauhan/httpd_exporter"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;br&gt;
&lt;/div&gt;
&lt;br&gt;


&lt;p&gt;Originally Posted at &lt;a href="https://medium.com/@_nancychauhan/building-a-prometheus-exporter-8a4bbc3825f5"&gt;https://medium.com/@_nancychauhan/building-a-prometheus-exporter-8a4bbc3825f5&lt;/a&gt; &lt;/p&gt;

</description>
      <category>devops</category>
      <category>monitoring</category>
    </item>
    <item>
      <title>Introduction to Message Queue: Build a newsletter app using Django, Celery, and RabbitMQ in 30 min
</title>
      <dc:creator>Nancy Chauhan</dc:creator>
      <pubDate>Wed, 28 Apr 2021 06:57:19 +0000</pubDate>
      <link>https://dev.to/_nancychauhan/introduction-to-message-queue-build-a-newsletter-app-using-django-celery-and-rabbitmq-in-30-min-60p</link>
      <guid>https://dev.to/_nancychauhan/introduction-to-message-queue-build-a-newsletter-app-using-django-celery-and-rabbitmq-in-30-min-60p</guid>
<description>&lt;p&gt;Message queues are widely used in asynchronous systems. In a data-intensive application, using queues ensures users have a fast experience while complicated tasks are still being completed. For instance, you can show a progress bar in your UI while a task completes in the background. This frees users from waiting for the task to finish, so they can do other things in the meantime.&lt;/p&gt;

&lt;p&gt;A typical request-response architecture doesn’t cut it where response time is unpredictable because many long-running requests are coming in. If you expect the volume of such requests to grow large, a queue can be very beneficial.&lt;/p&gt;

&lt;p&gt;Message queues provide useful features such as persistence, routing, and task management. They are essentially ‘brokers’ that facilitate message passing by providing an interface that other services can access. This interface connects producers, who create messages, and consumers, who then process them.&lt;/p&gt;

&lt;p&gt;We will build a newsletter app, where users can subscribe to various newsletters and receive the issues regularly in their email. But before we proceed, let’s understand how workers and message queues work together.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmiro.medium.com%2Fmax%2F1044%2F0%2AT2hl9WirMLj8Hv2u" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmiro.medium.com%2Fmax%2F1044%2F0%2AT2hl9WirMLj8Hv2u" alt="Message Queue"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Workers &amp;amp; Message Queues
&lt;/h1&gt;

&lt;p&gt;Workers are “background task servers”. While your web server is responding to user requests, the worker servers can process tasks in the background. These workers can be used for sending emails, making large changes in the database, processing files, etc.&lt;/p&gt;

&lt;p&gt;Workers are assigned tasks via a message queue. For instance, consider a queue storing a lot of messages. It will be processed in a first-in, first-out (FIFO) fashion. When a worker becomes available, it takes the first task from the front of the queue and begins processing. If we have many workers, each one takes a task in order. The queue ensures that each worker only gets one task at a time and that each task is only being processed by one worker.&lt;/p&gt;
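&lt;p&gt;The queue-plus-workers model above can be sketched in a few lines of Python using only the standard library (a toy stand-in for a real broker such as RabbitMQ):&lt;/p&gt;

```python
import queue
import threading

# Toy worker/message-queue model: a FIFO queue feeds tasks to workers,
# and each task is processed by exactly one worker.
tasks = queue.Queue()
results = []
results_lock = threading.Lock()

def worker():
    while True:
        task = tasks.get()          # blocks until a task is available
        if task is None:            # sentinel: shut this worker down
            tasks.task_done()
            return
        with results_lock:
            results.append(task)    # "process" the task (e.g. send an email)
        tasks.task_done()

workers = [threading.Thread(target=worker) for _ in range(3)]
for w in workers:
    w.start()

for i in range(10):
    tasks.put(f"email-{i}")         # the producer publishes tasks
for _ in workers:
    tasks.put(None)                 # one shutdown sentinel per worker

tasks.join()                        # wait until every task is processed
for w in workers:
    w.join()
```

Celery and RabbitMQ play the same roles across processes and machines: the broker holds the queue, and each worker process takes one task at a time.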

&lt;p&gt;We will use Celery, a task-queue implementation for Python web applications, to asynchronously execute work outside the HTTP request-response cycle. We will also use RabbitMQ, the most widely deployed open-source message broker. It supports multiple messaging protocols.&lt;/p&gt;

&lt;h1&gt;
  
  
  Build Newsletter App
&lt;/h1&gt;

&lt;p&gt;We will build a newsletter app where a user can subscribe to various newsletters simultaneously and will receive the issues over their emails regularly.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmiro.medium.com%2Fmax%2F1232%2F1%2AHTLkTq7wcYkQuxpdGORYVw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmiro.medium.com%2Fmax%2F1232%2F1%2AHTLkTq7wcYkQuxpdGORYVw.png" alt="Product"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We will have our newsletter app running as a Django app with Celery. Whenever an author publishes a new issue, the Django app will publish a message to email the issue to the subscribers using Celery. Celery workers will receive the task from the broker and start sending emails.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmiro.medium.com%2Fmax%2F1260%2F1%2AUrWTsL6WirrwbLHmxxPURg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmiro.medium.com%2Fmax%2F1260%2F1%2AUrWTsL6WirrwbLHmxxPURg.png" alt="Infra"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Requirements
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Python 3+ version&lt;/li&gt;
&lt;li&gt;Pipenv&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Setup Django
&lt;/h2&gt;

&lt;p&gt;Create a folder &lt;code&gt;newsletter&lt;/code&gt; locally and install Django in a virtual environment. Inside the folder, run:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

pipenv shell
pipenv install django


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Create an app:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;django-admin startproject newsletter_site .&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Set up the database: &lt;/p&gt;

&lt;p&gt;&lt;code&gt;python manage.py migrate&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Run the server and visit &lt;a href="http://127.0.0.1:8000/" rel="noopener noreferrer"&gt;http://127.0.0.1:8000/&lt;/a&gt; to make sure it works:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;python manage.py runserver 8000&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Create the newsletter app:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;python manage.py startapp newsletter&lt;/code&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Installation
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Install Celery.&lt;/li&gt;
&lt;li&gt;Install python-dotenv for reading settings from the environment.&lt;/li&gt;
&lt;li&gt;Install psycopg2-binary for connecting to Postgres.&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

pipenv install celery
pipenv install python-dotenv
pipenv install psycopg2-binary


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  Setup Postgres and RabbitMQ
&lt;/h2&gt;

&lt;p&gt;Create a &lt;code&gt;docker-compose.yaml&lt;/code&gt; file to run Postgres and RabbitMQ in the background.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

version: '3'
services:
  db:
    image: postgres:13
    env_file:
      - .env
    ports:
      - 5432:5432
  rabbitmq:
    image: rabbitmq
    ports:
      - 5672:5672


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  Configuring settings.py
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;To include the app in our project, we need to add a reference to its configuration class in the INSTALLED_APPS setting in &lt;code&gt;newsletter_site/settings.py&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

INSTALLED_APPS = [
    'newsletter.apps.NewsletterConfig',
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.messages',
    'django.contrib.staticfiles',
]


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;We need to tell Celery how to find RabbitMQ. So, open &lt;code&gt;settings.py&lt;/code&gt; and add this line:&lt;br&gt;
&lt;code&gt;CELERY_BROKER_URL = os.getenv('CELERY_BROKER_URL')&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;We need to configure database settings: &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

import os
from urllib.parse import urlparse

uri = os.getenv('DATABASE_URL')

result = urlparse(uri)

database = result.path[1:]
user = result.username
password = result.password
host = result.hostname
port = result.port

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': database,
        'USER': user,
        'PASSWORD': password,
        'HOST': host,
        'PORT': port,
    }
}


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ul&gt;
&lt;li&gt;We need to configure the SMTP server in settings.py. The SMTP server is the mail server responsible for delivering emails to the users. For development, you may use Gmail's SMTP server, but it has sending limits and will not work if you have 2FA enabled; you can refer to this &lt;a href="https://dev.to/abderrahmanemustapha/how-to-send-email-with-django-and-gmail-in-production-the-right-way-24ab"&gt;article&lt;/a&gt;. For production, you can use a commercial service such as SendGrid.&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

EMAIL_BACKEND = 'django.core.mail.backends.smtp.EmailBackend'
EMAIL_HOST = os.getenv('EMAIL_HOST')
EMAIL_USE_TLS = os.getenv('EMAIL_USE_TLS') == 'True'  # bool() would treat any non-empty string, even 'False', as True
EMAIL_PORT = os.getenv('EMAIL_PORT')
EMAIL_HOST_USER = os.getenv('EMAIL_HOST_USER')
EMAIL_HOST_PASSWORD = os.getenv('EMAIL_HOST_PASSWORD')


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;For your reference, you can see the settings.py &lt;a href="https://github.com/Nancy-Chauhan/newsletter/blob/main/newsletter_site/settings.py" rel="noopener noreferrer"&gt;here&lt;/a&gt;. &lt;/p&gt;

&lt;h2&gt;
  
  
  Create &lt;code&gt;.env&lt;/code&gt; file
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Create a &lt;code&gt;.env&lt;/code&gt; file and assign the secrets:&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
EMAIL_USE_TLS=True
EMAIL_PORT={EMAIL_PORT}
EMAIL_HOST_USER={EMAIL_HOST_USER}
EMAIL_HOST_PASSWORD={EMAIL_HOST_PASSWORD}
CELERY_BROKER_URL="pyamqp://localhost:5672"
SECRET_KEY={SECRET_KEY}
DATABASE_URL=postgres://postgres:password@localhost:5432/postgres
POSTGRES_PASSWORD=password
APP_DOMAIN=*
DEBUG=True
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Celery
&lt;/h2&gt;

&lt;p&gt;We need to set up Celery with some config options. Create a new file called &lt;code&gt;celery.py&lt;/code&gt; inside the &lt;code&gt;newsletter_site&lt;/code&gt; directory:&lt;/p&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
import os

from celery import Celery

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'newsletter_site.settings')

app = Celery('newsletter_site')

app.config_from_object('django.conf:settings', namespace='CELERY')

app.autodiscover_tasks()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Design and Implement Models &amp;amp; Configure Admin
&lt;/h2&gt;

&lt;p&gt;This is the schema we are trying to build. The schema is implemented &lt;a href="https://github.com/Nancy-Chauhan/newsletter/blob/main/newsletter/models.py"&gt;here&lt;/a&gt;. Create a &lt;code&gt;newsletter/models.py&lt;/code&gt; with the same content.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://miro.medium.com/max/1400/1*1SYrd7LgJ-nonMft0TBJ4g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://miro.medium.com/max/1400/1*1SYrd7LgJ-nonMft0TBJ4g.png" alt="Schema design"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We need a UI to manage the newsletter. We will be using Django Admin for this purpose. Create a &lt;code&gt;newsletter/admin.py&lt;/code&gt; with the contents of this file.&lt;/p&gt;

&lt;p&gt;Register the URL for admin in &lt;code&gt;newsletter_site/urls.py&lt;/code&gt;:&lt;/p&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
from django.contrib import admin
from django.urls import path

urlpatterns = [
    path('admin/', admin.site.urls),
]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h1&gt;
  
  
  Run the app
&lt;/h1&gt;

&lt;p&gt;Run docker-compose to start the dependencies:&lt;br&gt;
&lt;code&gt;docker-compose up&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Generate migrations for our models:&lt;br&gt;
&lt;code&gt;python manage.py makemigrations&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;To apply the generated migrations to the database, run:&lt;br&gt;
&lt;code&gt;python manage.py migrate&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;To create a user for login, run the following command and provide your details:&lt;br&gt;
&lt;code&gt;python manage.py createsuperuser&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Run the following command to run the app, then open &lt;a href="http://127.0.0.1:8000/admin" rel="noopener noreferrer"&gt;http://127.0.0.1:8000/admin&lt;/a&gt; to open Django Admin:&lt;br&gt;
&lt;code&gt;python manage.py runserver&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://miro.medium.com/max/3836/1*H4Vbp888k0K-4S5cImov6Q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://miro.medium.com/max/3836/1*H4Vbp888k0K-4S5cImov6Q.png" alt="Django Admin"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Run Celery:&lt;br&gt;
&lt;code&gt;celery -A newsletter_site worker --loglevel=INFO&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Add a newsletter and a subscriber, and subscribe them to it. Create an issue and send it. If everything is fine, you will see the issue arrive in your email.&lt;/p&gt;

&lt;h1&gt;
  
  
  How does it work?
&lt;/h1&gt;

&lt;p&gt;When we click send, the following action gets executed:&lt;/p&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
def send(modeladmin, request, queryset):
    for issue in queryset:
        tasks.send_issue.delay(issue.id)

send.short_description = "send"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This code queues up a new task to send an issue using Celery. It publishes the task to RabbitMQ.&lt;/p&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
@shared_task()
def send_issue(issue_id):
    issue = Issue.objects.get(pk=issue_id)
    for subscription in Subscription.objects.filter(newsletter=issue.newsletter):
        send_email.delay(subscription.subscriber.email, issue.title, issue.content)

@shared_task()
def send_email(email, title, content):
    send_mail(
        title,
        content,
        'newsletters@nancychauhan.in',
        [email],
        fail_silently=False,
    )
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The Celery worker runs these tasks. When the producer publishes a task, the worker runs the corresponding task. When we publish the send_issue task, we determine the subscribers of the newsletter and publish sub-tasks to send the actual emails. This strategy is called fan-out; it is useful because it allows us to retry sending an email to a single user in case of a failure.&lt;/p&gt;

&lt;h1&gt;
  
  
  Conclusion
&lt;/h1&gt;

&lt;p&gt;In this post, we saw how to use RabbitMQ as a message queue with Celery and Django to send bulk emails. This is a good fit for message queues: use one when response time is unpredictable or the process is long-running and resource-intensive.&lt;/p&gt;

&lt;p&gt;You can find the finished project &lt;a href="https://github.com/Nancy-Chauhan/newsletter"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Originally published &lt;a href="https://medium.com/@_nancychauhan/introduction-to-message-queue-build-a-newsletter-app-using-django-celery-and-rabbitmq-in-30-min-6d484162391d"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Thank you for reading! Share your feedback in the comment box.&lt;/p&gt;

</description>
      <category>django</category>
      <category>messagequeue</category>
      <category>python</category>
    </item>
    <item>
      <title>Faster Builds with Docker Caching</title>
      <dc:creator>Nancy Chauhan</dc:creator>
      <pubDate>Wed, 20 May 2020 06:55:48 +0000</pubDate>
      <link>https://dev.to/_nancychauhan/faster-builds-with-docker-caching-1bc7</link>
      <guid>https://dev.to/_nancychauhan/faster-builds-with-docker-caching-1bc7</guid>
<description>&lt;p&gt;Recently I was working on speeding up a CI build that took around &lt;code&gt;50 min&lt;/code&gt;. This post is about speeding up builds with Docker caching and BuildKit.&lt;/p&gt;

&lt;p&gt;It is a small concept, but it is important to know, as it helped me reduce build times from &lt;code&gt;50 min&lt;/code&gt; to &lt;code&gt;15 min&lt;/code&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Docker Caching
&lt;/h3&gt;

&lt;p&gt;Docker caches each layer as an image is built, and each layer is only rebuilt if it or a layer preceding it has changed since the last build. So you can significantly speed up builds with the Docker cache.&lt;/p&gt;

&lt;p&gt;A better approach is to use multiple COPY instructions: first copy just the dependency manifests and install the packages, and only then copy your code.&lt;/p&gt;

&lt;p&gt;To avoid invalidating the cache:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Start your Dockerfile with commands that are less likely to change&lt;/li&gt;
&lt;li&gt;Place commands that are more likely to change (like COPY . .) as late as possible&lt;/li&gt;
&lt;li&gt;Add only the necessary files (use a .dockerignore file)&lt;/li&gt;
&lt;/ol&gt;
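&lt;p&gt;For point 3, a typical &lt;code&gt;.dockerignore&lt;/code&gt; for a Node.js project might look like this (the entries are illustrative):&lt;/p&gt;

```
# Keep the build context small so COPY . . invalidates the cache less often
node_modules
npm-debug.log
.git
Dockerfile
.dockerignore
```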

&lt;p&gt;Example : &lt;/p&gt;

&lt;p&gt;The following Dockerfile is for a simple Node.js project. It works, but it does not make use of the cache:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM mhart/alpine-node

WORKDIR /src
# Copy your code in the docker image
COPY . /src
# Install your project dependencies
RUN npm install
# Expose the port 3000
EXPOSE 3000
# Set the default command to run when a container starts
CMD ["npm", "start"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Optimized Dockerfile:&lt;/p&gt;

&lt;p&gt;Here &lt;code&gt;step 7&lt;/code&gt; (the final COPY) is the only step that is rebuilt when we change our source code; the preceding steps come from the cache.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM mhart/alpine-node:5.6.0
WORKDIR /src
# Expose the port 3000
EXPOSE 3000
# Set the default command to run when a container starts
CMD ["npm", "start"]
# Install app dependencies
COPY package.json /src
RUN npm install
# Copy your code in the docker image
COPY . /src
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  BuildKit
&lt;/h3&gt;

&lt;p&gt;If you're using Docker version &amp;gt;= 19.03, you can use BuildKit, a container image builder. Without BuildKit, if an image doesn't exist in your local image registry, you need to pull the remote image before building in order to take advantage of Docker layer caching.&lt;/p&gt;

&lt;p&gt;Example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ docker pull mjhea0/docker-ci-cache:latest

$ docker build --tag mjhea0/docker-ci-cache:latest .
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With BuildKit, you don't need to pull the remote images before building, since it caches each build layer in your image registry. When you build the image, each layer is downloaded as needed during the build.&lt;/p&gt;

&lt;p&gt;To enable BuildKit, set the &lt;code&gt;DOCKER_BUILDKIT&lt;/code&gt; environment variable to &lt;code&gt;1&lt;/code&gt;. Then, to turn on inline layer caching, use the &lt;code&gt;BUILDKIT_INLINE_CACHE&lt;/code&gt; build argument.&lt;/p&gt;

&lt;p&gt;Example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;export DOCKER_BUILDKIT=1

# Build and cache image
$ docker build --tag mjhea0/docker-ci-cache:latest --build-arg BUILDKIT_INLINE_CACHE=1 .

# Build an image from remote cache
$ docker build --cache-from mjhea0/docker-ci-cache:latest .
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Originally &lt;a href="//todayilearnt.xyz/posts/nancy/faster_builds_with_docker_caching"&gt;Posted here&lt;/a&gt;!&lt;/p&gt;

</description>
      <category>docker</category>
      <category>caching</category>
      <category>buildkit</category>
    </item>
    <item>
      <title>DNS Resolution</title>
      <dc:creator>Nancy Chauhan</dc:creator>
      <pubDate>Thu, 07 May 2020 04:18:27 +0000</pubDate>
      <link>https://dev.to/_nancychauhan/dns-resolution-3pbg</link>
      <guid>https://dev.to/_nancychauhan/dns-resolution-3pbg</guid>
<description>&lt;p&gt;Recently I was working with DNS and thought I'd write up my notes here!&lt;/p&gt;

&lt;p&gt;Computers work with numbers. They talk to each other using numeric addresses called IP addresses. Though structured and thus great for computers, these addresses are tough for humans to remember.&lt;/p&gt;

&lt;p&gt;DNS acts as the phonebook of the internet 🌐. It converts a web address such as "example.com" to an IP address, which computers use to connect. As a result, we don't have to remember complicated IP addresses 🤩.&lt;/p&gt;

&lt;p&gt;Suppose we are trying to open example.com in a browser. A typical DNS lookup goes like this:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The browser first looks up "example.com" in its DNS cache. If it is present, the browser uses the cached IP address and connects to "example.com". If not, then the browser goes to the next step.&lt;/li&gt;
&lt;li&gt;The browser issues a &lt;code&gt;gethostbyname(3)&lt;/code&gt; call and passes the responsibility of name resolution to the operating system (OS). The OS now becomes the resolver.&lt;/li&gt;
&lt;li&gt;The OS looks for the domain name in the system DNS cache. If found, it returns the IP address to the browser; else the OS goes to the next step.&lt;/li&gt;
&lt;li&gt;The OS looks into &lt;code&gt;/etc/hosts&lt;/code&gt;, known as the hosts file. The hosts file is a method of maintaining hostname to IP address mappings from the ARPANET days. If an entry exists, the OS returns the IP address; else it goes to the next step.&lt;/li&gt;
&lt;li&gt;The OS tries to connect to your configured DNS servers and sends a DNS query for "example.com". You can set your DNS servers manually, or your connected network can configure them for you. The DNS server now becomes the resolver and has to return a response to the OS of the machine that sent the DNS query.&lt;/li&gt;
&lt;li&gt;The DNS server (resolver) looks in its DNS cache for the hostname. If it finds an entry, it returns it to the calling machine; else it goes to the next step.&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The DNS server tries to connect to a root nameserver (.). You can run &lt;code&gt;dig -t NS .&lt;/code&gt; to list the root nameservers your DNS server can use. At present, there are 13 root nameservers, named with the letters "a" to "m", such as &lt;code&gt;a.root-servers.net.&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;➜ dig -t NS .

; &amp;lt;&amp;lt;&amp;gt;&amp;gt; DiG 9.10.6 &amp;lt;&amp;lt;&amp;gt;&amp;gt; -t NS .
;; global options: +cmd
;; Got answer:
;; -&amp;gt;&amp;gt;HEADER&amp;lt;&amp;lt;- opcode: QUERY, status: NOERROR, id: 45206
;; flags: qr rd ra; QUERY: 1, ANSWER: 13, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;.              IN  NS

;; ANSWER SECTION:
.           48  IN  NS  a.root-servers.net.
.           48  IN  NS  d.root-servers.net.
.           48  IN  NS  k.root-servers.net.
.           48  IN  NS  g.root-servers.net.
.           48  IN  NS  j.root-servers.net.
.           48  IN  NS  c.root-servers.net.
.           48  IN  NS  b.root-servers.net.
.           48  IN  NS  m.root-servers.net.
.           48  IN  NS  f.root-servers.net.
.           48  IN  NS  h.root-servers.net.
.           48  IN  NS  l.root-servers.net.
.           48  IN  NS  e.root-servers.net.
.           48  IN  NS  i.root-servers.net.

;; Query time: 80 msec
;; SERVER: 10.254.254.210#53(10.254.254.210)
;; WHEN: Wed May 06 22:51:43 IST 2020
;; MSG SIZE  rcvd: 239

&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Now the DNS server asks one of the above root nameservers for the nameservers of the ".com" TLD.&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;➜ dig @d.root-servers.net. -t NS com.

; &amp;lt;&amp;lt;&amp;gt;&amp;gt; DiG 9.10.6 &amp;lt;&amp;lt;&amp;gt;&amp;gt; @d.root-servers.net. -t NS com.
; (1 server found)
;; global options: +cmd
;; Got answer:
;; -&amp;gt;&amp;gt;HEADER&amp;lt;&amp;lt;- opcode: QUERY, status: NOERROR, id: 106
;; flags: qr rd; QUERY: 1, ANSWER: 0, AUTHORITY: 13, ADDITIONAL: 27
;; WARNING: recursion requested but not available

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 1450
;; QUESTION SECTION:
;com.               IN  NS

;; AUTHORITY SECTION:
com.            172800  IN  NS  a.gtld-servers.net.
com.            172800  IN  NS  b.gtld-servers.net.
com.            172800  IN  NS  c.gtld-servers.net.
com.            172800  IN  NS  d.gtld-servers.net.
com.            172800  IN  NS  e.gtld-servers.net.
com.            172800  IN  NS  f.gtld-servers.net.
com.            172800  IN  NS  g.gtld-servers.net.
com.            172800  IN  NS  h.gtld-servers.net.
com.            172800  IN  NS  i.gtld-servers.net.
com.            172800  IN  NS  j.gtld-servers.net.
com.            172800  IN  NS  k.gtld-servers.net.
com.            172800  IN  NS  l.gtld-servers.net.
com.            172800  IN  NS  m.gtld-servers.net.

;; ADDITIONAL SECTION:
a.gtld-servers.net. 172800  IN  A   192.5.6.30
b.gtld-servers.net. 172800  IN  A   192.33.14.30
c.gtld-servers.net. 172800  IN  A   192.26.92.30
d.gtld-servers.net. 172800  IN  A   192.31.80.30
e.gtld-servers.net. 172800  IN  A   192.12.94.30
f.gtld-servers.net. 172800  IN  A   192.35.51.30
g.gtld-servers.net. 172800  IN  A   192.42.93.30
h.gtld-servers.net. 172800  IN  A   192.54.112.30
i.gtld-servers.net. 172800  IN  A   192.43.172.30
j.gtld-servers.net. 172800  IN  A   192.48.79.30
k.gtld-servers.net. 172800  IN  A   192.52.178.30
l.gtld-servers.net. 172800  IN  A   192.41.162.30
m.gtld-servers.net. 172800  IN  A   192.55.83.30
a.gtld-servers.net. 172800  IN  AAAA    2001:503:a83e::2:30
b.gtld-servers.net. 172800  IN  AAAA    2001:503:231d::2:30
c.gtld-servers.net. 172800  IN  AAAA    2001:503:83eb::30
d.gtld-servers.net. 172800  IN  AAAA    2001:500:856e::30
e.gtld-servers.net. 172800  IN  AAAA    2001:502:1ca1::30
f.gtld-servers.net. 172800  IN  AAAA    2001:503:d414::30
g.gtld-servers.net. 172800  IN  AAAA    2001:503:eea3::30
h.gtld-servers.net. 172800  IN  AAAA    2001:502:8cc::30
i.gtld-servers.net. 172800  IN  AAAA    2001:503:39c1::30
j.gtld-servers.net. 172800  IN  AAAA    2001:502:7094::30
k.gtld-servers.net. 172800  IN  AAAA    2001:503:d2d::30
l.gtld-servers.net. 172800  IN  AAAA    2001:500:d937::30
m.gtld-servers.net. 172800  IN  AAAA    2001:501:b1f9::30

;; Query time: 259 msec
;; SERVER: 199.7.91.13#53(199.7.91.13)
;; WHEN: Wed May 06 22:54:16 IST 2020
;; MSG SIZE  rcvd: 828

&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The DNS server then asks one of the above gTLD nameservers for the authoritative nameservers for the domain &lt;code&gt;example.com&lt;/code&gt;. This set of nameservers hosts the addresses of the domain as well as any subdomains it may have.&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;➜ dig @a.gtld-servers.net. -t NS example.com

; &amp;lt;&amp;lt;&amp;gt;&amp;gt; DiG 9.10.6 &amp;lt;&amp;lt;&amp;gt;&amp;gt; @a.gtld-servers.net. -t NS example.com
; (1 server found)
;; global options: +cmd
;; Got answer:
;; -&amp;gt;&amp;gt;HEADER&amp;lt;&amp;lt;- opcode: QUERY, status: NOERROR, id: 1127
;; flags: qr rd; QUERY: 1, ANSWER: 0, AUTHORITY: 2, ADDITIONAL: 1
;; WARNING: recursion requested but not available

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;example.com.           IN  NS

;; AUTHORITY SECTION:
example.com.        172800  IN  NS  a.iana-servers.net.
example.com.        172800  IN  NS  b.iana-servers.net.

;; Query time: 66 msec
;; SERVER: 192.5.6.30#53(192.5.6.30)
;; WHEN: Wed May 06 22:55:10 IST 2020
;; MSG SIZE  rcvd: 88
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Finally, the DNS server asks the authoritative nameservers for the IP address of the domain and returns the result to the system that sent it the DNS query.&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;➜ dig @a.iana-servers.net. -t A example.com

; &amp;lt;&amp;lt;&amp;gt;&amp;gt; DiG 9.10.6 &amp;lt;&amp;lt;&amp;gt;&amp;gt; @a.iana-servers.net. -t A example.com
; (1 server found)
;; global options: +cmd
;; Got answer:
;; -&amp;gt;&amp;gt;HEADER&amp;lt;&amp;lt;- opcode: QUERY, status: NOERROR, id: 5682
;; flags: qr aa rd; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
;; WARNING: recursion requested but not available

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;example.com.           IN  A

;; ANSWER SECTION:
example.com.        86400   IN  A   93.184.216.34

;; Query time: 281 msec
;; SERVER: 199.43.135.53#53(199.43.135.53)
;; WHEN: Wed May 06 22:58:40 IST 2020
;; MSG SIZE  rcvd: 56
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;/ol&gt;
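&lt;p&gt;Step 4 above, the hosts-file lookup, is simple enough to sketch in a few lines of Python. This is purely illustrative (the real resolver also handles aliases, IPv6, and platform quirks), and the sample entries below are made up for the demo:&lt;/p&gt;

```python
# Minimal sketch of an /etc/hosts-style lookup (step 4 above).
def lookup_hosts(hostname, hosts_text):
    for line in hosts_text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments and whitespace
        if not line:
            continue
        parts = line.split()
        ip, names = parts[0], parts[1:]       # first field is the address
        if hostname in names:
            return ip
    return None                               # miss: fall through to the next step

hosts = """
127.0.0.1      localhost
93.184.216.34  example.com   # pinned for the demo
"""
print(lookup_hosts("example.com", hosts))  # -> 93.184.216.34
print(lookup_hosts("nancy.dev", hosts))    # -> None (not in the file)
```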

&lt;p&gt;Using the IP address &lt;code&gt;93.184.216.34&lt;/code&gt;, the web browser connects to the host.&lt;/p&gt;

&lt;p&gt;Every stage maintains a cache for some number of seconds, based on the &lt;code&gt;TTL&lt;/code&gt; that every query returns. In the following DNS query result, the TTL is &lt;code&gt;86400&lt;/code&gt; seconds:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;example.com.        86400   IN  A   93.184.216.34
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;A resolver can thus cache the contents of the query for 86400 seconds. This caching helps to speed up the process and reduces the load on DNS servers.&lt;/p&gt;
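&lt;p&gt;The caching behaviour is easy to model: each entry stays valid until its TTL elapses. Here is a toy sketch in Python (an illustration of the idea, not how any particular resolver is implemented), with an injectable clock so expiry can be observed deterministically:&lt;/p&gt;

```python
import time

# Toy resolver cache that honours per-record TTLs.
class DnsCache:
    def __init__(self, clock=time.monotonic):
        self._clock = clock
        self._store = {}  # name: (ip, expiry timestamp)

    def put(self, name, ip, ttl):
        self._store[name] = (ip, self._clock() + ttl)

    def get(self, name):
        entry = self._store.get(name)
        if entry is None:
            return None
        ip, expires = entry
        if self._clock() >= expires:  # TTL elapsed: evict and report a miss
            del self._store[name]
            return None
        return ip

now = [0.0]  # fake clock we can advance by hand
cache = DnsCache(clock=lambda: now[0])
cache.put("example.com", "93.184.216.34", ttl=86400)
print(cache.get("example.com"))  # -> 93.184.216.34 (cache hit)
now[0] = 86401                   # pretend a day has passed
print(cache.get("example.com"))  # -> None (record expired)
```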

&lt;p&gt;Originally posted at &lt;a href="https://todayilearnt.xyz/posts/nancy/dns_resolution/"&gt;https://todayilearnt.xyz/posts/nancy/dns_resolution/&lt;/a&gt;&lt;/p&gt;

</description>
      <category>dns</category>
      <category>dnsresolution</category>
      <category>networking</category>
    </item>
    <item>
      <title>Monitoring Java Web Apps using Prometheus and Grafana</title>
      <dc:creator>Nancy Chauhan</dc:creator>
      <pubDate>Tue, 21 Apr 2020 03:13:15 +0000</pubDate>
      <link>https://dev.to/_nancychauhan/monitoring-java-web-apps-using-prometheus-and-grafana-37ik</link>
      <guid>https://dev.to/_nancychauhan/monitoring-java-web-apps-using-prometheus-and-grafana-37ik</guid>
      <description>&lt;p&gt;Recently, I have been exploring ways to make systems as monitorable as possible, which means minimizing the number of unknown-unknowns!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--xvGxo7VO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://miro.medium.com/max/1400/0%2AdbQt8H-2_SBrqBvB.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--xvGxo7VO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://miro.medium.com/max/1400/0%2AdbQt8H-2_SBrqBvB.png" alt="Monitoring"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The four pillars of the Observability Engineering team’s charter are:&lt;/p&gt;

&lt;p&gt;(Source: &lt;a href="https://blog.twitter.com/engineering/en_us/a/2016/observability-at-twitter-technical-overview-part-i.html"&gt;Twitter’s tech blog&lt;/a&gt; )&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Monitoring&lt;/li&gt;
&lt;li&gt;Alerting/visualization&lt;/li&gt;
&lt;li&gt;Distributed systems tracing infrastructure&lt;/li&gt;
&lt;li&gt;Log aggregation/analytics&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Monitoring an application’s health and metrics makes it possible to manage it in a better way and notice unoptimized behavior. I will be giving you a walkthrough on monitoring and visualizing metrics of a Java application in this blog.&lt;br&gt;
We will be using the following tools to achieve this:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://micrometer.io/"&gt;Micrometer&lt;/a&gt;: Exposes the metrics from our application&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://prometheus.io/"&gt;Prometheus&lt;/a&gt;: Stores our metric data&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://grafana.com/"&gt;Grafana&lt;/a&gt;: Visualizes our data in graphs&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It’s straightforward to implement all of them with just a few lines of code. We will be performing this on my project, which is a simple KV (key-value) store web service that has been developed in Java. You can find the code &lt;a href="https://github.com/Nancy-Chauhan/keystore"&gt;here&lt;/a&gt;.&lt;br&gt;
To make things even easier, we’ll be using Docker to run Prometheus and Grafana. Later we will provision Grafana Data Sources and Dashboards from the configuration. Let’s get started!&lt;/p&gt;
&lt;h2&gt;
  
  
  Configuring Java application with Micrometer
&lt;/h2&gt;

&lt;p&gt;Adding Prometheus support to any Java application becomes a lot easier with Micrometer. It provides a clean facade to many monitoring platforms, including Prometheus.&lt;/p&gt;
&lt;h3&gt;
  
  
  Installing
&lt;/h3&gt;

&lt;p&gt;We need to add the following dependency:&lt;/p&gt;

&lt;p&gt;&lt;em&gt;In Gradle:&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;compile 'io.micrometer:micrometer-registry-prometheus:latest.release'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h3&gt;
  
  
  Configuring
&lt;/h3&gt;

&lt;p&gt;In Micrometer, we need a “Meter,” which is the interface for collecting a set of measurements (which we individually call metrics) about our application.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;final PrometheusMeterRegistry prometheusRegistry = new PrometheusMeterRegistry(PrometheusConfig.DEFAULT);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Micrometer packs several &lt;code&gt;Meter&lt;/code&gt; primitives, including &lt;code&gt;Timer&lt;/code&gt;, &lt;code&gt;Counter&lt;/code&gt;, &lt;code&gt;Gauge&lt;/code&gt;, &lt;code&gt;DistributionSummary&lt;/code&gt;, &lt;code&gt;LongTaskTimer&lt;/code&gt;, &lt;code&gt;FunctionCounter&lt;/code&gt;, &lt;code&gt;FunctionTimer&lt;/code&gt;, and &lt;code&gt;TimeGauge&lt;/code&gt;. We will modify our code to report various metrics using this set of meters. You can read more about them in &lt;a href="https://micrometer.io/docs/concepts"&gt;the official documentation&lt;/a&gt;.&lt;br&gt;
For instance, the following code defines a counter that counts events over a short window. Here it counts the number of &lt;code&gt;getAll&lt;/code&gt; requests.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Counter getAllRequestCounter= prometheusRegistry.counter("http.request",
        "uri", "/keyvalue",
        "operation", "getAll");
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Secondly, and most importantly, Prometheus expects to scrape app instances for metrics. In addition to creating a Prometheus registry, we also need to expose an HTTP endpoint for Prometheus’ scraper:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;get("/metrics", (request, response) -&amp;gt; prometheusRegistry.scrape());
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;em&gt;Note: The above configuration is for Spark Framework in Java. In a Spring environment, a Prometheus actuator endpoint is autoconfigured in the presence of the Spring Boot Actuator.&lt;/em&gt;&lt;br&gt;
To record an event, we call the &lt;code&gt;increment&lt;/code&gt; method on the counter we just created:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;get("/keyvalue", (request, response) -&amp;gt; {
    getAllRequestCounter.increment();
    response.type("application/json");
    return new Gson().toJson(keyValueStoreService.getAll());
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;After we are done with configuring our code, let’s proceed towards setting up graphs.&lt;/p&gt;
&lt;h2&gt;
  
  
  Setting up Prometheus and Grafana
&lt;/h2&gt;

&lt;p&gt;Prometheus is a time-series database that stores our metric data by pulling it using a built-in data scraper periodically over HTTP. It also has a simple user interface where we can query and visualize the collected metrics.&lt;/p&gt;

&lt;p&gt;While Prometheus provides some basic visualization, Grafana offers a rich UI where you can build custom graphs quickly and create a dashboard out of many graphs in no time. Grafana can pull data from various data sources like Prometheus, Elasticsearch, InfluxDB, etc.&lt;br&gt;
Here we build a docker-compose.yml to run Prometheus and Grafana with Docker:&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
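&lt;p&gt;The embedded gist doesn't render in this feed, but a minimal docker-compose.yml along these lines would work. It assumes the app image is built from the local Dockerfile and listens on port 4567; the service names are assumptions, not the exact gist contents:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;version: "3"
services:
  app:
    build: .                  # the KV store web service
    ports:
      - "4567:4567"
  prometheus:
    image: prom/prometheus
    volumes:
      - "./prometheus.yml:/etc/prometheus/prometheus.yml"
    ports:
      - "9090:9090"
  grafana:
    image: grafana/grafana
    ports:
      - "3000:3000"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;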



&lt;p&gt;Let’s now configure Prometheus by setting the scrape interval, the targets, and the endpoints. To do that, we’ll use the prometheus.yml file:&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
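&lt;p&gt;The gist is not embedded in this feed; a minimal prometheus.yml along these lines would scrape the app (the job name and target are assumptions, chosen to match the metric labels shown later in this post):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;global:
  scrape_interval: 15s          # how often Prometheus pulls metrics
scrape_configs:
  - job_name: "key-value"       # becomes the "job" label on every metric
    metrics_path: "/metrics"    # the endpoint exposed by the app
    static_configs:
      - targets: ["app:4567"]   # the app service and port
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;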


&lt;p&gt;You can read more about Prometheus configurations, at &lt;a href="https://prometheus.io/docs/prometheus/latest/configuration/configuration/"&gt;the official documentation&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Bringing up everything
&lt;/h2&gt;

&lt;p&gt;Run &lt;code&gt;docker-compose up&lt;/code&gt; to start the app, Prometheus and Grafana. Open &lt;a href="http://localhost:9090"&gt;http://localhost:9090&lt;/a&gt; for Prometheus and &lt;a href="http://localhost:3000"&gt;http://localhost:3000&lt;/a&gt; for Grafana.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--zfqa4Q7d--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://miro.medium.com/max/1400/0%2AMMoAFpMmqr0avISJ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--zfqa4Q7d--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://miro.medium.com/max/1400/0%2AMMoAFpMmqr0avISJ.png" alt="Prometheus"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To check if Prometheus is pulling metrics from the web app, open “Status” &amp;gt; “Targets.”&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--VBaP5luK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://miro.medium.com/max/1400/0%2AfD8WxOJPzfWmxrhE.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--VBaP5luK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://miro.medium.com/max/1400/0%2AfD8WxOJPzfWmxrhE.png" alt="Target"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Setting up Grafana
&lt;/h2&gt;

&lt;p&gt;While docker-compose started Grafana, it doesn’t do much yet.&lt;/p&gt;

&lt;p&gt;We need to configure Grafana to connect with Prometheus by manually setting up the data source.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--g28KGCdx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://miro.medium.com/max/1400/0%2AphvEr1EeAd2CBt2Y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--g28KGCdx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://miro.medium.com/max/1400/0%2AphvEr1EeAd2CBt2Y.png" alt="Grafan"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then create a dashboard, add a “Query” and select your Prometheus data source, which you just configured.&lt;/p&gt;

&lt;p&gt;In the “Metrics” field, add a PromQL query such as:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;http_request_total{application="KeyValue",instance="app:4567",job="key-value",operation="getAll",uri="/keyvalue"}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--pbEJnRjX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://miro.medium.com/max/1400/0%2A7R0iKQ40zSS-XoMx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--pbEJnRjX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://miro.medium.com/max/1400/0%2A7R0iKQ40zSS-XoMx.png" alt="grafan"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can read more about PromQL in &lt;a href="https://prometheus.io/docs/prometheus/latest/querying/basics/"&gt;the official documentation&lt;/a&gt;.&lt;/p&gt;
&lt;h2&gt;
  
  
  Configure Grafana Provisioning
&lt;/h2&gt;

&lt;p&gt;Instead of manually creating dashboards and data sources, we can utilize Grafana provisioning. You can read more about Grafana Provisioning in the official documentation.&lt;br&gt;
Add two new volumes to docker-compose to read our provisioning configs and dashboards. You can see the completed docker-compose file here.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;volumes:
  - "./grafana/provisioning:/etc/grafana/provisioning"
  - "./grafana/dashboards:/var/lib/grafana/dashboards"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Create a new file, &lt;code&gt;datasource.yml&lt;/code&gt;, under &lt;code&gt;provisioning/datasources&lt;/code&gt;:&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
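&lt;p&gt;Since the gist is not embedded here, a typical datasource.yml for provisioning the Prometheus data source looks roughly like this (the URL assumes the Prometheus service from docker-compose is named &lt;code&gt;prometheus&lt;/code&gt;):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://prometheus:9090   # service name and port from docker-compose
    isDefault: true
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;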



&lt;p&gt;Create another file, &lt;code&gt;dashboard.yml&lt;/code&gt;, under &lt;code&gt;provisioning/dashboards&lt;/code&gt;:&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
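&lt;p&gt;Again, in place of the missing gist, here is a sketch of a dashboard.yml that loads every JSON dashboard from the mounted folder (the provider name is an arbitrary choice):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: 1
providers:
  - name: "default"
    orgId: 1
    type: file
    options:
      path: /var/lib/grafana/dashboards   # matches the docker-compose volume
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;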


&lt;p&gt;Export the dashboard you created earlier and put it under the &lt;code&gt;grafana/dashboards&lt;/code&gt; folder. You can find the dashboard &lt;a href="https://github.com/Nancy-Chauhan/keystore/blob/master/grafana/dashboards/KeyValue.json"&gt;here as well&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Bring docker-compose up, and you should be able to see Grafana with your dashboard and data source set.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--09g6omLR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://miro.medium.com/max/1400/0%2AwbzKtK9XQkX-_fre.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--09g6omLR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://miro.medium.com/max/1400/0%2AwbzKtK9XQkX-_fre.png" alt="graph"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here we saw how to configure a Java application to monitor it with Prometheus. You can explore the JMX exporter and Micrometer JVM extras to report metrics about the JVM and many other Java libraries.&lt;br&gt;
Let me know about your experiences in the comments!&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Today I Learnt: Git Submodules</title>
      <dc:creator>Nancy Chauhan</dc:creator>
      <pubDate>Tue, 21 Apr 2020 01:44:03 +0000</pubDate>
      <link>https://dev.to/_nancychauhan/today-i-learnt-git-submodules-iil</link>
      <guid>https://dev.to/_nancychauhan/today-i-learnt-git-submodules-iil</guid>
      <description>&lt;p&gt;It often happens that while working on one project, you need to use another project from within it. For that, we can make use of git submodules.&lt;/p&gt;

&lt;p&gt;Submodules are a reference to another repository within a parent repository. You can make a submodule point to any arbitrary revision of the other repo.&lt;/p&gt;

&lt;p&gt;Today I learned how messy git submodules could be! 😾&lt;/p&gt;

&lt;p&gt;Using submodules demands discipline. Every developer using the repo always needs to update submodules with &lt;code&gt;git submodule update --init --recursive&lt;/code&gt;. Failing to update can lead to silent, hard-to-debug errors. And that's not all: if you forget to update the submodules, git commits the old version of the submodule. This effectively undoes the submodule update done by another developer. And &lt;code&gt;git log&lt;/code&gt; doesn't even show such an update 😓.&lt;/p&gt;

&lt;p&gt;Submodules reference the URL that git should fetch from. Many of us use SSH authentication in git, so to keep the submodule experience frictionless, we point submodules at SSH URLs too. When you start consuming this repo in a CI system such as Jenkins, which clones over HTTPS, everything breaks 😢. So either you force your CI to clone over SSH and set up its authentication, or you force your CI to pull everything over HTTPS, which is not possible in all scenarios. We are left to choose between frictionless CI and a frictionless development experience. 😕&lt;/p&gt;

&lt;p&gt;Also, there are many stories about submodules getting lost during merge conflicts, submodules not being supported by some UI tools, etc., which makes them feel like pure evil.&lt;/p&gt;

&lt;p&gt;Some people say submodules are "Sobmodules." 🤣&lt;/p&gt;

&lt;p&gt;Originally Posted at &lt;a href="https://todayilearnt.xyz/posts/nancy/git_submodules/"&gt;https://todayilearnt.xyz/posts/nancy/git_submodules/&lt;/a&gt; &lt;/p&gt;

</description>
      <category>git</category>
      <category>github</category>
    </item>
    <item>
      <title>Docker on mac vs Linux</title>
      <dc:creator>Nancy Chauhan</dc:creator>
      <pubDate>Tue, 21 Apr 2020 01:41:10 +0000</pubDate>
      <link>https://dev.to/_nancychauhan/docker-on-mac-vs-linux-l9i</link>
      <guid>https://dev.to/_nancychauhan/docker-on-mac-vs-linux-l9i</guid>
<description>&lt;p&gt;🐳Docker is different on Mac and Linux systems. On Linux, Docker directly leverages the kernel of the host system. A Mac, on the other hand, does not provide a Linux kernel, so Docker runs inside a small Linux VM on the Mac. This leads to several differences. &lt;/p&gt;

&lt;h3&gt;
  
  
  Cannot access container IPs directly
&lt;/h3&gt;

&lt;p&gt;On Linux, let's inspect a running Docker container:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ubuntu@primary:~$ sudo docker ps
CONTAINER ID        IMAGE               COMMAND              CREATED             STATUS              PORTS               NAMES
a230855dd7d2        httpd               "httpd-foreground"   17 seconds ago      Up 16 seconds       80/tcp              determined_saha
ubuntu@primary:~$ sudo docker inspect determined_saha

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The inspect output gives us the container's IP address. Let's curl it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;                   "IPAddress": "172.17.0.2",
                    "IPPrefixLen": 16,
                    "IPv6Gateway": "",
                    "GlobalIPv6Address": "",
                    "GlobalIPv6PrefixLen": 0,
                    "MacAddress": "02:42:ac:11:00:02",
                    "DriverOpts": null
                }
            }
        }
    }
]
ubuntu@primary:~$ curl 172.17.0.2
&amp;lt;html&amp;gt;&amp;lt;body&amp;gt;&amp;lt;h1&amp;gt;It works!&amp;lt;/h1&amp;gt;&amp;lt;/body&amp;gt;&amp;lt;/html&amp;gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;On a Mac, however, hitting the IP address of the container gives:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;➜ curl 172.17.0.3
curl: (7) Failed to connect to 172.17.0.3 port 80: Operation timed out

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;curl times out because the container is running inside the VM, which does not share its network with the Mac.&lt;/p&gt;

</description>
      <category>docker</category>
      <category>linux</category>
    </item>
  </channel>
</rss>
