<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Paritosh</title>
    <description>The latest articles on DEV Community by Paritosh (@paritoshanand).</description>
    <link>https://dev.to/paritoshanand</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F35149%2F0ef91f5f-2922-4f83-95d6-2d6a3d75c2b7.JPG</url>
      <title>DEV Community: Paritosh</title>
      <link>https://dev.to/paritoshanand</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/paritoshanand"/>
    <language>en</language>
    <item>
      <title>DockerHub RateLimiting; Solving via Nexus OSS</title>
      <dc:creator>Paritosh</dc:creator>
      <pubDate>Sun, 16 Jan 2022 17:41:35 +0000</pubDate>
      <link>https://dev.to/paritoshanand/dockerhub-ratelimiting-solving-via-nexus-oss-84</link>
      <guid>https://dev.to/paritoshanand/dockerhub-ratelimiting-solving-via-nexus-oss-84</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--7OcNpXCV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/orymku6345ezecnyqflw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--7OcNpXCV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/orymku6345ezecnyqflw.png" alt="Image description" width="800" height="633"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What is this about?
&lt;/h2&gt;

&lt;p&gt;Docker has been steadily commercialising its services, which is, in a way, the right thing to do to keep the ship sailing. One such change, introduced in November 2020, was rate limiting the API calls made to Docker Hub for Docker image pulls.&lt;/p&gt;

&lt;h2&gt;
  
  
  How we got impacted
&lt;/h2&gt;

&lt;p&gt;At our organisation, we run our CI system at a fairly large scale: thousands of developers actively use the central CI system for their code commits. All the repos are Docker compatible, so most of them rely on DockerHub for official images like centos, java, node, python, etc.&lt;/p&gt;

&lt;p&gt;So, when this rate limiting was introduced, it intermittently impacted our CI system: builds started failing with errors from Docker Hub like the one below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ERROR: toomanyrequests: Too Many Requests.
You have reached your pull rate limit. 
You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limits. 
You must authenticate your pull requests.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
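&lt;p&gt;Before (or after) putting a mirror in place, it helps to know how close you are to the limit. Docker's documentation describes inspecting the &lt;code&gt;RateLimit-Limit&lt;/code&gt; and &lt;code&gt;RateLimit-Remaining&lt;/code&gt; headers returned for the special &lt;code&gt;ratelimitpreview/test&lt;/code&gt; image; the values look like &lt;code&gt;100;w=21600&lt;/code&gt;, i.e. 100 pulls per 21600-second window. As a small sketch (the header format is taken from Docker's docs; treat it as an assumption to verify), a parser for those values:&lt;/p&gt;

```python
# Sketch: parse Docker Hub pull-rate-limit header values.
# Docker Hub reports rate-limit info on HEAD requests against the
# special ratelimitpreview/test image; values look like "100;w=21600",
# meaning 100 pulls per 21600-second window.

def parse_rate_limit(header_value):
    """Parse a RateLimit-Limit / RateLimit-Remaining header value.

    Returns (count, window_seconds); window is None if absent.
    """
    parts = header_value.split(";")
    count = int(parts[0])
    window = None
    for part in parts[1:]:
        key, _, val = part.strip().partition("=")
        if key == "w":
            window = int(val)
    return count, window


if __name__ == "__main__":
    limit, window = parse_rate_limit("100;w=21600")
    print(f"{limit} pulls per {window} seconds")
```

&lt;p&gt;Feeding the value of &lt;code&gt;RateLimit-Remaining&lt;/code&gt; through the same parser tells you how many anonymous pulls are left in the current window.&lt;/p&gt;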



&lt;h2&gt;
  
  
  Solution: Sonatype Nexus — Docker Registry
&lt;/h2&gt;

&lt;p&gt;In my opinion, Sonatype Nexus is an amazing, must-have tool in a large-scale CI system. We were already using Nexus for proxying Java and NodeJS packages, and for hosting private packages too.&lt;br&gt;
So it was an obvious choice to use Nexus to solve the DockerHub rate-limiting issue as well, by proxying DockerHub and reducing our direct dependency on it.&lt;/p&gt;

&lt;p&gt;Docker Hub is the common registry used by all image creators and consumers. To reduce duplicate downloads and improve download speeds for your developers and CI servers, you should proxy Docker Hub and any other registry you use for Docker images.&lt;/p&gt;

&lt;p&gt;We created a proxy repo in Nexus for DockerHub and added a small configuration to mirror the registry in our CI servers (which are autoscaled, so it was a change in the AMI).&lt;/p&gt;
&lt;h2&gt;
  
  
  How it works
&lt;/h2&gt;

&lt;p&gt;The first time you request an image from your local registry mirror, it pulls the image from the public Docker registry, stores it in Nexus, and hands it back to the CI servers. On subsequent requests, the Nexus registry mirror serves the image from its own storage. The mirror is configured in &lt;code&gt;/etc/docker/daemon.json&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "registry-mirrors": ["http://nexus.domain.org:18000"]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Simple and effective!&lt;/p&gt;

&lt;p&gt;For detailed configuration information, do check the official Nexus documentation here: &lt;a href="https://help.sonatype.com/repomanager3/formats/docker-registry/proxy-repository-for-docker"&gt;https://help.sonatype.com/repomanager3/formats/docker-registry/proxy-repository-for-docker&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Looking for more?
&lt;/h2&gt;

&lt;p&gt;Please do follow! I am planning to post more such blogs with practical, scalable solutions around Docker, AWS, and Python.&lt;br&gt;
If you need any help with the setup or the approach, don’t think twice before asking me.&lt;/p&gt;

&lt;p&gt;Cheers!&lt;/p&gt;

</description>
      <category>docker</category>
      <category>ci</category>
      <category>devops</category>
      <category>showdev</category>
    </item>
    <item>
      <title>Deploying Django Apps; Containers</title>
      <dc:creator>Paritosh</dc:creator>
      <pubDate>Sun, 22 Oct 2017 14:58:30 +0000</pubDate>
      <link>https://dev.to/paritoshanand/deploying-django-apps-containers-4cj</link>
      <guid>https://dev.to/paritoshanand/deploying-django-apps-containers-4cj</guid>
      <description>&lt;h1&gt;
  
  
  Intro
&lt;/h1&gt;

&lt;p&gt;Lately I have observed some odd practices that my fellow engineers follow when developing, and especially when deploying, their Django applications in production.&lt;/p&gt;

&lt;p&gt;I did a bit of research on best practices for a Django project's development life cycle &amp;amp; below is what I feel is the most reliable way to develop &amp;amp; deploy a Django application end-to-end.&lt;/p&gt;

&lt;p&gt;I don't intend to define coding practices...&lt;/p&gt;

&lt;p&gt;Rather, I am trying to set the perspective for creating a new project &amp;amp; shed some light on the small details that we tend to miss, which eventually lead to bad engineering.&lt;/p&gt;

&lt;h4&gt;
  
  
  Unit test cases in Django
&lt;/h4&gt;

&lt;p&gt;Writing test cases is something developers tend to ignore or not plan ahead for, which leads to repeated manual iterations to make sure the application's functionality still works after every major code change.&lt;/p&gt;

&lt;p&gt;Unit tests are especially important if the application's scope is wide. This practice is usually followed for larger applications but overlooked for small or medium-scale projects.&lt;/p&gt;
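&lt;p&gt;Django's &lt;code&gt;django.test.TestCase&lt;/code&gt; builds on Python's standard &lt;code&gt;unittest&lt;/code&gt;. As a minimal, standalone sketch (no Django project needed; the helper function here is hypothetical), a test module has this shape:&lt;/p&gt;

```python
import unittest

# Hypothetical helper a Django app might ship, e.g. in myapp/utils.py.
def make_username(first_name, last_name):
    """Build a login name from a user's first and last names."""
    return f"{first_name}.{last_name}".strip().lower()

# In a real project this would subclass django.test.TestCase, which
# adds a test database and test client on top of unittest.TestCase.
class MakeUsernameTests(unittest.TestCase):
    def test_lowercases_and_joins(self):
        self.assertEqual(make_username("Ada", "Lovelace"), "ada.lovelace")

    def test_handles_mixed_case(self):
        self.assertEqual(make_username("GRACE", "Hopper"), "grace.hopper")

if __name__ == "__main__":
    # exit=False keeps the interpreter alive after the run.
    unittest.main(exit=False)
```

&lt;p&gt;In an actual Django project, &lt;code&gt;python manage.py test&lt;/code&gt; discovers and runs such cases against a throwaway test database.&lt;/p&gt;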

&lt;h4&gt;
  
  
  Code style enforcement using &lt;a href="http://flake8.pycqa.org/en/latest/"&gt;Flake8&lt;/a&gt;
&lt;/h4&gt;

&lt;p&gt;Flake8 points out errors &amp;amp; violations, such as a variable that is declared but never used in a class. It is very handy and helps set fundamental guidelines for every developer on the team.&lt;/p&gt;
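&lt;p&gt;Flake8 reads its settings from &lt;code&gt;setup.cfg&lt;/code&gt;, &lt;code&gt;tox.ini&lt;/code&gt;, or a &lt;code&gt;.flake8&lt;/code&gt; file; a small, illustrative configuration (the values here are examples, not recommendations) might look like:&lt;/p&gt;

```ini
# setup.cfg
[flake8]
max-line-length = 100
exclude = .git,__pycache__,*/migrations/*
```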

&lt;h4&gt;
  
  
  Creating Docker images for Django application
&lt;/h4&gt;

&lt;p&gt;Containerisation has huge benefits &amp;amp;, being the torch bearer of Docker in my organisation, I have found that the best possible way to deploy Django&lt;br&gt;
applications is to create Docker images &amp;amp; ship them to production.&lt;/p&gt;

&lt;p&gt;Create a Dockerfile that defines the Python environment required by the Django application. Using it to build Docker images is the most reliable way to ship &amp;amp; deploy a Django application in production.&lt;/p&gt;
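&lt;p&gt;As a minimal sketch of such a Dockerfile (assuming a &lt;code&gt;requirements.txt&lt;/code&gt;, a gunicorn server, and a hypothetical project module named &lt;code&gt;mysite&lt;/code&gt;):&lt;/p&gt;

```dockerfile
# Minimal sketch; "mysite" and gunicorn are illustrative choices.
FROM python:3.9-slim

WORKDIR /app

# Install dependencies first so Docker layer caching can reuse them.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

EXPOSE 8000
CMD ["gunicorn", "--bind", "0.0.0.0:8000", "mysite.wsgi:application"]
```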

&lt;h4&gt;
  
  
  Setting Docker Registry using &lt;a href="http://vmware.github.io/harbor/"&gt;Harbor&lt;/a&gt;
&lt;/h4&gt;

&lt;p&gt;In the process of getting Docker adopted in my team, it was necessary to have an in-house Docker registry that people could use to push the Docker images of the projects they were working on.&lt;/p&gt;

&lt;p&gt;VMware's Harbor is an enterprise-grade registry server that supports LDAP-based user login to the Docker registry. That is, engineers can use their existing user credentials with the in-house Docker registry.&lt;/p&gt;

&lt;h4&gt;
  
  
  Code review &amp;amp; Jenkins CI for a Django project
&lt;/h4&gt;

&lt;p&gt;Code review using conventional tools like Gerrit &amp;amp; Jenkins offers good features for automating code verification and the Docker image creation described above.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;This is the most reliable way to develop, ship &amp;amp; deploy applications, to the best of my knowledge of Django. Do share if there are other, better strategies or tools to achieve this common goal.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>docker</category>
      <category>django</category>
    </item>
    <item>
      <title>LB Console for BigIp F5</title>
      <dc:creator>Paritosh</dc:creator>
      <pubDate>Sat, 14 Oct 2017 17:15:24 +0000</pubDate>
      <link>https://dev.to/paritoshanand/lb-console-for-bigip-f5-d1k</link>
      <guid>https://dev.to/paritoshanand/lb-console-for-bigip-f5-d1k</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;LB Console is an application that wraps the functionality provided by the BigIp F5 UI for configuring the load balancer. BigIP F5 is one of the most extensively used load balancers (at least in the e-commerce domain).&lt;/p&gt;

&lt;h2&gt;
  
  
  Problem statement
&lt;/h2&gt;

&lt;p&gt;Though BigIp F5 has a very effective UI, in terms of usability its users are required to have some understanding of how F5 works.&lt;/p&gt;

&lt;p&gt;Things get much more complex for teams that want to integrate monitoring applications, deployment applications &amp;amp; other planned activities: there is a learning curve to understand the basic working of F5, plus API integration at multiple levels feels like an overhead for the consumer.&lt;/p&gt;

&lt;h2&gt;
  
  
  Solution
&lt;/h2&gt;

&lt;p&gt;An application that performs all required transactions with BigIp F5 and exposes REST endpoints that other applications can use for integration.&lt;/p&gt;

&lt;p&gt;It has a simpler UI that is user friendly &amp;amp; fast, and a tracking mechanism that keeps a log of all critical transactions.&lt;/p&gt;

&lt;h2&gt;
  
  
  Tech stack
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Python Django framework&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/mcauthorn/pycontrol"&gt;PyControl&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;ElasticSearch&lt;/li&gt;
&lt;li&gt;Python Boto3&lt;/li&gt;
&lt;li&gt;Docker&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  RESTful approach
&lt;/h4&gt;

&lt;p&gt;Implementing a simple RESTful approach proved to be extremely useful as it enabled teams to easily integrate the same with deployment strategies and moving application from data centres to cloud (AWS &amp;amp; OpenStack).&lt;/p&gt;

&lt;h4&gt;
  
  
  Django DB Caching
&lt;/h4&gt;

&lt;p&gt;As the volume of pools and servers configured on F5 was moderately large (~3k servers under ~1K pools), read calls from LB Console to BigIp F5 were considerable. Hence, to minimise the number of calls to F5, Django's DB-based caching came in handy.&lt;/p&gt;

&lt;p&gt;This enabled the application to query F5 directly only when the required data was not present in the cache. Saving the result of every F5 query in the cache with a short TTL (300s) ensured that data was served almost instantly &amp;amp; F5 was queried only in the event of a cache miss.&lt;/p&gt;
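&lt;p&gt;In the actual application this role is played by Django's database cache backend (&lt;code&gt;django.core.cache.backends.db.DatabaseCache&lt;/code&gt;, set up with &lt;code&gt;manage.py createcachetable&lt;/code&gt;). As a plain-Python sketch of the same cache-aside flow (the loader below is a hypothetical stand-in for a real F5 read call):&lt;/p&gt;

```python
import time

# Plain-Python stand-in for the cache-aside pattern described above;
# in the real project Django's DatabaseCache plays this role.
class TTLCache:
    def __init__(self, ttl_seconds=300.0):
        self.ttl = ttl_seconds
        self._store = {}  # key maps to (value, stored_at)

    def get_or_load(self, key, loader):
        """Return the cached value; call loader() only on a miss or expiry."""
        entry = self._store.get(key)
        now = time.monotonic()
        if entry is not None and self.ttl > now - entry[1]:
            return entry[0]   # cache hit: no F5 call
        value = loader()      # cache miss: query F5
        self._store[key] = (value, now)
        return value

# Usage: the lambda stands in for a real F5 pool-members read.
cache = TTLCache(ttl_seconds=300.0)
members = cache.get_or_load("pool:web", lambda: ["10.0.0.1", "10.0.0.2"])
```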

&lt;h4&gt;
  
  
  ElasticSearch based logging
&lt;/h4&gt;

&lt;p&gt;Logging each create/update/delete transaction to ElasticSearch helped keep critical data readily available for consumers. The user interface was the primary consumer, displaying logs per server or pool; later the same was integrated with our IRC channel so that users could query the logs from IRC itself.&lt;/p&gt;
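&lt;p&gt;As a sketch of what one such audit record can look like (the field names and index name are illustrative, not the ones used in LB Console; the commented-out call follows the official Python ElasticSearch client):&lt;/p&gt;

```python
from datetime import datetime, timezone

def make_audit_doc(user, action, pool, server=None):
    """Build one create/update/delete audit record for indexing.

    Field names and the "lb-console-audit" index are illustrative.
    """
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "action": action,   # e.g. "create", "update", "delete"
        "pool": pool,
        "server": server,
    }

doc = make_audit_doc("paritosh", "update", "web-pool", "10.0.0.5")

# With the official client (the elasticsearch package) this would be
# indexed roughly as:
# from elasticsearch import Elasticsearch
# es = Elasticsearch("http://localhost:9200")
# es.index(index="lb-console-audit", document=doc)
```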

&lt;h4&gt;
  
  
  Docker
&lt;/h4&gt;

&lt;p&gt;Conventionally, Django projects were shipped to production as tar files, or in some cases deployment via Git was practiced. So, to bring some reliability and standardisation to the deployment approach, Docker came in very handy, plus all the usual benefits of containerisation.&lt;/p&gt;

&lt;p&gt;LB Console application also solves complex problems like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Traffic switching within data centres.&lt;/li&gt;
&lt;li&gt;Managing DNS entries in AWS Route53.&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>django</category>
      <category>loadbalancer</category>
      <category>devops</category>
    </item>
  </channel>
</rss>
