<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Arif Amirani</title>
    <description>The latest articles on DEV Community by Arif Amirani (@arifamirani).</description>
    <link>https://dev.to/arifamirani</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F497872%2F24bcbbb4-4c07-4466-ad90-ba477929f9e2.png</url>
      <title>DEV Community: Arif Amirani</title>
      <link>https://dev.to/arifamirani</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/arifamirani"/>
    <language>en</language>
    <item>
      <title>A JSON Based Serverless Quasi-Static Platform</title>
      <dc:creator>Arif Amirani</dc:creator>
      <pubDate>Tue, 10 Aug 2021 13:16:28 +0000</pubDate>
      <link>https://dev.to/arifamirani/a-json-based-serverless-quasi-static-platform-ha5</link>
      <guid>https://dev.to/arifamirani/a-json-based-serverless-quasi-static-platform-ha5</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;I've been working with large NGOs to architect their multi-faceted systems. These systems are responsible for information dissemination, data collection, analysis, and sources &amp;amp; sinks to other systems. Our near-term goal was to build an information platform (IP). The MVP was narrowed down to the following feature set.&lt;/p&gt;

&lt;h3&gt;
  
  
  Target persona
&lt;/h3&gt;

&lt;p&gt;The IP was intended for 3 main personas.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Villagers

&lt;ul&gt;
&lt;li&gt;Direct consumers&lt;/li&gt;
&lt;li&gt;Low tech capability&lt;/li&gt;
&lt;li&gt;Language barriers&lt;/li&gt;
&lt;li&gt;Varied devices - sizes and capabilities&lt;/li&gt;
&lt;li&gt;Irregular bandwidth&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Ops team

&lt;ul&gt;
&lt;li&gt;Responsible for training using the content&lt;/li&gt;
&lt;li&gt;Submitting feedback for content from users&lt;/li&gt;
&lt;li&gt;Updates to content (limited)&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Content team

&lt;ul&gt;
&lt;li&gt;Primarily responsible for content&lt;/li&gt;
&lt;li&gt;Regular content updates&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;h3&gt;
  
  
  Requirements
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Media support
&lt;/h4&gt;

&lt;p&gt;The portal has to support different media types, including but not limited to video, digital books, images, and audio (podcasts). The content is generated by a marketing and education team and then uploaded to a public repository for downloads. All content is in the public domain.&lt;/p&gt;

&lt;h4&gt;
  
  
  Multilingual and region support
&lt;/h4&gt;

&lt;p&gt;The content itself is versatile. The information and instructions change based on the local language, diet, and availability of resources. The portal has to support reuse of content as well as specific content for a particular region. Ease of management of the portal data by the content team was paramount.&lt;/p&gt;

&lt;h4&gt;
  
  
  Interval based updates
&lt;/h4&gt;

&lt;p&gt;The content team updates the data several times a day. However, there was no need for real-time updates; new content showing up within the hour was acceptable.&lt;/p&gt;

&lt;h4&gt;
  
  
  Analytics
&lt;/h4&gt;

&lt;p&gt;Measurement is core to any successful deployment, especially for large and diverse ones. We factored in the need for granular measurement of clicks, bounces, playbacks, and skips right from day one.&lt;/p&gt;

&lt;h4&gt;
  
  
  3rd Party API
&lt;/h4&gt;

&lt;p&gt;The content for the portal has to be made available to 3rd party applications for their internal consumption. The interesting part here is that the content served over the API is the same as on the website; however, the 3rd party applications must be throttled to prevent proxying or overuse of our endpoint.&lt;/p&gt;

&lt;h2&gt;
  
  
  Final Architecture
&lt;/h2&gt;

&lt;p&gt;Apart from the requirements above, we were also tasked with ensuring cost-effectiveness along with speed of delivery. The obvious choice was a typical three-tier architecture that would achieve most of the objectives, and the team had the right experience for it. However, I decided to go a different route. In the recent past I had deployed a JSON-based architecture that had scaled well to 20 million visitors, though with nowhere near this complexity.&lt;/p&gt;

&lt;p&gt;I made a few changes and architected the solution loosely on the CQRS (Command Query Responsibility Segregation) pattern. All the data that needed to be displayed (the read path) was served from JSON files that were continuously refreshed by a fleet of Lambda functions. The write path, on the other hand, was served by API Gateway (HTTP).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq8mucfqifjruu9x1g9c3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq8mucfqifjruu9x1g9c3.png" alt="Architecture Overview"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Origin Database
&lt;/h3&gt;

&lt;p&gt;These are the primary sources of information. They can be Google Sheets, Airtable, or RDBMS data sources. They provide the actual content metadata and the rules of transformation.&lt;/p&gt;

&lt;h3&gt;
  
  
  Pull Lambda Fleet
&lt;/h3&gt;

&lt;p&gt;The PLF runs on either a reactive or a scheduled basis. The functions fetch data from the data sources and merge it based on rules. The output is split by language, region, content, and use case. At the end of the process they generate JSON files and upload them to an S3 bucket using a naming convention that the webapp understands.&lt;/p&gt;
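&lt;p&gt;As a rough sketch of that merge-and-split step (the function names, row fields, and key convention here are illustrative, not the actual implementation):&lt;/p&gt;

```python
import json

def build_outputs(rows):
    """Group source rows by (region, language) and serialize each group
    to a JSON document, keyed by the S3 path convention the webapp
    expects. Each row is a dict merged from the origin databases."""
    grouped = {}
    for row in rows:
        key = "content/{}/{}.json".format(row["region"], row["language"])
        grouped.setdefault(key, []).append(
            {"id": row["id"], "title": row["title"], "media": row["media"]}
        )
    return {key: json.dumps(items) for key, items in grouped.items()}

def upload_outputs(s3_client, bucket, outputs):
    """Push each generated JSON file to the data bucket. A scheduled
    Lambda handler would call build_outputs and then upload_outputs."""
    for key, body in outputs.items():
        s3_client.put_object(
            Bucket=bucket, Key=key, Body=body, ContentType="application/json"
        )
```

&lt;p&gt;A scheduled handler would run build_outputs on the fetched rows and pass the result to upload_outputs with a boto3 S3 client.&lt;/p&gt;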

&lt;h3&gt;
  
  
  Information Portal (IP)
&lt;/h3&gt;

&lt;p&gt;Built on React and Tailwind CSS, the IP delivers the content to users. It is a lightweight, responsive PWA that works with limited bandwidth and across all device display sizes. Once it loads, it pulls the appropriate JSON from the server via CloudFront, depending on the region and language settings. All UI actions, such as filtering and search, are done on the client side. Each JSON file is a manageable size and is aggressively compressed to deliver data quickly. The auto-reload mechanism in React (SWR) ensures that the client refetches the JSON every 15 minutes or on page reload.&lt;/p&gt;

&lt;h3&gt;
  
  
  API Clients
&lt;/h3&gt;

&lt;p&gt;For the content 3rd party API, we use the main API Gateway (APIGW). Using APIGW's AWS service integration, we can connect the GET method directly to an S3 resource. This requires no glue code or handling; it is handled seamlessly by APIGW.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fli4f2zq2zkgl0nqp4q3x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fli4f2zq2zkgl0nqp4q3x.png" alt="API Gateway S3 Connect"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Using API Keys &amp;amp; Usage Plans, we ensured only authenticated clients got access to data and also rate limited their API calls.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuf3lkpbxsbmiuv21eerj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuf3lkpbxsbmiuv21eerj.png" alt="API Gateway Rate Limit"&gt;&lt;/a&gt;&lt;/p&gt;
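&lt;p&gt;Sketching that setup with boto3 (a sketch under assumed names and limits; the plan name, rates, and quota below are illustrative):&lt;/p&gt;

```python
def usage_plan_params(name, api_id, stage, rate_limit, burst_limit, daily_quota):
    """Build the request for API Gateway's create_usage_plan: a
    steady-state rate, a burst ceiling, and a daily quota for one stage."""
    return {
        "name": name,
        "apiStages": [{"apiId": api_id, "stage": stage}],
        "throttle": {"rateLimit": float(rate_limit), "burstLimit": int(burst_limit)},
        "quota": {"limit": int(daily_quota), "period": "DAY"},
    }

def create_throttled_client(apigw, api_id, stage, name, rate_limit, burst_limit, daily_quota):
    """Create a usage plan plus an API key, and attach the key to the
    plan. `apigw` is a boto3 API Gateway client, e.g.
    boto3.client("apigateway")."""
    plan = apigw.create_usage_plan(
        **usage_plan_params(name, api_id, stage, rate_limit, burst_limit, daily_quota)
    )
    key = apigw.create_api_key(name=name, enabled=True)
    apigw.create_usage_plan_key(usagePlanId=plan["id"], keyId=key["id"], keyType="API_KEY")
    return key["value"]
```

&lt;p&gt;Each onboarded 3rd party then calls the API with its key in the x-api-key header, and APIGW enforces the throttle and quota.&lt;/p&gt;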

&lt;h3&gt;
  
  
  Analytics &amp;amp; Read
&lt;/h3&gt;

&lt;p&gt;All of the write paths use the main API Gateway (APIGW) and Lambda functions to write to DynamoDB. We capture granular events such as playbacks, downloads, and visits, and batch them up to the server. Partial data loss is acceptable at the volume we expected.&lt;/p&gt;
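&lt;p&gt;A minimal sketch of that write path, assuming a boto3 DynamoDB Table resource and illustrative attribute names:&lt;/p&gt;

```python
def chunk(items, size):
    """DynamoDB's BatchWriteItem accepts at most 25 items per call,
    so split the incoming event batch accordingly."""
    return [items[i:i + size] for i in range(0, len(items), size)]

def store_events(table, events):
    """Write a client-submitted batch of analytics events (playback,
    download, visit, ...) to DynamoDB. `table` is a boto3 Table
    resource; the key attributes are illustrative. Because partial
    loss is acceptable, a failed batch is skipped rather than retried."""
    for batch in chunk(events, 25):
        try:
            with table.batch_writer() as writer:
                for event in batch:
                    writer.put_item(Item={
                        "pk": event["user_id"],
                        "sk": "{}#{}".format(event["type"], event["ts"]),
                        "payload": event,
                    })
        except Exception:
            pass  # tolerate partial data loss, as noted above
```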

&lt;p&gt;We created an internal portal to build a feedback loop on usage, based on the data in DynamoDB. The web app used the HTTP API endpoints within the same APIGW to fetch data from DynamoDB.&lt;/p&gt;

&lt;h3&gt;
  
  
  Aggregator Lambda
&lt;/h3&gt;

&lt;p&gt;To monitor and analyse the content changes, we deployed cron-based Lambda functions that pulled the current JSON from the data bucket and created a snapshot of it. The snapshots were aggregated over a time interval and uploaded to a reporting S3 bucket. A webapp fetched the latest aggregate data and charted it for the admin to review.&lt;/p&gt;
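&lt;p&gt;The aggregation step might look roughly like this (the snapshot shape, key names, and report convention are assumptions for illustration):&lt;/p&gt;

```python
import json
from collections import Counter

def aggregate_snapshots(snapshots):
    """Roll the per-interval snapshots of the data bucket into one
    report: item counts per content key, summed over the interval.
    Each snapshot maps a content key to its list of items."""
    totals = Counter()
    for snap in snapshots:
        for key, items in snap.items():
            totals[key] += len(items)
    return dict(totals)

def write_report(s3_client, bucket, interval_id, totals):
    """Upload the aggregate to the reporting bucket for the admin webapp."""
    s3_client.put_object(
        Bucket=bucket,
        Key="reports/{}.json".format(interval_id),
        Body=json.dumps(totals),
        ContentType="application/json",
    )
```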

&lt;h2&gt;
  
  
  Results and Conclusion
&lt;/h2&gt;

&lt;p&gt;The entire setup took about 1.5 months to build. Once deployed, we never breached the free tier on many of the services used for processing. Our biggest cost center was data transfer. We had also enabled all edge locations for CloudFront, which cost us a bit more. Latency issues were non-existent. Our DR story largely solved itself, since S3 became our primary data store and DynamoDB was highly available but not in the critical path.&lt;/p&gt;

&lt;h2&gt;
  
  
  Final notes
&lt;/h2&gt;

&lt;p&gt;We had a specific use case that was served quite well by our design choices. This may not be the optimal architecture for more demanding, real-time applications.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>devops</category>
      <category>serverless</category>
      <category>showdev</category>
    </item>
    <item>
      <title>How I passed the AWS Certified Solutions Architect Professional 2021 Exam - SAP-C01</title>
      <dc:creator>Arif Amirani</dc:creator>
      <pubDate>Mon, 09 Aug 2021 10:39:01 +0000</pubDate>
      <link>https://dev.to/arifamirani/how-i-passed-the-aws-certified-solutions-architect-professional-2021-exam-sap-c01-4f6l</link>
      <guid>https://dev.to/arifamirani/how-i-passed-the-aws-certified-solutions-architect-professional-2021-exam-sap-c01-4f6l</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;I recently passed my AWS Solutions Architect Professional Certification on the first attempt. I wanted to share my experience of preparing for the exam as well as the actual exam day tips and tricks I learnt. I'll also talk about what to expect and approaches to navigate the challenges. &lt;/p&gt;

&lt;p&gt;Getting an AWS certification is important for your cloud ecosystem career. Any role, including DevOps, Data Engineer, Software Engineer, or Architect, would benefit immensely from this certification. This certificate demonstrates your ability to analyse and evaluate architectures.&lt;/p&gt;

&lt;h2&gt;
  
  
  The certificate
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzb97dvh5ysejc448fdcz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzb97dvh5ysejc448fdcz.png" alt="AWS Certified Solutions Architect Professional Badge"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.credly.com/badges/d08284c3-5837-40de-9b8a-0894934e6e2a/public_url" rel="noopener noreferrer"&gt;Credly Page&lt;/a&gt;&lt;/p&gt;



&lt;h2&gt;
  
  
  Background
&lt;/h2&gt;

&lt;p&gt;I've been a full stack developer for 15 years and have worked in small and large organizations across domains. I started my cloud journey with AWS 8 years ago, working on the basic services such as EC2, VPC, ELB, R53, and S3. My primary focus has always been on AWS, with 20% of my time spent evaluating and working with cloud vendors like Google Cloud Platform (GCP). I've built large scale, distributed, real-time systems in various domains and seen them through growth and chaos.&lt;/p&gt;

&lt;p&gt;I passed my AWS DevOps certification a few years ago. I am an AWS Community Builder, which is an amazing platform for AWS enthusiasts. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AWS Community Builders&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If you want to learn about AWS and contribute alongside brilliant minds from across the globe, you should definitely sign up at &lt;a href="https://aws.amazon.com/developer/community/community-builders/" rel="noopener noreferrer"&gt;AWS Community Builders&lt;/a&gt;. With &lt;a href="https://twitter.com/jasondunn" rel="noopener noreferrer"&gt;Jason Dunn&lt;/a&gt; steering the effort, it has become engaging and vibrant.&lt;/p&gt;

&lt;h2&gt;
  
  
  What does the exam test you on
&lt;/h2&gt;

&lt;p&gt;The AWS SA Pro is one of the toughest and most sought-after certifications in the cloud ecosystem. The primary reason is that the certification tests not only your knowledge but, more importantly, your experience.&lt;/p&gt;

&lt;p&gt;AWS tries very hard to ensure that candidates &lt;em&gt;DO NOT&lt;/em&gt; clear this certification by sheer repetition or memorization. Unless you can demonstrate experience and the ability to evaluate situations this is a hard exam to pass.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Does that mean I cannot pass without experience?&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Of course you can, but in my opinion, cramming for a professional certification and passing it is a waste of your time. You're better off studying for the associate certification. Follow that up with real-world experience to get to the professional level. You and your peers will appreciate it more.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Understanding the exams
&lt;/h2&gt;

&lt;p&gt;To clear the exam, it is important to understand its mechanics. Knowing them will make you aware of what AWS expects from each question. AWS goes into painstaking detail to make the exam fair, clear, and reliable. Their focus is to test your knowledge and experience at every step.&lt;/p&gt;

&lt;p&gt;My notes on the research:&lt;/p&gt;

&lt;h3&gt;
  
  
  Questions
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;There are no trick questions. No one is trying to confuse you&lt;/li&gt;
&lt;li&gt;Several options (keys) are distractors. Distractors are answers that are wrong but seem perfectly plausible&lt;/li&gt;
&lt;li&gt;Very rarely will previous questions/answers aid your next ones. Don't waste time looking for clues&lt;/li&gt;
&lt;li&gt;You get about 2.5 minutes per question. Each question is verbose and takes time to read. Learn to skim over (more on this later)&lt;/li&gt;
&lt;li&gt;Many answers are right, you have to pick the &lt;strong&gt;most&lt;/strong&gt; appropriate one given the requirement&lt;/li&gt;
&lt;li&gt;At the Professional level, the questions require you to &lt;strong&gt;Analyze and Evaluate&lt;/strong&gt;. This means you need to understand the scenario and pick an answer based on several factors. Even the most obvious and straightforward answer can be wrong because, for instance, it is not cost-effective&lt;/li&gt;
&lt;li&gt;Questions will only test you on one area at a time even with multiple systems&lt;/li&gt;
&lt;li&gt;Do not make assumptions about the scenario. If it isn't written, then you have to ignore that factor&lt;/li&gt;
&lt;li&gt;Questions will not test you on UI, numbers, math, etc. You do, however, need to know the limits of services to select the right answer. For example, API Gateway has a timeout of 29 seconds, so an answer that uses APIGW for a task that runs for 35 seconds will be wrong&lt;/li&gt;
&lt;li&gt;Scores are scaled. Each question does not have the same weight. This will impact your total score.&lt;/li&gt;
&lt;li&gt;There is no penalty for wrong answers&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  My Preparation
&lt;/h2&gt;

&lt;p&gt;I spent over two weeks of rigorous study time prior to the exam. I continuously work on projects that require me to interact with AWS (the most common services), which helped. Some of the questions are &lt;em&gt;recall level&lt;/em&gt;, meaning they test your ability to remember services and their characteristics; continuously working with AWS will help you answer them.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Give yourself at least 1 month to prepare even if you are an AWS expert&lt;/strong&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Focus
&lt;/h3&gt;

&lt;p&gt;Like any certification exam, passing the SA Pro requires a bit of everything:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Knowledge - You absolutely must be well versed with AWS and all the services it has to offer&lt;/li&gt;
&lt;li&gt;Experience - You must know how to solve problems using a combination of AWS services and have solved at least a few in the past&lt;/li&gt;
&lt;li&gt;Scope - Understand what the certification will test you on before you start studying. Consult more than one source to get the latest information on what to cover&lt;/li&gt;
&lt;li&gt;Skill - The questions are long, designed to test your focus, and need time to read and comprehend. Picking the right approach is key to solving all questions. Come up with your own strategy using the practice exams&lt;/li&gt;
&lt;li&gt;Time Management - Learn to manage your time. I cannot stress enough how easy it is to run out of time. This happens primarily because we cannot let go of a question&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;There are many courses and prep tools out there for the professional exam. I did a cursory search on the internet and asked experts around for the recommended ones. You should do your own research and pick a few.&lt;/p&gt;

&lt;p&gt;The two I ended up using were:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;AWS Certified Solutions Architect Professional Practice Exams 2021 by Tutorials Dojo&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqflmerdgfy1bw9tq8btg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqflmerdgfy1bw9tq8btg.png" alt="AWS Certified Solutions Architect Professional Practice Exams 2021 by Tutorials Dojo"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;These exams are the holy grail of practice. They are continuously updated, and Jon Bonso (a fellow AWS CB) is looped into the AWS ecosystem. The team keeps the questions updated and highly relevant to the exam. If there is one service you pay for to pass the exam, this is it.&lt;/p&gt;

&lt;p&gt;Jon and his team are great and will help you every step of the way.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How to use Tutorials Dojo&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Do these exams at the end of your learning schedule&lt;/li&gt;
&lt;li&gt;You only get 4 or 5 exams. Do not waste them by taking them unprepared&lt;/li&gt;
&lt;li&gt;Go through your study material and then follow up with the exam&lt;/li&gt;
&lt;li&gt;Prepare yourself for 3 hours each and take one exam at a time in its entirety. If you try and take the exam in breaks you will lose a significant advantage&lt;/li&gt;
&lt;li&gt;Do this as close to the exam date as your schedule allows; the proximity will keep up the rigour&lt;/li&gt;
&lt;li&gt;Use the review mode repeatedly. Even if you answer the question correctly, go over the others and read why they are wrong. The reasoning reinforces your choices&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://portal.tutorialsdojo.com/courses/aws-certified-solutions-architect-professional-practice-exams/" rel="noopener noreferrer"&gt;Click here to visit Tutorials Dojo&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Ultimate AWS Certified Solutions Architect Professional 2021 by Stephane Maarek (Udemy)&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7ex57flozj85jofdu0n6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7ex57flozj85jofdu0n6.png" alt="Ultimate AWS Certified Solutions Architect Professional 2021 by Stephane Maarek (Udemy)"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Stephane Maarek is one of the best fast-paced trainers out there for AWS certifications. If you have significant experience with AWS, his courses will walk you through the exam-oriented aspects of the services.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How to use Stephane's course&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;This is a slides-only course. No UI or demos&lt;/li&gt;
&lt;li&gt;Follow up with self guided practical sessions to understand the details which Stephane skips over&lt;/li&gt;
&lt;li&gt;His course runs to 500+ slides. Go through all of them&lt;/li&gt;
&lt;li&gt;Review the slides before the exams. It'll take you up to 2 days to review. Plan your schedule&lt;/li&gt;
&lt;li&gt;This is a long course, playback at 1.25x or 1.5x to save time&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://click.linksynergy.com/link?id=nrzuVB4kH54&amp;amp;offerid=507388.2789348&amp;amp;type=2&amp;amp;murl=https%3A%2F%2Fwww.udemy.com%2Fcourse%2Faws-solutions-architect-professional%2F" rel="noopener noreferrer"&gt;Visit Stephane Maarek's course on Udemy (Aff Link)&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.udemy.com/course/aws-solutions-architect-professional/" rel="noopener noreferrer"&gt;Visit Stephane Maarek's course on Udemy (Direct Link)&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Study notes
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Go through all the services whether you have used them or not&lt;/li&gt;
&lt;li&gt;Write down the characteristics and limitations of every service (e.g. EBS gp2 is 3 IOPS/GB, Lambda can run for a max of 15 mins, Lambda vCPU increases with RAM)&lt;/li&gt;
&lt;li&gt;Map services to keywords. e.g. NLB = expensive, fast, millions of connections, SQS = decouple services. Questions revolve around these keywords and if you use process of elimination as a last resort, these keywords will see you through&lt;/li&gt;
&lt;li&gt;Take notes, lots and lots of notes. I filled an entire book with simple one liner notes as the course was going on&lt;/li&gt;
&lt;li&gt;There are many services with similar names but different purposes. Clearly write their attributes to distinguish e.g. Different types of storage gateway options (file vs volume vs tape)&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  How to read the questions
&lt;/h2&gt;

&lt;p&gt;Almost all questions are scenario based and require you to read them quickly. I devised a strategy that worked for me. &lt;/p&gt;

&lt;p&gt;After the first 120 minutes, I had gone through the entire set of questions, completely skipped 5 because they were too long, and marked 12 for review. This gave me a full hour to review the ones I wasn't sure of.&lt;/p&gt;

&lt;p&gt;The strategy is as follows:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You get about 2.5 minutes per question. Do not watch your time while reading the question. Focus only on the question especially the keywords &amp;amp; requirements&lt;/li&gt;
&lt;li&gt;If the question is long, or has answers that are long or involve multiple services - mark it for review and skip it&lt;/li&gt;
&lt;li&gt;Get through all the small ones first. &lt;strong&gt;Small&lt;/strong&gt; means you can read it quickly and it does not involve several services&lt;/li&gt;
&lt;li&gt;With each question:

&lt;ul&gt;
&lt;li&gt;Read the scenario and requirement &lt;strong&gt;two times&lt;/strong&gt;. Pick up keywords and pivot your answer around them. Read aloud to yourself if you have to. I can't count the number of times I ignored the main requirement (cost-effective, low overhead, speed, on-premises) &lt;/li&gt;
&lt;li&gt;Eliminate the wrong ones immediately. Wrong ones have clear issues, e.g. archiving data to ephemeral storage&lt;/li&gt;
&lt;li&gt;Questions can have single word differences, read each word carefully e.g. instance-data vs user-data&lt;/li&gt;
&lt;li&gt;Once you choose an answer, read it again and map it to each requirement in the scenario. Sometimes the obvious answer will miss a requirement&lt;/li&gt;
&lt;li&gt;If you don't know the answer at all, work through process of elimination. Remove the obviously wrong ones and pick the one closest. There will almost always be clues that will lead you to the most probable answer&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;h2&gt;
  
  
  Scheduling the exam
&lt;/h2&gt;

&lt;p&gt;I personally recommend choosing a morning slot. This exam, unlike the associate ones, is not about memory; it is about being alert. Stick to a morning timing, factoring in your commute to the center. Try to wrap up the exam before lunch so you won't get hunger pangs at the tail end. I chose the 10:30 AM slot, which gave me ample time to do a mock test, get ready, and travel.&lt;/p&gt;

&lt;h2&gt;
  
  
  Exam day - Game time!
&lt;/h2&gt;

&lt;p&gt;I woke up early on exam day and, after a short run, took the final timed exam from Tutorials Dojo.&lt;/p&gt;

&lt;p&gt;Notes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Have a good breakfast. You'll be in the exam for 3 hours; you need the energy&lt;/li&gt;
&lt;li&gt;Have a quick walk/run&lt;/li&gt;
&lt;li&gt;Use maps to check the traffic and the location of the center. Call the center if you can't find the way on maps. Ask about parking beforehand&lt;/li&gt;
&lt;li&gt;Try to arrive at least 20 minutes early&lt;/li&gt;
&lt;li&gt;Relax yourself with deep breathing&lt;/li&gt;
&lt;li&gt;There is no set exam time/batch. Once you arrive, they log you in and off you go&lt;/li&gt;
&lt;li&gt;You cannot carry anything in the exam center (no watches, wallets, keys, etc). They'll probably give you a locker for your valuables&lt;/li&gt;
&lt;li&gt;Don't forget your ID cards and other prerequisites as mentioned in the email&lt;/li&gt;
&lt;li&gt;Bathroom breaks are allowed but the exam timer does not pause or stop. Best to take care of things before the exam starts&lt;/li&gt;
&lt;li&gt;Exam centers generally have smaller monitors and uncomfortable chairs &amp;amp; keyboards. Be mentally prepared to not be physically comfortable&lt;/li&gt;
&lt;li&gt;Ask for a pen &amp;amp; paper to write notes. You can only mark the question for review, not the choices. I avoided using the comments feature of the exam. Instead, I wrote down the choices I was confused between on paper and revisited them after I finished the other questions. For the questions I skipped, I wrote their numbers down in big bold letters&lt;/li&gt;
&lt;li&gt;Remain calm. The first few questions and minutes can be overwhelming. Once you start answering your pace picks up (TD exams will also help pace you)&lt;/li&gt;
&lt;li&gt;There is no penalty for wrong answers, so make sure you attempt all of them&lt;/li&gt;
&lt;li&gt;If you finish early, review only the most ambiguous questions. I tried reviewing the majority and wasted a lot of time&lt;/li&gt;
&lt;li&gt;One of the issues I had was that if I had to review a question, I had to go through it again, losing precious minutes. Train yourself with practice exams&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And that's it. If you've practiced and read enough, this should be a breeze. You'll see a &lt;strong&gt;PASS&lt;/strong&gt; grade immediately.&lt;/p&gt;

&lt;p&gt;Best of luck!&lt;/p&gt;

</description>
      <category>aws</category>
      <category>certification</category>
      <category>career</category>
      <category>cloud</category>
    </item>
    <item>
      <title>The perfect dev environment using AWS for large databases</title>
      <dc:creator>Arif Amirani</dc:creator>
      <pubDate>Tue, 16 Mar 2021 07:23:12 +0000</pubDate>
      <link>https://dev.to/arifamirani/the-perfect-dev-environment-using-aws-for-large-databases-jkc</link>
      <guid>https://dev.to/arifamirani/the-perfect-dev-environment-using-aws-for-large-databases-jkc</guid>
      <description>&lt;p&gt;During the start of a product, the database is quite small, mostly empty or populated with dummy data. Developers prefer having a local database instance running their favorite database like PostgreSQL or MySQL. This frees them from dependencies, and has the added advantage of having near zero latency.&lt;/p&gt;

&lt;p&gt;As the product grows, the database inevitably grows in size too. In some cases, reproducing production issues also requires running the code against a copy of the production database. This often leads to development databases being created by copying a dump from production and then importing it. Since database dumps are text, they compress well, resulting in a relatively small file to copy over. But importing the dump can still take a long time and put a heavy load on the dev machine as it rebuilds tables and indexes. As long as your data is relatively small, this process may be perfectly acceptable.&lt;/p&gt;

&lt;p&gt;Our team also went through the journey of setting up an acceptable strategy to work with our ever-growing database on Cassandra.&lt;/p&gt;

&lt;h2&gt;
  
  
  Early days
&lt;/h2&gt;

&lt;p&gt;As we started out, the database was small. The feature set was growing faster than usage, which meant we had to resort to dummy data. We wrote a script to generate dummy data based on several parameters and use cases. It worked fairly well and kept everything on the developer's device.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Farif.co%2Fposts%2Fdev-environment-for-large-databases%2Flocal-db-gen.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Farif.co%2Fposts%2Fdev-environment-for-large-databases%2Flocal-db-gen.png" alt="Local DB"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For production issues, since the customers were beta users and friendly, we occasionally, with consent, restored a copy of our daily backup to reproduce issues in our development environment.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Farif.co%2Fposts%2Fdev-environment-for-large-databases%2Fprod-issues-p1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Farif.co%2Fposts%2Fdev-environment-for-large-databases%2Fprod-issues-p1.png" alt="Prod restore"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Zero to one
&lt;/h2&gt;

&lt;p&gt;MetroLeads' data strategy is built on a schemaless model. Although we know the shape of the data, we can rarely count on it being complete or consistent. As the data passes through the data pipeline, it gets normalized for consumption by various stakeholders. When the product added features such as 3rd party integrations and bring-your-own-vendor models, the situation was exacerbated. Data grew exponentially, becoming too large to be housed on a developer's laptop. The need for a consistent database for multiple microservices to run against increased further.&lt;/p&gt;

&lt;p&gt;To combat this situation we introduced the "shrinking process". Shrinking was a way to run a particular backup through a processing pipeline that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Removed all customer data&lt;/li&gt;
&lt;li&gt;Anonymized or scrubbed the remaining data to remove traces of any PII (Personally Identifiable Information)&lt;/li&gt;
&lt;li&gt;Left testing sandboxes intact&lt;/li&gt;
&lt;li&gt;Reduced the number of events by time, e.g. keeping only the events of the last 7 days&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Farif.co%2Fposts%2Fdev-environment-for-large-databases%2Fshrink-process.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Farif.co%2Fposts%2Fdev-environment-for-large-databases%2Fshrink-process.png" alt="Shrink Process"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Developers have their own production accounts which are connected to dummy vendors and QA communication stacks. For example, we use fake data generators such as &lt;a href="https://www.mockaroo.com/" rel="noopener noreferrer"&gt;Mockaroo&lt;/a&gt; (my personal favorite) and a combination of Excel functions to generate large import payloads.&lt;/p&gt;

&lt;p&gt;MetroLeads provides a sandbox for each organization. This makes it easy for us to remove all customer organization data in one go during the shrinking process.&lt;/p&gt;

&lt;p&gt;Over time we extended the shrinking process to target a specific organization. This allowed us to run the same scrubbing process on a customer account without compromising security or data policies.&lt;/p&gt;

&lt;h2&gt;
  
  
  Scaling up
&lt;/h2&gt;

&lt;p&gt;We ran this process on our tools server, which grew from an &lt;code&gt;m3.medium&lt;/code&gt; to an &lt;code&gt;m4.2xlarge&lt;/code&gt; to handle simultaneous requests. We ended up timing the runs so that the load did not overwhelm the server. This was not scalable, although it worked for quite a long time.&lt;/p&gt;

&lt;p&gt;Subsequently, we hit an upper limit and decided to run the process as a daily cron job, with the developer accounts refreshed every night. To avoid this becoming yet another tool developers had to learn, we hooked it up as a Slack bot. Developers could simply run a Slack bot command, and within a few hours the database would be made available on S3.&lt;/p&gt;
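&lt;p&gt;The bot side amounts to a thin dispatcher that acknowledges immediately and queues the real work. A sketch with hypothetical command syntax and queue interface, not our actual bot:&lt;/p&gt;

```python
def handle_shrink_command(text, enqueue):
    """Handle a `/shrink [org-id]` style Slack command (hypothetical syntax).

    `enqueue` stands in for whatever job queue runs the shrink; the handler
    only acknowledges, since the actual shrink takes hours and the result
    lands on S3.
    """
    parts = text.split()
    job = {"type": "shrink", "org": parts[0] if parts else "all"}
    enqueue(job)
    target = job["org"] if job["org"] != "all" else "all organizations"
    return f"Shrink queued for {target}. The database will be available on S3 in a few hours."
```

&lt;p&gt;Responding immediately and doing the work asynchronously also fits Slack's expectation that slash commands acknowledge quickly.&lt;/p&gt;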

&lt;h3&gt;
  
  
  The shared solution
&lt;/h3&gt;

&lt;p&gt;With the advent of the AWS Mumbai region, latency was no longer a problem. Our development team is mainly in Pune, and latency to the Mumbai region was now below 40ms. That was acceptable for our use case because our earlier architecture decisions lent themselves to handling this latency.&lt;/p&gt;

&lt;p&gt;MetroLeads uses a combination of a source-of-truth DB (Cassandra), a search engine (Elasticsearch), a message bus (RabbitMQ), and local caching (Redis). We decided to set up a shared database for all developers in the Mumbai region. The idea was to host all of our databases in AWS and keep only Redis local to the developer's laptop. As expected, this worked really well for 80% of our UI-based scenarios. The event processing flow was always meant to handle delays, so that was never a problem.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Farif.co%2Fposts%2Fdev-environment-for-large-databases%2Fshared-setup.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Farif.co%2Fposts%2Fdev-environment-for-large-databases%2Fshared-setup.png" alt="Shared Setup"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We quickly changed our onboarding documentation to not require any database installations: just set up your Python servers with Redis and connect to the shared AWS server that housed all of the remaining databases.&lt;/p&gt;

&lt;h3&gt;
  
  
  The non-shared solution
&lt;/h3&gt;

&lt;p&gt;At times several developers would work on the same data set and inadvertently overwrite each other's changes. Since we already had a blueprint for setting up a remote database, we tweaked it to restore a shrunk database to any server of choice. This led to an interesting setup: developers would launch their own servers in the nearest region and restore the blueprint on them. With large databases, this had two problems:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Increased cost of running servers&lt;/li&gt;
&lt;li&gt;Stale data as developers would restore their copy less frequently&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;After a brainstorming session, we settled on the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Launch only spot instances for non-shared servers&lt;/li&gt;
&lt;li&gt;Use local port forwarding to switch between shared and non-shared servers&lt;/li&gt;
&lt;li&gt;Kill non-shared servers as early as possible&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Farif.co%2Fposts%2Fdev-environment-for-large-databases%2Fnon-shared-setup.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Farif.co%2Fposts%2Fdev-environment-for-large-databases%2Fnon-shared-setup.png" alt="Non-shared Setup"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Port forwarding was a great idea. It let us switch between development servers without changing config every time. We've seen &lt;code&gt;local.env&lt;/code&gt;, &lt;code&gt;local.dev1.env&lt;/code&gt;, &lt;code&gt;local.dev2.env&lt;/code&gt; far too many times. With a port forward, you always point to &lt;code&gt;localhost:9160&lt;/code&gt;, and depending on which server you've connected to, the traffic is routed appropriately. On a Mac, we recommend using &lt;a href="https://coretunnel.app/" rel="noopener noreferrer"&gt;Core Tunnel&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Farif.co%2Fposts%2Fdev-environment-for-large-databases%2Fport-forward.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Farif.co%2Fposts%2Fdev-environment-for-large-databases%2Fport-forward.png" alt="Port Forward Switching"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  QA and other environments
&lt;/h2&gt;

&lt;p&gt;This post focuses only on the developer setup; however, the QA environments are a subset of the same problem. Spinning up a new instance with dummy, test, or production data is a cinch using the above techniques.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;We built these approaches over a number of years, refining and tweaking them for our use case. There is a lot of room to improve and we are constantly striving to learn from others and get better at it.&lt;/p&gt;

&lt;p&gt;Hope you enjoyed the article.&lt;/p&gt;

</description>
      <category>productivity</category>
      <category>devops</category>
      <category>aws</category>
      <category>database</category>
    </item>
    <item>
      <title>Project Sentiment Tracker Using AWS Comprehend + Serverless in 1 hour</title>
      <dc:creator>Arif Amirani</dc:creator>
      <pubDate>Mon, 08 Mar 2021 16:39:07 +0000</pubDate>
      <link>https://dev.to/arifamirani/project-sentiment-tracker-using-aws-comprehend-serverless-in-1-hour-3gj3</link>
      <guid>https://dev.to/arifamirani/project-sentiment-tracker-using-aws-comprehend-serverless-in-1-hour-3gj3</guid>
      <description>&lt;p&gt;&lt;strong&gt;Project is available at &lt;a href="https://st.arif.work/"&gt;Sentiment Tracker&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--K-UoBbKJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://arif.co/posts/project-sentiment-tracker-aws-comprehend-and-serverless/st-home.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--K-UoBbKJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://arif.co/posts/project-sentiment-tracker-aws-comprehend-and-serverless/st-home.png" alt="Sentiment Tracker Homepage"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Github is a great place to contribute and create amazing projects. As a project scales, so do the team and the issues from users. Most are clear, concise, and constructive to the project. Developers leave comments and answer these questions. For a large team, the overall sentiment around the project needs to be tracked to keep it positive. The goal of this project was to provide a fast way to keep a signboard of the overall project sentiment.&lt;/p&gt;

&lt;p&gt;AWS Comprehend provides an NLP service that offers insights on contextual text, including sentiment analysis. I wanted to experiment with how quickly we could go from idea to deployment for a trivial service without having to manage any resources. Below is my experience solving the project sentiment problem with AWS at the heart of the solution.&lt;/p&gt;

&lt;h2&gt;
  
  
  Project Sentiment Tracker
&lt;/h2&gt;

&lt;p&gt;You can view it at: &lt;a href="https://st.arif.work/"&gt;https://st.arif.work/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--MZNLiiEn--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://arif.co/posts/project-sentiment-tracker-aws-comprehend-and-serverless/moods.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--MZNLiiEn--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://arif.co/posts/project-sentiment-tracker-aws-comprehend-and-serverless/moods.png" alt="Sentiments"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Processes the recent issues and comments in the project&lt;/li&gt;
&lt;li&gt;Analyzes the text for sentiment patterns&lt;/li&gt;
&lt;li&gt;Provides a quick badge for project owners&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Technology choices
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Analyzer - AWS Comprehend
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://aws.amazon.com/comprehend/"&gt;AWS Comprehend&lt;/a&gt; was the obvious choice for setting up this low volume requirement. The API is easy and clear with no surprises on the usage pricing. Although it can do a lot more, I used it purely for the sentiment requirement. In the future, I hope to use &lt;em&gt;Keyphrase Extraction&lt;/em&gt;, &lt;em&gt;Topic Modeling&lt;/em&gt; and &lt;em&gt;Entity Recognition&lt;/em&gt; to automatically tag issues and comments and even possibly assign them automatically.&lt;/p&gt;

&lt;h3&gt;
  
  
  Platform - Serverless - AWS Lambda
&lt;/h3&gt;

&lt;p&gt;The API endpoints are simple. Apart from a small OAuth workflow, everything can be achieved without any added complexity. A perfect recipe for a serverless solution using &lt;a href="https://aws.amazon.com/lambda/"&gt;AWS Lambda&lt;/a&gt;. I used &lt;a href="https://github.com/zappa/Zappa"&gt;Zappa&lt;/a&gt; and &lt;a href="https://palletsprojects.com/p/flask/"&gt;Flask&lt;/a&gt; as my deployment framework.&lt;/p&gt;

&lt;h3&gt;
  
  
  Database - DynamoDB
&lt;/h3&gt;

&lt;p&gt;I've never used DynamoDB in production before. There are some nuances to its usage, but for this small use case it was the easiest way to get going. I used &lt;a href="https://boto3.amazonaws.com/v1/documentation/api/latest/index.html"&gt;boto3&lt;/a&gt; to connect to AWS services, which made it a breeze.&lt;/p&gt;

&lt;h3&gt;
  
  
  Github Connector - PyGithub
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://pygithub.readthedocs.io/"&gt;PyGithub&lt;/a&gt; is the defacto wrapper for &lt;a href="https://developer.github.com/v3"&gt;Github v3 API&lt;/a&gt;. It has a 1:1 mapping to all APIs that Github exposes.&lt;/p&gt;

&lt;h3&gt;
  
  
  Presentation - Shields.io
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://shields.io/"&gt;Shields.io&lt;/a&gt; is the gold standard in presenting data points for a Github project. Apart from the vast array of services it supports (I was really surprised how much comes out of the box!), it also allows to integrate your own endpoint. The tracker integrates with shields to give the repo owner a badge that shows the current sentitment of the project.&lt;/p&gt;

&lt;h2&gt;
  
  
  Design
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--tO12lx93--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://arif.co/posts/project-sentiment-tracker-aws-comprehend-and-serverless/arch.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--tO12lx93--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://arif.co/posts/project-sentiment-tracker-aws-comprehend-and-serverless/arch.png" alt="Architecture"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The idea is simple:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Submit a Github repo name&lt;/li&gt;
&lt;li&gt;Authenticate with Github&lt;/li&gt;
&lt;li&gt;Tracker connects with Github and fetches the most recent issues and comments&lt;/li&gt;
&lt;li&gt;Submit them to AWS Comprehend for analysis&lt;/li&gt;
&lt;li&gt;Store response in cache for 24 hours&lt;/li&gt;
&lt;li&gt;Shields.io makes badge requests, which are served from the sentiment cache&lt;/li&gt;
&lt;/ul&gt;
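&lt;p&gt;The caching step in the flow above can be sketched as a tiny TTL map. The tracker persists results (DynamoDB in my case); this in-memory version only shows the 24-hour expiry logic, with the clock injectable for testing:&lt;/p&gt;

```python
import time

class SentimentCache:
    """Cache analysis results for a fixed TTL (24 hours for the tracker)."""

    def __init__(self, ttl_seconds=24 * 3600, clock=time.monotonic):
        self.ttl = ttl_seconds
        self.clock = clock  # injectable for testing
        self._store = {}

    def put(self, repo, value):
        self._store[repo] = (value, self.clock())

    def get(self, repo):
        entry = self._store.get(repo)
        if entry is None:
            return None
        value, stored_at = entry
        if self.clock() - stored_at > self.ttl:
            del self._store[repo]  # expired: force a fresh analysis
            return None
        return value
```

&lt;p&gt;Serving badge requests from this cache keeps the Shields endpoint fast and avoids re-running Comprehend on every badge render.&lt;/p&gt;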

&lt;h2&gt;
  
  
  Source code
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://github.com/kontinuity/github-sentiment-tracker"&gt;https://github.com/kontinuity/github-sentiment-tracker&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  TODO
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;OAuth support for private repos&lt;/li&gt;
&lt;li&gt;Refresh repo sentiment on updates&lt;/li&gt;
&lt;li&gt;Circumvent Github rate limits&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Hope you enjoyed the article.&lt;/p&gt;

</description>
      <category>programming</category>
      <category>machinelearning</category>
      <category>showdev</category>
      <category>aws</category>
    </item>
    <item>
      <title>Pi-hole on Raspberry Pi with IPv6</title>
      <dc:creator>Arif Amirani</dc:creator>
      <pubDate>Tue, 23 Feb 2021 03:35:23 +0000</pubDate>
      <link>https://dev.to/arifamirani/pi-hole-on-raspberry-pi-with-ipv6-307</link>
      <guid>https://dev.to/arifamirani/pi-hole-on-raspberry-pi-with-ipv6-307</guid>
      <description>&lt;p&gt;I've had a Raspberry Pi 4B sitting in my cabinet for a few months now. I dusted it off and realized that the SD card was busted. Got a replacement 64GB U3 A2 card and got it up and running with Ubuntu server. The primary intended use was to run docker with DB containers that I use for my side projects such as Postgres/MySQL/MongoDB.&lt;/p&gt;

&lt;p&gt;My current home network consists of several routers for WiFi reachability. I try to ensure 5GHz coverage for all my devices. Due to the concrete walls, a single router does not suffice, hence a home-made mesh.&lt;/p&gt;

&lt;p&gt;My earlier ISP was an IPv4-only provider. Easy to set up; it worked out of the box. To safeguard other users of the network, I use a privacy-sensitive DNS provider, Adguard. The issue with Adguard DNS was the lack of geo proximity: it caused large ping times, as the closest CDN was never picked.&lt;/p&gt;

&lt;p&gt;When I moved to my new ISP, which supports IPv6, I decided to also deploy Pi-hole along with my docker containers.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Farif.co%2Fposts%2Fpi-hole-raspberry-pi-ipv6%2Fnetwork-layout.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Farif.co%2Fposts%2Fpi-hole-raspberry-pi-ipv6%2Fnetwork-layout.png" alt="Home Network Layout"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Installing Pi-hole
&lt;/h3&gt;

&lt;p&gt;Pi-hole is quite easy to deploy, especially with the auto install script.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;-sSL&lt;/span&gt; https://install.pi-hole.net | bash
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once I followed the steps, I had Pi-hole running out of the box. As suggested, you have to change your router settings to send the IP address of your Pi-hole server as your local DNS. Done and done.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Farif.co%2Fposts%2Fpi-hole-raspberry-pi-ipv6%2Fpi-mob-home.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Farif.co%2Fposts%2Fpi-hole-raspberry-pi-ipv6%2Fpi-mob-home.png" alt="Pi-hole home"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  The Problem
&lt;/h3&gt;

&lt;p&gt;Ads still show!&lt;/p&gt;

&lt;p&gt;To ensure that I wasn't fudging up DHCP/DNS discovery I used scutil:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;scutil &lt;span class="nt"&gt;--dns&lt;/span&gt;

resolver &lt;span class="c"&gt;#1&lt;/span&gt;
  search domain[0] : lan
  nameserver[0] : 2405:201:xxxx:xxxx:xxx:xxxx:4209:2091
  nameserver[1] : fe80::a204:60ff:fe43:3005%en0
  nameserver[2] : 192.168.2.50
  if_index : 6 &lt;span class="o"&gt;(&lt;/span&gt;en0&lt;span class="o"&gt;)&lt;/span&gt;
  flags    : Scoped, Request A records, Request AAAA records
  reach    : 0x00020002 &lt;span class="o"&gt;(&lt;/span&gt;Reachable,Directly Reachable Address&lt;span class="o"&gt;)&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;IPv4 looked good. &lt;code&gt;192.168.2.50&lt;/code&gt; is the IP of my pi-hole server.&lt;/p&gt;

&lt;p&gt;However, the IPv6 nameserver didn't belong to Pi-hole. The ISP default was stateless IPv6 configuration, with DNS being advertised by the router, so clients were using my upstream (ISP) DNS server.&lt;/p&gt;

&lt;p&gt;To fix this, I made the following changes:&lt;/p&gt;

&lt;h3&gt;
  
  
  IPv6 on pi-hole
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Get the IPv6 address of your Pi-hole server
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ip addr show
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Enable IPv6 support on pi-hole&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Farif.co%2Fposts%2Fpi-hole-raspberry-pi-ipv6%2Fph-slaac.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Farif.co%2Fposts%2Fpi-hole-raspberry-pi-ipv6%2Fph-slaac.png" alt="IPv6 Slaac"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Set &lt;code&gt;IPV6_ADDRESS&lt;/code&gt; in &lt;code&gt;/etc/pihole/setupVars.conf&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This should've already been done during the installation of Pi-hole, but if it isn't, you can set it manually using the address above.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;IPV4_ADDRESS&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;192.168.2.50/24
&lt;span class="nv"&gt;IPV6_ADDRESS&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;2405:xxx:xxxx:xxxx::50/64
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;A neat trick I learnt while assigning static IPv6 addresses is to use the same suffix as the IPv4 address. Easy to remember and deal with.&lt;/p&gt;

&lt;p&gt;Restart FTL:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;systemctl restart pihole-FTL
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Enable AAAA query analysis for Pi-hole&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Pi-hole by default will only analyse &lt;code&gt;A&lt;/code&gt; queries, so we need to add support for &lt;code&gt;AAAA&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;DBINTERVAL&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;60
&lt;span class="nv"&gt;MAXDBDAYS&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;7
&lt;span class="nv"&gt;AAAA_QUERY_ANALYSIS&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;yes

&lt;/span&gt;&lt;span class="nv"&gt;PRIVACYLEVEL&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I also added throttling parameters to avoid wearing out the SD card.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Refresh Pi-hole&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Reboot the server, then fetch the gravity lists again:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pihole &lt;span class="nt"&gt;-g&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Stateful DHCPv6&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Farif.co%2Fposts%2Fpi-hole-raspberry-pi-ipv6%2Fjio-dhcpv6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Farif.co%2Fposts%2Fpi-hole-raspberry-pi-ipv6%2Fjio-dhcpv6.png" alt="ISP Stateful DHCPv6"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;My attempts at serving both DHCPv4 and DHCPv6 from Pi-hole failed. This is probably due to my incorrect assumptions about SLAAC + RA.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Debug Pi-hole&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;At the end, I made sure everything was kosher by running the diagnosis on Pi-hole:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pihole &lt;span class="nt"&gt;-d&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Setting up Unbound
&lt;/h3&gt;

&lt;p&gt;I do not like using any of the open DNS servers. They have their place and provide a lot of value, but they're not for me. I prefer a combo of ad blocking and recursive DNS. Unbound works perfectly with Pi-hole, and setting it up is again very simple.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;apt &lt;span class="nb"&gt;install &lt;/span&gt;unbound
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Update the unbound config to support Pi-hole&lt;/p&gt;

&lt;p&gt;&lt;code&gt;/etc/unbound/unbound.conf.d/pi-hole.conf&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Make a note of the port: the default port &lt;code&gt;53&lt;/code&gt; is used by Pi-hole, so Unbound must listen on a different one.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;server&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;interface&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;0.0.0.0&lt;/span&gt;
    &lt;span class="na"&gt;interface&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;::0&lt;/span&gt;
    &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;5335&lt;/span&gt;

    &lt;span class="na"&gt;access-control&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;192.168.2.0/24 allow&lt;/span&gt;
    &lt;span class="na"&gt;access-control&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;127.0.0.0 allow&lt;/span&gt;
    &lt;span class="na"&gt;access-control&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;2001:db8:dead:beef::/48 allow&lt;/span&gt;

    &lt;span class="c1"&gt;# unbound optimisation&lt;/span&gt;
    &lt;span class="na"&gt;num-threads&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;4&lt;/span&gt;
    &lt;span class="na"&gt;msg-cache-slabs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;16&lt;/span&gt;
    &lt;span class="na"&gt;rrset-cache-slabs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;16&lt;/span&gt;
    &lt;span class="na"&gt;infra-cache-slabs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;16&lt;/span&gt;
    &lt;span class="na"&gt;key-cache-slabs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;16&lt;/span&gt;
    &lt;span class="na"&gt;outgoing-range&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;206&lt;/span&gt;
    &lt;span class="na"&gt;so-rcvbuf&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;4m&lt;/span&gt;
    &lt;span class="na"&gt;so-sndbuf&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;4m&lt;/span&gt;
    &lt;span class="na"&gt;so-reuseport&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;yes&lt;/span&gt;
    &lt;span class="na"&gt;rrset-cache-size&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;100m&lt;/span&gt;
    &lt;span class="na"&gt;msg-cache-size&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;50m&lt;/span&gt;

    &lt;span class="c1"&gt;# unbound security&lt;/span&gt;
    &lt;span class="na"&gt;do-ip4&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;yes&lt;/span&gt;
    &lt;span class="na"&gt;do-ip6&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;yes&lt;/span&gt;
    &lt;span class="na"&gt;do-udp&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;yes&lt;/span&gt;
    &lt;span class="na"&gt;do-tcp&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;yes&lt;/span&gt;
    &lt;span class="na"&gt;cache-max-ttl&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;86400&lt;/span&gt;
    &lt;span class="na"&gt;cache-min-ttl&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;3600&lt;/span&gt;
    &lt;span class="na"&gt;hide-identity&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;yes&lt;/span&gt;
    &lt;span class="na"&gt;hide-version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;yes&lt;/span&gt;
    &lt;span class="na"&gt;minimal-responses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;yes&lt;/span&gt;
    &lt;span class="na"&gt;prefetch&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;yes&lt;/span&gt;
    &lt;span class="na"&gt;use-caps-for-id&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;yes&lt;/span&gt;
    &lt;span class="na"&gt;verbosity&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
    &lt;span class="na"&gt;harden-glue&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;yes&lt;/span&gt;
    &lt;span class="na"&gt;harden-dnssec-stripped&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;yes&lt;/span&gt;

    &lt;span class="c1"&gt;# download from ftp://ftp.internic.net/domain/named.cache&lt;/span&gt;
    &lt;span class="c1"&gt;# root-hints: "/var/lib/unbound/root.hints"&lt;/span&gt;

    &lt;span class="c1"&gt;# Ensure privacy of local IP ranges&lt;/span&gt;
    &lt;span class="na"&gt;private-domain&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;lan"&lt;/span&gt;
    &lt;span class="na"&gt;private-address&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;192.168.0.0/16&lt;/span&gt;
    &lt;span class="na"&gt;private-address&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;169.254.0.0/16&lt;/span&gt;
    &lt;span class="na"&gt;private-address&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;172.16.0.0/12&lt;/span&gt;
    &lt;span class="c1"&gt;# private-address: 10.0.0.0/8&lt;/span&gt;
    &lt;span class="na"&gt;private-address&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;fd00::/8&lt;/span&gt;
    &lt;span class="na"&gt;private-address&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;fe80::/10&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Tweak it as per your needs and restart Unbound&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;systemctl restart unbound
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Check unbound
&lt;/h3&gt;

&lt;p&gt;Will pass:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;root@pi-desktop:~# dig sigok.verteiltesysteme.net @127.0.0.1 &lt;span class="nt"&gt;-p&lt;/span&gt; 5335
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="p"&gt;;&lt;/span&gt; &amp;lt;&amp;lt;&lt;span class="o"&gt;&amp;gt;&amp;gt;&lt;/span&gt; DiG 9.16.6-Ubuntu &amp;lt;&amp;lt;&lt;span class="o"&gt;&amp;gt;&amp;gt;&lt;/span&gt; sigok.verteiltesysteme.net @127.0.0.1 &lt;span class="nt"&gt;-p&lt;/span&gt; 5335
&lt;span class="p"&gt;;;&lt;/span&gt; global options: +cmd
&lt;span class="p"&gt;;;&lt;/span&gt; Got answer:
&lt;span class="p"&gt;;;&lt;/span&gt; -&amp;gt;&amp;gt;HEADER&lt;span class="o"&gt;&amp;lt;&amp;lt;-&lt;/span&gt; &lt;span class="no"&gt;opcode&lt;/span&gt;&lt;span class="sh"&gt;: QUERY, status: NOERROR, id: 48863
;; flags: qr rd ra ad; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;sigok.verteiltesysteme.net.    IN  A

;; ANSWER SECTION:
sigok.verteiltesysteme.net. 3600 IN A   134.91.78.139

;; Query time: 499 msec
;; SERVER: 127.0.0.1#5335(127.0.0.1)
;; WHEN: Tue Feb 23 08:53:17 IST 2021
;; MSG SIZE  rcvd: 71
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Will fail:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;root@pi-desktop:~# dig sigfail.verteiltesysteme.net @127.0.0.1 &lt;span class="nt"&gt;-p&lt;/span&gt; 5335
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="p"&gt;;&lt;/span&gt; &amp;lt;&amp;lt;&lt;span class="o"&gt;&amp;gt;&amp;gt;&lt;/span&gt; DiG 9.16.6-Ubuntu &amp;lt;&amp;lt;&lt;span class="o"&gt;&amp;gt;&amp;gt;&lt;/span&gt; sigfail.verteiltesysteme.net @127.0.0.1 &lt;span class="nt"&gt;-p&lt;/span&gt; 5335
&lt;span class="p"&gt;;;&lt;/span&gt; global options: +cmd
&lt;span class="p"&gt;;;&lt;/span&gt; Got answer:
&lt;span class="p"&gt;;;&lt;/span&gt; -&amp;gt;&amp;gt;HEADER&lt;span class="o"&gt;&amp;lt;&amp;lt;-&lt;/span&gt; &lt;span class="no"&gt;opcode&lt;/span&gt;&lt;span class="sh"&gt;: QUERY, status: SERVFAIL, id: 56203
;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;sigfail.verteiltesysteme.net.  IN  A

;; Query time: 1147 msec
;; SERVER: 127.0.0.1#5335(127.0.0.1)
;; WHEN: Tue Feb 23 08:53:43 IST 2021
;; MSG SIZE  rcvd: 57
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  DHCP Confirmation
&lt;/h3&gt;

&lt;p&gt;For IPv4 at least, you can run Pi-hole's built-in debug tool to confirm that DHCP is working correctly.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;root@pi-desktop:~# pihole-FTL dhcp-discover
Scanning all your interfaces &lt;span class="k"&gt;for &lt;/span&gt;DHCP servers
Timeout: 10 seconds

&lt;span class="k"&gt;*&lt;/span&gt; Received 301 bytes from eth0:192.168.2.1
  Offered IP address: 192.168.2.113
  Server IP address: 192.168.2.1
  Relay-agent IP address: N/A
  BOOTP server: &lt;span class="o"&gt;(&lt;/span&gt;empty&lt;span class="o"&gt;)&lt;/span&gt;
  BOOTP file: &lt;span class="o"&gt;(&lt;/span&gt;empty&lt;span class="o"&gt;)&lt;/span&gt;
  DHCP options:
   Message &lt;span class="nb"&gt;type&lt;/span&gt;: DHCPOFFER &lt;span class="o"&gt;(&lt;/span&gt;2&lt;span class="o"&gt;)&lt;/span&gt;
   server-identifier: 192.168.2.1
   lease-time: 3600 &lt;span class="o"&gt;(&lt;/span&gt; 1h &lt;span class="o"&gt;)&lt;/span&gt;
   renewal-time: 1800 &lt;span class="o"&gt;(&lt;/span&gt; 30m &lt;span class="o"&gt;)&lt;/span&gt;
   rebinding-time: 3150 &lt;span class="o"&gt;(&lt;/span&gt; 52m 30s &lt;span class="o"&gt;)&lt;/span&gt;
   netmask: 255.255.255.0
   broadcast: 192.168.2.255
   router: 192.168.2.1
   dns-server: 192.168.2.50
   dns-server: 192.168.2.50
   domain-name: &lt;span class="s2"&gt;"lan"&lt;/span&gt;
   &lt;span class="nt"&gt;---&lt;/span&gt; end of options &lt;span class="nt"&gt;---&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Switching Pi-hole to use unbound
&lt;/h3&gt;

&lt;p&gt;In the Pi-hole admin interface, disable all upstream DNS servers and add the custom DNS server you set up for Unbound.&lt;br&gt;
Use the loopback addresses for Unbound:&lt;/p&gt;

&lt;p&gt;IPv4 - &lt;code&gt;127.0.0.1#5335&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;IPv6 - &lt;code&gt;::1#5335&lt;/code&gt;&lt;/p&gt;
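&lt;p&gt;If you prefer the command line to the web UI, the same change is a small config edit. A sketch assuming a typical Pi-hole v5 install; verify the path and variable names on your version:&lt;/p&gt;

```shell
# /etc/pihole/setupVars.conf -- point Pi-hole at the local unbound resolver
PIHOLE_DNS_1=127.0.0.1#5335
PIHOLE_DNS_2=::1#5335
```

&lt;p&gt;Run &lt;code&gt;pihole restartdns&lt;/code&gt; afterwards so the change takes effect.&lt;/p&gt;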

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Farif.co%2Fposts%2Fpi-hole-raspberry-pi-ipv6%2Fpi-unbound.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Farif.co%2Fposts%2Fpi-hole-raspberry-pi-ipv6%2Fpi-unbound.png" alt="Unbound with Pi-hole"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;That should be it! Hope you enjoyed reading the article.&lt;/p&gt;

</description>
      <category>beginners</category>
      <category>opensource</category>
      <category>linux</category>
      <category>learning</category>
    </item>
    <item>
      <title>5 Underrated React Design Systems for 2021</title>
      <dc:creator>Arif Amirani</dc:creator>
      <pubDate>Fri, 19 Feb 2021 07:33:45 +0000</pubDate>
      <link>https://dev.to/arifamirani/5-underrated-react-design-systems-for-2021-4mnj</link>
      <guid>https://dev.to/arifamirani/5-underrated-react-design-systems-for-2021-4mnj</guid>
      <description>&lt;p&gt;Design systems give your budding project a jump start and more importantly a structure when the project continues to grow. These systems bring in a level of sophistication of thinking and uniformity. Their value lies beyond pre-made CSS/JS assets. Identifying the right design system in the initial phases is crucial for progress. I employ various metrics to pick one such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Community support &amp;amp; acceptance&lt;/li&gt;
&lt;li&gt;Documentation&lt;/li&gt;
&lt;li&gt;a11y/i18n/l10n&lt;/li&gt;
&lt;li&gt;Component library&lt;/li&gt;
&lt;li&gt;Commit rate&lt;/li&gt;
&lt;li&gt;Backers&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;However, when it is time to play (i.e., a throwaway project for ML/AI or a Raspberry Pi), you should experiment with the budding ones. Below are 5 of my current experimental design systems, both known and lesser-known. They may not check all the boxes above, but they have potential and are super fun. They are not Bootstrap, Ant Design, or Material.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://github.com/grommet/grommet" rel="noopener noreferrer"&gt;Grommet&lt;/a&gt;
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;part design system, part framework, and all awesome&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Grommet is a React-based framework that provides accessibility, modularity, responsiveness, and theming in a tidy package.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="http://grommet.io/" rel="noopener noreferrer"&gt;Demo&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/grommet/grommet" rel="noopener noreferrer"&gt;Github&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Farif.co%2Fposts%2F5-underrated-react-design-systems-2021%2Fgrommet.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Farif.co%2Fposts%2F5-underrated-react-design-systems-2021%2Fgrommet.png" alt="Grommet v2"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://github.com/zendeskgarden/react-components" rel="noopener noreferrer"&gt;Garden React Components - Zendesk&lt;/a&gt;
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;The source of truth for tools, standards, and best practices when building products at Zendesk.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Garden is a minimal and clean design system that provides a formidable base for react projects.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://garden.zendesk.com/components" rel="noopener noreferrer"&gt;Demo&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/zendeskgarden/react-components" rel="noopener noreferrer"&gt;Github&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Farif.co%2Fposts%2F5-underrated-react-design-systems-2021%2Fzendesk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Farif.co%2Fposts%2F5-underrated-react-design-systems-2021%2Fzendesk.png" alt="Garden React Components - Zendesk"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://github.com/uswds/uswds" rel="noopener noreferrer"&gt;U.S. Web Design System (USWDS)&lt;/a&gt;
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;The U.S. Web Design System helps the federal government build fast, accessible, mobile-friendly websites.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The United States Web Design System includes a library of open source UI components and a visual style guide for U.S. federal government websites.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://designsystem.digital.gov/" rel="noopener noreferrer"&gt;Demo&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/uswds/uswds" rel="noopener noreferrer"&gt;Github&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Farif.co%2Fposts%2F5-underrated-react-design-systems-2021%2Fuswds.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Farif.co%2Fposts%2F5-underrated-react-design-systems-2021%2Fuswds.png" alt="caption="&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://priceline.github.io/design-system/" rel="noopener noreferrer"&gt;Priceline One&lt;/a&gt;
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;Priceline.com Design System&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;In order to create a consistently great experience for our users, the design system is meant to be the single source of truth for user interface standards for both designers and developers.&lt;/p&gt;

&lt;p&gt;Built off of the work of previous efforts, this project intends to consolidate those ideas into a living, well-documented, and growing system.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://priceline.github.io/design-system/" rel="noopener noreferrer"&gt;Demo&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/pricelinelabs/design-system" rel="noopener noreferrer"&gt;Github&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Farif.co%2Fposts%2F5-underrated-react-design-systems-2021%2Fpriceline.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Farif.co%2Fposts%2F5-underrated-react-design-systems-2021%2Fpriceline.png" alt="Priceline One"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://github.com/JetBrains/ring-ui" rel="noopener noreferrer"&gt;Ring UI - JetBrains&lt;/a&gt;
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;A collection of JetBrains Web UI components&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This collection of UI components aims to provide all of the necessary building blocks for web-based products built inside JetBrains, as well as third-party plugins developed for JetBrains' products.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://jetbrains.github.io/ring-ui/master/index.html" rel="noopener noreferrer"&gt;Demo&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/JetBrains/ring-ui" rel="noopener noreferrer"&gt;Github&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Farif.co%2Fposts%2F5-underrated-react-design-systems-2021%2Fringui.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Farif.co%2Fposts%2F5-underrated-react-design-systems-2021%2Fringui.png" alt="Ring UI - JetBrains"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Hope you had fun reading!&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>react</category>
      <category>css</category>
      <category>javascript</category>
    </item>
    <item>
      <title>AWS Cost Optimizations - The Easy Parts</title>
      <dc:creator>Arif Amirani</dc:creator>
      <pubDate>Sun, 14 Feb 2021 13:07:41 +0000</pubDate>
      <link>https://dev.to/arifamirani/aws-cost-optimizations-the-easy-parts-3ed9</link>
      <guid>https://dev.to/arifamirani/aws-cost-optimizations-the-easy-parts-3ed9</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Moving to or starting with AWS (or any cloud provider) comes with an implicit assumption that your business will pay what it uses. Although technically true, most businesses ignore the human aspect. More often than not, developers will make assumptions while allocating resources and end up with:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Overprovisioned resources&lt;/li&gt;
&lt;li&gt;Unused resources&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In this article, based on our experiences and on AWS events and whitepapers, we outline a few approaches to combat the ramifications of these decisions at any stage of product growth. The earlier we learn to manage them continuously, the more cost savings we can achieve.&lt;/p&gt;

&lt;p&gt;Remember, saving costs means nothing if you are either spending too much time on it or not achieving your business goals.&lt;/p&gt;

&lt;p&gt;Cost optimization is not a one-time activity. You cannot throw people at the problem once and hope the results will outlive your business goals. Organizations, and the smaller teams within them, are more fluid than ever, adapting to the external (Porter's Five Forces) and internal forces being applied to the business. Thus it is prudent to treat cost optimization as a continuous process.&lt;/p&gt;

&lt;p&gt;One such useful technique is the Deming Cycle. PDCA (Plan-Do-Check-Act) is an iterative four-step management method used in business for the control and continuous improvement of processes and products.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Farif.co%2Fposts%2Faws-cost-optimizations-the-easy-parts%2Fdeming-cycle-pdca.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Farif.co%2Fposts%2Faws-cost-optimizations-the-easy-parts%2Fdeming-cycle-pdca.jpg" alt="Deming Cycle"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For our cost optimization problem the four steps map to various opportunities and tools provided by AWS.&lt;/p&gt;

&lt;h2&gt;
  
  
  Plan
&lt;/h2&gt;

&lt;p&gt;When developers launch new services or environments, cost should be one of the metrics that is planned and predicted. Without compromising on functionality, a budget should be set along with any predictions on usage. This can help the team decide on the right instances and regions from the get-go.&lt;/p&gt;

&lt;p&gt;In practice, a time horizon of a quarter works great for new services that have no historical data on usage patterns. AWS also provides auto scaling features that can help in quickly reacting to sudden spikes of usage.&lt;/p&gt;

&lt;p&gt;Even if the service is already deployed, minor tweaks in auto scale capacity (ECS or EC2), will allow you to get started on the cost optimization journey.&lt;/p&gt;
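&lt;p&gt;For instance, tightening the scaling envelope of an existing EC2 Auto Scaling group is a one-liner with the AWS CLI; the group name and sizes below are placeholders:&lt;/p&gt;

```shell
# Cap an existing Auto Scaling group (hypothetical name and sizes)
aws autoscaling update-auto-scaling-group \
  --auto-scaling-group-name web-tier-asg \
  --min-size 1 \
  --desired-capacity 2 \
  --max-size 4
```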

&lt;h4&gt;
  
  
  Tools to use
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Auto scaling&lt;/li&gt;
&lt;li&gt;Load tests&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Do
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;you cannot improve what you cannot measure&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;You cannot tweak the cost of your services unless you know how they are being utilized and what your spend pattern looks like. Identify your biggest spends and focus on them first.&lt;/p&gt;

&lt;h4&gt;
  
  
  Use Tags
&lt;/h4&gt;

&lt;p&gt;The first tool to help you measure is tags. Tags are a crucial and prolific way of categorizing and inspecting the utilization of your cloud resources at every level. Tags can be set on resources such as instances, volumes, EIPs, AMIs, LBs, etc.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Farif.co%2Fposts%2Faws-cost-optimizations-the-easy-parts%2Ftags-create.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Farif.co%2Fposts%2Faws-cost-optimizations-the-easy-parts%2Ftags-create.jpg" alt="Tags Everywhere"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Standardize tags across your teams and business units. Add the bare minimum dimensions, with each tag carrying information that is meaningful to the org, such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;environment - &lt;code&gt;prod&lt;/code&gt;, &lt;code&gt;staging&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;team - &lt;code&gt;frontend&lt;/code&gt;, &lt;code&gt;backend&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;geography - &lt;code&gt;eu&lt;/code&gt;, &lt;code&gt;apac&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These tags will give you different vantage points of your current and forecasted costs.&lt;/p&gt;
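&lt;p&gt;Applying the standard dimensions from the CLI keeps them consistent across resources; the instance ID below is a placeholder:&lt;/p&gt;

```shell
# Apply the standard tag dimensions to an instance (hypothetical ID/values)
aws ec2 create-tags \
  --resources i-0123456789abcdef0 \
  --tags Key=environment,Value=prod Key=team,Value=backend Key=geography,Value=eu
```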

&lt;h4&gt;
  
  
  Auto scaling
&lt;/h4&gt;

&lt;p&gt;Auto scaling plays an important role in mitigating unpredicted service behavior. Although we may plan for them, sudden spikes need to be handled without compromising service performance or user experience. Set up auto scaling where possible with EC2, Fargate, or ECS containers.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Farif.co%2Fposts%2Faws-cost-optimizations-the-easy-parts%2Fec2-auto-scaling.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Farif.co%2Fposts%2Faws-cost-optimizations-the-easy-parts%2Fec2-auto-scaling.png" alt="Auto scaling"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Compute Optimizer
&lt;/h4&gt;

&lt;blockquote&gt;
&lt;p&gt;AWS Compute Optimizer recommends optimal AWS resources for your workloads to reduce costs and improve performance by using machine learning to analyze historical utilization metrics. &lt;strong&gt;Compute Optimizer is available to you at no additional charge&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The best part about Compute Optimizer is that it not only recommends instance types but also visualizes a what-if scenario, helping you understand how your workload would have performed on the recommended instance type.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Farif.co%2Fposts%2Faws-cost-optimizations-the-easy-parts%2Faws-compute-optimizer-diagram.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Farif.co%2Fposts%2Faws-cost-optimizations-the-easy-parts%2Faws-compute-optimizer-diagram.png" alt="AWS Compute Optimizer"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Apart from AWS CO suggestions, you can also consider optimizing on cheaper instance alternatives such as the &lt;a href="https://aws.amazon.com/ec2/graviton/" rel="noopener noreferrer"&gt;AWS Graviton&lt;/a&gt; instances. If the workload can be run on ARM then you can get more performance at a cheaper monthly price.&lt;/p&gt;

&lt;h4&gt;
  
  
  Consolidate control planes
&lt;/h4&gt;

&lt;p&gt;If you're running EKS or ECS, you've got a control plane that is orchestrating the data plane. With EKS, you pay $0.10 per hour for each Amazon EKS cluster that you create. You can use a single Amazon EKS cluster to run multiple applications by taking advantage of Kubernetes namespaces and IAM security policies.&lt;/p&gt;

&lt;p&gt;Hence, it's a good idea to analyse all your clusters and consolidate them into one, or as few as your architecture permits. You might still split by team or by application type: if you run many machine learning jobs alongside web applications, for example, break those into separate clusters, each using the kind of compute optimized for its workloads. The larger the clusters, the more you can share resources and bin-pack workloads onto EC2 instances.&lt;/p&gt;
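&lt;p&gt;To put the consolidation point in numbers, here is a back-of-the-envelope sketch using the $0.10 per cluster-hour rate above, integer cents, and roughly 730 hours per month:&lt;/p&gt;

```shell
# Monthly EKS control-plane cost at $0.10/cluster-hour, in integer cents
rate_cents=10
hours=730

cost() { echo $(( $1 * rate_cents * hours )); }   # monthly cost in cents

before=$(cost 5)   # five separate clusters
after=$(cost 1)    # consolidated into one
echo "saved: $(( (before - after) / 100 )) dollars/month"   # saved: 292 dollars/month
```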

&lt;h4&gt;
  
  
  On-demand vs spot
&lt;/h4&gt;

&lt;p&gt;If your architecture is fault tolerant and can restart failed processes or jobs, you can move away from on-demand instances to spot instances. This has personally helped us shave off almost 60% of our primary costs. To maintain fault tolerance and uptime, we run a hybrid of 20% on-demand and 80% spot instances.&lt;/p&gt;
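&lt;p&gt;As a sanity check on that figure, the blended saving of a 20/80 on-demand/spot split can be computed with illustrative placeholder prices (not real AWS rates), assuming spot runs about 70% cheaper than on-demand:&lt;/p&gt;

```shell
# Back-of-the-envelope blended savings, integer cents per hour
on_demand_cents=10   # placeholder on-demand price per instance-hour
spot_cents=3         # assumed ~70% discount
fleet=100

blended=$(( fleet * 20 / 100 * on_demand_cents + fleet * 80 / 100 * spot_cents ))
all_on_demand=$(( fleet * on_demand_cents ))
savings_pct=$(( 100 - 100 * blended / all_on_demand ))
echo "blended: ${blended} cents/h, savings: ${savings_pct}%"   # savings: 56%
```

&lt;p&gt;With these assumptions the fleet costs 56% less than running everything on-demand, which is in the ballpark of the "almost 60%" above.&lt;/p&gt;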

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Farif.co%2Fposts%2Faws-cost-optimizations-the-easy-parts%2Fon-demand-vs-spot-savings.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Farif.co%2Fposts%2Faws-cost-optimizations-the-easy-parts%2Fon-demand-vs-spot-savings.png" alt="Cost savings by type"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Enable alerts and healthchecks
&lt;/h4&gt;

&lt;p&gt;Your SLAs still need to be met while you are optimizing. To react within your SLA, enable alerts and health checks where auto scaling does not help. Using AWS CloudWatch and similar tools, create a notification system to handle failures and spikes.&lt;/p&gt;

&lt;h4&gt;
  
  
  Tools to use
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Auto scaling&lt;/li&gt;
&lt;li&gt;AWS Compute Optimizer&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Check
&lt;/h2&gt;

&lt;p&gt;Monitoring both cost and performance variance is essential. For cost, you have two options: &lt;a href="https://aws.amazon.com/aws-cost-management/aws-cost-explorer/" rel="noopener noreferrer"&gt;Cost Explorer&lt;/a&gt; and &lt;a href="https://docs.aws.amazon.com/cur/latest/userguide/what-is-cur.html" rel="noopener noreferrer"&gt;Usage Reports&lt;/a&gt;. Cost Explorer is closer to real time and gives you insight into your current spend, whereas with CUR (Cost and Usage Reports), the current month's billing data is delivered to an Amazon S3 bucket that you designate during setup. You can receive hourly, daily, or monthly reports that break out your costs by product or resource and by tags that you define yourself; AWS updates the report in your bucket at least once per day.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Farif.co%2Fposts%2Faws-cost-optimizations-the-easy-parts%2Fcost-by-tag.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Farif.co%2Fposts%2Faws-cost-optimizations-the-easy-parts%2Fcost-by-tag.png" alt="Cost Filtering by Tag"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can read an in-depth article on &lt;a href="https://medium.com/better-programming/aws-cost-allocation-tags-and-cost-reduction-8a0e46e39e75" rel="noopener noreferrer"&gt;AWS Cost Allocation Tags and Cost Reduction&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You should set alerts on your spend that fire automatically when your predicted costs are exceeded.&lt;/p&gt;
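&lt;p&gt;Such an alarm can also be created from the CLI; a minimal sketch, where the $500 threshold and the SNS topic ARN are placeholders (billing metrics live in us-east-1 and require billing alerts to be enabled on the account):&lt;/p&gt;

```shell
# Alarm when the month-to-date estimated charge exceeds $500 (placeholder
# threshold; the SNS topic ARN is hypothetical)
aws cloudwatch put-metric-alarm \
  --region us-east-1 \
  --alarm-name monthly-billing-alarm \
  --namespace AWS/Billing \
  --metric-name EstimatedCharges \
  --dimensions Name=Currency,Value=USD \
  --statistic Maximum \
  --period 21600 \
  --evaluation-periods 1 \
  --threshold 500 \
  --comparison-operator GreaterThanThreshold \
  --alarm-actions arn:aws:sns:us-east-1:123456789012:billing-alerts
```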

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Farif.co%2Fposts%2Faws-cost-optimizations-the-easy-parts%2Fbilling-alarm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Farif.co%2Fposts%2Faws-cost-optimizations-the-easy-parts%2Fbilling-alarm.png" alt="Billing alarms"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Tools to use
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Cost Explorer&lt;/li&gt;
&lt;li&gt;Usage Reports&lt;/li&gt;
&lt;li&gt;AWS Cloudwatch&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Act
&lt;/h2&gt;

&lt;p&gt;Once your optimization efforts have yielded results, it is time to make them the norm and perform regular hygiene actions.&lt;/p&gt;

&lt;p&gt;First off, after a cycle of activities (quarter, product release, Black Friday sale), clear your unused resources. Identify them by running reports or checking usage, such as ALBs with no target groups, unused EIPs, very old snapshots, S3 buckets that have no new objects being added, unmounted EBS volumes, etc. Remove unhealthy or unused instances that may be up due to target capacity in Spot Requests.&lt;/p&gt;
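&lt;p&gt;A few of these orphans can be listed straight from the CLI, for example (region and profile flags omitted; query shapes assume a recent AWS CLI):&lt;/p&gt;

```shell
# Unattached ("available") EBS volumes -- candidates for snapshot-and-delete
aws ec2 describe-volumes \
  --filters Name=status,Values=available \
  --query 'Volumes[].{ID:VolumeId,Size:Size,Created:CreateTime}'

# Elastic IPs not associated with any instance or network interface
aws ec2 describe-addresses \
  --query 'Addresses[?AssociationId==`null`].PublicIp'
```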

&lt;p&gt;Standardize the clusters and ensure new services are launched within the same clusters. Use AWS CDK or CloudFormation to provide templates to developers.&lt;/p&gt;

&lt;h4&gt;
  
  
  Tools to use
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;CloudFormation&lt;/li&gt;
&lt;li&gt;AWS CDK&lt;/li&gt;
&lt;li&gt;AWS Cloudwatch&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Finance folks who look at the bill once and pay it do not understand the business value of each line item. It is up to the developers, who understand the workloads and the compute environment, to continuously optimize and reduce costs.&lt;/p&gt;

&lt;p&gt;Hope you enjoyed reading the article.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>devops</category>
      <category>cloud</category>
      <category>devjournal</category>
    </item>
    <item>
      <title>Top 10 tools for the full stack developer</title>
      <dc:creator>Arif Amirani</dc:creator>
      <pubDate>Sun, 31 Jan 2021 18:11:05 +0000</pubDate>
      <link>https://dev.to/arifamirani/top-10-tools-for-the-full-stack-developer-3gh2</link>
      <guid>https://dev.to/arifamirani/top-10-tools-for-the-full-stack-developer-3gh2</guid>
      <description>&lt;p&gt;Mac! I absolutely love MacOS for development. It gives me the power of Unix with a whole lot of convenience. Can it be replaced with Unix? YES! Do I want to? No. As a full stack developer and CTO of my company, I spend 80% of my day in three apps:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Shell/Terminal&lt;/li&gt;
&lt;li&gt;IDE&lt;/li&gt;
&lt;li&gt;Browser&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These three are the same no matter which OS I use. I see MacOS simply as a shell for my Unix environment. On a fresh install, I'll set up iTerm and oh-my-zsh, and install my most used packages from brew. I prefer brew over DMG or other package installations.&lt;/p&gt;

&lt;p&gt;Listed below are the top 10 packages that I use with zsh and iTerm. Not all are brew-based, though. Allons-y mon ami!&lt;/p&gt;

&lt;h2&gt;
  
  
  awless
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Farif.co%2Fposts%2Ftop-10-tools-full-stack-development%2Fawless.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Farif.co%2Fposts%2Ftop-10-tools-full-stack-development%2Fawless.png" alt="Awless"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/wallix/awless" rel="noopener noreferrer"&gt;awless&lt;/a&gt; is a powerful, innovative and small surface command line interface (CLI) to manage Amazon Web Services.&lt;/p&gt;

&lt;p&gt;awless is my go-to tool for avoiding the clunky AWS console. Although awless does not cover all services, the most important ones are covered, i.e. EC2 and RDS. I can start/stop my instances, check on health, etc. across regions.&lt;/p&gt;

&lt;h2&gt;
  
  
  z - jump around
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Farif.co%2Fposts%2Ftop-10-tools-full-stack-development%2Fz.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Farif.co%2Fposts%2Ftop-10-tools-full-stack-development%2Fz.gif" alt="Z"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/rupa/z" rel="noopener noreferrer"&gt;z&lt;/a&gt; tracks your most used directories, based on 'frecency'.&lt;/p&gt;

&lt;p&gt;After a short learning phase, z will take you to the most 'frecent' directory that matches ALL of the regexes given on the command line, in order.&lt;/p&gt;

&lt;p&gt;Working on several projects, front-end and back-end, requires moving between directories very rapidly. We base our project names on &lt;a href="https://en.wikipedia.org/wiki/Marvel_Cinematic_Universe" rel="noopener noreferrer"&gt;MCU&lt;/a&gt; characters, which works very well for z.&lt;/p&gt;

&lt;p&gt;z will take some time before it becomes really good at understanding where you want to jump. Once it does, though, it saves a ton of time as you use ever shorter target names.&lt;/p&gt;

&lt;p&gt;E.g. I'd start by using&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;z bifrost
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;But now it's down to&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;z bi
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  aria2
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Farif.co%2Fposts%2Ftop-10-tools-full-stack-development%2Faria2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Farif.co%2Fposts%2Ftop-10-tools-full-stack-development%2Faria2.png" alt="aria2"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://aria2.github.io/" rel="noopener noreferrer"&gt;aria2&lt;/a&gt; is a lightweight multi-protocol &amp;amp; multi-source command-line download utility. It supports HTTP/HTTPS, FTP, SFTP, BitTorrent and Metalink. aria2 can be manipulated via built-in JSON-RPC and XML-RPC interfaces.&lt;/p&gt;

&lt;p&gt;Downloading large files such as backups or log files is a regular occurrence. I prefer to use multiple segments and resume capability. Surprisingly, browsers can only resume downloads, and in some cases even that does not work as intended. I use aria2 for all large files and have a zsh alias that allows for multiple segments.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;alias a2c='aria2c --max-connection-per-server=4 --min-split-size=1M --file-allocation=none '
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  bat
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Farif.co%2Fposts%2Ftop-10-tools-full-stack-development%2Fbat.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Farif.co%2Fposts%2Ftop-10-tools-full-stack-development%2Fbat.jpg" alt="bat"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/sharkdp/bat" rel="noopener noreferrer"&gt;bat&lt;/a&gt; is a &lt;code&gt;cat&lt;/code&gt; clone that supports syntax highlighting for a large number of programming and markup languages. It is also pretty smart when it comes to showing binary files. Supports line numbering, pagination, etc. All of this is quite helpful while browsing through files.&lt;/p&gt;

&lt;p&gt;I have &lt;code&gt;cat&lt;/code&gt; aliased to &lt;code&gt;bat&lt;/code&gt;, and &lt;code&gt;ccat&lt;/code&gt; aliased to the plain &lt;code&gt;cat&lt;/code&gt; for the times when I just need to echo and &lt;code&gt;pbcopy&lt;/code&gt; something.&lt;/p&gt;

&lt;h2&gt;
  
  
  ripgrep/rg/ack
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Farif.co%2Fposts%2Ftop-10-tools-full-stack-development%2Frg.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Farif.co%2Fposts%2Ftop-10-tools-full-stack-development%2Frg.jpg" alt="rg"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;ripgrep is a line-oriented search tool that recursively searches your current directory for a regex pattern. By default, ripgrep will respect your .gitignore and automatically skip hidden files/directories and binary files. ripgrep has first class support on Windows, macOS and Linux, with binary downloads available for every release. ripgrep is similar to other popular search tools like The Silver Searcher, ack and grep.&lt;/p&gt;

&lt;p&gt;I moved from standard grep to rg mostly for code search but it has now taken over all of my search use cases.&lt;/p&gt;

&lt;h2&gt;
  
  
  gr (mixu/gr)
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Farif.co%2Fposts%2Ftop-10-tools-full-stack-development%2Fgr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Farif.co%2Fposts%2Ftop-10-tools-full-stack-development%2Fgr.png" alt="gr"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/mixu/gr" rel="noopener noreferrer"&gt;gr&lt;/a&gt; is a multiple git repository management tool. Managing all the repos in one command is a time saving tool. Check status of all repos? One command. Update all repos? One command. It hasn't been updated in a while but it still works as intended.&lt;/p&gt;

&lt;p&gt;With support for auto repo detection, tags and many other features, it is an indispensable tool in my shell kit.&lt;/p&gt;
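&lt;p&gt;The daily workflow is roughly the following (command shapes as I recall them from the gr README; double-check against &lt;code&gt;gr --help&lt;/code&gt; for your version):&lt;/p&gt;

```shell
# short status of every tracked repo in one shot
gr status

# run an arbitrary git command across all repos
gr git pull

# or only across repos carrying a particular tag
gr @work git pull
```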

&lt;h2&gt;
  
  
  httpie
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Farif.co%2Fposts%2Ftop-10-tools-full-stack-development%2Fhttpie.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Farif.co%2Fposts%2Ftop-10-tools-full-stack-development%2Fhttpie.gif" alt="httpie"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://httpie.io/" rel="noopener noreferrer"&gt;HTTPie&lt;/a&gt; (aitch-tee-tee-pie) is a user-friendly command-line HTTP client for the API era. It comes with JSON support, syntax highlighting, persistent sessions, wget-like downloads, plugins, and more.&lt;/p&gt;

&lt;p&gt;I use Postman only for complicated use cases. My go-to tool for regular HTTP API calls is the command-line HTTPie. It works amazingly well and does not have the GUI clunkiness of Postman.&lt;/p&gt;
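&lt;p&gt;A typical call looks like this. In HTTPie's syntax, &lt;code&gt;key=value&lt;/code&gt; pairs become JSON body fields, &lt;code&gt;key:value&lt;/code&gt; become headers, and &lt;code&gt;key==value&lt;/code&gt; become query parameters; the endpoint below is HTTPie's own demo server:&lt;/p&gt;

```shell
# POST a JSON body with a custom header; HTTPie pretty-prints the response
http POST https://pie.dev/post X-API-Token:secret name=Arif role=admin

# a plain GET with a query parameter
http GET https://pie.dev/get search==servicemesh
```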

&lt;h2&gt;
  
  
  parallel
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://www.gnu.org/software/parallel/" rel="noopener noreferrer"&gt;GNU parallel&lt;/a&gt; is a command line tool for running jobs in parallel. It allows you to do a ton of stuff when working with multiple files. It goes beyond mere execution. The docs can be difficult to work with initially but once you get the hang of it, it becomes essential.&lt;/p&gt;

&lt;p&gt;One of my use cases is redeploying containers across multiple AWS clusters. I could use a for loop, but parallel makes it much easier. Example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;parallel aws --region us-east-1 ecs update-service --service apps-svc-prod --force-new-deployment --cluster {} ::: prod-a prod-b prod-c
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The above command will run the same command with 3 different arguments in parallel. You get the power of replacement with parallelism.&lt;/p&gt;
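&lt;p&gt;For comparison, here is the plain-shell version the for-loop remark alludes to: background each call, then wait for all of them. &lt;code&gt;deploy&lt;/code&gt; is a hypothetical stand-in for the real &lt;code&gt;aws ecs update-service&lt;/code&gt; invocation so the sketch runs anywhere:&lt;/p&gt;

```shell
# Hypothetical stand-in for the aws CLI call, so the sketch is runnable.
deploy() { echo "redeploying cluster $1"; }

# The for-loop equivalent of the parallel one-liner: fan out, then wait.
for cluster in prod-a prod-b prod-c; do
  deploy "$cluster" &
done
wait
```

&lt;p&gt;With parallel you also get job-count limits (&lt;code&gt;-j&lt;/code&gt;) and retries for free, which is where the one-liner starts to pay off over the loop.&lt;/p&gt;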

&lt;h2&gt;
  
  
  mtr
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Farif.co%2Fposts%2Ftop-10-tools-full-stack-development%2Fmtr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Farif.co%2Fposts%2Ftop-10-tools-full-stack-development%2Fmtr.png" alt="mtr"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://linux.die.net/man/8/mtr" rel="noopener noreferrer"&gt;mtr&lt;/a&gt; is a powerful tool that enables administrators to diagnose and isolate networking errors and provide reports of network status to upstream providers. MTR represents an evolution of the traceroute command by providing a greater data sample as if augmenting traceroute with ping output.&lt;/p&gt;

&lt;h2&gt;
  
  
  exa
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Farif.co%2Fposts%2Ftop-10-tools-full-stack-development%2Fexa.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Farif.co%2Fposts%2Ftop-10-tools-full-stack-development%2Fexa.png" alt="exa"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/ogham/exa" rel="noopener noreferrer"&gt;exa&lt;/a&gt; is a modern replacement for the venerable file-listing command-line program ls that ships with Unix and Linux operating systems, giving it more features and better defaults. It uses colours to distinguish file types and metadata. It knows about symlinks, extended attributes, and Git. And it’s small, fast, and just one single binary.&lt;/p&gt;

&lt;p&gt;I've configured exa as a drop-in replacement, with many option combinations as aliases.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;l='exa --long --all --header --classify --group --color-scale --ignore-glob=".git|.DS_Store"'
la='exa --long --all --header --classify --group --color-scale --ignore-glob=".git|.DS_Store" --extended --tree --level=1'
lac='exa --long --all --header --classify --group --color-scale --ignore-glob=".git|.DS_Store" --sort=accessed --time=accessed'
lar='exa --long --all --header --classify --group --color-scale --ignore-glob=".git|.DS_Store" --extended --tree --level=2'
lara='exa --long --all --header --classify --group --color-scale --ignore-glob=".git|.DS_Store" --extended --tree'
larr='exa --long --all --header --classify --group --color-scale --ignore-glob=".git|.DS_Store" --extended --tree --level=3'
larrr='exa --long --all --header --classify --group --color-scale --ignore-glob=".git|.DS_Store" --extended --tree --level=4'
larrrr='exa --long --all --header --classify --group --color-scale --ignore-glob=".git|.DS_Store" --extended --tree --level=5'
lb='exa --long --all --header --classify --group --color-scale --ignore-glob=".git|.DS_Store" --sort=size'
lch='exa --long --all --header --classify --group --color-scale --ignore-glob=".git|.DS_Store" --sort=changed --time=changed'
lcr='exa --long --all --header --classify --group --color-scale --ignore-glob=".git|.DS_Store" --sort=created --time=created'
ll='exa -lhgF'
lm='exa --long --all --header --classify --group --color-scale --ignore-glob=".git|.DS_Store" --sort=modified --time=modified'
lr='exa --long --all --header --classify --group --color-scale --ignore-glob=".git|.DS_Store|node_modules" --git-ignore --tree --level=2'
lra='exa --long --all --header --classify --group --color-scale --ignore-glob=".git|.DS_Store|node_modules" --git-ignore --tree'
lrr='exa --long --all --header --classify --group --color-scale --ignore-glob=".git|.DS_Store|node_modules" --git-ignore --tree --level=3'
lrrr='exa --long --all --header --classify --group --color-scale --ignore-glob=".git|.DS_Store|node_modules" --git-ignore --tree --level=4'
lrrrr='exa --long --all --header --classify --group --color-scale --ignore-glob=".git|.DS_Store|node_modules" --git-ignore --tree --level=5'
lt='exa --long --all --header --classify --group --color-scale --ignore-glob=".git|.DS_Store" --modified --changed --created --accessed'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Hope you guys liked the post!&lt;/p&gt;

</description>
      <category>beginners</category>
      <category>opensource</category>
      <category>productivity</category>
      <category>devjournal</category>
    </item>
    <item>
      <title>Service Mesh - Introduction</title>
      <dc:creator>Arif Amirani</dc:creator>
      <pubDate>Sun, 31 Jan 2021 14:23:07 +0000</pubDate>
      <link>https://dev.to/arifamirani/service-mesh-introduction-4pbd</link>
      <guid>https://dev.to/arifamirani/service-mesh-introduction-4pbd</guid>
      <description>&lt;p&gt;As a development team matures and moves across various stages of code organization and system design, the deployment layout also adapts. Change can be a function of product growth, team size changes, technology decisions, or a combination.&lt;/p&gt;

&lt;p&gt;The general progression is from a monolith to a handful of homogeneous microservices. As the product grows more diverse and the team size grows, these microservices become heterogeneous. They can use different languages, servers, API endpoints, etc.&lt;/p&gt;

&lt;p&gt;An explosion in microservices requires each team to deal with the same set of issues, and service meshes provide a non-intrusive, albeit expensive, way to solve them.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is a service mesh
&lt;/h2&gt;

&lt;p&gt;Wikipedia describes it as:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;In software architecture, a service mesh is a dedicated infrastructure layer for facilitating service-to-service communications between microservices, often using a sidecar proxy.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;A sidecar proxy is an application design pattern which abstracts certain features, such as inter-service communications, monitoring and security, away from the main architecture to ease the tracking and maintenance of the application as a whole.&lt;/p&gt;

&lt;p&gt;A sidecar will sit on the same node (or pod) as the service instance and externalize certain network traffic related functionality. &lt;/p&gt;

&lt;p&gt;In practice, it would look something like the architecture below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Farif.co%2Fposts%2Fservice-mesh-introduction%2Fsm-concept.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Farif.co%2Fposts%2Fservice-mesh-introduction%2Fsm-concept.jpg" alt="Source: AWS Whitepaper"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Having such a dedicated communication layer can provide a number of benefits, such as providing observability into communications, providing secure connections, or automating retries and backoff for failed requests.&lt;/p&gt;

&lt;h2&gt;
  
  
  Key benefits
&lt;/h2&gt;

&lt;p&gt;Service meshes provide 3 key benefits to the services they connect:&lt;/p&gt;

&lt;h3&gt;
  
  
  Telemetry
&lt;/h3&gt;

&lt;p&gt;Microservices distribute the fabric of the application into manageable pieces. However, the service landscape behaves as a single unit. Data passes across the service landscape to serve a single request. The ability to see the data for the same request across services as a single unit is desirable for debugging, troubleshooting, and analysis. This is called observability.&lt;/p&gt;

&lt;p&gt;The service mesh intercepts all traffic for a service and hence is in a unique position to report telemetry data to a central server for each request. Visualization dashboards can be connected to the telemetry data to get an x-ray of service performance at a granular level.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Farif.co%2Fposts%2Fservice-mesh-introduction%2Fservice-detail-grafana.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Farif.co%2Fposts%2Fservice-mesh-introduction%2Fservice-detail-grafana.jpg" alt="Service Detail"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Full dashboard:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Farif.co%2Fposts%2Fservice-mesh-introduction%2Fdashboard-with-traffic.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Farif.co%2Fposts%2Fservice-mesh-introduction%2Fdashboard-with-traffic.png" alt="Full dashboard"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Traffic Control
&lt;/h3&gt;

&lt;p&gt;Service meshes use proxies such as Envoy to intelligently route traffic. Traffic routing allows interesting use cases such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A/B Testing&lt;/li&gt;
&lt;li&gt;Canary releases&lt;/li&gt;
&lt;li&gt;Load/Latency based traffic handling&lt;/li&gt;
&lt;li&gt;Circuit breaking&lt;/li&gt;
&lt;li&gt;Retries&lt;/li&gt;
&lt;li&gt;Timeouts&lt;/li&gt;
&lt;/ul&gt;
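&lt;p&gt;To make the canary-release item concrete, here is a sketch of a weighted route in Istio's VirtualService API. The service and subset names (&lt;code&gt;reviews&lt;/code&gt;, &lt;code&gt;v1&lt;/code&gt;/&lt;code&gt;v2&lt;/code&gt;) are hypothetical; the field names follow Istio's traffic-management docs, and applying it assumes a cluster with Istio installed:&lt;/p&gt;

```shell
# Sketch: shift 10% of traffic to a canary subset via an Istio VirtualService.
kubectl apply -f - <<'EOF'
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1
      weight: 90   # 90% of traffic stays on the stable subset
    - destination:
        host: reviews
        subset: v2
      weight: 10   # 10% canaries onto the new version
EOF
```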

&lt;p&gt;Below is the traffic route architecture for Istio. It contains Istio specific components but gives a general idea about how traffic routing works with service mesh. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Farif.co%2Fposts%2Fservice-mesh-introduction%2Fistio-architecture.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Farif.co%2Fposts%2Fservice-mesh-introduction%2Fistio-architecture.png" alt="Istio Archiecture"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The most important thing to remember about service meshes vis-à-vis traffic control is that service mesh features are destination-oriented. In other words, service meshes are well suited to balancing individual calls across a number of destination instances, but rather unsuitable for controlling traffic from a number of sources to an individual destination, or for controlling traffic across an entire service landscape, for that matter.&lt;/p&gt;

&lt;p&gt;Because service mesh control extends from Layer 4 into Layer 5 and above, some also offer development teams the ability to implement resiliency patterns like retries, timeouts and deadlines as well as more advanced patterns like circuit breaking, canary releases, and A/B releases.&lt;/p&gt;

&lt;p&gt;A service mesh focuses on east-west traffic rather than north-south. This means external communication, or providing endpoints for external consumption, is still left to traditional load balancers and API gateways.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Farif.co%2Fposts%2Fservice-mesh-introduction%2Fsm-vs-apig.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Farif.co%2Fposts%2Fservice-mesh-introduction%2Fsm-vs-apig.png" alt="Comparison of API gateway and Service mesh"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Policy Enforcement
&lt;/h3&gt;

&lt;p&gt;A service mesh also supports the implementation and enforcement of cross-cutting security requirements, such as providing service identity (via x509 certificates), enabling application-level service/network segmentation (e.g. "service A" can communicate with "service B", but not "service C"), ensuring all communication is encrypted (via TLS), and ensuring the presence of valid user-level identity tokens or "passports."&lt;/p&gt;

&lt;h2&gt;
  
  
  Challenges
&lt;/h2&gt;

&lt;p&gt;Service meshes have their own challenges, both technical and business. &lt;/p&gt;

&lt;h3&gt;
  
  
  Latency and cost
&lt;/h3&gt;

&lt;p&gt;Due to the added layers of management, performance can suffer in some cases. Case in point: Istio publishes its own latency benchmarks comparing against non-service-mesh deployments. You can read more about it at &lt;a href="https://istio.io/latest/docs/ops/deployment/performance-and-scalability/#latency-for-istio-hahahugoshortcode-s2-hbhb" rel="noopener noreferrer"&gt;Istio Performance and Scalability&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Farif.co%2Fposts%2Fservice-mesh-introduction%2Fistio-latency.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Farif.co%2Fposts%2Fservice-mesh-introduction%2Fistio-latency.jpg" alt="Istio Latency Benchmarks"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Smaller setups
&lt;/h3&gt;

&lt;p&gt;If you have a small team or a small set of microservices, a service mesh might be overkill. Reasons to hold off, and alternatives, include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;An in-process solution, a library such as &lt;a href="https://github.com/Netflix/Hystrix" rel="noopener noreferrer"&gt;Netflix Hystrix&lt;/a&gt; or &lt;a href="https://twitter.github.io/finagle/" rel="noopener noreferrer"&gt;Twitter Finagle&lt;/a&gt; might be a more appropriate and effective way to address your service communication pain points.&lt;/li&gt;
&lt;li&gt;There’s little need to abstract the details away, hence a service mesh might be unnecessary.&lt;/li&gt;
&lt;li&gt;You might want to avoid the added complexity of sidecar proxies (cost, debugging, etc.); introducing a service mesh too early in the process might be counterproductive.&lt;/li&gt;
&lt;li&gt;Immaturity of the technology in general, lack of hands-on experience, or size of the community behind solutions.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Options
&lt;/h2&gt;

&lt;p&gt;Popular service meshes include: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://istio.io/" rel="noopener noreferrer"&gt;Istio&lt;/a&gt;&lt;br&gt;
Istio is an extensible open-source service mesh built on Envoy, allowing teams to connect, secure, control, and observe services. Open-sourced in 2017, Istio is an ongoing collaboration between IBM and Google, which contributed the original components, as well as Lyft, which donated Envoy in 2017 to the Cloud Native Computing Foundation.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://linkerd.io/" rel="noopener noreferrer"&gt;Linkerd&lt;/a&gt;&lt;br&gt;
Linkerd is an "ultralight, security-first service mesh for Kubernetes," according to the website. It's a developer favorite, with incredibly easy setup (purportedly 60 seconds to install to a Kubernetes cluster). Instead of Envoy, Linkerd uses a fast and lean Rust proxy called linkerd2-proxy, which was built explicitly for Linkerd.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://www.hashicorp.com/products/consul/multi-platform-service-mesh/" rel="noopener noreferrer"&gt;Consul Connect&lt;/a&gt;&lt;br&gt;
Consul Connect, the service mesh from HashiCorp, focuses on routing and segmentation, providing service-to-service networking features through an application-level sidecar proxy. Consul Connect emphasizes application security, with proxies offering mutual Transport Layer Security (TLS) connections to applications for authorization and encryption.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://github.com/Kong/kuma" rel="noopener noreferrer"&gt;Kuma&lt;/a&gt;&lt;br&gt;
Kuma, from Kong, prides itself on being a usable service mesh alternative. Kuma is a platform-agnostic control plane built on Envoy. Kuma provides networking features to secure, observe, route, and enhance connectivity between services. Kuma supports Kubernetes in addition to virtual machines.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://docs.mae.sh/" rel="noopener noreferrer"&gt;Maesh &lt;/a&gt;&lt;br&gt;
Maesh, the container-native service mesh by Containous, bills itself as lightweight and more straightforward to use than other service meshes on the market. While other meshes build on top of Envoy, Maesh adopts Traefik, an open-source reverse proxy and load balancer.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Supporting technologies within this space include: Layer 7-aware proxies, such as Envoy, HAProxy, NGINX, and MOSN; and service mesh orchestration, visualization, and understandability tooling, such as SuperGloo, Kiali, and Dive.&lt;/p&gt;

&lt;h2&gt;
  
  
  Glossary
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;API gateway:&lt;/strong&gt; Manages all ingress (north-south) traffic into a cluster and provides additional functionality. It acts as the single entry point into a system and enables multiple APIs or services to act cohesively and provide a uniform experience to the user.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Consul:&lt;/strong&gt; A Go-based service mesh from HashiCorp.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Control plane:&lt;/strong&gt; Takes all the individual instances of the data plane (proxies) and turns them into a distributed system that can be visualized and controlled by an operator.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Data plane:&lt;/strong&gt; A proxy that conditionally translates, forwards, and observes every network packet that flows to and from a service network endpoint.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;East-West traffic:&lt;/strong&gt; Network traffic within a data center, network, or Kubernetes cluster. Traditional network diagrams were drawn with the service-to-service (inter-data center) traffic flowing from left to right (east to west) in the diagrams.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Envoy Proxy:&lt;/strong&gt; An open-source edge and service proxy, designed for cloud-native applications. Envoy is often used as the data plane within a service mesh implementation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Ingress traffic:&lt;/strong&gt; Network traffic that originates from outside the data center, network, or Kubernetes cluster.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Istio:&lt;/strong&gt; C++ (data plane) and Go (control plane)-based service mesh that was originally created by Google and IBM in partnership with the Envoy team from Lyft.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Kubernetes:&lt;/strong&gt; A CNCF-hosted container orchestration and scheduling framework that originated from Google.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Kuma:&lt;/strong&gt; A Go-based service mesh from Kong.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Linkerd:&lt;/strong&gt; A Rust (data plane) and Go (control plane) powered service mesh that was derived from an early JVM-based communication framework at Twitter.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Maesh:&lt;/strong&gt; A Go-based service mesh from Containous, the maintainers of the Traefik API gateway.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;MOSN:&lt;/strong&gt; A Go-based proxy from the Ant Financial team that implements the (Envoy) xDS APIs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;North-South traffic:&lt;/strong&gt; Network traffic entering (or ingressing) into a data center, network, or Kubernetes cluster. Traditional network diagrams were drawn with the ingress traffic entering the data center at the top of the page and flowing down (north to south) into the network.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Proxy:&lt;/strong&gt; A software system that acts as an intermediary between endpoint components.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Segmentation:&lt;/strong&gt; Dividing a network or cluster into multiple sub-networks.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Service mesh:&lt;/strong&gt; Manages all service-to-service (east-west) traffic within a distributed (potentially microservice-based) software system. It provides both functional operations, such as routing, and nonfunctional support, for example, enforcing security policies, quality of service, and rate limiting.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Service Mesh Interface (SMI):&lt;/strong&gt; A work-in-progress standard interface for service meshes deployed onto Kubernetes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Service mesh policy:&lt;/strong&gt; A specification of how a collection of services/endpoints are allowed to communicate with each other and other network endpoints.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Sidecar:&lt;/strong&gt; A deployment pattern, in which an additional process, service, or container is deployed alongside an existing service (think motorcycle sidecar).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Single pane of glass:&lt;/strong&gt; A UI or management console that presents data from multiple sources in a unified display.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Traffic shaping:&lt;/strong&gt; Modifying the flow of traffic across a network, for example, rate limiting or load shedding.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Traffic shifting:&lt;/strong&gt; Migrating traffic from one location to another.&lt;/p&gt;
&lt;/blockquote&gt;

</description>
      <category>beginners</category>
      <category>devops</category>
      <category>aws</category>
      <category>cloud</category>
    </item>
    <item>
      <title>Dart null safety - Solving the billion-dollar mistake</title>
      <dc:creator>Arif Amirani</dc:creator>
      <pubDate>Mon, 23 Nov 2020 18:29:37 +0000</pubDate>
      <link>https://dev.to/arifamirani/dart-null-safety-solving-the-billion-dollar-mistake-4bd1</link>
      <guid>https://dev.to/arifamirani/dart-null-safety-solving-the-billion-dollar-mistake-4bd1</guid>
      <description>&lt;p&gt;In a 2009 talk, &lt;strong&gt;Tony Hoare&lt;/strong&gt; traced the invention of the null pointer to his design of the Algol W language and called it a "mistake":&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;I call it my billion-dollar mistake. It was the invention of the null reference in 1965. At that time, I was designing the first comprehensive type system for references in an object oriented language (ALGOL W). My goal was to ensure that all use of references should be absolutely safe, with checking performed automatically by the compiler. But I couldn't resist the temptation to put in a null reference, simply because it was so easy to implement. This has led to innumerable errors, vulnerabilities, and system crashes, which have probably caused a billion dollars of pain and damage in the last forty years.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;small&gt;Source: &lt;a href="https://en.wikipedia.org/wiki/Void_safety"&gt;https://en.wikipedia.org/wiki/Void_safety&lt;/a&gt;&lt;/small&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  What is null safety?
&lt;/h1&gt;

&lt;p&gt;Null was devised as a special value to represent the &lt;strong&gt;intentional&lt;/strong&gt; absence of any value. This is different from an &lt;code&gt;empty string&lt;/code&gt;, a &lt;code&gt;zero int&lt;/code&gt; or &lt;code&gt;false&lt;/code&gt;. It may or may not exist.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight dart"&gt;&lt;code&gt;&lt;span class="n"&gt;Color&lt;/span&gt; &lt;span class="n"&gt;favoriteColor&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;favoriteColor&lt;/code&gt; can be either a &lt;code&gt;Color&lt;/code&gt; object or &lt;code&gt;null&lt;/code&gt;. This distinction is important to understand. Developers may incorrectly refer to the null value as:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Color object which has a null value&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;On average, 80-90% of the variables in our code will have a value. An &lt;code&gt;int&lt;/code&gt; will be initialized to a number or set before being used. This is valid, and no harm will fall upon your app. But as we all know, nulls slip through far more often than the remaining 10% would suggest:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--C1YXb5Va--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/9dcz7ljlcgm4y24h79gp.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--C1YXb5Va--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/9dcz7ljlcgm4y24h79gp.jpeg" alt="Null pointer everywhere"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To be clear, null-safety does not mean nulls are bad. On the contrary, a null value has valid use cases. For example,&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight dart"&gt;&lt;code&gt;&lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;Person&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
  &lt;span class="kt"&gt;String&lt;/span&gt; &lt;span class="n"&gt;firstName&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
  &lt;span class="kt"&gt;String&lt;/span&gt; &lt;span class="n"&gt;middleName&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
  &lt;span class="kt"&gt;String&lt;/span&gt; &lt;span class="n"&gt;lastName&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the above case, an application can assume &lt;code&gt;firstName&lt;/code&gt; and &lt;code&gt;lastName&lt;/code&gt; will be present and may perform no checks. &lt;code&gt;middleName&lt;/code&gt; may not be present, in which case, it will either be a &lt;code&gt;String&lt;/code&gt; or &lt;code&gt;null&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Null safety is crucial for avoiding human error. It provides a declarative mechanism for recording, in the types themselves, which values in the application logic can be absent.&lt;/p&gt;

&lt;p&gt;Let's contrast a code block with and without null safety. &lt;/p&gt;

&lt;p&gt;Without null safety, a typical code block would look like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight dart"&gt;&lt;code&gt;&lt;span class="kt"&gt;void&lt;/span&gt; &lt;span class="nf"&gt;middleAllCaps&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;Person&lt;/span&gt; &lt;span class="n"&gt;p&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;p&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;middleName&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;toUpperCase&lt;/span&gt;&lt;span class="o"&gt;();&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It would work in most cases, and when &lt;code&gt;middleName&lt;/code&gt; is null we would encounter the dreaded Null error:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Uncaught TypeError: Cannot read property 'toUpperCase$0' of nullError: 
TypeError: Cannot read property 'toUpperCase$0' of null
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Show the above to a developer, and you would get a staunch reply, "No worries! I got this", and a null check would appear.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight dart"&gt;&lt;code&gt;&lt;span class="kt"&gt;void&lt;/span&gt; &lt;span class="nf"&gt;middleAllCaps&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;Person&lt;/span&gt; &lt;span class="n"&gt;p&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;p&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;middleName&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="s"&gt;''&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;p&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;middleName&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;toUpperCase&lt;/span&gt;&lt;span class="o"&gt;();&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--GjlePhuw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/hoq5jeutggf4mdu3p8k3.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--GjlePhuw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/hoq5jeutggf4mdu3p8k3.jpeg" alt="NPE"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Again, this is a good answer, but as developers we forget to do this before going to production, and that is a problem.&lt;br&gt;
Why not let the language help us deal with &lt;strong&gt;potential&lt;/strong&gt; null variables?&lt;/p&gt;

&lt;p&gt;With null safety on (we'll talk about how to enable it for Dart later), we have ways of indicating "nullability".&lt;/p&gt;
&lt;h2&gt;
  
  
  Non-nullable
&lt;/h2&gt;

&lt;p&gt;By default, with null safety on, all variables are non-nullable. This will be valid for 80-90% of your code.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight dart"&gt;&lt;code&gt;
&lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;Person&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
  &lt;span class="kt"&gt;String&lt;/span&gt; &lt;span class="n"&gt;middleName&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;

&lt;span class="kt"&gt;void&lt;/span&gt; &lt;span class="nf"&gt;middleAllCaps&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;Person&lt;/span&gt; &lt;span class="n"&gt;p&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;p&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;middleName&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;toUpperCase&lt;/span&gt;&lt;span class="o"&gt;();&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The code above will not compile because we have indicated &lt;code&gt;middleName&lt;/code&gt; as non-nullable but have not initialized it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Error: Field 'middleName' should be initialized because its type 'String' doesn't allow null.
  String middleName;
           ^^^^^^^^^^
Error: Compilation failed.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;One solution is to add a constructor with a required parameter. The other is the &lt;code&gt;late&lt;/code&gt; keyword, which we will talk about later.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight dart"&gt;&lt;code&gt;&lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;Person&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="kt"&gt;String&lt;/span&gt; &lt;span class="n"&gt;middleName&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
    &lt;span class="n"&gt;Person&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;middleName&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;

&lt;span class="kt"&gt;String&lt;/span&gt; &lt;span class="nf"&gt;middleAllCaps&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;Person&lt;/span&gt; &lt;span class="n"&gt;p&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;p&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;middleName&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;toUpperCase&lt;/span&gt;&lt;span class="o"&gt;();&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In essence, Dart will perform flow analysis to ensure a non-nullable variable is initialized.&lt;/p&gt;

&lt;h2&gt;
  
  
  Nullable or the ?
&lt;/h2&gt;

&lt;p&gt;You can indicate to Dart that a variable is nullable, i.e. it may or may not hold a value. Nullable variables do have the potential of blowing up at runtime, so Dart uses flow analysis to prove that no such error can occur. If it cannot, it asks the developer to handle the case at compile time.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight dart"&gt;&lt;code&gt;&lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;Person&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
  &lt;span class="kt"&gt;String&lt;/span&gt;&lt;span class="o"&gt;?&lt;/span&gt; &lt;span class="n"&gt;middleName&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;

&lt;span class="kt"&gt;void&lt;/span&gt; &lt;span class="nf"&gt;middleAllCaps&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;Person&lt;/span&gt; &lt;span class="n"&gt;p&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;p&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;middleName&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;toUpperCase&lt;/span&gt;&lt;span class="o"&gt;();&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This would cause a compilation error.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Error: Method 'toUpperCase' cannot be called on 'String?' because it is potentially null.
  return p.middleName.toUpperCase();
                        ^^^^^^^^^^^
Error: Compilation failed.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Why? The &lt;code&gt;?&lt;/code&gt; indicates that the variable can have a null value. Dart's flow analysis will identify a potential null error and halt compilation. The developer is expected to explicitly handle the condition such that &lt;code&gt;p.middleName&lt;/code&gt; is not accessed or read when the value is null. Or in other words, add a null check.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight dart"&gt;&lt;code&gt;&lt;span class="kt"&gt;void&lt;/span&gt; &lt;span class="nf"&gt;middleAllCaps&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;Person&lt;/span&gt; &lt;span class="n"&gt;p&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;p&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;middleName&lt;/span&gt;&lt;span class="o"&gt;?.&lt;/span&gt;&lt;span class="na"&gt;toUpperCase&lt;/span&gt;&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="o"&gt;??&lt;/span&gt; &lt;span class="s"&gt;''&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;So far so good! Dart's flow analysis will detect the null check and continue compilation.&lt;/p&gt;
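&lt;p&gt;An explicit null check also satisfies flow analysis: inside the checked branch, Dart promotes the variable from &lt;code&gt;String?&lt;/code&gt; to &lt;code&gt;String&lt;/code&gt;. A minimal sketch (the names here are illustrative):&lt;/p&gt;

```dart
String describe(String? name) {
  if (name == null) {
    return 'anonymous';
  }
  // Flow analysis has promoted name from String? to String here,
  // so calling toUpperCase() needs no null-aware operator.
  return name.toUpperCase();
}
```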

&lt;h2&gt;
  
  
  Nullable force override or the !
&lt;/h2&gt;

&lt;p&gt;Dart's flow analysis cannot follow every setter, especially where classes are involved. What if you have a class with no constructor, but you ensure initialization outside of it? Dart may not understand that flow, in which case you can force Dart to assume the value will always be available. You do that by adding a &lt;code&gt;!&lt;/code&gt; to the accessor.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight dart"&gt;&lt;code&gt;&lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;Person&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
  &lt;span class="kt"&gt;String&lt;/span&gt;&lt;span class="o"&gt;?&lt;/span&gt; &lt;span class="n"&gt;middleName&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;

&lt;span class="kt"&gt;String&lt;/span&gt; &lt;span class="nf"&gt;middleAllCaps&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;Person&lt;/span&gt; &lt;span class="n"&gt;p&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
  &lt;span class="c1"&gt;// The trailing ! symbol will skip null check&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;p&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;middleName&lt;/span&gt;&lt;span class="o"&gt;!.&lt;/span&gt;&lt;span class="na"&gt;toUpperCase&lt;/span&gt;&lt;span class="o"&gt;();&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;

&lt;span class="kt"&gt;void&lt;/span&gt; &lt;span class="nf"&gt;main&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
  &lt;span class="n"&gt;Person&lt;/span&gt; &lt;span class="n"&gt;p&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;Person&lt;/span&gt;&lt;span class="o"&gt;();&lt;/span&gt;
  &lt;span class="n"&gt;p&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;middleName&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;'Fancy'&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
  &lt;span class="n"&gt;print&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;middleAllCaps&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;p&lt;/span&gt;&lt;span class="o"&gt;));&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The developer is responsible for setting the value before accessing it. There may be a handful of cases where overriding the null check is acceptable, but it is generally considered a code smell.&lt;/p&gt;
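&lt;p&gt;To see why &lt;code&gt;!&lt;/code&gt; is risky, here is a small sketch of what happens when that promise is broken: the null check simply moves from compile time to runtime.&lt;/p&gt;

```dart
class Person {
  String? middleName;
}

String middleAllCaps(Person p) {
  // The ! assertion tells Dart "trust me, this is never null".
  // If middleName was never set, this line throws at runtime
  // instead of failing at compile time.
  return p.middleName!.toUpperCase();
}

void main() {
  Person p = Person(); // middleName left unset
  try {
    middleAllCaps(p);
  } catch (e) {
    print('Blew up at runtime: $e');
  }
}
```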

&lt;h2&gt;
  
  
  required keyword
&lt;/h2&gt;

&lt;p&gt;When creating classes with optional named parameters for non-nullable fields, Dart's flow analysis may not be able to guarantee the variables are set. E.g.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight dart"&gt;&lt;code&gt;&lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;Person&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
  &lt;span class="kt"&gt;String&lt;/span&gt; &lt;span class="n"&gt;middleName&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
  &lt;span class="n"&gt;Person&lt;/span&gt;&lt;span class="o"&gt;({&lt;/span&gt;&lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;middleName&lt;/span&gt;&lt;span class="o"&gt;});&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will yield a compilation error: &lt;code&gt;middleName&lt;/code&gt; is non-nullable, but the constructor will set it to null if no value is passed, which is not acceptable. You can use the &lt;code&gt;required&lt;/code&gt; keyword to indicate that &lt;code&gt;middleName&lt;/code&gt; is a required non-nullable named parameter.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight dart"&gt;&lt;code&gt;&lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;Person&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
  &lt;span class="kt"&gt;String&lt;/span&gt; &lt;span class="n"&gt;middleName&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
  &lt;span class="n"&gt;Person&lt;/span&gt;&lt;span class="o"&gt;({&lt;/span&gt;&lt;span class="n"&gt;required&lt;/span&gt; &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;middleName&lt;/span&gt;&lt;span class="o"&gt;});&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
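&lt;p&gt;At the call site, a &lt;code&gt;required&lt;/code&gt; named parameter must then always be supplied; omitting it becomes a compile-time error rather than a silent null. A small sketch:&lt;/p&gt;

```dart
class Person {
  String middleName;
  Person({required this.middleName});
}

void main() {
  // OK: the required named parameter is supplied.
  Person p = Person(middleName: 'Fancy');
  print(p.middleName);

  // Person() with no middleName would be a compile-time error,
  // so middleName can never silently end up null.
}
```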



&lt;h2&gt;
  
  
  late keyword
&lt;/h2&gt;

&lt;p&gt;Not all non-nullable fields can be initialized in the constructor, and the analyzer cannot always prove that the value will be set before access, such as in the following case:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight dart"&gt;&lt;code&gt;&lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;Person&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
  &lt;span class="kt"&gt;String&lt;/span&gt; &lt;span class="n"&gt;middleName&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;

  &lt;span class="kt"&gt;void&lt;/span&gt; &lt;span class="n"&gt;assignRandomMiddleName&lt;/span&gt;&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;middleName&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;'Wilcox'&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
  &lt;span class="o"&gt;}&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;

&lt;span class="kt"&gt;String&lt;/span&gt; &lt;span class="nf"&gt;middleAllCaps&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;Person&lt;/span&gt; &lt;span class="n"&gt;p&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;p&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;middleName&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;toUpperCase&lt;/span&gt;&lt;span class="o"&gt;();&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;

&lt;span class="kt"&gt;void&lt;/span&gt; &lt;span class="nf"&gt;main&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
  &lt;span class="n"&gt;Person&lt;/span&gt; &lt;span class="n"&gt;p&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;Person&lt;/span&gt;&lt;span class="o"&gt;();&lt;/span&gt;
  &lt;span class="n"&gt;p&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;assignRandomMiddleName&lt;/span&gt;&lt;span class="o"&gt;();&lt;/span&gt;
  &lt;span class="n"&gt;print&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;middleAllCaps&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;p&lt;/span&gt;&lt;span class="o"&gt;));&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In such cases, you should not force an override using &lt;code&gt;!&lt;/code&gt;, but rather tell Dart that the variable will be initialized before its value is read, like so:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight dart"&gt;&lt;code&gt;&lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;Person&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
  &lt;span class="n"&gt;late&lt;/span&gt; &lt;span class="kt"&gt;String&lt;/span&gt; &lt;span class="n"&gt;middleName&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you mark a field with the &lt;code&gt;late&lt;/code&gt; keyword and do not initialize it, you won't get a null error but rather a &lt;code&gt;LateInitializationError&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Uncaught Error: LateInitializationError: Field 'middleName' has not been initialized.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
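&lt;p&gt;A side benefit of &lt;code&gt;late&lt;/code&gt; worth knowing: a &lt;code&gt;late&lt;/code&gt; field with an initializer is evaluated lazily, on first access rather than at construction. A sketch (the helper function here is illustrative):&lt;/p&gt;

```dart
String expensiveLookup() {
  // Imagine a costly computation or I/O here.
  return 'Wilcox';
}

class Person {
  // The initializer runs only when middleName is first read,
  // not when Person() is constructed.
  late String middleName = expensiveLookup();
}

void main() {
  Person p = Person(); // expensiveLookup() has not run yet
  print(p.middleName); // first access triggers the initializer
}
```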



&lt;h1&gt;
  
  
  How to enable null safety
&lt;/h1&gt;

&lt;p&gt;Null safety is still experimental. Library developers have already started pushing out null-safe versions, but you can continue to run your apps without switching over. To opt in to null safety for your project, make sure:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You have Dart version 2.10+&lt;/li&gt;
&lt;li&gt;All dependencies are null safe&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To run an app with null safety turned on:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;dart &lt;span class="nt"&gt;--enable-experiment&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;non-nullable &lt;span class="nt"&gt;--no-sound-null-safety&lt;/span&gt; bin/myapp.dart
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To enable your IDE to provide null safety errors you have to modify your &lt;code&gt;analysis_options.yaml&lt;/code&gt; and add the following options:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Defines a default set of lint rules enforced for&lt;/span&gt;
&lt;span class="c1"&gt;# projects at Google. For details and rationale,&lt;/span&gt;
&lt;span class="c1"&gt;# see https://github.com/dart-lang/pedantic#enabled-lints.&lt;/span&gt;
&lt;span class="na"&gt;include&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;package:pedantic/analysis_options.yaml&lt;/span&gt;

&lt;span class="c1"&gt;# For lint rules and documentation, see http://dart-lang.github.io/linter/lints.&lt;/span&gt;
&lt;span class="c1"&gt;# Uncomment to specify additional rules.&lt;/span&gt;
&lt;span class="c1"&gt;# linter:&lt;/span&gt;
&lt;span class="c1"&gt;#   rules:&lt;/span&gt;
&lt;span class="c1"&gt;#     - camel_case_types&lt;/span&gt;

&lt;span class="na"&gt;analyzer&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;enable-experiment&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;non-nullable&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h1&gt;
  
  
  How do I make my code null safe
&lt;/h1&gt;

&lt;p&gt;Dart comes with a built-in migration tool. More options on migration are available in the Dart docs.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;dart migrate
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



</description>
      <category>dart</category>
      <category>beginners</category>
      <category>flutter</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>How I got used to vim (neovim)</title>
      <dc:creator>Arif Amirani</dc:creator>
      <pubDate>Sun, 22 Nov 2020 06:47:27 +0000</pubDate>
      <link>https://dev.to/arifamirani/how-i-got-used-to-vim-neovim-3jpk</link>
      <guid>https://dev.to/arifamirani/how-i-got-used-to-vim-neovim-3jpk</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;TLDR; I got faster at writing (code and articles) once I stopped fighting the editor. My journey from a mouse dependent editor to a keyboard one.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;As an active developer, I constantly strive to optimize my workflows. Anything that is repetitive must be automated to the nth degree. All actions that are pure, that is to say, yield the same result when the inputs are the same, are easy to automate.&lt;/p&gt;

&lt;p&gt;I have an elaborate zshrc (based on oh-my-zsh), Mac Automator actions, Alfred workflows to generate UUIDs or query data from my production database. These little gems save a ton of time and ensure no human errors are made.&lt;/p&gt;

&lt;p&gt;Most of my time is spent in an editor writing code, articles, reports, etc. In all these formats, a significant chunk of the work is navigation. I seldom write code that is entirely new, and I will often review articles or rewrite parts of them. Any developer will tell you that writing code is jumping to locations (jump to definition, to symbol, and back) and then adding more code.&lt;/p&gt;

&lt;p&gt;For the past decade, I've been using IDEs for writing code and text editors like Sublime Text for articles and standalone scripts. Setting aside the issue of resource usage, IDEs do a good job of letting you write code faster, and when it comes to refactoring they are a boon. Having said that, they fundamentally do not solve the problem of speed/efficiency. In other words, I was still performing more keystrokes to do the same repetitive task. I saw myself constantly shuffling between the keyboard and mouse to get navigation or editing done. Shortcuts are good, but they don't carry over across different editors, and I spent a lot of time trying to configure each editor the same way.&lt;/p&gt;

&lt;p&gt;My goal was to write faster irrespective of the content type.&lt;/p&gt;

&lt;p&gt;Line or terminal based editors have always been a holy grail of efficiency for me, for the simple reason that they do not assume a pointing device: all actions are fine-tuned to be done from the keyboard.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fgzn68y7jx41d33dc5q0h.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fgzn68y7jx41d33dc5q0h.jpg" alt="Vim Keyboard"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://catonmat.net/why-vim-uses-hjkl-as-arrow-keys" rel="noopener noreferrer"&gt;Source catonmat.net&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;catonmat explains the design of keys in this &lt;a href="https://catonmat.net/why-vim-uses-hjkl-as-arrow-keys" rel="noopener noreferrer"&gt;article&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Emacs and vim both offer the same promise, get efficient at editing via keyboard. I tried using Emacs for almost 3 months a few years ago. A half-hearted attempt that did not result in anything worth mentioning.&lt;/p&gt;

&lt;p&gt;Vim, however, has been my go-to editor when logging into terminals. Whenever a config needed tweaking, I'd fire up vim. Vim's ubiquity meant I never lost touch. Full-time usage is a whole different thing, though.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;TLDR; Vim (or Emacs) improves your navigation speed.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;At some point, I started off with gusto to move lock, stock and barrel to vim. That attempt sorely failed. I was expecting my workflows to just work. I thought the basic &lt;code&gt;hjkl&lt;/code&gt; keys I had learnt would be enough. My productivity plummeted, and I blamed it on how difficult vim is.&lt;/p&gt;

&lt;p&gt;My mistake was that I conflated the two problems:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;I want to write faster&lt;/li&gt;
&lt;li&gt;I want to replace my IDE&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Several months ago, I revisited my editor needs. I spoke to people who actually use vim and started to focus on the first problem. Focusing on speed removed the friction and the mental block of comparing output. I started doing non-code activities in vim, like articles, JSON formatting, etc.&lt;/p&gt;

&lt;p&gt;Changes since my last attempt also made life easier:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://neovim.io/" rel="noopener noreferrer"&gt;Neovim&lt;/a&gt; got LSP support and with coc I got basic JSON editing working better than Sublime&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.youtube.com/channel/UC8ENHE5xdFSwx71u3fDH5Xw" rel="noopener noreferrer"&gt;ThePrimeagen&lt;/a&gt; Vim coding sessions gave me insight into practical keystrokes that help in efficient writing&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/akiyosi/goneovim" rel="noopener noreferrer"&gt;GoNeovim&lt;/a&gt; - a GUI based neovim instance cause I hate having a tab in iterm for my editor (personal quirk)&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  How did Vim increase my writing speed
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;My hands stay on the keyboard 90% of the time&lt;/li&gt;
&lt;li&gt;I do minimal wrist position changes to get anything out of the current window, be it manipulating text, navigating, performing terminal actions&lt;/li&gt;
&lt;li&gt;Vim motions change the way you view existing text on the screen&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Vim motions
&lt;/h2&gt;

&lt;p&gt;Vim motions need their own section. Motions take some time to get used to. Once you understand how motions want you to view text, they become significantly simpler to work with. With motions I started seeing text as markers. I won't get into a motions tutorial; the &lt;a href="http://vimdoc.sourceforge.net/htmldoc/motion.html" rel="noopener noreferrer"&gt;documentation&lt;/a&gt; does a great job of explaining each one.&lt;/p&gt;

&lt;p&gt;Motions by themselves reduced the time spent in the micro context switches of navigation. A simple motion like &lt;code&gt;Shift + [&lt;/code&gt; means moving up an object block, and in 90% of cases I'd land in the right place.&lt;/p&gt;

&lt;p&gt;The curve below was accurate for me. My output decreased, but only on non-critical tasks, which helped me stay on course and keep learning.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fweb3ukdzzpwc6j5gw1q4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fweb3ukdzzpwc6j5gw1q4.png" alt="Vim Learning Curve"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Keymaps
&lt;/h2&gt;

&lt;p&gt;Vim comes with a usable set of maps that you don't need to change. To make vim truly work for you, though, every single true vim developer I've seen adds keymaps. They are shortcuts that you create which work only for you. I've tried copying keymaps from 3k+ star GitHub repos so many times, only to delete them with prejudice.&lt;/p&gt;

&lt;p&gt;Keymaps make vim even better. My experience has been to keep trying different options. Some of my milestones have been:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Moved my leader key twice - &lt;code&gt;comma&lt;/code&gt;, then &lt;code&gt;space&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Not using Meta keys (Alt/Cmd) - Now I do everywhere&lt;/li&gt;
&lt;li&gt;Tried to use plugin defaults - Now I disable them&lt;/li&gt;
&lt;/ul&gt;
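&lt;p&gt;For illustration only (these particular mappings are examples, not recommendations), a few keymaps in an init.vim might look like:&lt;/p&gt;

```vim
" Use space as the leader key
let mapleader = " "

" Quick save and quit
nnoremap &lt;leader&gt;w :w&lt;CR&gt;
nnoremap &lt;leader&gt;q :q&lt;CR&gt;

" Move between splits without the Ctrl-W prefix
nnoremap &lt;C-h&gt; &lt;C-w&gt;h
nnoremap &lt;C-l&gt; &lt;C-w&gt;l
```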

&lt;h2&gt;
  
  
  Moving on to code editing
&lt;/h2&gt;

&lt;p&gt;With LSP and Neovim, there was enough to move smaller projects into vim. I still don't use vim for large/complex projects that require IDE capabilities; I find the required plumbing absolute overkill.&lt;/p&gt;

&lt;p&gt;But that's not it!&lt;/p&gt;

&lt;p&gt;I use IntelliJ, which comes with &lt;a href="https://github.com/JetBrains/ideavim" rel="noopener noreferrer"&gt;IdeaVim&lt;/a&gt;. It gets me 80% of the way: all motions work, keymaps work, and whatever I use CoC for is built into the IDE. I had to tweak it to get the keymaps working identically to nvim. IdeaVim is also constantly improving, providing access to IDE actions via keymaps, which makes my life a lot easier.&lt;/p&gt;

&lt;p&gt;Neovim is embeddable, and I'm waiting for it to be incorporated for a true vim experience. VSCode already does that with &lt;a href="https://github.com/VSCodeVim/Vim" rel="noopener noreferrer"&gt;VSCodeVim&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  How do I improve with Vim
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Read blogs and watch videos (at 1.5x) to find better ways

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.youtube.com/channel/UC8ENHE5xdFSwx71u3fDH5Xw" rel="noopener noreferrer"&gt;ThePrimeagen&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.youtube.com/channel/UCS97tchJDq17Qms3cux8wcA" rel="noopener noreferrer"&gt;ChrisAtMachine&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;a href="https://github.com/topics/dotfiles" rel="noopener noreferrer"&gt;Github dotfiles&lt;/a&gt; - Scour github.com for new nvim configs&lt;/li&gt;

&lt;li&gt;

&lt;a href="https://vimawesome.com/" rel="noopener noreferrer"&gt;Vimawesome&lt;/a&gt; - a great resource to find the latest and greatest packages&lt;/li&gt;

&lt;li&gt;

&lt;a href="https://vimawesome.com/?q=tag:coc.nvim" rel="noopener noreferrer"&gt;Coc plugins&lt;/a&gt; - I love CoC. They keep dishing out new plugins for usecases&lt;/li&gt;

&lt;/ul&gt;

&lt;h2&gt;
  
  
  What I still do wrong (as others may say it)
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Use arrow keys - Some blogs make me feel guilty but I understand why&lt;/li&gt;
&lt;li&gt;Limit nvim to only some usecases&lt;/li&gt;
&lt;li&gt;Run nvim in a GUI window - So many people have told me to use terminal&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;It's been a great journey so far. I use vim on a regular basis for 90% of my writing activities, I have seen a visible improvement in my output, and editing is no longer the biggest time drain of my day.&lt;/p&gt;

</description>
      <category>vim</category>
      <category>neovim</category>
      <category>ide</category>
      <category>editor</category>
    </item>
    <item>
      <title>Nova Editor by Panic - A Quick Review</title>
      <dc:creator>Arif Amirani</dc:creator>
      <pubDate>Sun, 22 Nov 2020 06:45:35 +0000</pubDate>
      <link>https://dev.to/arifamirani/nova-editor-by-panic-a-quick-review-3gp2</link>
      <guid>https://dev.to/arifamirani/nova-editor-by-panic-a-quick-review-3gp2</guid>
      <description>&lt;p&gt;Over the years the choices of IDEs and code editors have reduced significantly. We earlier had quite a few options across OSes from BBEdit, Notepad++, Eclipse, etc. However now with the need for advanced IDEs there are only a handful picks that work across all OSes, and that is great! We have contenders in both the open source (VS Code, Vim, etc) and commercial (Intellij, Sublime Text, etc) spaces. These are all amazing editors and IDEs but when it comes to native ones that are fast and native the choices are quite limited.&lt;/p&gt;

&lt;p&gt;I've been a fan of Panic since I started using a Mac professionally about a decade ago. Transmit has been a godsend when dealing with file transfers, and they've improved it over the years to work seamlessly with cloud platforms. That's not even the best part: Panic does an amazing job adhering to, and most of the time improving on, the Apple Human Interface Guidelines. If a product is coming out of the Panic stable, it's virtually guaranteed to be aesthetically pleasing and to conform to the Mac experience.&lt;/p&gt;

&lt;p&gt;So when Nova was announced, I was unsurprisingly excited to try it out. Unfortunately, I did not get a chance at the beta, and eventually my enthusiasm died out as I went back into the traditional IDE world.&lt;/p&gt;

&lt;p&gt;Recently, though, I finally got my hands on Nova. The review points below are based on a few days of usage on a relatively small TypeScript project.&lt;/p&gt;

&lt;p&gt;To set the context straight, I work on a 2019 MacBook Pro with 16GB RAM and a Touch Bar (which I almost never use). I have two code editors that I use daily:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;IntelliJ IDEA Ultimate as my IDE for Python, Flutter, Web&lt;/li&gt;
&lt;li&gt;Neovim (goneovim) as my code editor for Web, Go, articles&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The distinction is important because I crave performance, speed and efficiency when coding as much as possible, but I also need the expansive IDE experience for refactoring, wiring, etc., which is too cumbersome without deeper integration.&lt;/p&gt;

&lt;p&gt;With the advent of LSPs (&lt;a href="https://microsoft.github.io/language-server-protocol/"&gt;https://microsoft.github.io/language-server-protocol/&lt;/a&gt;), even editors can now provide features of an IDE out-of-the-box. The trick is presenting those features in a seamless manner.&lt;/p&gt;

&lt;p&gt;Vim has LSP support, but there is a lot of fumbling needed to get it to work correctly, although coc (which I use) does a pretty decent job.&lt;/p&gt;

&lt;p&gt;Coming back to Nova, my expectations of it are those of an IDE: something that goes beyond the typical editor experience whilst retaining all of the goodness of a native Panic app (native look and feel, speed, chock-full of features). Nova does not disappoint.&lt;/p&gt;

&lt;h3&gt;
  
  
  Initial setup
&lt;/h3&gt;

&lt;p&gt;Installation of Nova is pretty straightforward: download the dmg from &lt;a href="https://nova.app/"&gt;https://nova.app/&lt;/a&gt; and install it. The default configuration works really well, though I was quick to change the font to Fira Code at a higher point size.&lt;/p&gt;

&lt;p&gt;Nova is geared more towards web development out of the box, so you'll get support for CSS, HTML, and JavaScript. Being an Apple-esque app, it shares commonality with Xcode and gives you options to start projects or simply open directories.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--243lT9Ac--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/il3q2h74g4eg3nzo3pp0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--243lT9Ac--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/il3q2h74g4eg3nzo3pp0.png" alt="Initial startup"&gt;&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;You can open single files and get editing right away.&lt;/p&gt;

&lt;h3&gt;
  
  
  Configuration
&lt;/h3&gt;

&lt;p&gt;Configuring Nova can be a bit jarring if you come from an IDE. The nomenclature is a bit different, along with the organization of options (more Xcode-like). There is no search across the many settings, which I am used to from IntelliJ/VSCodium (yay, no telemetry!). It takes time to get things set up the way one likes, and the settings window does not follow the dark theme.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--9FB_jmtj--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/8kxsfgd04yj3w609lixs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--9FB_jmtj--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/8kxsfgd04yj3w609lixs.png" alt="Nova configuration"&gt;&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;Compare this to Intellij settings&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--1FxlMTXc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/b2f9uf1hcmk6s0karkxc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--1FxlMTXc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/b2f9uf1hcmk6s0karkxc.png" alt="Intellij configuration"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Extensions
&lt;/h3&gt;

&lt;p&gt;Nova supports a ton of extensions, along with a rich API that lets you add support for a myriad of languages, themes, and more. I believe the choice of available extensions is driven by popularity: for example, all the popular Vim and IDEA themes are available, but none of their key bindings. If your language is supported, good for you; mine wasn't (Dart).&lt;/p&gt;

&lt;p&gt;The experience of working with extensions is great: a simple search-and-install feature and off you go. More extensions are being made available regularly, and the ecosystem will only get richer. How much of this is Panic's push? No idea.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--TVuhz_Eh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/eqc4xllozqye2qsz7rx7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--TVuhz_Eh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/eqc4xllozqye2qsz7rx7.png" alt="Nova extensions"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Code Editor &amp;amp; Experience
&lt;/h3&gt;

&lt;p&gt;You've got one job, Nova! Code editing is generally good, with lots of panes and information floating around to work with. The command palette, similar to Sublime Text, IDEA, and VS Code, gives you quick access to functionality. The default keybindings are a bit off-putting, especially coming from IDEA/Vim/VS Code.&lt;/p&gt;

&lt;p&gt;You get the standard code folding, structure navigation, etc., similar to other editors.&lt;/p&gt;

&lt;p&gt;Even with support for LSP, the code actions are limited and do not allow a lot of refactoring.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--BuBmABVx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/gmeqy72q2dg5dw94a8hj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--BuBmABVx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/gmeqy72q2dg5dw94a8hj.png" alt="Nova code actions"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;del&gt;Another gripe is that basic string manipulation is not available out-of-the-box such as duplicate line (WAT!). We need an extension to get that working (DOUBLE WAT!)&lt;/del&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;This is not correct, nova has included this since v1, the problem is that they decided to name it "Copy lines Down and Copy Lines Up" the guy who did that extension did not find it either. - &lt;a href="https://www.reddit.com/r/macapps/comments/jy8w9p/nova_editor_by_panic_a_quick_review/gd5an8x?utm_source=share&amp;amp;utm_medium=web2x&amp;amp;context=3"&gt;imustknowsomething&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s---hCQkmca--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/ro2x4eu8ez46ugtqaony.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s---hCQkmca--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/ro2x4eu8ez46ugtqaony.png" alt="Nova Duplicate Line functionality"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Workflows
&lt;/h3&gt;

&lt;p&gt;Workflows are similar to the Run actions other tools provide. Honestly, I didn't get time to explore them much, and I don't need them much either; I prefer to execute workflows via the terminal. I am sure there is good stuff in there.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Cool Parts
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Built-in Git support
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--RwyQajFE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/tausyqhkr8x526bk7r1q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--RwyQajFE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/tausyqhkr8x526bk7r1q.png" alt="Nova Git Support"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Publish/Remote servers
&lt;/h4&gt;

&lt;p&gt;This is where Nova shines. It has Transmit kind of built-in: it allows you to push content to remote servers, and a lot of server types are supported.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--pEdq67fp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/wfzyf5covskd0iewaw5b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--pEdq67fp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/wfzyf5covskd0iewaw5b.png" alt="Nova Remote support"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Pricing
&lt;/h3&gt;

&lt;p&gt;Nova has a steep price point of $99 USD, with each additional year at $49 USD, especially when you consider open-source alternatives like VS Code. Having said that, the price is in some ways justified by the overall experience and by the potential for Nova to improve rapidly. Its robust API gives me a lot of hope that the language support and features will only get better.&lt;/p&gt;

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;I went in thinking I'd get a cool new IDE, but I believe Nova is best termed a code editor on steroids that looks pretty (and this is important, because you spend 8 to 12 hours a day on this screen).&lt;/p&gt;

&lt;p&gt;Folks in web development should definitely try it and consider it for themselves. If you're working with Android, Flutter, etc., you may need a full-fledged IDE, in which case I can recommend IntelliJ IDEA or VS Code.&lt;/p&gt;

</description>
      <category>nova</category>
      <category>panic</category>
      <category>editor</category>
      <category>ide</category>
    </item>
  </channel>
</rss>
