<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Deepak Gupta</title>
    <description>The latest articles on DEV Community by Deepak Gupta (@deegupta123).</description>
    <link>https://dev.to/deegupta123</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F448545%2F26039410-0334-490e-9d6d-169335cb3243.jpeg</url>
      <title>DEV Community: Deepak Gupta</title>
      <link>https://dev.to/deegupta123</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/deegupta123"/>
    <language>en</language>
    <item>
      <title>AWS IAM Management Best Practices: Structured Approaches for Scalable Access Control</title>
      <dc:creator>Deepak Gupta</dc:creator>
      <pubDate>Mon, 16 Jun 2025 07:58:48 +0000</pubDate>
      <link>https://dev.to/deegupta123/aws-iam-management-best-practices-structured-approaches-for-scalable-access-control-5djk</link>
      <guid>https://dev.to/deegupta123/aws-iam-management-best-practices-structured-approaches-for-scalable-access-control-5djk</guid>
      <description>&lt;p&gt;Today, let’s explore some practical approaches and best practices for managing AWS IAM. These recommendations aren’t mandatory but serve as strong foundational methods to structure both new and existing access management systems effectively.&lt;/p&gt;

&lt;p&gt;To understand the core components and the importance of managing IAM beyond surface-level permissions please refer to [&lt;a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html&lt;/a&gt;]&lt;/p&gt;

&lt;p&gt;Let’s dive into common patterns that are often overlooked, their limitations, and how to improve upon them.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Avoid Attaching Policies Directly to IAM Users&lt;/strong&gt;&lt;br&gt;
While AWS allows policies to be attached directly to IAM users, it’s not considered a best practice. Why?&lt;/p&gt;

&lt;p&gt;Limitations:&lt;/p&gt;

&lt;p&gt;AWS enforces a quota: only 10 managed policies (AWS-managed or customer-managed) can be attached directly to a user.&lt;/p&gt;

&lt;p&gt;Even via a quota increase request, this can only be raised to 20, which isn’t scalable in larger environments.&lt;/p&gt;

&lt;p&gt;Better Alternatives:&lt;/p&gt;

&lt;p&gt;IAM Groups: Assign users to groups and attach policies to those groups. A user can be part of up to 10 groups, each of which can have 10 attached policies. This drastically increases policy flexibility and improves organization.&lt;/p&gt;

&lt;p&gt;Inline Policies: These are tightly coupled with a specific IAM resource (user, role, or group). Although inline policies are not reusable, they offer flexibility when attaching unique or highly specific policies.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Group-Based Approach for Better Manageability&lt;/strong&gt;&lt;br&gt;
Beyond quota limitations, user-based policy assignments are harder to audit and manage.&lt;/p&gt;

&lt;p&gt;Example Use Case:&lt;br&gt;
Imagine needing to revoke a specific policy from several users at once. With direct attachments, you’ll need to update each user manually. With a group-based model, a single change at the group level can update permissions across multiple users.&lt;/p&gt;
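&lt;p&gt;The difference can be seen in a toy model (plain Python, no AWS calls; the user, group, and policy names are made up): with direct attachments you would edit every user, whereas one change at the group level updates everyone.&lt;/p&gt;

```python
# Toy model of the revocation scenario: a user's effective policies come
# from their groups, so revoking a policy is a single group-level change.
users = {"alice": {"groups": ["team-a"]}, "bob": {"groups": ["team-a"]}}
groups = {"team-a": {"policies": ["EC2Access-Prod-TeamA", "S3ReadOnly"]}}

def effective_policies(user):
    # a user's effective policies are the union of their groups' policies
    pols = set()
    for g in users[user]["groups"]:
        pols.update(groups[g]["policies"])
    return pols

groups["team-a"]["policies"].remove("S3ReadOnly")   # one change at the group level
print(sorted(effective_policies("alice")))  # ['EC2Access-Prod-TeamA']
print(sorted(effective_policies("bob")))    # ['EC2Access-Prod-TeamA']
```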

&lt;p&gt;Groups also allow you to categorize users by teams, environments, or responsibilities, making your access control setup more logical and maintainable.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Use Naming Conventions Wisely&lt;/strong&gt;&lt;br&gt;
While not enforced by AWS, standardized naming conventions are a powerful organizational tool.&lt;/p&gt;

&lt;p&gt;Benefits:&lt;/p&gt;

&lt;p&gt;Makes it easy to identify the purpose of a policy, role, or group at a glance.&lt;/p&gt;

&lt;p&gt;Helps in filtering, sorting, and troubleshooting.&lt;/p&gt;

&lt;p&gt;Simplifies tracking temporary or environment-specific resources.&lt;/p&gt;

&lt;p&gt;Examples:&lt;/p&gt;

&lt;p&gt;EC2Access-Prod-TeamA&lt;/p&gt;

&lt;p&gt;S3ReadOnly-QA-temp-20240601&lt;/p&gt;

&lt;p&gt;Naming Limits:&lt;/p&gt;

&lt;p&gt;IAM resource names are limited to 128 characters.&lt;/p&gt;

&lt;p&gt;Allowed characters include alphanumerics and a few special ones: +, =, ,, ., @, _, -.&lt;/p&gt;
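&lt;p&gt;These constraints are easy to check mechanically. Below is a minimal sketch in plain Python (no AWS SDK; the helper name is illustrative, not an AWS API):&lt;/p&gt;

```python
import re

# Hypothetical helper: validate a proposed IAM resource name against the
# documented constraints (max 128 characters; alphanumerics plus + = , . @ _ -).
IAM_NAME_PATTERN = re.compile(r"[A-Za-z0-9+=,.@_-]{1,128}")

def is_valid_iam_name(name: str) -> bool:
    # fullmatch enforces both the character set and the length in one check
    return IAM_NAME_PATTERN.fullmatch(name) is not None

print(is_valid_iam_name("EC2Access-Prod-TeamA"))  # True
print(is_valid_iam_name("S3ReadOnly QA temp"))    # False (space not allowed)
```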

&lt;p&gt;&lt;strong&gt;4. Leverage Tags for Dynamic Filtering&lt;/strong&gt;&lt;br&gt;
Tags act as metadata in key-value format and are natively supported in AWS IAM.&lt;/p&gt;

&lt;p&gt;Benefits:&lt;/p&gt;

&lt;p&gt;Easily filter and group IAM users, roles, policies, and groups.&lt;/p&gt;

&lt;p&gt;Enable cost tracking, auditing, and environment segregation in dashboards like CloudWatch.&lt;/p&gt;

&lt;p&gt;Common Tags:&lt;/p&gt;

&lt;p&gt;env=production&lt;/p&gt;

&lt;p&gt;team=frontend&lt;/p&gt;

&lt;p&gt;type=temporary&lt;/p&gt;

&lt;p&gt;createdBy=automation&lt;/p&gt;

&lt;p&gt;Tag Limits:&lt;/p&gt;

&lt;p&gt;Up to 50 tags per IAM resource.&lt;/p&gt;

&lt;p&gt;Keys: 128-character limit&lt;/p&gt;

&lt;p&gt;Values: 256-character limit&lt;/p&gt;

&lt;p&gt;Tags are especially helpful when naming conventions fall short, or when dynamic tracking and automation is involved.&lt;/p&gt;
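&lt;p&gt;As a sketch of tag-based filtering, here is a plain-Python example over records shaped like the tag lists AWS APIs return (the role names and tags are made up for illustration):&lt;/p&gt;

```python
# Filtering resources by tag locally. The data shape mirrors what AWS APIs
# return (a list of {"Key": ..., "Value": ...} pairs); the records are made up.
roles = [
    {"RoleName": "deploy-prod", "Tags": [{"Key": "env", "Value": "production"}]},
    {"RoleName": "deploy-qa", "Tags": [{"Key": "env", "Value": "qa"}]},
    {"RoleName": "ci-automation", "Tags": [{"Key": "env", "Value": "production"},
                                           {"Key": "createdBy", "Value": "automation"}]},
]

def has_tag(resource, key, value):
    # True if any tag pair on the resource matches the requested key/value
    return any(t["Key"] == key and t["Value"] == value for t in resource["Tags"])

prod_roles = [r["RoleName"] for r in roles if has_tag(r, "env", "production")]
print(prod_roles)  # ['deploy-prod', 'ci-automation']
```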

&lt;p&gt;&lt;strong&gt;Summary of Key Practices&lt;/strong&gt;&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Practice&lt;/th&gt;
&lt;th&gt;Why It Matters&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Avoid direct user policy attachments&lt;/td&gt;
&lt;td&gt;Limited scalability, difficult to manage&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Use IAM Groups&lt;/td&gt;
&lt;td&gt;Simplifies bulk permission changes, scalable&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Apply Inline Policies selectively&lt;/td&gt;
&lt;td&gt;Useful for tightly scoped, unique access needs&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Follow Naming Conventions&lt;/td&gt;
&lt;td&gt;Improves readability, filtering, and governance&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Use Tags&lt;/td&gt;
&lt;td&gt;Enhances resource categorization and automation&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;These are some of the foundational best practices to consider when managing AWS IAM. While AWS provides the tools, how you use them defines your security posture, team efficiency, and scalability.&lt;/p&gt;

&lt;p&gt;These strategies may not be mandatory, but they lay the groundwork for secure, flexible, and audit-friendly IAM architectures.&lt;/p&gt;

&lt;p&gt;In the upcoming blog, we’ll dive into policy types, real-world use cases, and advanced IAM management techniques for large-scale setups.&lt;/p&gt;

&lt;p&gt;Have a tip, lesson, or story around IAM? Share it in the comments—we’d love to learn how you’ve tackled IAM at scale!&lt;/p&gt;

&lt;p&gt;#aws #awscommunity #security #iam #enterprise #bestpractices #roles #policies&lt;/p&gt;

</description>
      <category>aws</category>
      <category>awscommunity</category>
      <category>iam</category>
      <category>security</category>
    </item>
    <item>
      <title>Docker BuildKit : Faster Builds, Mounts and Features</title>
      <dc:creator>Deepak Gupta</dc:creator>
      <pubDate>Mon, 16 Jun 2025 07:50:54 +0000</pubDate>
      <link>https://dev.to/deegupta123/docker-buildkit-faster-builds-mounts-and-features-4nkc</link>
      <guid>https://dev.to/deegupta123/docker-buildkit-faster-builds-mounts-and-features-4nkc</guid>
      <description>&lt;p&gt;It was like any other day working on micro-services project, running on Docker environment. In general, we’ve had worked on making our Image Builds more efficient, secure, and faster following basic aspects that significantly affect building and working with Docker.&lt;/p&gt;

&lt;p&gt;Understanding Docker layers and structuring the Dockerfile to maximize their efficiency.&lt;br&gt;
Reducing the weight of the Docker image, by being specific about our Base Image Tags which comes up with minimal packages.&lt;br&gt;
Bringing the multi-stage builds concept, etc.&lt;br&gt;
But keeping the spirits of being highly productive and improving more, I landed upon Docker BuildKit. Docker BuildKit is the next generation container image builder, which helps us to make Docker images more efficient, secure, and faster. It has been lingering in the background of Docker builds for some time. Moreover to enable and unleash some massive performance is to set the DOCKER_BUILDKIT=1 environment variable when invoking the docker build command, such as:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;$ DOCKER_BUILDKIT=1 docker build .&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;In this post, we’ll walk through some of its powerful features that I have explored, along with the results I observed.&lt;/p&gt;

&lt;p&gt;Parallelism&lt;br&gt;
When BuildKit encounters a multi-stage build, it analyzes the Dockerfile and creates a graph of the dependencies between build steps. It uses this graph to determine which elements of the build can be skipped, which can be executed in parallel, and which must run sequentially. For example, the stages build1 and build2 below can run in parallel because they don’t depend on each other, whereas the final stage depends on both, so it is built only after the other two stages complete.&lt;/p&gt;

&lt;p&gt;FROM alpine AS build1&lt;br&gt;
RUN touch /tmp/dee.txt&lt;br&gt;
RUN sleep 10&lt;/p&gt;

&lt;p&gt;FROM alpine AS build2&lt;br&gt;
RUN touch /tmp/sandy.txt&lt;br&gt;
RUN sleep 10&lt;/p&gt;

&lt;p&gt;FROM alpine AS final&lt;br&gt;
COPY --from=build1 /tmp/dee.txt /tmp/&lt;br&gt;
COPY --from=build2 /tmp/sandy.txt /tmp/&lt;/p&gt;

&lt;p&gt;To make the effect measurable, I added a 10s delay to both the build1 and build2 stages. The legacy build engine executes top to bottom, so it runs the two sleep commands separately, accumulating 20s of wait time; BuildKit executes both sleep commands at the same time, so only 10s is spent sleeping.&lt;/p&gt;

&lt;p&gt;In short, the standard Docker builder has no concurrency: it reads and builds each line or layer of the Dockerfile one at a time, and as a result the build took 49.135s. BuildKit, with its parallel build processing, delivered better performance and faster build times: the same build took only 27.2s.&lt;/p&gt;
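&lt;p&gt;The timing argument can be sketched as a small dependency-graph calculation (plain Python; the stage names and 10s durations come from the example above, the code itself is only illustrative): sequential execution sums every stage’s duration, while parallel execution only pays for the longest dependency chain.&lt;/p&gt;

```python
# Why BuildKit's dependency graph saves time: sequential execution sums every
# stage's duration, parallel execution pays only for the critical path.
stages = {
    "build1": {"duration": 10, "deps": []},
    "build2": {"duration": 10, "deps": []},
    "final":  {"duration": 0,  "deps": ["build1", "build2"]},
}

def finish_time(name):
    # a stage finishes after its slowest dependency, plus its own duration
    s = stages[name]
    return max((finish_time(d) for d in s["deps"]), default=0) + s["duration"]

sequential = sum(s["duration"] for s in stages.values())
parallel = finish_time("final")
print(sequential, parallel)  # 20 10
```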

&lt;p&gt;Build Secrets Mount&lt;br&gt;
Sometimes a build needs a secret key or password, such as credentials to access an AWS S3 bucket. In classic Docker builds there is no good way to do this: the obvious methods are insecure, and the workarounds are hacky. BuildKit adds support for securely passing build secrets, which allows the build container to access secure files such as secret access keys without baking them into the image.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;# syntax = docker/dockerfile:1.2&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;FROM python:3&lt;/p&gt;

&lt;p&gt;RUN pip install awscli&lt;/p&gt;

&lt;p&gt;RUN --mount=type=secret,id=aws,target=/root/.aws/credentials aws s3 cp s3://walletprodtest/terraform ./ --recursive&lt;/p&gt;

&lt;p&gt;To build the image,&lt;/p&gt;

&lt;p&gt;$ &lt;code&gt;DOCKER_BUILDKIT=1 docker build --secret id=aws,src=/root/.aws/credentials .&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;The way BuildKit secrets work is that a file containing the secret gets mounted to a temporary location during the RUN command, e.g. /root/.aws/credentials. Since it’s only mounted during that particular RUN command, it doesn’t end up embedded in the final image.&lt;/p&gt;

&lt;p&gt;BuildKit mount types don’t end with secret; there are a few more:&lt;/p&gt;

&lt;p&gt;Cache Mount : If you’re sick of re-downloading every external dependency whenever only one of them changes, a cache mount can save time on future builds. Inside the Dockerfile, add a mount flag specifying which directories should be cached during the step.&lt;br&gt;
&lt;code&gt;RUN --mount=type=cache,target=/var/lib/apt/lists&lt;/code&gt; …&lt;/p&gt;

&lt;p&gt;SSH Mount : This mount type allows the build container to access SSH keys via SSH agents, with support for passphrases.&lt;br&gt;
&lt;code&gt;RUN --mount=type=ssh … etc.&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;These mount types make it easier to pass build-time secrets, handle SSH credentials, and cache directories between builds, even when a layer needs to be rebuilt.&lt;/p&gt;

&lt;p&gt;Remote Cache&lt;br&gt;
This feature is an upgrade over the legacy --cache-from, which had problems such as requiring images to be pre-pulled and lacking support for multi-stage builds. With BuildKit, you don’t need to pull remote images before building, since it caches each build layer in your image registry; when you build, each layer is downloaded as needed.&lt;/p&gt;

&lt;p&gt;For example, keeping the Dockerfile the same as in the Build Secrets Mount section, we build the image using:&lt;/p&gt;

&lt;p&gt;$ &lt;code&gt;DOCKER_BUILDKIT=1 docker build -t dgupta9068/demo --secret id=aws,src=/root/.aws/credentials --build-arg BUILDKIT_INLINE_CACHE=1 .&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;$ &lt;code&gt;docker push dgupta9068/demo&lt;/code&gt;&lt;br&gt;
Now let’s make a small change to the Dockerfile by adding one more RUN command that creates a test directory. After that, prune the Docker system and rebuild the image using&lt;/p&gt;

&lt;p&gt;$ &lt;code&gt;DOCKER_BUILDKIT=1 docker build --cache-from dgupta9068/demo .&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;To use an image as a cache source, cache metadata needs to be written into the image at creation time, which we did by setting --build-arg BUILDKIT_INLINE_CACHE=1 when building the image.&lt;/p&gt;

&lt;p&gt;For the second, fresh image build, we used the dgupta9068/demo image as the cache source; BuildKit automatically pulled all the cached layers and executed only the last RUN layer that creates the directory, resulting in a faster Docker image build.&lt;/p&gt;

&lt;p&gt;In Conclusion&lt;br&gt;
I hope this article kickstarts your journey with BuildKit and its features, and helps you simplify and speed up your Docker build workflows. If you want to learn more about speeding up Docker builds beyond BuildKit, do check it out!&lt;/p&gt;

&lt;p&gt;Till then keep building 🙂&lt;/p&gt;

</description>
      <category>aws</category>
      <category>awscommunity</category>
      <category>containers</category>
      <category>containerapps</category>
    </item>
    <item>
      <title>Triggering Jenkins Jobs From Slack</title>
      <dc:creator>Deepak Gupta</dc:creator>
      <pubDate>Thu, 15 Oct 2020 05:19:15 +0000</pubDate>
      <link>https://dev.to/deegupta123/triggering-jenkins-jobs-from-slack-4n</link>
      <guid>https://dev.to/deegupta123/triggering-jenkins-jobs-from-slack-4n</guid>
      <description>&lt;p&gt;&lt;strong&gt;What about triggering your builds once you get an update from the Development Team on Slack and from their itself you triggered the job, so that all the needful can be done and reports added in your job configuration can be mailed back to developers!!!&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;So here we go : &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;In Jenkins, go to the configure page and create an API token. This lets you configure your job to “Trigger builds remotely,” where you specify a token name.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fd9a1w3do44u94nhvzgdj.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fd9a1w3do44u94nhvzgdj.PNG" alt="Alt Text" width="800" height="249"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fjpshnbz0o1og2s1zylkj.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fjpshnbz0o1og2s1zylkj.PNG" alt="Alt Text" width="800" height="321"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Over in Slack, search the apps for Slash Commands, which you can configure to trigger your job’s build from Slack.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fxqx3mi9hf3gch6k7g1ts.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fxqx3mi9hf3gch6k7g1ts.PNG" alt="Alt Text" width="800" height="270"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;In the configuration, choose a command name of your choice, then click “Add Slash Command Integration.”&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fnay046to0iomf2j3wag9.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fnay046to0iomf2j3wag9.PNG" alt="Alt Text" width="800" height="384"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Here, enter the URL in the form &lt;strong&gt;&lt;a href="https://jenkins-user:token_no@jenkins_url/job/job_name/build?token=token_name" rel="noopener noreferrer"&gt;https://jenkins-user:token_no@jenkins_url/job/job_name/build?token=token_name&lt;/a&gt;&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fzo9y2brttcttkf65iqr8.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fzo9y2brttcttkf65iqr8.jpg" alt="Alt Text" width="800" height="463"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Finally, you can add Autocomplete help text, which makes your command easy to find when triggering it. Click save, go back to Slack, run your command with the chosen name, and your build will be triggered.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
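&lt;p&gt;For reference, the trigger URL the slash command calls can be assembled like this (a plain-Python sketch; every value below is a placeholder, substitute your own Jenkins user, API token, host, and job name):&lt;/p&gt;

```python
# Assemble the Jenkins remote-trigger URL used by the slash command:
# /job/NAME/build?token=TOKEN, authenticated with user:apiToken in the URL.
# All values here are placeholders.
from urllib.parse import quote, urlencode

def build_trigger_url(user, api_token, host, job, job_token):
    query = urlencode({"token": job_token})
    return f"https://{quote(user)}:{quote(api_token)}@{host}/job/{quote(job)}/build?{query}"

url = build_trigger_url("jenkins-user", "token_no", "jenkins.example.com",
                        "nightly-tests", "token_name")
print(url)
```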

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fuj7fkhj72kalg7bp0rn3.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fuj7fkhj72kalg7bp0rn3.PNG" alt="Alt Text" width="800" height="376"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fq58fvgkodyjznsheiwwb.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fq58fvgkodyjznsheiwwb.PNG" alt="Alt Text" width="800" height="252"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fxd7f0djsa32nitepqomb.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fxd7f0djsa32nitepqomb.PNG" alt="Alt Text" width="705" height="155"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I hope you’ll give this a try and make your builds easy to trigger. Keep building!&lt;/p&gt;

</description>
      <category>devops</category>
      <category>development</category>
      <category>ci</category>
      <category>slack</category>
    </item>
    <item>
      <title>Code Coverage</title>
      <dc:creator>Deepak Gupta</dc:creator>
      <pubDate>Mon, 21 Sep 2020 21:28:28 +0000</pubDate>
      <link>https://dev.to/deegupta123/code-coverage-2hci</link>
      <guid>https://dev.to/deegupta123/code-coverage-2hci</guid>
      <description>&lt;h1&gt;
  
  
  An introduction to code coverage
&lt;/h1&gt;

&lt;p&gt;In this blog, you’ll learn what code coverage is, where it helps, and why you need it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Code Coverage&lt;/strong&gt; is a software-testing metric that tells you how many lines of code were successfully exercised by a test procedure, which in turn helps us analyze and verify the quality of our software code.&lt;/p&gt;

&lt;h2&gt;
  
  
  To Calculate :
&lt;/h2&gt;

&lt;p&gt;Code Coverage Percentage = (Number of lines of code executed by the test suite / Total number of lines of code in the system component) * 100.&lt;/p&gt;
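&lt;p&gt;A worked example of the formula (plain Python; the line counts are illustrative):&lt;/p&gt;

```python
# Worked example of the coverage formula: (executed / total) * 100,
# guarded against an empty component.
def coverage_percentage(lines_executed, total_lines):
    if total_lines == 0:
        return 0.0
    return lines_executed / total_lines * 100

print(coverage_percentage(450, 600))  # 75.0
```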

&lt;h3&gt;
  
  
  Code coverage criteria :
&lt;/h3&gt;

&lt;p&gt;1. Function Coverage -- The functions in the source code that are called and executed at least once.&lt;/p&gt;

&lt;p&gt;2. Statement Coverage -- The number of statements that have been successfully validated in the source code.&lt;/p&gt;

&lt;p&gt;3. Path Coverage -- The flows containing a sequence of controls and conditions that have worked well at least once.&lt;/p&gt;

&lt;p&gt;4. Branch or Decision Coverage -- The decision control structures (loops, for example) that have executed fine.&lt;/p&gt;

&lt;p&gt;5. Condition Coverage -- The Boolean expressions that are validated and that execute both TRUE and FALSE during the test runs.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fbt6q669ucl8d65tv18vl.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fbt6q669ucl8d65tv18vl.jpg" alt="Alt Text" width="260" height="194"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So does a higher code-coverage percentage mean better code quality for your software program?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Noooh!!!&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;100% code coverage suggests there are no bugs or defects, and indicates that the test cases have covered all the criteria and requirements of the software application.&lt;/p&gt;

&lt;p&gt;But in that case we still have no way to evaluate whether the test cases covered the full range of possibilities, whether they validated incorrect requirements, or whether they missed important ones.&lt;/p&gt;

&lt;p&gt;So if a software product is built on 100% coverage from irrelevant tests, quality is undoubtedly compromised.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Then what should we focus on? Writing test scripts that aren’t fuzzy. Don’t chase 100% coverage; focus on how coverage analysis pairs with robust test scripts that cover every functional and non-functional area of the source code.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.atlassian.com/continuous-delivery/software-testing/code-coverage" rel="noopener noreferrer"&gt;Where I Read&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Feel free to connect, and correct me if I’m wrong anywhere.&lt;/p&gt;

</description>
      <category>testing</category>
      <category>devops</category>
      <category>codequality</category>
    </item>
    <item>
      <title>Branching Strategies</title>
      <dc:creator>Deepak Gupta</dc:creator>
      <pubDate>Fri, 18 Sep 2020 10:55:58 +0000</pubDate>
      <link>https://dev.to/deegupta123/branching-strategies-5hdi</link>
      <guid>https://dev.to/deegupta123/branching-strategies-5hdi</guid>
      <description>&lt;p&gt;Have new ideas but directly implementing them won’t bring the expected results always!!! Bugs , vulnerabilities can be anywhere so here BRANCHING STRATEGIES comes in the picture where different ideas and concepts are widely accepted and when all are tested and implemented on other environments then are finally merged to the master and released  for production.&lt;/p&gt;

&lt;p&gt;GIT FLOW : First designed by Vincent Driessen. In the Driessen model, there are 2 permanent branches:&lt;/p&gt;

&lt;p&gt;· Master Branch : The branch from which software is released to production&lt;/p&gt;

&lt;p&gt;· Develop/Dev Branch : The branch where developers work&lt;/p&gt;

&lt;p&gt;Other non-permanent branches:&lt;/p&gt;

&lt;p&gt;· Feature : Branch for developing new features&lt;/p&gt;

&lt;p&gt;· Release : Branch supporting preparation of a new production release&lt;/p&gt;

&lt;p&gt;· Hotfix/Patch : Branch for production issues that need an immediate fix &lt;br&gt;
  before the planned release.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Faf9bayahhb40s5nrprh0.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Faf9bayahhb40s5nrprh0.PNG" alt="Alt Text" width="613" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The workflow starts from a single develop branch. Developers create a feature branch for each new feature or group of features; we can name feature branches after the feature being worked on. These branches are merged back into the develop branch. Once a set of related features is built and merged, a new release branch is created. The release branch is then tested; defects are logged, fixed there, and merged back into the develop branch. Once the team is confident in the quality of the release, the release branch is merged into master. The master branch is tagged with the version and deployed to production. If any defect or bug appears, a hotfix branch is created from master, the issues are fixed there, merged back into master, and released.&lt;/p&gt;

&lt;p&gt;I hope this gives you a clearer understanding. Feel free to connect, and correct me if I’m wrong anywhere.&lt;/p&gt;

</description>
      <category>git</category>
      <category>devops</category>
      <category>linux</category>
      <category>developers</category>
    </item>
    <item>
      <title>Security Matters!! ( SSL )</title>
      <dc:creator>Deepak Gupta</dc:creator>
      <pubDate>Tue, 15 Sep 2020 16:39:25 +0000</pubDate>
      <link>https://dev.to/deegupta123/security-matters-ssl-22g0</link>
      <guid>https://dev.to/deegupta123/security-matters-ssl-22g0</guid>
      <description>&lt;h1&gt;
  
  
  SSL CERTIFICATE
&lt;/h1&gt;

&lt;h2&gt;
  
  
  Why and How???
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Short Introduction
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fk9nmq6pdh7aqyz3ec6uc.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fk9nmq6pdh7aqyz3ec6uc.jpg" alt="Alt Text" width="800" height="255"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Security does matter!&lt;br&gt;
In today’s world, where people, companies, and organizations offer ever more online services and transactions, internet security becomes both a priority and a necessity, ensuring that sensitive information, such as credit card numbers and personal data, is transmitted only to legitimate online businesses and the people who should access it.&lt;br&gt;
So, in order to keep customer information private and secure, companies and organizations need to add SSL certificates to their websites to enable secure online transactions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What are SSL Certificates and Why Do I Need Them?&lt;/strong&gt;&lt;br&gt;
SSL certificates are an essential component of the data encryption process that makes internet transactions secure. They provide authentication, protecting the confidentiality and integrity of a website’s communication with browsers. In simple words:&lt;br&gt;
1. An SSL certificate secures the data in transit between server and browser.&lt;br&gt;
2. It keeps the information private and secure.&lt;/p&gt;

&lt;p&gt;The SSL certificate's job is to initiate secure sessions with the user’s browser via the secure sockets layer (SSL) protocol. This secure connection cannot be established without the SSL certificate, which digitally connects company information to a cryptographic key.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F9hf0idclzxkhr3aabvk2.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F9hf0idclzxkhr3aabvk2.jpg" alt="Alt Text" width="624" height="325"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How SSL Certificates Work&lt;/strong&gt;&lt;br&gt;
1. A browser or server attempts to connect to a website (i.e. a web server) secured with SSL.&lt;br&gt;
2. A copy of its SSL certificate is sent by the web server to the browser/server.&lt;br&gt;
3. The browser/server checks whether to trust the SSL certificate. If so, it sends a message to the web server.&lt;br&gt;
4. The web server sends back a digitally signed acknowledgement to start an SSL-encrypted session.&lt;br&gt;
5. Finally, encrypted data is shared between the browser/server and the web server.&lt;/p&gt;

&lt;p&gt;Benefits of Using SSL Certificates&lt;br&gt;
1. First of all, it enables HTTPS.&lt;br&gt;
2. A safer experience.&lt;br&gt;
3. Protects both customer and internal data and personal information.&lt;br&gt;
4. Encrypts browser-to-server and server-to-server interaction and transmission.&lt;br&gt;
5. Increases security.&lt;/p&gt;

&lt;p&gt;Hope you got the basic understanding. Feel free to connect, and correct me if I’m wrong anywhere.&lt;/p&gt;

</description>
      <category>security</category>
      <category>linux</category>
      <category>devops</category>
    </item>
    <item>
      <title>Basic Linux Commands</title>
      <dc:creator>Deepak Gupta</dc:creator>
      <pubDate>Sat, 12 Sep 2020 18:17:45 +0000</pubDate>
      <link>https://dev.to/deegupta123/basic-linux-commands-3peh</link>
      <guid>https://dev.to/deegupta123/basic-linux-commands-3peh</guid>
      <description>&lt;p&gt;File Management Commands:&lt;br&gt;
1.ls (To list the files and directories stored in the current directory)&lt;br&gt;
The command ls supports the -l and –a option which would help you to get more information about the listed and hidden files&lt;br&gt;&lt;br&gt;
2.cd  ( To enter  into a directory)&lt;br&gt;
3.pwd  ( to show print working directory)&lt;br&gt;
4.mkdir ( to make a new directory)&lt;br&gt;
5.cat (to print the content of the file)&lt;br&gt;
6.head ( to print the lines from the top of a file, in default it prints 10 lines  can use –n flag to print respective number of lines)&lt;br&gt;
7.tail ( print lines from bottom of a file , similarly –n respective number of lines)&lt;br&gt;
8.touch ( to create a text file)&lt;br&gt;
9.rm  (to remove file) and rmdir (to remove a directory)&lt;br&gt;
These commands works when file or directory are empty. But if you want to forcefully remove a directory/file then use command&lt;br&gt;
rm –rf name of directory/file&lt;br&gt;
But be cautious while using this command as it completely delete the object. Preferably it’s not considered to be use. &lt;br&gt;
10.cp ( to copy a file from source destination to a different destination)&lt;br&gt;
             cp  /home/dee/file1.txt  /home/deepak&lt;br&gt;
                        (Source)      (Destination)&lt;br&gt;
11.mv  ( to move a file from source to destination or to rename)&lt;br&gt;
                  mv  /home/dee/file1.txt  /home/Deepak&lt;br&gt;
                           (Source)        (Destination)&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;     mv  /home/dee/file1.txt  /home/Deepak/files.txt
               (Old Name)             (New Name)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
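&lt;p&gt;A quick session that ties the commands above together (run it in an empty scratch directory; the file and directory names are just examples):&lt;/p&gt;

```shell
mkdir demo                  # 4. make a new directory
cd demo
touch file1.txt             # 8. create an empty file
cp file1.txt file2.txt      # 10. copy it
mv file2.txt renamed.txt    # 11. rename it via mv
ls -l                       # 1. list the results with details
pwd                         # 3. confirm the working directory
```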

&lt;p&gt;File Permission Commands:&lt;/p&gt;

&lt;p&gt;Every file in Linux has the following attributes −&lt;br&gt;
• Owner permissions − The owner's permissions determine what actions the owner of the file can perform on the file.&lt;br&gt;
• Group permissions − The group's permissions determine what actions a user, who is a member of the group that a file belongs to, can perform on the file.&lt;br&gt;
• Other (world) permissions − The permissions for others indicate what action all other users can perform on the file.&lt;/p&gt;

&lt;p&gt;chmod o+wx testfile (grants others write and execute permission on testfile)&lt;br&gt;
chmod u+rw testfile (grants the user/owner read and write permission on testfile)&lt;/p&gt;

&lt;p&gt;The second way to modify permissions with the chmod command is to use a number to specify each set of permissions for the file.&lt;/p&gt;

&lt;p&gt;0 || No permission || ---&lt;br&gt;
1 || Execute permission || --x&lt;br&gt;
2 || Write permission || -w-&lt;br&gt;
3 || Execute and write permission: 1(execute)+2(write)=3 || -wx&lt;br&gt;
4 || Read permission || r--&lt;br&gt;
5 || Read and execute permission: 4(read)+1(execute)=5 || r-x&lt;br&gt;
6 || Read and write permission: 4(read)+2(write)=6 || rw-&lt;br&gt;
7 || All permissions: 4(read)+2(write)+1(execute)=7 || rwx&lt;/p&gt;

&lt;p&gt;chmod 755 testfile &lt;/p&gt;
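&lt;p&gt;Here 755 breaks down, per the table above, as 7 (rwx) for the owner, 5 (r-x) for the group, and 5 (r-x) for others; testfile is just a placeholder name:&lt;/p&gt;

```shell
touch testfile
chmod 755 testfile   # owner: rwx, group: r-x, others: r-x
ls -l testfile       # the permission column reads -rwxr-xr-x
```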

&lt;p&gt;setfacl &amp;amp; getfacl: "setfacl" stands for Set File Access Control Lists and "getfacl" for Get File Access Control Lists. Each file and directory in a Linux filesystem is created with a specific set of permissions, and ACLs allow each user to have a different set of access permissions on the same file.&lt;/p&gt;

&lt;p&gt;setfacl -m u:deepak:rwx 2.txt (-m = modify, u = user)&lt;/p&gt;

&lt;p&gt;getfacl  2.txt&lt;/p&gt;

&lt;p&gt;Process Management:&lt;/p&gt;

&lt;p&gt;1. ps (process status) is the easiest way to see your own processes.&lt;br&gt;
The most used flag is -f (full), which provides more information about each process.&lt;/p&gt;

&lt;p&gt;2. top (a very useful tool for quickly showing processes sorted by various criteria).&lt;br&gt;
It is an interactive diagnostic tool that updates frequently and shows information about physical and virtual memory, CPU usage, load averages, and your busiest processes.&lt;/p&gt;

&lt;p&gt;3. kill (terminates a running process, e.g. one running in the background)&lt;br&gt;
$ kill 6738 (6738 is the process ID)&lt;br&gt;
Terminated&lt;br&gt;
If a process ignores a regular kill command (SIGTERM), you can use kill -9 (SIGKILL) followed by the process ID.&lt;/p&gt;
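&lt;p&gt;For instance, you can inspect a process whose PID you already know; here we use the current shell, whose PID the shell exposes as $$ (6738 above was just an example PID):&lt;/p&gt;

```shell
ps -f -p $$          # full-format details for the current shell's process
# kill 6738          # would send SIGTERM (a polite request to exit) to PID 6738
# kill -9 6738       # SIGKILL, for processes that ignore SIGTERM
```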

&lt;p&gt;Search Commands (Pipes and Filters)&lt;/p&gt;

&lt;p&gt;1. grep (searches for a particular string/word in a text file)&lt;/p&gt;

&lt;p&gt;Sr.No.        Option &amp;amp; Description&lt;br&gt;
1   -v  Prints all lines that do not match the pattern.&lt;br&gt;
2   -n  Prints each matched line with its line number.&lt;br&gt;
3   -l  Prints the names of files with matching lines (letter "l").&lt;br&gt;
4   -c  Prints only the count of matching lines.&lt;br&gt;
5   -i  Matches case-insensitively (upper or lower case).&lt;/p&gt;
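&lt;p&gt;A small demonstration of these options against a throwaway file (sample.txt and its contents are just an example):&lt;/p&gt;

```shell
# Create a three-line sample file (tee writes it and echoes it back)
printf 'alpha\nBeta\nalpha beta\n' | tee sample.txt
grep -n alpha sample.txt   # matched lines with their line numbers: 1 and 3
grep -c alpha sample.txt   # count of matching lines: 2
grep -i beta sample.txt    # case-insensitive: matches Beta and alpha beta
grep -v alpha sample.txt   # lines that do not match: Beta
```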

&lt;p&gt;2. find (finds files and directories, and can perform subsequent operations on them)&lt;/p&gt;

&lt;p&gt;find [where to start searching] [-options] [expression determining what to find]&lt;/p&gt;
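&lt;p&gt;For example (the proj directory and its file names are placeholders):&lt;/p&gt;

```shell
mkdir -p proj/sub
touch proj/a.txt proj/sub/b.txt proj/c.log
find proj -name '*.txt'   # matches proj/a.txt and proj/sub/b.txt
find proj -type d         # directories only: proj and proj/sub
```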

&lt;p&gt;3. locate (finds files by name; it searches a prebuilt database rather than the filesystem, so it is usually faster than find)&lt;/p&gt;

</description>
      <category>linux</category>
      <category>devops</category>
    </item>
  </channel>
</rss>
