<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Saloni Singh</title>
    <description>The latest articles on DEV Community by Saloni Singh (@rksalo88).</description>
    <link>https://dev.to/rksalo88</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1175141%2F5bd6bdc5-0d3f-4b0a-aff3-c9f27754dde2.jpg</url>
      <title>DEV Community: Saloni Singh</title>
      <link>https://dev.to/rksalo88</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/rksalo88"/>
    <language>en</language>
    <item>
      <title>Docker — Advanced Interview Questions</title>
      <dc:creator>Saloni Singh</dc:creator>
      <pubDate>Sun, 17 Nov 2024 15:25:43 +0000</pubDate>
      <link>https://dev.to/rksalo88/docker-advanced-interview-questions-6mn</link>
      <guid>https://dev.to/rksalo88/docker-advanced-interview-questions-6mn</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7mqcqn1vgiibkkkqy6r7.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7mqcqn1vgiibkkkqy6r7.jpg" alt="Image description" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Getting into DevOps roles is quite challenging these days, and Docker plays an important role here. So today I have brought some interesting advanced interview questions related to Docker that you might encounter.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;1. How does Docker ensure images are immutable?&lt;/strong&gt;&lt;br&gt;
Docker images are composed of layers; each layer is read-only and immutable. All modifications made inside a container end up in a new, writable layer on top, without changing the image itself.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. What is the difference between Docker’s bridge network and host network?&lt;/strong&gt;&lt;br&gt;
Bridge networking: containers communicate over a private network bridge on the host. Use it for containers with isolated networking needs. Host networking: containers share the host’s network stack directly.&lt;br&gt;
Use it for low-latency networking (e.g., in high-performance apps).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. What happens to the data in a container when the container is deleted?&lt;/strong&gt;&lt;br&gt;
Unless a volume or bind mount is used, container data lives in the container’s writable layer, which is deleted when the container is removed.&lt;/p&gt;
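
&lt;p&gt;A quick sketch of the difference (the image and volume names here are illustrative):&lt;/p&gt;

&lt;p&gt;&lt;code&gt;# Data written to the writable layer is lost with the container&lt;br&gt;
docker run --rm alpine sh -c 'echo hi &amp;gt; /tmp/f'&lt;br&gt;
&lt;br&gt;
# Data in a named volume survives container removal&lt;br&gt;
docker run --rm -v mydata:/data alpine sh -c 'echo hi &amp;gt; /data/f'&lt;br&gt;
docker run --rm -v mydata:/data alpine cat /data/f&lt;/code&gt;&lt;/p&gt;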

&lt;p&gt;&lt;strong&gt;4. What are Docker namespaces, and how do they work?&lt;/strong&gt;&lt;br&gt;
Namespaces provide isolation in Docker by separating resources for each container:&lt;br&gt;
PID namespace: Process isolation.&lt;br&gt;
NET namespace: Network isolation.&lt;br&gt;
MNT namespace: Filesystem isolation.&lt;br&gt;
UTS namespace: Hostname isolation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5. How do you debug a Docker container that is consuming too much CPU or memory?&lt;/strong&gt;&lt;br&gt;
Use docker stats to monitor resource usage.&lt;br&gt;
Limit resources:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;docker run --memory="500m" --cpus="1.5" &amp;lt;image&amp;gt;&lt;/code&gt;&lt;br&gt;
Check the app inside the container using tools like top or htop.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;6. What are ways to shrink Docker images?&lt;/strong&gt;&lt;br&gt;
Use smaller base images (e.g., Alpine).&lt;br&gt;
Avoid installing unnecessary packages.&lt;br&gt;
Use multi-stage builds to exclude build dependencies.&lt;br&gt;
Clean up temporary files in the Dockerfile.&lt;br&gt;
For example, in the Dockerfile:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;RUN apt-get update &amp;amp;&amp;amp; apt-get install -y package &amp;amp;&amp;amp; rm -rf /var/lib/apt/lists/*&lt;/code&gt;&lt;/p&gt;
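
&lt;p&gt;A multi-stage build (point 3 above) can be sketched like this; the image tags and paths are illustrative:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;# Build stage: full Go toolchain&lt;br&gt;
FROM golang:1.22 AS builder&lt;br&gt;
WORKDIR /src&lt;br&gt;
COPY . .&lt;br&gt;
RUN go build -o /app&lt;br&gt;
&lt;br&gt;
# Runtime stage: only the compiled binary&lt;br&gt;
FROM alpine:3.20&lt;br&gt;
COPY --from=builder /app /app&lt;br&gt;
ENTRYPOINT ["/app"]&lt;/code&gt;&lt;/p&gt;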

&lt;p&gt;&lt;strong&gt;7. How do you make Docker containers restart automatically?&lt;/strong&gt;&lt;br&gt;
Use the --restart flag during container creation:&lt;br&gt;
no: off, the container won’t restart.&lt;br&gt;
always: Docker restarts the container unless it is manually stopped.&lt;br&gt;
on-failure: Docker restarts the container only if it exits with a non-zero code.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;docker run --restart=always &amp;lt;image&amp;gt;&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;8. What is a dangling image, and how do you remove it?&lt;/strong&gt;&lt;br&gt;
A dangling image is an image that does not have any tag.&lt;br&gt;
Clean up using:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;docker image prune&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;9. What are Docker tags and when are they useful?&lt;/strong&gt;&lt;br&gt;
Tags refer to versions of an image.&lt;/p&gt;

&lt;p&gt;For example, &lt;code&gt;python:3.9&lt;/code&gt; vs &lt;code&gt;python:latest&lt;/code&gt;.&lt;br&gt;
Tags provide versioning and ensure the same image runs every time.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;10. What is Docker Content Trust (DCT), and how does it secure images?&lt;/strong&gt;&lt;br&gt;
DCT ensures that only signed images are pulled or run.&lt;/p&gt;

&lt;p&gt;Enable with&lt;/p&gt;

&lt;p&gt;&lt;code&gt;export DOCKER_CONTENT_TRUST=1&lt;/code&gt;&lt;br&gt;
This verifies the integrity and publisher of images.&lt;/p&gt;

&lt;p&gt;These questions span both practical and theoretical Docker concepts, assessing a candidate’s depth of understanding and problem-solving ability.&lt;/p&gt;

</description>
      <category>docker</category>
      <category>interview</category>
      <category>help</category>
      <category>devops</category>
    </item>
    <item>
      <title>Transitioning your Career to AWS and DevOps</title>
      <dc:creator>Saloni Singh</dc:creator>
      <pubDate>Mon, 11 Nov 2024 03:39:43 +0000</pubDate>
      <link>https://dev.to/rksalo88/transitioning-your-career-to-aws-and-devops-4nkh</link>
      <guid>https://dev.to/rksalo88/transitioning-your-career-to-aws-and-devops-4nkh</guid>
<description>&lt;p&gt;Nowadays, many of us are looking to transition our careers, often toward Cloud or DevOps, as market demand for these roles keeps increasing.&lt;/p&gt;

&lt;p&gt;So, this post covers some steps that I would recommend you follow. I’m sharing my personal experience and the path I took; it may differ from person to person, so adapt it to your own pace and interest.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3aaz0w60hjyam8hgt54n.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3aaz0w60hjyam8hgt54n.jpg" alt="Image description" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Try to focus step by step on the services, don’t try to rush&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;First, if possible, cover AWS (or any other cloud of your choice) completely, only then move to DevOps&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;If you choose to go for AWS Developer roles, make sure you are really good with programming. For DevOps roles, minimal programming is expected, but it is still a necessity, as many organisations expect you to code in interviews.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Programming language can be any of your choice, preferably Python or Golang if you think of moving towards DevOps.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Make sure you are really, really good at the services you mention to the interviewer. Avoid bluffing about services you know only at a basic level; if asked about them you would go blank, and that creates an impression of not having enough knowledge.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Try connecting with DevOps people and ask about their real projects, how they work and what all they cover.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Avoid mentioning generic projects. In the interviews I take, candidates often say, “I have worked on an e-commerce website” or “a three-tier retailing platform; I created a VPC, I created EC2.” Please note that real-world projects aren’t described this way. Instead, state how you managed the infra using IaC.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Focus on IaC tools, preferably Terraform would be great.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Be well prepared for the project you will be explaining, as that is what interviewers focus on most; you need to know your project inside and out.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Mention working on ticketing tools like JIRA or ServiceNow (preferably JIRA), and mention that you have worked with stories and epics; this frames an impression of real hands-on experience.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Be consistent with your hands-on practice and avoid mugging up internet questions and answers; nowadays interviewers ask questions related to your project and scenario-based ones, so your practice is the key to your success.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Regarding DevOps, try to avoid covering each and every tool, if you just focus on Git, Docker and Kubernetes it’s more than enough.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;At least one monitoring tool should be known, Datadog, Prometheus, Grafana or ELK, select any of your choice.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Watch Udemy lectures if possible, for AWS watch Stephane Maarek’s course on Solutions Architect Associate.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Regarding certifications, having one is good, but not necessary, at the end all that matters is your knowledge.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;For better insights on AWS and Kubernetes, I would recommend watching Cloud With Raj on YouTube. Raj is a Principal Solutions Architect at AWS, and the way he presents things is excellent; also watch some of his system design cases, as they deserve focus too.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;For more help connect with me on LinkedIn&lt;/strong&gt;: &lt;a href="https://www.linkedin.com/in/saloni-singh-aws/" rel="noopener noreferrer"&gt;https://www.linkedin.com/in/saloni-singh-aws/&lt;/a&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>devops</category>
      <category>career</category>
      <category>careerdevelopment</category>
    </item>
    <item>
      <title>AWS Valkey: Now available on Amazon ElastiCache and MemoryDB!</title>
      <dc:creator>Saloni Singh</dc:creator>
      <pubDate>Mon, 04 Nov 2024 07:00:41 +0000</pubDate>
      <link>https://dev.to/rksalo88/aws-valkey-now-available-on-amazon-elasticache-and-memorydb-1kbj</link>
      <guid>https://dev.to/rksalo88/aws-valkey-now-available-on-amazon-elasticache-and-memorydb-1kbj</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftunu1714kqebxdzmgqh4.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftunu1714kqebxdzmgqh4.jpg" alt="Image description" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Introduction:&lt;/strong&gt;&lt;br&gt;
AWS has just added a powerful new tool for architects and developers who want to maximize performance and save costs: Valkey. This open-source, high-performance key-value store is now available on Amazon ElastiCache and MemoryDB, promising reliable, low-cost data management with ultra-fast access for demanding applications. In this article, we will break down Valkey’s key benefits, give practical examples, and look at why this new alternative suits a wide range of use cases.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why Valkey?&lt;/strong&gt;&lt;br&gt;
Administered by the Linux Foundation, Valkey retains the benefits of Redis OSS while prioritizing cost efficiency and performance. Built as a vendor-neutral key-value datastore, Valkey is a game-changer for applications that need speed and low latency at an affordable price point.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;KEY HIGHLIGHTS:&lt;/strong&gt;&lt;br&gt;
Cost-Effectiveness: Amazon ElastiCache for Valkey is priced about 33% lower for serverless and 20% lower for node-based deployments compared with Redis OSS.&lt;br&gt;
High Performance: Valkey’s design allows speedy data access with millisecond latency, making it an excellent fit for real-time apps.&lt;br&gt;
Flexibility: Simple migration from Redis OSS with minimal code changes, after which you benefit from Amazon’s strong, managed platform.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Where Valkey Surpasses:&lt;/strong&gt; &lt;strong&gt;Real-World Use Cases&lt;/strong&gt;&lt;br&gt;
To illustrate Valkey’s strengths, let’s look at a few typical use cases:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Web application caching&lt;/strong&gt;&lt;br&gt;
Imagine an e-commerce site with tens of thousands of concurrent users. Caching here is key to smooth, high-speed usage, especially during load spikes. ElastiCache for Valkey can dramatically reduce database load by storing user session data and frequently accessed products, letting customers browse seamlessly and check out faster.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Gaming leaderboard&lt;/strong&gt;&lt;br&gt;
Real-time, in-game leaderboards? Ideal. Sub-millisecond reads and consistent write performance ensure that players see up-to-the-minute rankings without delay, even in high-traffic scenarios. And at 30% cheaper than MemoryDB for Redis OSS, you get the performance while keeping the budget under control.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Analytics for Real-Time Data&lt;/strong&gt;&lt;br&gt;
Valkey is a good fit for applications that process large volumes of real-time data such as monitoring and analytics applications. For instance, an analytics dashboard in a logistics application can use Valkey to manage and serve near real-time data on each vehicle’s location and status for immediate insights from clients.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Getting Started with Valkey on AWS&lt;/strong&gt;&lt;br&gt;
AWS makes the transition to fully managed Valkey easy. You can deploy Valkey on ElastiCache or MemoryDB today from the AWS Console, where Valkey appears among the offered data stores.&lt;br&gt;
No Downtime: AWS lets you switch to Valkey without losing any existing configurations or already established access patterns.&lt;br&gt;
Cost Management: Since Valkey is discounted versus Redis OSS, track the savings with AWS Cost Explorer to see the cost cuts.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Redis OSS vs. Valkey&lt;/strong&gt;&lt;br&gt;
For those who already work with Redis OSS, Valkey now offers a lower-cost alternative with equal performance:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Lower Entry Price:&lt;/strong&gt; Pricing for ElastiCache for Valkey begins at $6 a month, with additional discounts for reserved nodes.&lt;br&gt;
&lt;strong&gt;High Availability:&lt;/strong&gt; Users enjoy 99.99% availability with automatic data replication and backups, backed by AWS-managed reliability.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;br&gt;
Start Leveraging Valkey Today&lt;br&gt;
The release of Valkey on ElastiCache and MemoryDB gives developers access to a high-performance, cost-effective key-value store backed by AWS’s reliability, whether you are optimizing for cost, for performance, or for applications with demanding real-time requirements.&lt;/p&gt;

&lt;p&gt;Start exploring Valkey on ElastiCache and MemoryDB today to unlock the power of a cost-effective, high-performance key-value store that scales with you.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>cloud</category>
      <category>valkey</category>
      <category>news</category>
    </item>
    <item>
      <title>Latest Interview QnA faced recently as a DevOps Engineer</title>
      <dc:creator>Saloni Singh</dc:creator>
      <pubDate>Thu, 31 Oct 2024 18:29:55 +0000</pubDate>
      <link>https://dev.to/rksalo88/latest-interview-qna-faced-recently-as-a-devops-engineer-mm9</link>
      <guid>https://dev.to/rksalo88/latest-interview-qna-faced-recently-as-a-devops-engineer-mm9</guid>
      <description>&lt;p&gt;Hello Connections,&lt;/p&gt;

&lt;p&gt;I usually try attending interviews to bring the latest questions to you. I recently attended one, and here are some of the questions I encountered:&lt;/p&gt;

&lt;p&gt;𝗕𝗿𝗮𝗻𝗰𝗵𝗶𝗻𝗴 𝗦𝘁𝗿𝗮𝘁𝗲𝗴𝘆:&lt;br&gt;
&lt;strong&gt;Ques&lt;/strong&gt;.: &lt;strong&gt;If we have some application being developed from scratch, what branching strategies would you suggest? What will be the questions you would ask?&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Ans&lt;/strong&gt;: For a greenfield application, I would generally recommend a trunk-based branching strategy: one central branch holds the production-ready code, and developers create short-lived feature branches for new features or bug fixes, merging them back into the main branch when complete. As the project grows, GitFlow or GitHub Flow might be useful for managing features, releases, and hotfixes.&lt;/p&gt;

&lt;p&gt;Questions to Ask:&lt;br&gt;
Deployment Frequency: How often will the application be deployed?&lt;br&gt;
Team Size and Coordination: How many developers are working on the application? What communication style do they use?&lt;br&gt;
Release Management: Are there specific release cycles or timelines to be considered?&lt;br&gt;
CI/CD Pipeline: Will you do automated testing with CI/CD pipelines for merging and deployment of code?&lt;/p&gt;
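
&lt;p&gt;As a rough sketch (the branch names here are illustrative), the trunk-based flow described above looks like this in plain Git:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;git checkout -b feature/login main   # short-lived feature branch&lt;br&gt;
# ...commit the work...&lt;br&gt;
git checkout main&lt;br&gt;
git merge feature/login              # merge back into the trunk&lt;br&gt;
git branch -d feature/login&lt;/code&gt;&lt;/p&gt;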

&lt;p&gt;𝗧𝗲𝗿𝗿𝗮𝗳𝗼𝗿𝗺:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Ques&lt;/strong&gt;: &lt;strong&gt;Let’s say I have two resource blocks, and each has a Bool value, one of it has True and another has False, I want to execute every block according to the value set, like if True, then first block, if False then second, how would that be possible?&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Ans&lt;/strong&gt;: Conditional execution of resource blocks based on boolean values: to conditionally create a resource based on a boolean, you can use count with a condition:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;resource "aws_resource" "example" {&lt;br&gt;
  count = var.bool_value ? 1 : 0&lt;br&gt;
  # Resource configuration here&lt;br&gt;
}&lt;/code&gt;&lt;br&gt;
Only the resource whose &lt;code&gt;count&lt;/code&gt; evaluates to 1 will be created, based on the value of the condition.&lt;/p&gt;
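
&lt;p&gt;To cover both halves of the question, a second block can invert the condition so that exactly one of the two resources is created (the resource and variable names here are illustrative):&lt;/p&gt;

&lt;p&gt;&lt;code&gt;resource "aws_instance" "when_true" {&lt;br&gt;
  count = var.bool_value ? 1 : 0   # created only when true&lt;br&gt;
}&lt;br&gt;
&lt;br&gt;
resource "aws_instance" "when_false" {&lt;br&gt;
  count = var.bool_value ? 0 : 1   # created only when false&lt;br&gt;
}&lt;/code&gt;&lt;/p&gt;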

&lt;p&gt;&lt;strong&gt;2. Difference between for_each and count?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Ans&lt;/strong&gt;: Below are the differences:&lt;br&gt;
&lt;code&gt;count&lt;/code&gt;: Creates multiple instances of a resource based on an integer; friendly for simple replication, with instances addressed by index.&lt;br&gt;
&lt;code&gt;for_each&lt;/code&gt;: Iterates over a map or set, creating one resource per item; each resource gets a unique identifier derived from its key.&lt;/p&gt;
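
&lt;p&gt;A minimal side-by-side sketch (the resource names are illustrative):&lt;/p&gt;

&lt;p&gt;&lt;code&gt;# count: three identical instances, addressed by index&lt;br&gt;
resource "aws_instance" "web" {&lt;br&gt;
  count = 3&lt;br&gt;
  # referenced as aws_instance.web[0], [1], [2]&lt;br&gt;
}&lt;br&gt;
&lt;br&gt;
# for_each: one instance per named item, addressed by key&lt;br&gt;
resource "aws_instance" "app" {&lt;br&gt;
  for_each = toset(["api", "worker"])&lt;br&gt;
  # referenced as aws_instance.app["api"], aws_instance.app["worker"]&lt;br&gt;
}&lt;/code&gt;&lt;/p&gt;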

&lt;p&gt;&lt;strong&gt;3. If I have two resource blocks, first block has 2 load balancers and second has 2 target groups, I wish to have my first load balancer get attached to first target group, and second load balancer with second target group, how would I do that dynamically, without hard coding anything?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Ans&lt;/strong&gt;: This can be achieved by using for_each to iterate over the load balancers and map each one to its target group, for example:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;variable "load_balancers" {&lt;br&gt;
  type    = list(string)&lt;br&gt;
  default = ["lb1", "lb2"]&lt;br&gt;
}&lt;br&gt;
&lt;br&gt;
resource "aws_lb" "lb" {&lt;br&gt;
  for_each = toset(var.load_balancers)&lt;br&gt;
  # Load balancer configuration&lt;br&gt;
}&lt;br&gt;
&lt;br&gt;
resource "aws_lb_target_group" "tg" {&lt;br&gt;
  for_each = toset(var.load_balancers)&lt;br&gt;
  # Target group configuration&lt;br&gt;
}&lt;br&gt;
&lt;br&gt;
# A listener forwards each load balancer to the target group with the same key&lt;br&gt;
resource "aws_lb_listener" "listener" {&lt;br&gt;
  for_each          = aws_lb.lb&lt;br&gt;
  load_balancer_arn = each.value.arn&lt;br&gt;
  port              = 80&lt;br&gt;
  protocol          = "HTTP"&lt;br&gt;
  default_action {&lt;br&gt;
    type             = "forward"&lt;br&gt;
    target_group_arn = aws_lb_target_group.tg[each.key].arn&lt;br&gt;
  }&lt;br&gt;
}&lt;/code&gt;&lt;br&gt;
Each load balancer is dynamically attached to its matching target group without hardcoding. Note that aws_lb_target_group_attachment registers targets (such as instances) in a target group; the load-balancer-to-target-group link itself is made through a listener.&lt;/p&gt;

&lt;p&gt;𝗗𝗼𝗰𝗸𝗲𝗿:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Have you ever worked with multiple images in a Dockerfile? If yes, why do we use multiple images?&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Ans&lt;/strong&gt;: Multiple images, i.e. multi-stage builds, let you use different images for different stages of the build. By copying only the necessary components into the final image, you can optimize its size. A common use case is separating the build and runtime environments: you might have a Go build stage and copy only the executable into a really small runtime image.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. If we use the command docker build -t &amp;lt;tag&amp;gt; . to build from the default Dockerfile, what command and flag would be used to build from a file with some other name?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Ans&lt;/strong&gt;: To build from a Dockerfile with a custom name, you can use the -f flag:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;docker build -f CustomDockerfile -t myimage .&lt;/code&gt;&lt;br&gt;
This tells Docker to use CustomDockerfile instead of the default Dockerfile.&lt;/p&gt;

&lt;p&gt;𝗞𝘂𝗯𝗲𝗿𝗻𝗲𝘁𝗲𝘀:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;If I have multiple worker nodes and a master node, and I need a pod on every node to collect logs from all of these nodes, how will you make sure that even if the pods go down or crash, they quickly come back up and running, and on the same node?&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Ans&lt;/strong&gt;: High availability, ensuring pods run on every node: a DaemonSet ensures a copy of the pod runs on each node in the cluster; if a pod crashes or a new node is brought up, Kubernetes automatically recreates the pod on that node.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;apiVersion: apps/v1&lt;br&gt;
kind: DaemonSet&lt;br&gt;
metadata:&lt;br&gt;
  name: log-collector&lt;br&gt;
spec:&lt;br&gt;
  selector:&lt;br&gt;
    matchLabels:&lt;br&gt;
      app: log-collector&lt;br&gt;
  template:&lt;br&gt;
    metadata:&lt;br&gt;
      labels:&lt;br&gt;
        app: log-collector&lt;br&gt;
    spec:&lt;br&gt;
      containers:&lt;br&gt;
      - name: log-collector&lt;br&gt;
        image: your-log-collector-image&lt;br&gt;
        # Container configuration&lt;/code&gt;&lt;br&gt;
With a DaemonSet, your logging pods keep running on every node, even after failures or when new nodes are added.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;You can follow me over LinkedIn:&lt;/strong&gt; &lt;a href="https://www.linkedin.com/in/saloni-singh-aws/" rel="noopener noreferrer"&gt;https://www.linkedin.com/in/saloni-singh-aws/&lt;/a&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>devops</category>
      <category>interview</category>
      <category>awschallenge</category>
    </item>
    <item>
      <title>Transit Gateway v/s Direct Connect v/s Site-to-Site VPN</title>
      <dc:creator>Saloni Singh</dc:creator>
      <pubDate>Sun, 27 Oct 2024 16:17:01 +0000</pubDate>
      <link>https://dev.to/rksalo88/transit-gateway-vs-direct-connect-vs-site-to-site-vpn-k31</link>
      <guid>https://dev.to/rksalo88/transit-gateway-vs-direct-connect-vs-site-to-site-vpn-k31</guid>
<description>&lt;p&gt;Let’s discuss VPC today. We all must have heard of Transit Gateway, Direct Connect and Site-to-Site VPN; they all sound similar, but what’s the difference between them?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl9ww74e87yr3xii7zi69.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl9ww74e87yr3xii7zi69.jpg" alt="Image description" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A 𝗧𝗿𝗮𝗻𝘀𝗶𝘁 𝗚𝗮𝘁𝗲𝘄𝗮𝘆 is a central hub through which you can connect VPCs and on-premises networks within the AWS environment.&lt;br&gt;
𝗗𝗶𝗿𝗲𝗰𝘁 𝗖𝗼𝗻𝗻𝗲𝗰𝘁 creates a direct, dedicated private connection between your on-premises network and AWS.&lt;br&gt;
A 𝗦𝗶𝘁𝗲-𝘁𝗼-𝗦𝗶𝘁𝗲 𝗩𝗣𝗡 creates an encrypted tunnel over the public internet to link your on-premises network with a single AWS VPC.&lt;br&gt;
So, in short, a Transit Gateway manages connections among multiple VPCs and on-premises networks, Direct Connect offers a direct, high-bandwidth connection, and a Site-to-Site VPN is a basic single-VPC link over the public internet.&lt;/p&gt;

&lt;p&gt;𝗞𝗲𝘆 𝗱𝗶𝗳𝗳𝗲𝗿𝗲𝗻𝗰𝗲𝘀:&lt;/p&gt;

&lt;p&gt;𝗙𝘂𝗻𝗰𝘁𝗶𝗼𝗻𝗮𝗹𝗶𝘁𝘆:&lt;br&gt;
𝗧𝗿𝗮𝗻𝘀𝗶𝘁 𝗚𝗮𝘁𝗲𝘄𝗮𝘆 : A central hub that connects different VPCs and on-premises networks to each other; it simplifies network management.&lt;br&gt;
𝗗𝗶𝗿𝗲𝗰𝘁 𝗖𝗼𝗻𝗻𝗲𝗰𝘁 : A dedicated, private connection between your on-premises network and AWS, with high bandwidth and minimal latency.&lt;br&gt;
𝗦𝗶𝘁𝗲-𝘁𝗼-𝗦𝗶𝘁𝗲 𝗩𝗣𝗡 : An encrypted tunnel across the public internet which interconnects your on-premises network to an AWS VPC.&lt;/p&gt;

&lt;p&gt;𝗦𝗰𝗮𝗹𝗮𝗯𝗶𝗹𝗶𝘁𝘆&lt;br&gt;
𝗧𝗿𝗮𝗻𝘀𝗶𝘁 𝗚𝗮𝘁𝗲𝘄𝗮𝘆: It is highly scalable, thus allowing easy addition of new VPCs or on-premises network connections.&lt;br&gt;
𝗗𝗶𝗿𝗲𝗰𝘁 𝗖𝗼𝗻𝗻𝗲𝗰𝘁: Highly scalable depending on the chosen bandwidth tier.&lt;br&gt;
𝗦𝗶𝘁𝗲-𝘁𝗼-𝗦𝗶𝘁𝗲 𝗩𝗣𝗡: Not as scalable as Direct Connect because it is restrained by public internet bandwidth.&lt;/p&gt;

&lt;p&gt;𝗦𝗲𝗰𝘂𝗿𝗶𝘁𝘆:&lt;br&gt;
𝗧𝗿𝗮𝗻𝘀𝗶𝘁 𝗚𝗮𝘁𝗲𝘄𝗮𝘆: Secure within the AWS infrastructure, but the on-premises side of the connection relies on extra security measures.&lt;br&gt;
𝗗𝗶𝗿𝗲𝗰𝘁 𝗖𝗼𝗻𝗻𝗲𝗰𝘁: Very secure since it is based on a dedicated private connection.&lt;br&gt;
𝗦𝗶𝘁𝗲-𝘁𝗼-𝗦𝗶𝘁𝗲 𝗩𝗣𝗡: Relies on encryption to protect traffic across the internet.&lt;/p&gt;

&lt;p&gt;𝗪𝗵𝗲𝗻 𝘁𝗼 𝘂𝘀𝗲 𝗲𝗮𝗰𝗵:&lt;br&gt;
𝗧𝗿𝗮𝗻𝘀𝗶𝘁 𝗚𝗮𝘁𝗲𝘄𝗮𝘆:&lt;br&gt;
You want to connect multiple VPCs and on-premises networks with complex routing requirements.&lt;br&gt;
𝗗𝗶𝗿𝗲𝗰𝘁 𝗖𝗼𝗻𝗻𝗲𝗰𝘁:&lt;br&gt;
You want a high-bandwidth, dedicated private connection to AWS for large data transfers.&lt;br&gt;
𝗦𝗶𝘁𝗲-𝘁𝗼-𝗦𝗶𝘁𝗲 𝗩𝗣𝗡:&lt;br&gt;
You want a simple way to connect a single on-premises network to an AWS VPC with smaller data volumes.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>cloud</category>
      <category>vpc</category>
      <category>networking</category>
    </item>
    <item>
      <title>Real World Scenario Based Interview Q/A on AWS Availability Zones and Regions</title>
      <dc:creator>Saloni Singh</dc:creator>
      <pubDate>Sat, 26 Oct 2024 17:22:15 +0000</pubDate>
      <link>https://dev.to/rksalo88/real-world-scenario-based-interview-qa-on-aws-availability-zones-and-regions-271g</link>
      <guid>https://dev.to/rksalo88/real-world-scenario-based-interview-qa-on-aws-availability-zones-and-regions-271g</guid>
<description>&lt;p&gt;We are all well aware of 𝗔𝗪𝗦 𝗔𝘃𝗮𝗶𝗹𝗮𝗯𝗶𝗹𝗶𝘁𝘆 𝘇𝗼𝗻𝗲𝘀 𝗮𝗻𝗱 𝗥𝗲𝗴𝗶𝗼𝗻𝘀, so here are a few 𝘀𝗰𝗲𝗻𝗮𝗿𝗶𝗼 𝗯𝗮𝘀𝗲𝗱 𝗶𝗻𝘁𝗲𝗿𝘃𝗶𝗲𝘄 𝗾𝘂𝗲𝘀𝘁𝗶𝗼𝗻𝘀 for you. You will find tons of posts stating real-world interview questions, but most of them I found are very basic; here I present some interesting questions that help you cover the topics well.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Q: You have deployed your application within a single Availability Zone, and you now want to make the same highly available. What would you do?&lt;/strong&gt;&lt;br&gt;
A: I would deploy the application in at least one more Availability Zone in the same region, with at least one more instance of the application. Then I would configure an Elastic Load Balancer to distribute incoming traffic across the instances in these AZs. Finally, I would configure Auto Scaling to maintain availability under increased load or instance failure.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Q: One of the services in one Availability Zone has failed. How do you ensure that your application hasn't stopped working and no manual intervention is required?&lt;/strong&gt;&lt;br&gt;
A: I'll ensure that my application is spread across various AZs and ELB would redirect traffic to the healthy instances. In addition to that, I will have Auto Scaling configured so that new instances would be automatically launched in another AZ in case one zone fails.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Q: You have an RDS instance running in one AZ. How do you make it highly available and minimize downtime?&lt;/strong&gt;&lt;br&gt;
A: To make the RDS instance highly available and minimize downtime, I’d enable Multi-AZ deployment on the RDS instance. This creates a standby replica of the instance in another AZ; if the primary instance fails, RDS fails over automatically to the standby, preventing downtime.&lt;/p&gt;
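
&lt;p&gt;As a sketch, enabling Multi-AZ on an existing instance via the AWS CLI (the instance identifier is illustrative):&lt;/p&gt;

&lt;p&gt;&lt;code&gt;aws rds modify-db-instance \&lt;br&gt;
  --db-instance-identifier mydb \&lt;br&gt;
  --multi-az \&lt;br&gt;
  --apply-immediately&lt;/code&gt;&lt;/p&gt;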

&lt;p&gt;&lt;strong&gt;Q: Ensure that your data is always accessible, even if a complete AWS region is unavailable. How do you approach this solution?&lt;/strong&gt;&lt;br&gt;
A: I would use a multi-region architecture, replicating data and applications across multiple regions. For S3, I can enable cross-region replication. For databases, I can use Amazon Aurora Global Database or cross-region read replicas in RDS. In addition, I would use Route 53 DNS failover to redirect traffic to a healthy region in case of an outage in one region.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Q: You have an application with very high read traffic. How can you take advantage of Availability Zones to improve performance?&lt;/strong&gt;&lt;br&gt;
A: I would deploy the read replicas in different AZs to distribute the read traffic. This approach would improve latency and performance because read traffic could now be load-balanced across multiple AZs. As for databases like RDS, I can have the read replicas in multiple AZs and configure my application for forwarding the read requests to those replicas.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Q: Your web application needs to provide low latency to users in different geographies. How would you address this?&lt;/strong&gt;&lt;br&gt;
A: I would distribute the application across multiple AWS regions and use Amazon Route 53 latency-based routing. Route 53 then routes end users to the nearest region offering the lowest latency, minimizing response time. Finally, I would cache content close to users globally using CloudFront.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Q: A natural disaster affects the data center that hosts your primary AZ. Which AWS service would ensure your EC2 instances were still reachable?&lt;/strong&gt;&lt;br&gt;
A: If the EC2 instances are launched across multiple AZs behind an Elastic Load Balancer, the ELB automatically routes traffic to instances in the healthy AZs, keeping the application available. An Auto Scaling group can also replace failed instances by launching new ones in the healthy AZs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Q: You notice that geographically dispersed users of your application experience higher latency. What changes would you implement to minimize it?&lt;/strong&gt;&lt;br&gt;
A: I would use Amazon CloudFront to distribute content from edge locations around the globe. Caching static and dynamic content at the edge brings it closer to geographically dispersed users, reducing latency and improving response times.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Q: You want to protect your S3 data in case there is a failure of an Availability Zone. What can you do?&lt;/strong&gt;&lt;br&gt;
A: By default, S3 already stores data redundantly across multiple AZs within a region. For an extra layer of protection, I could enable cross-region replication, which copies the data to a bucket in a different region to guard against a full regional failure.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Q: Your company wants to move to a new AWS region in order to serve a different market. How would you do it with little downtime?&lt;/strong&gt;&lt;br&gt;
A: First, I would replicate the application in the new region, deploying the resources needed there (EC2, RDS, etc.). Then I would use data replication services such as DMS (Database Migration Service) or cross-region S3 replication. Finally, I would update Route 53 to route traffic to the new region, performing the cutover with little to no downtime.&lt;/p&gt;

&lt;p&gt;For more content like this, follow my page.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>interview</category>
      <category>devops</category>
    </item>
    <item>
      <title>Learning AWS Day by Day — Day 83 — Disaster Recovery — Part 3</title>
      <dc:creator>Saloni Singh</dc:creator>
      <pubDate>Thu, 24 Oct 2024 13:27:58 +0000</pubDate>
      <link>https://dev.to/rksalo88/learning-aws-day-by-day-day-83-disaster-recovery-part-3-45ed</link>
      <guid>https://dev.to/rksalo88/learning-aws-day-by-day-day-83-disaster-recovery-part-3-45ed</guid>
      <description>&lt;p&gt;Exploring AWS !!&lt;/p&gt;

&lt;p&gt;Day 83:&lt;/p&gt;

&lt;p&gt;Disaster Recovery — Part 3&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fow4mxegrt2kixbfxn1dg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fow4mxegrt2kixbfxn1dg.png" alt="Image description" width="800" height="598"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If a particular region goes down, we have active/passive strategies in AWS implemented using multi-region Disaster Recovery, where the active region is the one that is up and serving traffic, and the passive region is the standby.&lt;br&gt;
We also have active/active, where both regions are up and running; in case of failure, traffic simply shifts to the remaining active region.&lt;/p&gt;

&lt;p&gt;Business Continuity plan:&lt;br&gt;
Before adopting any Disaster Recovery plan, you need to perform a Business Impact Analysis (BIA): what are the consequences for your organisation if this workload fails, how quickly do you need to recover, and how much data loss, measured in time (not in GB or TB), can be tolerated. You also need to do a risk assessment, such as the likelihood of a natural disaster and the geographical impact of such disasters. All of this needs to be thought through before planning the DR strategy, and even before planning to build the application.&lt;/p&gt;

&lt;p&gt;Recovery Objectives:&lt;br&gt;
How much data can you afford to recreate or lose? — Recovery Point Objective (RPO)&lt;br&gt;
How quickly must you recover, and what is the cost of downtime ? — Recovery Time Objective (RTO)&lt;/p&gt;

&lt;p&gt;RTO is the maximum acceptable delay between the interruption of service and the restoration of service.&lt;br&gt;
RPO is the maximum amount of data, measured in time, that you can afford to lose or recreate.&lt;/p&gt;
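One way to make the RPO definition concrete: if backups run every N minutes, the worst-case data loss is one full interval, so a schedule meets an RPO only when the interval is no longer than the RPO. A minimal sketch (not an AWS API, just the arithmetic):

```python
def worst_case_loss_minutes(backup_interval_minutes):
    # A failure just before the next backup loses everything
    # since the previous one: one full interval.
    return backup_interval_minutes

def meets_rpo(backup_interval_minutes, rpo_minutes):
    # The schedule is acceptable when the worst-case loss fits in the RPO.
    return rpo_minutes >= worst_case_loss_minutes(backup_interval_minutes)
```

For example, hourly backups satisfy a 4-hour RPO but not a 15-minute one; that is why strict RPOs push you towards continuous replication rather than periodic backups.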

&lt;p&gt;Disaster Recovery is different in cloud:&lt;br&gt;
DR strategies evolve with technology&lt;br&gt;
— Single AWS Region&lt;br&gt;
Risk of disruption or loss of one data centre&lt;br&gt;
Implement High Availability workload&lt;br&gt;
Do not forget backups&lt;br&gt;
— Multiple AWS Regions&lt;br&gt;
Risk of disruption or loss of multiple data centres&lt;br&gt;
Implement cross region availability&lt;br&gt;
Do not forget backups&lt;/p&gt;

&lt;p&gt;Strategies for Disaster Recovery:&lt;br&gt;
Backup &amp;amp; Restore&lt;br&gt;
Pilot Light&lt;br&gt;
Warm Standby&lt;br&gt;
Multi-Site active/active&lt;/p&gt;

</description>
      <category>aws</category>
      <category>disasterrecovery</category>
      <category>cloud</category>
      <category>learning</category>
    </item>
    <item>
      <title>Learning AWS Day by Day — Day 82 — Disaster Recovery (DR) — Part 2</title>
      <dc:creator>Saloni Singh</dc:creator>
      <pubDate>Tue, 22 Oct 2024 06:22:00 +0000</pubDate>
      <link>https://dev.to/rksalo88/learning-aws-day-by-day-day-82-disaster-recovery-dr-part-2-1nnj</link>
      <guid>https://dev.to/rksalo88/learning-aws-day-by-day-day-82-disaster-recovery-dr-part-2-1nnj</guid>
      <description>&lt;p&gt;Exploring AWS!&lt;/p&gt;

&lt;p&gt;Day 82&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fag8hc9a4atj1hkbxlk5i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fag8hc9a4atj1hkbxlk5i.png" alt="Image description" width="800" height="332"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Disaster Recovery (DR) — Part 2&lt;/p&gt;

&lt;p&gt;How to implement High Availability Vs How to implement Disaster Recovery:&lt;/p&gt;

&lt;p&gt;The diagram shown (above) gives you a very resilient High Availability architecture. S3 and DynamoDB (or other NoSQL databases) are not tied to a single AZ, as AWS designs them to replicate automatically across AZs. The data they store is replicated across multiple AZs, which gives you High Availability and high durability already built in.&lt;/p&gt;

&lt;p&gt;So, if there’s a flood and one AZ goes down, another AZ is still up and running. These AZs are designed specifically so that they don’t share fate, so the other two AZs are fine. Beyond this, what else can be done is to use backups.&lt;/p&gt;

&lt;p&gt;If you have EBS volumes attached, they are zonal resources, so you should keep backups; snapshots give you disaster resiliency, which is a disaster recovery strategy.&lt;br&gt;
EBS -&amp;gt; Volumes -&amp;gt; Snapshots&lt;/p&gt;
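As a sketch, a snapshot of a volume can be taken with the AWS CLI (the volume ID and description below are placeholders):

```shell
aws ec2 create-snapshot \
    --volume-id vol-0123456789abcdef0 \
    --description "nightly backup"
```

Snapshots are stored in S3 and are incremental, so repeated snapshots of the same volume only store the changed blocks.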

&lt;p&gt;Another example is human actions, such as accidental deletion, data corruption, or a bad actor. In that case replication does not help, because you would simply replicate the corrupted data; here we need point-in-time backups. With DynamoDB we can go back to the last good state and restore the data. Similarly, S3 has its versioning feature.&lt;/p&gt;

&lt;p&gt;DynamoDB -&amp;gt; Point-in-time recovery&lt;br&gt;
S3 -&amp;gt; Versioning&lt;/p&gt;
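As a sketch, both protections can be switched on with the AWS CLI (the table and bucket names below are placeholders):

```shell
# DynamoDB: enable point-in-time recovery on a table
aws dynamodb update-continuous-backups \
    --table-name MyTable \
    --point-in-time-recovery-specification PointInTimeRecoveryEnabled=true

# S3: enable versioning on a bucket
aws s3api put-bucket-versioning \
    --bucket my-bucket \
    --versioning-configuration Status=Enabled
```

With versioning on, a delete only adds a delete marker, so earlier versions of the object remain recoverable.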

</description>
      <category>aws</category>
      <category>cloud</category>
      <category>learning</category>
      <category>disasterrecovery</category>
    </item>
    <item>
      <title>Learning AWS Day by Day — Day 81 — Disaster Recovery — Part 1</title>
      <dc:creator>Saloni Singh</dc:creator>
      <pubDate>Mon, 21 Oct 2024 11:00:11 +0000</pubDate>
      <link>https://dev.to/rksalo88/learning-aws-day-by-day-day-81-disaster-recovery-part-1-21bh</link>
      <guid>https://dev.to/rksalo88/learning-aws-day-by-day-day-81-disaster-recovery-part-1-21bh</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffl46f4ss5tfhf56l0xmk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffl46f4ss5tfhf56l0xmk.png" alt="Image description" width="800" height="360"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Exploring AWS!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Day 81:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Disaster Recovery (DR) — Part 1&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;For any tools or services you use, we prefer always having some backup, or at least something that can support you when you really need it and the primary has failed. Let’s say your phone battery just drains out and there is no power; your power bank saves you there. But imagine if there was no power bank either, and you had an important call to attend?&lt;br&gt;
Do you remember Flight 777? It gives us an idea of why resiliency is important: that was the plane which caught fire and somehow managed to land with only one engine working, so high availability and proper maintenance of those jet engines was a matter of business continuity.&lt;br&gt;
Now you see, it’s important to have disaster recovery, even if you have a backup.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;About Business Continuity:&lt;/strong&gt;&lt;br&gt;
There are various types of disasters. Some are larger-scale, less frequent events: natural disasters including floods, fires, tsunamis, or earthquakes. Then we have technical disasters like human actions, maybe hacking or intentional deletions.&lt;br&gt;
Business continuity is measured per event: Recovery Time and Recovery Point.&lt;br&gt;
High Availability is about application availability: smaller-scale, more frequent events like component failures, network issues, load spikes, etc.&lt;br&gt;
Its measures are averages over time, ‘The 9s’ (e.g. 99.99% availability).&lt;br&gt;
So, Disaster Recovery is recovering from a loss of service due to some event or disaster, whereas High Availability is all about preventing that loss in the first place.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What is a disaster?&lt;/strong&gt;&lt;br&gt;
We all know about AWS Regions and AZs; regions are the physical locations around the world where AWS clusters data centres: 33 geographic regions, with announced plans for 12 more AZs and 4 more regions in Germany, Malaysia, New Zealand and Thailand.&lt;br&gt;
AZs are placed far enough apart that a single disaster should not affect several of them, but close enough to allow synchronous replication: a maximum distance of around 60 miles (100 km).&lt;br&gt;
We need to think about the types of disasters when planning DR strategies, and what the outcome and effect of such a disaster on your data could be.&lt;br&gt;
Shared Responsibility Model for resiliency: both AWS and the customer are responsible.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Customer Responsibility for resiliency ‘IN’ the cloud:&lt;/strong&gt;&lt;br&gt;
Server data backup&lt;br&gt;
Workload architecture&lt;br&gt;
Change management&lt;br&gt;
Failure management&lt;br&gt;
Networking, Quotas and Constraints&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AWS Responsibility for resiliency ‘OF’ the cloud:&lt;/strong&gt;&lt;br&gt;
Hardware and services&lt;br&gt;
Compute, Storage Database, Networking&lt;br&gt;
AWS Global Infrastructure&lt;br&gt;
Regions, AZs, Edge locations&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: High Availability is not DR.&lt;/p&gt;

</description>
      <category>disaster</category>
      <category>aws</category>
      <category>cloud</category>
      <category>learning</category>
    </item>
    <item>
      <title>Learning AWS Day by Day — Day 80 — Amazon Cloud Directory</title>
      <dc:creator>Saloni Singh</dc:creator>
      <pubDate>Tue, 04 Jun 2024 17:11:44 +0000</pubDate>
      <link>https://dev.to/rksalo88/learning-aws-day-by-day-day-80-amazon-cloud-directory-5b5f</link>
      <guid>https://dev.to/rksalo88/learning-aws-day-by-day-day-80-amazon-cloud-directory-5b5f</guid>
      <description>&lt;p&gt;Exploring AWS !!&lt;/p&gt;

&lt;p&gt;Day 80&lt;/p&gt;

&lt;p&gt;AWS Cloud Directory&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frq175xqjumoz73snn91n.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frq175xqjumoz73snn91n.png" alt="Image description" width="490" height="380"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A directory-based store in AWS, where directories can scale to millions of objects. There is no need to manage the directory infrastructure; just focus on developing and deploying your application. Cloud Directory does not limit you to organizing directory objects in a single fixed hierarchy.&lt;br&gt;
We can use Cloud Directory to organize directory objects into multiple hierarchies, supporting many organizational pivots and relationships across directory information. For example, a directory of users may provide a hierarchical view based on reporting structure, location, and project affiliation. Similarly, a directory of devices may have multiple hierarchical views based on manufacturer, current owner, and physical location.&lt;/p&gt;

&lt;p&gt;We can do the following with Cloud Directory:&lt;br&gt;
Create directory-based applications easily, without having to worry about deployment, global scale, availability, and performance&lt;br&gt;
Build applications that provide user and group management, permissions or policy management, device registry, customer management, address books, and application or product catalogs&lt;br&gt;
Define new directory objects or extend existing types to meet your application needs, reducing the code you need to write&lt;br&gt;
Reduce the complexity of layering applications on top of Cloud Directory&lt;br&gt;
Manage the evolution of schema information over time, ensuring future compatibility for consumers&lt;/p&gt;

&lt;p&gt;Cloud Directory is not a directory service for IT Administrators who want to manage or migrate their directory infrastructure.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>beginners</category>
      <category>cloud</category>
      <category>cloudcomputing</category>
    </item>
    <item>
      <title>Learning AWS Day by Day — Day 79 — Amazon MQ</title>
      <dc:creator>Saloni Singh</dc:creator>
      <pubDate>Thu, 30 May 2024 17:03:26 +0000</pubDate>
      <link>https://dev.to/rksalo88/learning-aws-day-by-day-day-79-amazon-mq-529p</link>
      <guid>https://dev.to/rksalo88/learning-aws-day-by-day-day-79-amazon-mq-529p</guid>
      <description>&lt;p&gt;Exploring AWS !!&lt;/p&gt;

&lt;p&gt;Day 79&lt;/p&gt;

&lt;p&gt;Amazon MQ&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flyd4idmcdrb1trbmfx9q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flyd4idmcdrb1trbmfx9q.png" alt="Image description" width="200" height="200"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A message broker service that makes it easier to migrate to a message broker in the cloud. A message broker allows software applications and components to communicate across different programming languages, operating systems, and formal messaging protocols.&lt;br&gt;
Amazon MQ supports the Apache ActiveMQ Classic and RabbitMQ engine types.&lt;br&gt;
Amazon MQ works with your applications and services without the need to manage, operate, or maintain your own messaging system.&lt;/p&gt;

&lt;p&gt;Difference between Amazon MQ and SQS or SNS:&lt;br&gt;
Amazon MQ is a message broker service providing compatibility with many popular message brokers. It is recommended for migrating applications from message brokers that rely on compatibility with APIs such as JMS or protocols such as AMQP 0-9-1, AMQP 1.0, MQTT, OpenWire, and STOMP.&lt;br&gt;
Amazon SQS and Amazon SNS are queue and topic services: highly scalable, simple to use, and requiring no message broker setup. These services are the better option for new applications that can benefit from nearly unlimited scalability and simple APIs.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>cloud</category>
      <category>cloudcomputing</category>
      <category>beginners</category>
    </item>
    <item>
      <title>Learning AWS Day by Day — Day 78 — Amazon DocumentDB</title>
      <dc:creator>Saloni Singh</dc:creator>
      <pubDate>Wed, 29 May 2024 17:12:19 +0000</pubDate>
      <link>https://dev.to/rksalo88/learning-aws-day-by-day-day-78-amazon-documentdb-4b4n</link>
      <guid>https://dev.to/rksalo88/learning-aws-day-by-day-day-78-amazon-documentdb-4b4n</guid>
      <description>&lt;p&gt;Exploring AWS !!&lt;/p&gt;

&lt;p&gt;Day 78&lt;/p&gt;

&lt;p&gt;Amazon DocumentDB&lt;/p&gt;

&lt;p&gt;Amazon DocumentDB (with MongoDB compatibility) is a fast, scalable, highly available database service supporting MongoDB workloads. It makes it easy for us to store and index JSON data.&lt;br&gt;
It is a non-relational database service designed from the ground up to give you better performance and scalability when operating critical MongoDB workloads at scale. In DocumentDB, storage and compute are decoupled, allowing each to scale independently. Read capacity can be increased to millions of requests per second by adding up to 15 low-latency read replicas.&lt;br&gt;
We can use the same drivers, code, and tools that we use with MongoDB.&lt;/p&gt;

&lt;p&gt;When using DocumentDB, we start by creating a cluster. A cluster contains instances and a volume that manages the storage for those instances.&lt;br&gt;
The cluster consists of 2 components:&lt;br&gt;
Cluster volume: DocumentDB has one cluster storage volume, which can store up to 128 TiB of data.&lt;br&gt;
Instances: a cluster can contain 0 to 16 instances.&lt;/p&gt;

&lt;p&gt;DocumentDB provides multiple connection options; to connect to an instance, we specify the instance’s endpoint. An endpoint is a host address and a port number, separated by a colon.&lt;/p&gt;
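The host:port split can be illustrated with a few lines of Python (the endpoint string in the usage note is a made-up example in the DocumentDB style):

```python
def parse_endpoint(endpoint):
    # An endpoint is a host address and a port number separated by a colon;
    # rpartition splits on the last colon so the host may itself contain dots.
    host, _, port = endpoint.rpartition(":")
    return host, int(port)
```

For example, `parse_endpoint("sample-cluster.node.us-east-1.docdb.amazonaws.com:27017")` yields the host and the port 27017.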

</description>
      <category>aws</category>
      <category>cloud</category>
      <category>cloudcomputing</category>
    </item>
  </channel>
</rss>
