Latest Interview QnA faced recently as a DevOps Engineer

Hello Connections,

I regularly attend interviews to bring you the latest questions. I recently attended one, and here are some of the questions I encountered:

𝗕𝗿𝗮𝗻𝗰𝗵𝗶𝗻𝗴 𝗦𝘁𝗿𝗮𝘁𝗲𝗴𝘆:
Ques.: If we have an application being developed from scratch, what branching strategy would you suggest? What questions would you ask?
Ans: For a greenfield application, a trunk-based branching strategy is generally recommended: there is one central branch that holds the production-ready code, and developers create short-lived feature branches for new features or bug fixes, merging them back into the main branch once they are complete. As the project grows, GitFlow or GitHub Flow can be useful for managing features, releases, and hotfixes.
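For illustration, a typical short-lived feature branch cycle in trunk-based development looks roughly like this (branch and remote names are placeholders):

```bash
# Start a short-lived branch off the main (trunk) branch for one small change
git checkout main && git pull origin main
git checkout -b feature/login-timeout

# Commit the change and push the branch
git commit -am "Fix login timeout handling"
git push -u origin feature/login-timeout

# Open a pull request, let CI run, then merge back into main quickly
```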

Questions to Ask:
Deployment Frequency: How often will the application be deployed?
Team Size and Coordination: How many developers are working on the application, and how do they coordinate?
Release Management: Are there specific release cycles or timelines to be considered?
CI/CD Pipeline: Will automated testing and CI/CD pipelines be used for merging and deploying code?

𝗧𝗲𝗿𝗿𝗮𝗳𝗼𝗿𝗺:

Ques: Let’s say I have two resource blocks, each tied to a boolean value, one true and one false. I want each block to be created according to its value, i.e. if true then the first block, if false then the second. How would that be possible?
Ans: Conditional creation of resource blocks based on boolean values: to conditionally create a resource based on a boolean, you can use count with a conditional expression:

resource "aws_resource" "example" {
count = var.bool_value ? 1 : 0
# Resource configuration here
}

Only the resource whose count evaluates to 1 will be created, based on the value of the condition.
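For the two-block case in the question, a minimal sketch (the resource type and variable name are placeholders) can simply invert the same boolean for the second block:

```hcl
variable "bool_value" {
  type    = bool
  default = true
}

# Created only when the flag is true
resource "aws_resource" "first" {
  count = var.bool_value ? 1 : 0
  # First block configuration
}

# Created only when the flag is false
resource "aws_resource" "second" {
  count = var.bool_value ? 0 : 1
  # Second block configuration
}
```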

2. Difference between for_each and count?

Ans: Below are the differences:
count: creates multiple instances of a resource based on an integer; instances are addressed by a numeric index, which suits simple replication.
for_each: iterates over a map or a set of strings, creating one resource instance per item; each instance is addressed by a unique key, so instances keep a stable identity even when items are added or removed. A short comparison sketch follows below.
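A minimal comparison sketch (bucket names are illustrative):

```hcl
# count: instances are addressed by index, e.g. aws_s3_bucket.numbered[0], [1], [2]
resource "aws_s3_bucket" "numbered" {
  count  = 3
  bucket = "demo-bucket-${count.index}"
}

# for_each: instances are addressed by key, e.g. aws_s3_bucket.named["logs"]
resource "aws_s3_bucket" "named" {
  for_each = toset(["logs", "assets"])
  bucket   = "demo-${each.key}"
}
```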

3. If I have two resource blocks, where the first creates 2 load balancers and the second creates 2 target groups, and I want the first load balancer attached to the first target group and the second load balancer to the second target group, how would I do that dynamically, without hardcoding anything?

Ans: This can be achieved by using for_each to iterate over the load balancers and map each one to its matching target group by key, for example:

`variable "load_balancers" {
type = list(string)
default = ["lb1", "lb2"]
}

resource "aws_lb" "lb" {
for_each = toset(var.load_balancers)
# Load balancer configuration
}

resource "aws_lb_target_group" "tg" {
for_each = toset(var.load_balancers)
# Target group configuration
}

resource "aws_lb_target_group_attachment" "attachment" {
for_each = aws_lb.lb
target_group_arn = aws_lb_target_group.tg[each.key].arn
# Attach load balancer to target group
}`
Each load balancer is wired to its matching target group by key, with nothing hardcoded. Note that an aws_lb_listener (rather than aws_lb_target_group_attachment, which registers individual targets such as instances or IPs) is what connects a load balancer to a target group.

𝗗𝗼𝗰𝗸𝗲𝗿:

Have you ever worked with multiple images in a Dockerfile? If yes, tell me why we use multiple images.
Ans: Multiple images in one Dockerfile, i.e. multi-stage builds, let you use different base images for different stages of the build process; by copying only the necessary artifacts into the final image, you can keep its size small. A common use case is separating the build environment from the runtime environment: you might compile in a Go build stage and copy only the executable into a very small runtime image, as in the sketch below.
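A minimal multi-stage sketch (image tags, paths, and the entrypoint are illustrative):

```dockerfile
# Stage 1: build the Go binary using the full toolchain image
FROM golang:1.22 AS build
WORKDIR /src
COPY . .
RUN go build -o /app .

# Stage 2: copy only the compiled binary into a tiny runtime image
FROM alpine:3.20
COPY --from=build /app /app
ENTRYPOINT ["/app"]
```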

2. If we use the command docker build -t myimage . to build from our Dockerfile, which has the default name Dockerfile, what command would be used to build from a file with some other name, and what flag would be used?

Ans: To build from a Dockerfile with a custom name, you use the -f flag:

```bash
docker build -f CustomDockerfile -t myimage .
```

This tells Docker to use CustomDockerfile instead of the default Dockerfile.

𝗞𝘂𝗯𝗲𝗿𝗻𝗲𝘁𝗲𝘀:

If I have multiple worker nodes and a master node, and I need a log-collecting pod on every node (since I need logs from all of them), how would you make sure that even if a pod goes down or crashes, it is quickly up and running again, and on the same node?
Ans: High availability, ensuring pods run on every node: a DaemonSet ensures a copy of the pod runs on each node in the cluster; if the pod crashes or a new node is added, Kubernetes automatically recreates the pod on that node.

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-collector
spec:
  selector:
    matchLabels:
      app: log-collector
  template:
    metadata:
      labels:
        app: log-collector
    spec:
      containers:
      - name: log-collector
        image: your-log-collector-image
        # Container configuration
```

With a DaemonSet, the logging pod runs on every node, is recreated on the same node if it fails, and is scheduled automatically onto newly added nodes.
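The question also includes the master/control-plane node, which is usually tainted so that ordinary pods are not scheduled on it. To have the DaemonSet cover that node too, a toleration is typically added under spec.template.spec (the taint key below is the common one on recent clusters; older clusters may use node-role.kubernetes.io/master instead):

```yaml
      tolerations:
      - key: "node-role.kubernetes.io/control-plane"
        operator: "Exists"
        effect: "NoSchedule"
```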

You can follow me on LinkedIn: https://www.linkedin.com/in/saloni-singh-aws/
