<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Dmitrii Kotov</title>
    <description>The latest articles on DEV Community by Dmitrii Kotov (@tutunak).</description>
    <link>https://dev.to/tutunak</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F340143%2F7c3922fe-6e92-460f-b159-c525515f9d14.jpeg</url>
      <title>DEV Community: Dmitrii Kotov</title>
      <link>https://dev.to/tutunak</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/tutunak"/>
    <language>en</language>
    <item>
      <title>Why Idempotence Matters in CI/CD Pipeline Build Steps</title>
      <dc:creator>Dmitrii Kotov</dc:creator>
      <pubDate>Tue, 26 Nov 2024 13:07:07 +0000</pubDate>
      <link>https://dev.to/tutunak/why-idempotence-matters-in-cicd-pipeline-build-steps-4ka</link>
      <guid>https://dev.to/tutunak/why-idempotence-matters-in-cicd-pipeline-build-steps-4ka</guid>
      <description>&lt;p&gt;Recently, I was caught off guard by a question: why should the steps of a build script in a pipeline be idempotent? Why can't we build and push the container every time? Why is idempotence so important?&lt;/p&gt;

&lt;p&gt;Idempotence in pipeline build steps matters for several reasons.&lt;/p&gt;

&lt;h3&gt;
  
  
  Predictability and Stability
&lt;/h3&gt;

&lt;p&gt;When a step in a pipeline is idempotent, running it multiple times always gives the same result. This is essential for CI/CD pipelines because the same build can be triggered more than once: after a transient error, after someone changes the code, or while testing the pipeline itself. If the steps are idempotent, you don’t have to worry about unexpected results. Everything works the same way every time.&lt;/p&gt;
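
&lt;p&gt;The property can be sketched in plain shell: a step that appends output diverges on every rerun, while a step that overwrites converges to the same state no matter how often it runs (the file name is illustrative).&lt;/p&gt;

```shell
# The non-idempotent step appends, so each run changes the outcome;
# the idempotent step overwrites, so any number of runs ends the same way.
non_idempotent_step() { echo "built" >> build.log; }
idempotent_step()     { echo "built" > build.log; }

rm -f build.log
idempotent_step
idempotent_step
idempotent_step
cat build.log   # still a single "built" line after three runs
```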

&lt;h3&gt;
  
  
  Avoiding Wasted Resources
&lt;/h3&gt;

&lt;p&gt;If you build and push a container every time, even when nothing has changed, you waste resources. Each push creates a new image version in the registry, even if it is identical to the previous one. That consumes storage, makes versions harder to manage, and can cost more money if you store the images with a cloud provider.&lt;/p&gt;

&lt;h3&gt;
  
  
  Preventing Errors
&lt;/h3&gt;

&lt;p&gt;When steps are not idempotent, they might behave differently each time you run them. This can cause bugs that are difficult to understand and fix. For example, if a build step depends on a file that changes randomly or a tool that behaves differently depending on the environment, the pipeline can fail in unpredictable ways. Idempotent steps make the pipeline more reliable.&lt;/p&gt;

&lt;h3&gt;
  
  
  Saving Time with Caching
&lt;/h3&gt;

&lt;p&gt;Idempotent steps allow you to use caching. This means that if something has already been built or processed, it doesn’t need to be done again. For example, if your container image has several layers and only one of them has changed, you can reuse the unchanged layers. This can save a lot of time during the build process.&lt;/p&gt;
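
&lt;p&gt;As a sketch, ordering a Dockerfile so that rarely changing layers come first lets the builder reuse them from cache (the base image, file names, and commands here are illustrative, not a prescription):&lt;/p&gt;

```dockerfile
# Dependency layers first: they are rebuilt only when the lock file changes.
FROM node:20
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci
# Application code last: edits here invalidate only the layers below.
COPY . .
RUN npm run build
```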

&lt;h3&gt;
  
  
  Easier Debugging
&lt;/h3&gt;

&lt;p&gt;When your pipeline is predictable, it’s much easier to find out what went wrong if something fails. You can check each step and know that it works the same way every time. If the pipeline behaves differently on every run, debugging becomes a nightmare.&lt;/p&gt;




&lt;h3&gt;
  
  
  Why shouldn’t we rebuild the container every time?
&lt;/h3&gt;

&lt;p&gt;If the code or dependencies haven’t changed, rebuilding the container is unnecessary. It doesn’t add any value but uses extra time, CPU, and storage. Every new build creates a new version of the container image, which can make it hard to manage the versions and figure out which one you actually need. It’s better to reuse existing builds whenever possible.&lt;/p&gt;
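
&lt;p&gt;One common way to make the build-and-push step idempotent is to key the image tag on the commit and skip the work when the registry already has that tag. A minimal sketch, assuming a Docker-based pipeline (the registry and image names are hypothetical):&lt;/p&gt;

```shell
TAG=$(git rev-parse --short HEAD)
IMAGE=registry.example.com/app:$TAG

# "docker manifest inspect" exits non-zero when the tag is absent.
if docker manifest inspect "$IMAGE" 1>/dev/null 2>/dev/null; then
  echo "$IMAGE already exists, skipping build and push"
else
  docker build -t "$IMAGE" .
  docker push "$IMAGE"
fi
```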

&lt;p&gt;In summary, idempotence is important because it makes the pipeline stable, saves resources, prevents errors, and makes everything faster and easier to manage. It’s like creating a system where you know exactly what will happen, no matter how many times you use it.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>cicd</category>
      <category>docker</category>
    </item>
    <item>
      <title>Kubernetes Interview Questions: Kubernetes Architecture: Node</title>
      <dc:creator>Dmitrii Kotov</dc:creator>
      <pubDate>Mon, 18 Dec 2023 09:00:00 +0000</pubDate>
      <link>https://dev.to/tutunak/kubernetes-interview-questions-kubernetes-architecture-node-26ef</link>
      <guid>https://dev.to/tutunak/kubernetes-interview-questions-kubernetes-architecture-node-26ef</guid>
      <description>&lt;h4&gt;
  
  
  What are Nodes in Kubernetes?
&lt;/h4&gt;

&lt;p&gt;Nodes are where Kubernetes places containers into Pods to run. A node may be a virtual or physical machine, depending on the cluster.&lt;/p&gt;

&lt;h4&gt;
  
  
  Who manages each node in a Kubernetes cluster?
&lt;/h4&gt;

&lt;p&gt;Each node is managed by the control plane and contains the services necessary to run Pods.&lt;/p&gt;

&lt;h4&gt;
  
  
  What are the key components on a node in Kubernetes?
&lt;/h4&gt;

&lt;p&gt;The components on a node include the kubelet, a container runtime, and the kube-proxy.&lt;/p&gt;

&lt;h4&gt;
  
  
  What are the two main ways to add Nodes to the API server in Kubernetes?
&lt;/h4&gt;

&lt;p&gt;The two main ways are: 1) the kubelet on a node registers itself with the control plane, and 2) you (or another human user) manually add a Node object.&lt;/p&gt;

&lt;h4&gt;
  
  
  What happens after a Node object is created or a kubelet self-registers in Kubernetes?
&lt;/h4&gt;

&lt;p&gt;After a Node object is created, or a kubelet on a node self-registers, the control plane checks whether the new Node object is valid.&lt;/p&gt;

&lt;h4&gt;
  
  
  When is a node considered eligible to run a Pod in Kubernetes?
&lt;/h4&gt;

&lt;p&gt;A node is eligible to run a Pod if it is healthy, meaning all necessary services are running.&lt;/p&gt;

&lt;h4&gt;
  
  
  What happens if a node is not healthy in Kubernetes?
&lt;/h4&gt;

&lt;p&gt;If the node is not healthy, it is ignored for any cluster activity until it becomes healthy. Kubernetes keeps the object for the invalid Node and continues checking to see whether it becomes healthy.&lt;/p&gt;

&lt;h4&gt;
  
  
  How can you stop health checking on an invalid Node in Kubernetes?
&lt;/h4&gt;

&lt;p&gt;You, or a controller, must explicitly delete the Node object to stop that health checking.&lt;/p&gt;

&lt;h4&gt;
  
  
  What is the requirement for the name of a Node object in Kubernetes?
&lt;/h4&gt;

&lt;p&gt;The name of a Node object must be a valid DNS subdomain name.&lt;/p&gt;

&lt;h4&gt;
  
  
  What issues can arise if a Node instance is modified without changing its name?
&lt;/h4&gt;

&lt;p&gt;Kubernetes assumes that a Node with the same name has the same state and attributes, so if the underlying instance is modified without changing its name, the cluster’s view of that Node becomes inconsistent.&lt;/p&gt;

&lt;h4&gt;
  
  
  What should be done if a Node needs to be replaced or updated significantly in Kubernetes?
&lt;/h4&gt;

&lt;p&gt;If the Node needs to be replaced or updated significantly, the existing Node object needs to be removed from the API server first and re-added after the update.&lt;/p&gt;

&lt;h4&gt;
  
  
  What is the preferred pattern for the registration of Nodes in Kubernetes?
&lt;/h4&gt;

&lt;p&gt;The preferred pattern, used by most distros, is self-registration of Nodes.&lt;/p&gt;

&lt;h4&gt;
  
  
  What is a good practice when Node configuration needs to be updated in Kubernetes?
&lt;/h4&gt;

&lt;p&gt;It is a good practice to re-register the node with the API server when Node configuration needs to be updated.&lt;/p&gt;

&lt;h4&gt;
  
  
  What issues can arise if the Node configuration is changed on kubelet restart while Pods are already scheduled on the Node?
&lt;/h4&gt;

&lt;p&gt;Pods already scheduled on the Node may misbehave or cause issues if the Node configuration is changed on kubelet restart.&lt;/p&gt;

&lt;h4&gt;
  
  
  Why is Node re-registration important in the context of updating Node configuration?
&lt;/h4&gt;

&lt;p&gt;Node re-registration ensures all Pods will be drained and properly re-scheduled, maintaining the integrity of the cluster’s operations.&lt;/p&gt;

&lt;h4&gt;
  
  
  How can you create and modify Node objects in Kubernetes?
&lt;/h4&gt;

&lt;p&gt;You can create and modify Node objects using kubectl.&lt;/p&gt;

&lt;h4&gt;
  
  
  What should you set the kubelet flag to when creating Node objects manually?
&lt;/h4&gt;

&lt;p&gt;When you want to create Node objects manually, set the kubelet flag &lt;code&gt;--register-node=false&lt;/code&gt;.&lt;/p&gt;
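
&lt;p&gt;With self-registration disabled, a Node object can be created by hand. A minimal sketch (the node name and label are hypothetical):&lt;/p&gt;

```yaml
# node.yaml
apiVersion: v1
kind: Node
metadata:
  name: worker-1
  labels:
    kubernetes.io/hostname: worker-1
# Register it with: kubectl apply -f node.yaml
```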

&lt;h4&gt;
  
  
  What does marking a node as unschedulable do?
&lt;/h4&gt;

&lt;p&gt;Marking a node as unschedulable prevents the scheduler from placing new pods onto that Node but does not affect existing Pods on the Node. This is useful as a preparatory step before a node reboot or other maintenance.&lt;/p&gt;

&lt;h4&gt;
  
  
  How do you mark a Node as unschedulable in Kubernetes?
&lt;/h4&gt;

&lt;p&gt;To mark a Node unschedulable, you run the command &lt;code&gt;kubectl cordon $NODENAME&lt;/code&gt;.&lt;/p&gt;

&lt;h4&gt;
  
  
  What is a special case where Pods will still run on an unschedulable Node?
&lt;/h4&gt;

&lt;p&gt;Pods that are part of a DaemonSet tolerate being run on an unschedulable Node. DaemonSets typically provide node-local services that should run on the Node even if it is being drained of workload applications.&lt;/p&gt;

&lt;h4&gt;
  
  
  What information does a Node's status contain in Kubernetes?
&lt;/h4&gt;

&lt;p&gt;A Node's status contains the following information: Addresses, Conditions, Capacity and Allocatable, and Info.&lt;/p&gt;

&lt;h4&gt;
  
  
  How can you view a Node's status and other details in Kubernetes?
&lt;/h4&gt;

&lt;p&gt;You can view a Node's status and other details using the command &lt;code&gt;kubectl describe node &amp;lt;insert-node-name-here&amp;gt;&lt;/code&gt;. &lt;/p&gt;

&lt;h4&gt;
  
  
  What is the purpose of heartbeats sent by Kubernetes nodes?
&lt;/h4&gt;

&lt;p&gt;Heartbeats sent by Kubernetes nodes help the cluster determine the availability of each node, and to take action when failures are detected.&lt;/p&gt;

&lt;h4&gt;
  
  
  What are the two forms of heartbeats for nodes in Kubernetes?
&lt;/h4&gt;

&lt;p&gt;The two forms of heartbeats for nodes are: 1) Updates to the &lt;code&gt;.status&lt;/code&gt; of a Node, and 2) Lease objects within the kube-node-lease namespace.&lt;/p&gt;

&lt;h4&gt;
  
  
  What is the role of the node controller in Kubernetes?
&lt;/h4&gt;

&lt;p&gt;The node controller is a Kubernetes control plane component that manages various aspects of nodes.&lt;/p&gt;

&lt;h4&gt;
  
  
  What is the first role of the node controller in a node's life?
&lt;/h4&gt;

&lt;p&gt;The first role is assigning a CIDR block to the node when it is registered, if CIDR assignment is turned on.&lt;/p&gt;

&lt;h4&gt;
  
  
  What happens if the node controller finds that the VM for an unhealthy node is not available?
&lt;/h4&gt;

&lt;p&gt;If the VM for an unhealthy node is not available, the node controller deletes the node from its list of nodes.&lt;/p&gt;

&lt;h4&gt;
  
  
  What is one of the responsibilities of the node controller regarding node health?
&lt;/h4&gt;

&lt;p&gt;The node controller is responsible for updating the Ready condition in the Node's &lt;code&gt;.status&lt;/code&gt; field if a node becomes unreachable, setting it to Unknown.&lt;/p&gt;

&lt;h4&gt;
  
  
  What action does the node controller take if a node remains unreachable?
&lt;/h4&gt;

&lt;p&gt;If a node remains unreachable, the node controller triggers API-initiated eviction for all of the Pods on the unreachable node.&lt;/p&gt;

&lt;h4&gt;
  
  
  How long does the node controller wait before submitting the first eviction request for an unreachable node?
&lt;/h4&gt;

&lt;p&gt;By default, the node controller waits 5 minutes between marking the node as Unknown and submitting the first eviction request.&lt;/p&gt;

&lt;h4&gt;
  
  
  How often does the node controller check the state of each node by default?
&lt;/h4&gt;

&lt;p&gt;By default, the node controller checks the state of each node every 5 seconds.&lt;/p&gt;

&lt;h4&gt;
  
  
  Can the period for checking the state of each node by the node controller be configured?
&lt;/h4&gt;

&lt;p&gt;Yes, this period can be configured using the &lt;code&gt;--node-monitor-period&lt;/code&gt; flag on the kube-controller-manager component.&lt;/p&gt;

&lt;h4&gt;
  
  
  What is the default eviction rate limit set by the node controller?
&lt;/h4&gt;

&lt;p&gt;The node controller limits the eviction rate to &lt;code&gt;--node-eviction-rate&lt;/code&gt; (default 0.1) per second, meaning it won't evict pods from more than 1 node per 10 seconds.&lt;/p&gt;

&lt;h4&gt;
  
  
  How does the node eviction behavior change when a node in an availability zone becomes unhealthy?
&lt;/h4&gt;

&lt;p&gt;When a node in an availability zone becomes unhealthy, the node controller checks the percentage of unhealthy nodes and may reduce the eviction rate if a certain threshold of unhealthy nodes is reached.&lt;/p&gt;

&lt;h4&gt;
  
  
  What happens to the eviction rate if the fraction of unhealthy nodes is at least &lt;code&gt;--unhealthy-zone-threshold&lt;/code&gt;?
&lt;/h4&gt;

&lt;p&gt;If the fraction of unhealthy nodes is at least &lt;code&gt;--unhealthy-zone-threshold&lt;/code&gt; (default 0.55), then the eviction rate is reduced.&lt;/p&gt;

&lt;h4&gt;
  
  
  What is the node eviction policy in small clusters?
&lt;/h4&gt;

&lt;p&gt;In small clusters (less than or equal to &lt;code&gt;--large-cluster-size-threshold&lt;/code&gt; nodes, default 50), evictions are stopped.&lt;/p&gt;

&lt;h4&gt;
  
  
  How is the eviction rate adjusted in larger clusters with unhealthy nodes?
&lt;/h4&gt;

&lt;p&gt;In larger clusters with unhealthy nodes, the eviction rate is reduced to &lt;code&gt;--secondary-node-eviction-rate&lt;/code&gt; (default 0.01) per second.&lt;/p&gt;

&lt;h4&gt;
  
  
  Does the node controller consider per-zone unavailability if the cluster does not span multiple cloud provider availability zones?
&lt;/h4&gt;

&lt;p&gt;If the cluster does not span multiple cloud provider availability zones, then the eviction mechanism does not take per-zone unavailability into account.&lt;/p&gt;

&lt;h4&gt;
  
  
  What happens to the eviction rate if all nodes in a zone are unhealthy?
&lt;/h4&gt;

&lt;p&gt;If all nodes in a zone are unhealthy, the node controller evicts at the normal rate of &lt;code&gt;--node-eviction-rate&lt;/code&gt;.&lt;/p&gt;

&lt;h4&gt;
  
  
  What is the node controller's policy for evictions in cases where all zones are completely unhealthy?
&lt;/h4&gt;

&lt;p&gt;If all zones are completely unhealthy, the node controller assumes a connectivity issue and does not perform any evictions.&lt;/p&gt;

&lt;h4&gt;
  
  
  What type of information about resource capacity do Node objects track in Kubernetes?
&lt;/h4&gt;

&lt;p&gt;Node objects track information about the Node's resource capacity, such as the amount of memory available and the number of CPUs.&lt;/p&gt;

&lt;h4&gt;
  
  
  How do Nodes that self-register report their capacity?
&lt;/h4&gt;

&lt;p&gt;Nodes that self-register report their capacity during the registration process.&lt;/p&gt;

&lt;h4&gt;
  
  
  What does the Kubernetes scheduler ensure regarding resources on a Node?
&lt;/h4&gt;

&lt;p&gt;The Kubernetes scheduler ensures that there are enough resources for all the Pods on a Node.&lt;/p&gt;

&lt;h4&gt;
  
  
  How does the scheduler determine if a Node has enough resources?
&lt;/h4&gt;

&lt;p&gt;The scheduler checks that the sum of the requests of containers on the node is no greater than the node's capacity.&lt;/p&gt;

&lt;h4&gt;
  
  
  What is excluded from the sum of requests when the scheduler checks a Node's capacity?
&lt;/h4&gt;

&lt;p&gt;The sum of requests excludes any containers started directly by the container runtime and any processes running outside of the kubelet's control.&lt;/p&gt;

&lt;h4&gt;
  
  
  Where can you find information about reserving resources for non-Pod processes?
&lt;/h4&gt;

&lt;p&gt;Information about explicitly reserving resources for non-Pod processes can be found in the section "reserve resources for system daemons."&lt;/p&gt;

&lt;h4&gt;
  
  
  What does the kubelet do during a node system shutdown in Kubernetes?
&lt;/h4&gt;

&lt;p&gt;The kubelet attempts to detect node system shutdown and terminates pods running on the node.&lt;/p&gt;

&lt;h4&gt;
  
  
  How does the kubelet handle pods during a node shutdown?
&lt;/h4&gt;

&lt;p&gt;During a node shutdown, the kubelet ensures that pods follow the normal pod termination process and does not accept new Pods.&lt;/p&gt;

&lt;h4&gt;
  
  
  What does the Graceful node shutdown feature depend on?
&lt;/h4&gt;

&lt;p&gt;The Graceful node shutdown feature depends on systemd, using systemd inhibitor locks to delay the node shutdown.&lt;/p&gt;

&lt;h4&gt;
  
  
  Is the GracefulNodeShutdown feature gate enabled by default in Kubernetes?
&lt;/h4&gt;

&lt;p&gt;Yes, the GracefulNodeShutdown feature gate is enabled by default in Kubernetes.&lt;/p&gt;

&lt;h4&gt;
  
  
  How is the node marked during a shutdown, and how does the kube-scheduler respond?
&lt;/h4&gt;

&lt;p&gt;Once systemd detects a node shutdown, the kubelet sets a NotReady condition on the Node with the reason "node is shutting down". The kube-scheduler honors this condition and does not schedule any new Pods onto the affected node.&lt;/p&gt;

&lt;h4&gt;
  
  
  How are pods terminated by the kubelet during a graceful shutdown?
&lt;/h4&gt;

&lt;p&gt;During a graceful shutdown, the kubelet terminates pods in two phases: first, it terminates regular pods running on the node, and then it terminates critical pods.&lt;/p&gt;

&lt;h4&gt;
  
  
  What happens to the Node and Pods if a node termination is cancelled?
&lt;/h4&gt;

&lt;p&gt;If node termination is cancelled, the Node returns to the Ready state, but Pods that started the termination process will not be restored and need to be re-scheduled.&lt;/p&gt;

&lt;h4&gt;
  
  
  How are pods marked and shown in &lt;code&gt;kubectl&lt;/code&gt; when evicted during a graceful node shutdown?
&lt;/h4&gt;

&lt;p&gt;Pods evicted during a graceful node shutdown are marked as shutdown, with their status shown as Terminated in &lt;code&gt;kubectl get pods&lt;/code&gt;, and &lt;code&gt;kubectl describe pod&lt;/code&gt; indicating that the pod was terminated due to imminent node shutdown.&lt;/p&gt;

&lt;h4&gt;
  
  
  What are the two phases of pod shutdown in the Graceful Node Shutdown feature?
&lt;/h4&gt;

&lt;p&gt;The Graceful Node Shutdown feature shuts down pods in two phases: non-critical pods, followed by critical pods.&lt;/p&gt;

&lt;h4&gt;
  
  
  What happens to pods that are part of a StatefulSet during an undetected node shutdown?
&lt;/h4&gt;

&lt;p&gt;During an undetected node shutdown, pods that are part of a StatefulSet get stuck in terminating status on the shutdown node and cannot move to a new running node.&lt;/p&gt;

&lt;h4&gt;
  
  
  Why can't the StatefulSet create a new pod with the same name during an undetected node shutdown?
&lt;/h4&gt;

&lt;p&gt;The StatefulSet cannot create a new pod with the same name because the kubelet on the shutdown node is not available to delete the pods.&lt;/p&gt;

&lt;h4&gt;
  
  
  What happens to the volumes used by pods during an undetected node shutdown?
&lt;/h4&gt;

&lt;p&gt;If there are volumes used by the pods, the VolumeAttachments will not be deleted from the original shutdown node, so the volumes used by these pods cannot be attached to a new running node.&lt;/p&gt;

&lt;h4&gt;
  
  
  What are the two phases of pod termination during a non-graceful shutdown?
&lt;/h4&gt;

&lt;p&gt;The two phases are: 1) Force delete the Pods that do not have matching out-of-service tolerations, and 2) Immediately perform detach volume operations for such pods.&lt;/p&gt;

&lt;h4&gt;
  
  
  What is a potential warning regarding the use of swap memory in Kubernetes?
&lt;/h4&gt;

&lt;p&gt;When the memory swap feature is turned on, there is a risk that Kubernetes data such as the content of Secret objects written to tmpfs could be swapped to disk.&lt;/p&gt;

&lt;h4&gt;
  
  
  How can a user configure how a node uses swap memory?
&lt;/h4&gt;

&lt;p&gt;A user can configure the node's use of swap memory by setting &lt;code&gt;memorySwap.swapBehavior&lt;/code&gt;, for example, to &lt;code&gt;UnlimitedSwap&lt;/code&gt; or &lt;code&gt;LimitedSwap&lt;/code&gt;.&lt;/p&gt;

&lt;h4&gt;
  
  
  What is the support status of swap with different cgroup versions?
&lt;/h4&gt;

&lt;p&gt;Swap is supported only with cgroup v2; cgroup v1 is not supported.&lt;/p&gt;

&lt;h4&gt;
  
  
  What does the process of safely draining a node involve?
&lt;/h4&gt;

&lt;p&gt;Safely draining a node involves using &lt;code&gt;kubectl drain&lt;/code&gt; to safely evict all pods from a node before performing maintenance, optionally respecting the PodDisruptionBudget defined.&lt;/p&gt;

&lt;h4&gt;
  
  
  What does &lt;code&gt;kubectl drain&lt;/code&gt; do?
&lt;/h4&gt;

&lt;p&gt;&lt;code&gt;kubectl drain&lt;/code&gt; safely evicts all pods from a node, respecting the PodDisruptionBudgets specified, in preparation for maintenance like kernel upgrades or hardware maintenance.&lt;/p&gt;

&lt;h4&gt;
  
  
  How do you drain a node with DaemonSet pods?
&lt;/h4&gt;

&lt;p&gt;When draining a node with DaemonSet pods, use &lt;code&gt;kubectl drain --ignore-daemonsets &amp;lt;node name&amp;gt;&lt;/code&gt; as the DaemonSet controller immediately replaces missing Pods with new ones.&lt;/p&gt;

&lt;h4&gt;
  
  
  What should you do with the node during maintenance and after it's completed?
&lt;/h4&gt;

&lt;p&gt;During maintenance, power down the node or delete its VM. After maintenance, if the node remains in the cluster, use &lt;code&gt;kubectl uncordon &amp;lt;node name&amp;gt;&lt;/code&gt; to resume scheduling new pods onto the node.&lt;/p&gt;
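
&lt;p&gt;The maintenance cycle described above can be sketched as (the node name is hypothetical):&lt;/p&gt;

```shell
# Evict workloads; DaemonSet pods are skipped, emptyDir data is discarded.
kubectl drain worker-1 --ignore-daemonsets --delete-emptydir-data
# ... reboot, upgrade the kernel, or perform hardware maintenance ...
# If the node stays in the cluster, allow scheduling again:
kubectl uncordon worker-1
```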

&lt;h4&gt;
  
  
  Can you drain multiple nodes in parallel?
&lt;/h4&gt;

&lt;p&gt;Yes, you can run multiple &lt;code&gt;kubectl drain&lt;/code&gt; commands for different nodes in parallel, and they will still respect the PodDisruptionBudget specified.&lt;/p&gt;

&lt;h4&gt;
  
  
  What alternative is there to using &lt;code&gt;kubectl drain&lt;/code&gt; for evictions?
&lt;/h4&gt;

&lt;p&gt;As an alternative to &lt;code&gt;kubectl drain&lt;/code&gt;, you can programmatically cause evictions using the eviction API for finer control over the pod eviction process.&lt;/p&gt;
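
&lt;p&gt;A sketch of a programmatic eviction through the Eviction subresource, assuming &lt;code&gt;kubectl proxy&lt;/code&gt; is running locally (the pod and namespace names are hypothetical):&lt;/p&gt;

```shell
curl -X POST \
  http://localhost:8001/api/v1/namespaces/default/pods/web-0/eviction \
  -H "Content-Type: application/json" \
  -d '{"apiVersion": "policy/v1", "kind": "Eviction",
       "metadata": {"name": "web-0", "namespace": "default"}}'
```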

&lt;h4&gt;
  
  
  What information does a Node's status contain in Kubernetes?
&lt;/h4&gt;

&lt;p&gt;A Node's status contains Addresses, Conditions, Capacity and Allocatable, and Info.&lt;/p&gt;

&lt;h4&gt;
  
  
  What are the different types of Addresses found in a Node's status?
&lt;/h4&gt;

&lt;p&gt;The types of Addresses include HostName, ExternalIP, and InternalIP, which vary depending on the cloud provider or bare metal configuration.&lt;/p&gt;

&lt;h4&gt;
  
  
  How are taints related to node conditions in Kubernetes?
&lt;/h4&gt;

&lt;p&gt;When problems occur on nodes, Kubernetes automatically creates taints matching the conditions affecting the node, like &lt;code&gt;node.kubernetes.io/unreachable&lt;/code&gt; or &lt;code&gt;node.kubernetes.io/not-ready&lt;/code&gt;.&lt;/p&gt;
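
&lt;p&gt;These automatically added taints can be inspected directly (the node name is hypothetical):&lt;/p&gt;

```shell
# Prints the node's taints, e.g. node.kubernetes.io/unreachable:NoExecute
kubectl get node worker-1 -o jsonpath='{.spec.taints}'
```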

</description>
      <category>kubernetes</category>
      <category>interview</category>
      <category>career</category>
      <category>devops</category>
    </item>
    <item>
      <title>Kubernetes Interview Questions: Kubernetes Components</title>
      <dc:creator>Dmitrii Kotov</dc:creator>
      <pubDate>Sun, 03 Dec 2023 07:37:59 +0000</pubDate>
      <link>https://dev.to/tutunak/kubernetes-interview-questions-kubernetes-components-1lge</link>
      <guid>https://dev.to/tutunak/kubernetes-interview-questions-kubernetes-components-1lge</guid>
      <description>&lt;h4&gt;
  
  
  What does a Kubernetes cluster consist of?
&lt;/h4&gt;

&lt;p&gt;A Kubernetes cluster consists of a set of worker machines, called nodes, that run containerized applications.&lt;/p&gt;

&lt;h4&gt;
  
  
  What is the minimum number of worker nodes a Kubernetes cluster can have?
&lt;/h4&gt;

&lt;p&gt;Every cluster has at least one worker node.&lt;/p&gt;

&lt;h4&gt;
  
  
  What do the worker nodes in a Kubernetes cluster host?
&lt;/h4&gt;

&lt;p&gt;The worker node(s) host the pods that are the components of the application workload.&lt;/p&gt;

&lt;h4&gt;
  
  
  Who manages the worker nodes and the Pods in a Kubernetes cluster?
&lt;/h4&gt;

&lt;p&gt;The control plane manages the worker nodes and the Pods in the cluster.&lt;/p&gt;

&lt;h4&gt;
  
  
  Can control plane components be run on any machine in the cluster?
&lt;/h4&gt;

&lt;p&gt;Yes, control plane components can be run on any machine in the cluster.&lt;/p&gt;

&lt;h4&gt;
  
  
  Are user containers run on the machine hosting the control plane components?
&lt;/h4&gt;

&lt;p&gt;No, user containers are not run on the machine hosting the control plane components.&lt;/p&gt;

&lt;h4&gt;
  
  
  What are the components of a control plane?
&lt;/h4&gt;

&lt;p&gt;The control plane components are kube-apiserver, etcd, kube-scheduler, kube-controller-manager, and cloud-controller-manager.&lt;/p&gt;
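
&lt;p&gt;On a kubeadm-provisioned cluster these components typically run as static pods and can be listed as follows (the label may differ on other distributions):&lt;/p&gt;

```shell
kubectl get pods -n kube-system -l tier=control-plane
```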

&lt;h4&gt;
  
  
  What is the kube-apiserver?
&lt;/h4&gt;

&lt;p&gt;The kube-apiserver is the main implementation of a Kubernetes API server, which is a component of the Kubernetes control plane.&lt;/p&gt;

&lt;h4&gt;
  
  
  What role does the API server play in the Kubernetes control plane?
&lt;/h4&gt;

&lt;p&gt;The API server is the front end for the Kubernetes control plane and exposes the Kubernetes API.&lt;/p&gt;

&lt;h4&gt;
  
  
  Can you run multiple instances of kube-apiserver?
&lt;/h4&gt;

&lt;p&gt;Yes, you can run several instances of kube-apiserver and balance traffic between those instances.&lt;/p&gt;

&lt;h4&gt;
  
  
  What is etcd in the context of Kubernetes?
&lt;/h4&gt;

&lt;p&gt;etcd is a consistent and highly-available key value store used as Kubernetes' backing store for all cluster data.&lt;/p&gt;

&lt;h4&gt;
  
  
  What is the kube-scheduler?
&lt;/h4&gt;

&lt;p&gt;kube-scheduler is a control plane component that watches for newly created Pods with no assigned node, and selects a node for them to run on.&lt;/p&gt;

&lt;h4&gt;
  
  
  What factors are considered in the kube-scheduler's scheduling decisions?
&lt;/h4&gt;

&lt;p&gt;Factors considered include individual and collective resource requirements, hardware/software/policy constraints, affinity and anti-affinity specifications, data locality, inter-workload interference, and deadlines.&lt;/p&gt;

&lt;h4&gt;
  
  
  What is the kube-controller-manager?
&lt;/h4&gt;

&lt;p&gt;kube-controller-manager is a control plane component that runs controller processes.&lt;/p&gt;

&lt;h4&gt;
  
  
  How are the controllers in kube-controller-manager implemented to reduce complexity?
&lt;/h4&gt;

&lt;p&gt;To reduce complexity, all the controllers are compiled into a single binary and run in a single process.&lt;/p&gt;

&lt;h4&gt;
  
  
  What is the responsibility of the Node controller in kube-controller-manager?
&lt;/h4&gt;

&lt;p&gt;The Node controller is responsible for noticing and responding when nodes go down.&lt;/p&gt;

&lt;h4&gt;
  
  
  What does the Job controller do?
&lt;/h4&gt;

&lt;p&gt;The Job controller watches for Job objects that represent one-off tasks, then creates Pods to run those tasks to completion.&lt;/p&gt;

&lt;h4&gt;
  
  
  What is the role of the EndpointSlice controller?
&lt;/h4&gt;

&lt;p&gt;The EndpointSlice controller populates EndpointSlice objects, which provide a link between Services and Pods.&lt;/p&gt;

&lt;h4&gt;
  
  
  What function does the ServiceAccount controller perform?
&lt;/h4&gt;

&lt;p&gt;The ServiceAccount controller creates default ServiceAccounts for new namespaces.&lt;/p&gt;

&lt;h4&gt;
  
  
  What are the components of a Worker Node?
&lt;/h4&gt;

&lt;p&gt;The components of a worker node are the kubelet, kube-proxy, and a container runtime.&lt;/p&gt;

&lt;h4&gt;
  
  
  What is the kubelet?
&lt;/h4&gt;

&lt;p&gt;The kubelet is an agent that runs on each node in the cluster and ensures that containers are running in a Pod.&lt;/p&gt;

&lt;h4&gt;
  
  
  What is the primary function of the kubelet?
&lt;/h4&gt;

&lt;p&gt;The primary function of the kubelet is to take a set of PodSpecs, provided through various mechanisms, and ensure that the containers described in those PodSpecs are running and healthy.&lt;/p&gt;

&lt;h4&gt;
  
  
  Does the kubelet manage all containers on a node?
&lt;/h4&gt;

&lt;p&gt;No, the kubelet does not manage containers that were not created by Kubernetes.&lt;/p&gt;

&lt;h4&gt;
  
  
  What is kube-proxy?
&lt;/h4&gt;

&lt;p&gt;kube-proxy is a network proxy that runs on each node in your cluster, implementing part of the Kubernetes Service concept.&lt;/p&gt;

&lt;h4&gt;
  
  
  What is the primary role of kube-proxy in a Kubernetes cluster?
&lt;/h4&gt;

&lt;p&gt;The primary role of kube-proxy is to maintain network rules on nodes.&lt;/p&gt;

&lt;h4&gt;
  
  
  How does kube-proxy manage network traffic?
&lt;/h4&gt;

&lt;p&gt;kube-proxy uses the operating system packet filtering layer if it is available. If not, kube-proxy forwards the traffic itself.&lt;/p&gt;

&lt;h4&gt;
  
  
  What is a container runtime in the context of Kubernetes?
&lt;/h4&gt;

&lt;p&gt;A container runtime is a fundamental component that empowers Kubernetes to run containers effectively.&lt;/p&gt;

&lt;h4&gt;
  
  
  What are the responsibilities of a container runtime within the Kubernetes environment?
&lt;/h4&gt;

&lt;p&gt;It is responsible for managing the execution and lifecycle of containers.&lt;/p&gt;

&lt;h4&gt;
  
  
  Does Kubernetes support multiple container runtimes?
&lt;/h4&gt;

&lt;p&gt;Yes, Kubernetes supports container runtimes such as containerd, CRI-O, and any other implementation of the Kubernetes CRI (Container Runtime Interface).&lt;/p&gt;

&lt;h4&gt;
  
  
  What are addons in Kubernetes?
&lt;/h4&gt;

&lt;p&gt;Addons use Kubernetes resources (DaemonSet, Deployment, etc) to implement cluster features.&lt;/p&gt;

&lt;h4&gt;
  
  
  Where should namespaced resources for addons be located?
&lt;/h4&gt;

&lt;p&gt;Namespaced resources for addons belong within the kube-system namespace as they provide cluster-level features.&lt;/p&gt;

&lt;h4&gt;
  
  
  How can one find more information about available addons?
&lt;/h4&gt;

&lt;p&gt;For an extended list of available addons, you can refer to the section titled "Addons" in the relevant Kubernetes documentation.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>interview</category>
      <category>career</category>
      <category>devops</category>
    </item>
    <item>
      <title>Terraform Expertise: Valuable Takeaways from Years in Production</title>
      <dc:creator>Dmitrii Kotov</dc:creator>
      <pubDate>Mon, 27 Nov 2023 07:34:37 +0000</pubDate>
      <link>https://dev.to/tutunak/terraform-expertise-valuable-takeaways-from-years-in-production-2pb1</link>
      <guid>https://dev.to/tutunak/terraform-expertise-valuable-takeaways-from-years-in-production-2pb1</guid>
      <description>&lt;p&gt;In the article, I want to share my insights on the most important lessons learned from years of using Terraform in production.&lt;/p&gt;

&lt;h2&gt;
  
  
  Use versioning in remote states
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1bnqi2f57gmdac6rn6ms.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1bnqi2f57gmdac6rn6ms.png" alt=" " width="800" height="457"&gt;&lt;/a&gt;&lt;br&gt;
Everyone makes mistakes, and some of them can break a Terraform state. In such situations, having a previous state version available can save you from completely reimporting the whole infrastructure: you simply roll back the state to the previous version and resolve the problems in the code or in the infrastructure manually. Often, the issues disappear once you apply the matching previous version of the code to the restored state. Versioning also helps when the state changes not because of your code but because of a major Terraform version update: if your pipelines or providers aren't ready yet, or a bug appears, you can roll back the state instead of editing it directly. You might have noticed this if you have gone through updating Terraform from 0.12.x to 1.2.x.&lt;/p&gt;
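&lt;p&gt;For an AWS S3 backend, enabling versioning is a one-time change on the state bucket itself, which is usually managed outside the states it stores. A sketch, assuming AWS provider v4+ and a placeholder bucket name:&lt;/p&gt;

```hcl
# Hypothetical state bucket, managed separately from the states it stores
resource "aws_s3_bucket" "tf_state" {
  bucket = "my-terraform-state" # placeholder name
}

# Keep every version of every state object (AWS provider v4+ syntax)
resource "aws_s3_bucket_versioning" "tf_state" {
  bucket = aws_s3_bucket.tf_state.id

  versioning_configuration {
    status = "Enabled"
  }
}
```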
&lt;h2&gt;
  
  
  Backup your state
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fayeh6orrfxk8w7gd4x1l.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fayeh6orrfxk8w7gd4x1l.png" alt=" " width="800" height="457"&gt;&lt;/a&gt;&lt;br&gt;
Even if you have state versioning, it's better to have a backup. For example, if you are using an AWS S3 backend, you can replicate the entire bucket containing your state or states to another S3 bucket in a different region. Even with the protection that AWS S3 provides, it's better to have a couple of separate copies of the state for disaster situations.&lt;/p&gt;
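&lt;p&gt;A sketch of the S3 replication approach, assuming AWS provider v4+; the aliased provider, IAM role, and bucket names are placeholders you would define yourself, and a complete setup also needs versioning enabled on both buckets:&lt;/p&gt;

```hcl
# Hypothetical cross-region replica bucket for state backups
resource "aws_s3_bucket" "tf_state_backup" {
  provider = aws.backup_region # assumes a second, aliased provider
  bucket   = "my-terraform-state-backup"
}

# Replicate every state object to the backup bucket
# (assumes an IAM role that allows S3 replication)
resource "aws_s3_bucket_replication_configuration" "tf_state" {
  bucket = aws_s3_bucket.tf_state.id
  role   = aws_iam_role.replication.arn

  rule {
    id     = "backup-all-states"
    status = "Enabled"

    destination {
      bucket = aws_s3_bucket.tf_state_backup.arn
    }
  }
}
```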
&lt;h2&gt;
  
  
  Don't create big states
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fattahq000a2fmg3fi6ah.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fattahq000a2fmg3fi6ah.png" alt=" " width="800" height="457"&gt;&lt;/a&gt;&lt;br&gt;
Large states are very inconvenient to use:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Building a plan takes too long.&lt;/li&gt;
&lt;li&gt;Applying changes takes too long.&lt;/li&gt;
&lt;li&gt;Simultaneous work is difficult:

&lt;ul&gt;
&lt;li&gt;Several people may want to change the same state and have to wait for each other's plan and apply runs.&lt;/li&gt;
&lt;li&gt;A broken state paralyzes work with infrastructure as code.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It's better to use multiple small states with small amounts of resources. For example: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;State for VPC (Amazon Virtual Private Cloud)&lt;/li&gt;
&lt;li&gt;State for EKS (Amazon Elastic Kubernetes Service)&lt;/li&gt;
&lt;li&gt;State for an RDS (Amazon Relational Database Service) instance and its supporting infrastructure&lt;/li&gt;
&lt;li&gt;State for SES (Simple Email Services in AWS)&lt;/li&gt;
&lt;li&gt;etc.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If a state becomes broken, the blast radius will be minimal. Each folder containing Terraform configurations can be tracked separately in pipelines and executed only when there are changes.&lt;br&gt;
This also helps people avoid the &lt;code&gt;--target&lt;/code&gt; option during &lt;code&gt;terraform apply&lt;/code&gt;. People tend to reach for this option when they know about problems in the state: they would rather not spend time waiting for the whole pipeline, and they especially don't want to see errors caused by resources they haven't touched.&lt;/p&gt;
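&lt;p&gt;One way to lay this out (a sketch, assuming an S3 backend; bucket, keys, and region are placeholders) is one folder per component, each pointing at its own small state:&lt;/p&gt;

```hcl
# vpc/backend.tf -- each component folder gets its own small state
terraform {
  backend "s3" {
    bucket = "my-terraform-state" # placeholder bucket
    key    = "vpc/terraform.tfstate"
    region = "eu-west-1"          # placeholder region
  }
}

# eks/backend.tf would differ only in the key:
#   key = "eks/terraform.tfstate"
```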
&lt;h2&gt;
  
  
  Always run Terraform in pipelines, store changes in a version control system
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdobi0kpw55zlptlok4ae.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdobi0kpw55zlptlok4ae.png" alt=" " width="800" height="457"&gt;&lt;/a&gt;&lt;br&gt;
Try to avoid situations where you need to make changes locally and run Terraform on your workstation. Always use pipelines. This approach ensures that all changes (plan/apply outputs) are visible, and if you need to ask for help, it will be best to have all the necessary information in one place.&lt;/p&gt;
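&lt;p&gt;A sketch of what such a pipeline can look like on GitHub Actions (workflow name, paths, and directories are placeholders; the checkout and setup-terraform actions are the standard ones):&lt;/p&gt;

```yaml
# Hypothetical workflow: run terraform plan on every push, so all
# plan output lives in the pipeline rather than on a workstation
name: terraform
on:
  push:
    paths: ["vpc/**"]   # run only when this component changes
jobs:
  plan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: hashicorp/setup-terraform@v3
      - run: terraform init
        working-directory: vpc
      - run: terraform plan
        working-directory: vpc
```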
&lt;h2&gt;
  
  
  Don't use destroy
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2mvvrk82y9718apk2scw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2mvvrk82y9718apk2scw.png" alt=" " width="800" height="457"&gt;&lt;/a&gt;&lt;br&gt;
The opposite of Terraform's &lt;code&gt;apply&lt;/code&gt; command is &lt;code&gt;destroy&lt;/code&gt;. Supporting &lt;code&gt;destroy&lt;/code&gt; in pipelines is difficult, and the command is rarely needed. Instead, when you want to delete the entire infrastructure, especially with small Terraform states containing only a few items, simply delete everything that describes resources or calls modules, leaving the &lt;code&gt;.tf&lt;/code&gt; files empty. After committing and reviewing the plan, you can use the same &lt;code&gt;apply&lt;/code&gt; pipeline. Don't worry about leaving an empty state in the S3 bucket; it takes up almost no space, it simplifies your workflow, and it lets you look back at the history of changes if needed.&lt;/p&gt;
&lt;h2&gt;
  
  
  Don't be afraid to use third-party modules
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6lzdzm4dafgjtzis4x6k.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6lzdzm4dafgjtzis4x6k.png" alt=" " width="800" height="457"&gt;&lt;/a&gt;&lt;br&gt;
Use already existing Terraform modules rather than writing your own:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt; It takes less time; you just need to understand how to use third-party modules, as almost all of them provide examples.&lt;/li&gt;
&lt;li&gt;You don't need to maintain your modules, write tests, or manage the right versioning to enable rolling updates in your infrastructure.&lt;/li&gt;
&lt;li&gt;Using third-party modules and blueprints, you can get help from people who have already encountered problems with them and found solutions.&lt;/li&gt;
&lt;li&gt;Additionally, most modules navigate potential pitfalls and strive for universality; using these modules can offer flexibility in implementing changes or adopting new features.&lt;/li&gt;
&lt;/ul&gt;
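&lt;p&gt;As an example, the widely used community VPC module replaces a pile of hand-written resources with a single call (inputs follow the module's registry documentation; the values are placeholders):&lt;/p&gt;

```hcl
# Using the community VPC module instead of hand-writing VPC resources
module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "~> 5.0" # pin a major version

  name = "example"
  cidr = "10.0.0.0/16"

  azs             = ["eu-west-1a", "eu-west-1b"]
  private_subnets = ["10.0.1.0/24", "10.0.2.0/24"]
  public_subnets  = ["10.0.101.0/24", "10.0.102.0/24"]
}
```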
&lt;h2&gt;
  
  
  Automated checks for new versions of modules, providers, and Terraform itself using Renovate
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftoaaf2pkxvjfo64kldnv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftoaaf2pkxvjfo64kldnv.png" alt=" " width="800" height="457"&gt;&lt;/a&gt;&lt;br&gt;
As the modern world moves faster and faster, you need to stay up-to-date with your modules, providers, and Terraform versions. This is not only because new features appear and bugs are fixed, but also because many Terraform providers change their APIs, remove deprecated features, and so on. Therefore, you have to be prepared, especially for Terraform configurations that you run only a few times a year (e.g., for VPC).&lt;br&gt;
In that case, &lt;a href="https://docs.renovatebot.com/" rel="noopener noreferrer"&gt;Renovate&lt;/a&gt; can notify you about new module and provider updates. For example, I once had Terraform stop working because the AWS provider version we used had been deprecated: we were on Terraform 0.12.21 with AWS provider 2.20.0, and the state contained everything in the AWS region. Consequently, I had to update Terraform to version 1.1.7 and the AWS provider to 4.6.0. So pay attention to deprecation warnings; you don't want to discover that you can't use Terraform at the most crucial moment.&lt;/p&gt;

&lt;p&gt;There is a good video about using Renovate for that purpose:&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/l28pukLJvss"&gt;
&lt;/iframe&gt;
&lt;/p&gt;
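&lt;p&gt;Getting started is small: Renovate's Terraform support is enabled by default, so a minimal &lt;code&gt;renovate.json&lt;/code&gt; in the repository root is often enough to start receiving update pull requests (a sketch; tune the presets to your needs):&lt;/p&gt;

```json
{
  "$schema": "https://docs.renovatebot.com/renovate-schema.json",
  "extends": ["config:recommended"]
}
```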

&lt;h2&gt;
  
  
  Always check the migration guide for modules and providers when updating to a major version
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2pxvl4odzxjspm3459kf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2pxvl4odzxjspm3459kf.png" alt=" " width="800" height="457"&gt;&lt;/a&gt;&lt;br&gt;
You can avoid many problems by reading migration guides, which are provided by the developers of modules and providers. Even if you are not concerned about infrastructure and recreating resources doesn’t frighten you, the process can halt in the middle of an update, leaving you with inconsistent state and infrastructure. Spending five minutes reading a migration guide can help you avoid hours spent resolving issues.&lt;br&gt;&lt;br&gt;
There is another, more drastic way: delete all resources (see the section above on not using &lt;code&gt;destroy&lt;/code&gt;) and then recreate the infrastructure from scratch. This approach works if you:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Are not concerned about maintaining your existing infrastructure.&lt;/li&gt;
&lt;li&gt;Seek an easier way to update across more than one major version of a module or provider.&lt;/li&gt;
&lt;/ul&gt;
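&lt;p&gt;Pinning versions turns a major update into a deliberate step taken after reading the upgrade guide, rather than something that happens silently on the next &lt;code&gt;init&lt;/code&gt;. A minimal sketch:&lt;/p&gt;

```hcl
terraform {
  required_version = "~> 1.5"

  required_providers {
    aws = {
      source = "hashicorp/aws"
      # Allow patch/minor updates, but require an explicit change
      # (and a read of the upgrade guide) for a new major version
      version = "~> 4.0"
    }
  }
}
```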

&lt;h2&gt;
  
  
  The issue trackers of Terraform modules and providers are your best friends
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foukb6d7z1n6uxj5v9q5g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foukb6d7z1n6uxj5v9q5g.png" alt=" " width="800" height="457"&gt;&lt;/a&gt;&lt;br&gt;
Whether or not you use third-party modules, the issue trackers of providers and modules are the second-best source of information after official documentation. They have often helped me solve issues. This has replaced the need for Googling and reading articles, which usually just provide straightforward solutions. Simply search for your error or issue in the search box, and you'll often find that someone else has already encountered this problem. Even if the issue persists, you might find workarounds that help solve the problem, or gather ideas on what to do next.&lt;br&gt;
Even if you don’t use third-party modules, their source code can still be useful as a source of standard solutions or ideas for writing your own. Checking it can save you a lot of time, but be mindful of code licenses when copying code.&lt;/p&gt;

&lt;h2&gt;
  
  
  Maintain an infrastructure managed by Terraform that is as close as possible to the production infrastructure
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fneteca2yy1yb3x71t6nm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fneteca2yy1yb3x71t6nm.png" alt=" " width="800" height="457"&gt;&lt;/a&gt;&lt;br&gt;
It's not always possible, but it's better to have a place where you can test your changes. Sometimes, even if your changes seem safe and the Terraform plan looks unsuspicious, they can still cause difficulties. At times, the provider may have a bug, and it's better to discover this by applying changes to a less critical infrastructure rather than risking damage to the production one.&lt;/p&gt;

&lt;h2&gt;
  
  
  Practical Approaches for Navigating Unclear Documentation in Terraform
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuka2gt2r5vvobzqtako3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuka2gt2r5vvobzqtako3.png" alt=" " width="800" height="457"&gt;&lt;/a&gt;&lt;br&gt;
Sometimes the documentation for a module or provider is not obvious, and it's unclear how to achieve specific goals. In that case, you can apply changes manually (remember the previous advice about having a separate infrastructure for tests), and then simply review the Terraform plan. In most cases, you just need to add parameters that are changed in the plan to your Terraform file, and everything will work. However, occasionally, you may need to use the import command. Both scenarios help you to practically understand what exactly happened and how to handle it.&lt;/p&gt;
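&lt;p&gt;For the import case, Terraform 1.5+ also supports a declarative &lt;code&gt;import&lt;/code&gt; block, so the import itself goes through the normal plan/apply pipeline (the resource name and bucket ID below are hypothetical):&lt;/p&gt;

```hcl
# Terraform 1.5+ config-driven import: bring an existing bucket
# under management without running `terraform import` by hand
import {
  to = aws_s3_bucket.example
  id = "my-existing-bucket" # placeholder bucket name
}

resource "aws_s3_bucket" "example" {
  bucket = "my-existing-bucket"
}
```

&lt;p&gt;Running &lt;code&gt;terraform plan&lt;/code&gt; after adding the block shows which arguments still differ from the real resource, which is exactly the feedback loop described above.&lt;/p&gt;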

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnffrm0jm0oq5kb0jmfu4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnffrm0jm0oq5kb0jmfu4.png" alt=" " width="800" height="457"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the ever-evolving landscape of infrastructure management, the lessons learned from experience are invaluable. Throughout this article, we've walked through a series of crucial practices for using Terraform effectively in production environments. From the importance of versioning and backing up your state, to the wisdom of avoiding large states and incorporating third-party modules, each point underscores a broader principle: the need for vigilance, foresight, and adaptability in managing our digital infrastructure.&lt;/p&gt;

&lt;p&gt;As Terraform continues to evolve, so too must our strategies for leveraging its power. Remember, the goal isn't just to avoid errors but to create a robust, efficient, and scalable infrastructure that can withstand the tests of time and change. Embracing tools like Renovate for staying updated, adhering to migration guides, and keeping an eye on issue trackers are not just best practices, but essential habits for any infrastructure manager seeking excellence.&lt;/p&gt;

&lt;p&gt;Finally, it's important to note that while Terraform provides us with a powerful toolset, the real strength lies in the community and the shared knowledge we gain from each other. Articles, forums, and discussions like this are not just for solving today's challenges but are stepping stones towards a more resilient and dynamic future in infrastructure management.&lt;/p&gt;

&lt;p&gt;As we continue our journey with Terraform, let's carry these lessons forward, always looking for new ways to refine our approach, adapt to new challenges, and share our insights with the community. After all, the most powerful tool in our arsenal is the collective wisdom and experience we share.&lt;/p&gt;

</description>
      <category>terraform</category>
      <category>devops</category>
      <category>iac</category>
    </item>
    <item>
      <title>Preparing for a DevOps Engineer Interview: A Comprehensive Guide</title>
      <dc:creator>Dmitrii Kotov</dc:creator>
      <pubDate>Sat, 06 May 2023 16:43:09 +0000</pubDate>
      <link>https://dev.to/tutunak/preparing-for-a-devops-engineer-interview-a-comprehensive-guide-26n4</link>
      <guid>https://dev.to/tutunak/preparing-for-a-devops-engineer-interview-a-comprehensive-guide-26n4</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Landing a job as a DevOps engineer can be a rewarding and fulfilling experience. However, acing the interview requires thorough preparation, a deep understanding of DevOps principles, and the ability to demonstrate your technical and interpersonal skills. In this guide, we will walk you through the essential steps to prepare for a DevOps engineer interview, cover the key topics, provide links to resources, and share tips to help you succeed in your job search.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Understand the DevOps Principles and Practices
&lt;/h3&gt;

&lt;p&gt;First and foremost, you must have a solid understanding of the core principles and practices that underlie the DevOps philosophy. Familiarize yourself with the following concepts:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Continuous Integration (CI)&lt;/li&gt;
&lt;li&gt;Continuous Deployment (CD)&lt;/li&gt;
&lt;li&gt;Infrastructure as Code (IAC)&lt;/li&gt;
&lt;li&gt;Configuration Management&lt;/li&gt;
&lt;li&gt;Monitoring and Logging&lt;/li&gt;
&lt;li&gt;Containerization and Orchestration&lt;/li&gt;
&lt;li&gt;Cloud Computing Platforms (AWS, Azure, GCP)&lt;/li&gt;
&lt;li&gt;Microservices Architecture&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Resources to learn DevOps principles and practices:
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;The DevOps Handbook (&lt;a href="https://www.amazon.com/DevOps-Handbook-World-Class-Reliability-Organizations/dp/1942788002" rel="noopener noreferrer"&gt;https://www.amazon.com/DevOps-Handbook-World-Class-Reliability-Organizations/dp/1942788002&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;The Phoenix Project (&lt;a href="https://www.amazon.com/Phoenix-Project-DevOps-Helping-Business/dp/0988262592" rel="noopener noreferrer"&gt;https://www.amazon.com/Phoenix-Project-DevOps-Helping-Business/dp/0988262592&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Introduction to DevOps: Transforming and Improving Operations (&lt;a href="https://www.edx.org/course/introduction-to-devops-transforming-and-improving-operations" rel="noopener noreferrer"&gt;https://www.edx.org/course/introduction-to-devops-transforming-and-improving-operations&lt;/a&gt;): This free online course is offered by the Linux Foundation through edX. It covers the fundamentals of DevOps, including continuous integration, continuous deployment, and infrastructure as code. The course is self-paced and provides a solid foundation in DevOps principles.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  2. Master Relevant Tools and Technologies
&lt;/h3&gt;

&lt;p&gt;As a DevOps engineer, you'll be expected to work with a variety of tools and technologies. Here are some popular ones that you should be familiar with:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Version Control: Git, GitHub, GitLab, Bitbucket&lt;/li&gt;
&lt;li&gt;CI/CD Tools: Jenkins, Travis CI, CircleCI, GitLab CI/CD, AWS CodePipeline&lt;/li&gt;
&lt;li&gt;Configuration Management: Ansible, Puppet, Chef, SaltStack&lt;/li&gt;
&lt;li&gt;Infrastructure as Code: Terraform, AWS CloudFormation, Azure Resource Manager&lt;/li&gt;
&lt;li&gt;Containerization: Docker, containerd&lt;/li&gt;
&lt;li&gt;Container Orchestration: Kubernetes, Docker Swarm, Amazon ECS&lt;/li&gt;
&lt;li&gt;Monitoring and Logging: Prometheus, Grafana, ELK Stack (Elasticsearch, Logstash, Kibana), Splunk&lt;/li&gt;
&lt;li&gt;Cloud Platforms: Amazon Web Services (AWS), Microsoft Azure, Google Cloud Platform (GCP)&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Resources to learn tools and technologies:
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Git and GitHub (&lt;a href="https://www.codecademy.com/learn/learn-git" rel="noopener noreferrer"&gt;https://www.codecademy.com/learn/learn-git&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Docker Mastery (&lt;a href="https://www.udemy.com/course/docker-mastery/" rel="noopener noreferrer"&gt;https://www.udemy.com/course/docker-mastery/&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Kubernetes (&lt;a href="https://kubernetes.io/docs/tutorials/kubernetes-basics/" rel="noopener noreferrer"&gt;https://kubernetes.io/docs/tutorials/kubernetes-basics/&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Ansible for the Absolute Beginner (&lt;a href="https://www.udemy.com/course/learn-ansible/" rel="noopener noreferrer"&gt;https://www.udemy.com/course/learn-ansible/&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Terraform Getting Started (&lt;a href="https://learn.hashicorp.com/collections/terraform/aws-get-started" rel="noopener noreferrer"&gt;https://learn.hashicorp.com/collections/terraform/aws-get-started&lt;/a&gt;)&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  3. Brush Up on Programming and Scripting Languages
&lt;/h3&gt;

&lt;p&gt;As a DevOps engineer, you'll need to be proficient in at least one scripting language for automation and configuration tasks. Common languages used in DevOps include Python, Ruby, Bash, and PowerShell. Familiarity with a programming language such as Go or Java can also be advantageous.&lt;/p&gt;

&lt;h4&gt;
  
  
  Resources to learn programming and scripting languages:
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Learn Python (&lt;a href="https://www.codecademy.com/learn/learn-python" rel="noopener noreferrer"&gt;https://www.codecademy.com/learn/learn-python&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Learn Go (&lt;a href="https://www.codecademy.com/learn/learn-go" rel="noopener noreferrer"&gt;https://www.codecademy.com/learn/learn-go&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Bash Scripting Tutorial (&lt;a href="https://linuxconfig.org/bash-scripting-tutorial-for-beginners" rel="noopener noreferrer"&gt;https://linuxconfig.org/bash-scripting-tutorial-for-beginners&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;PowerShell (&lt;a href="https://docs.microsoft.com/en-us/powershell/scripting/learn/" rel="noopener noreferrer"&gt;https://docs.microsoft.com/en-us/powershell/scripting/learn/&lt;/a&gt;)&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  4. Study System Design and Networking Concepts
&lt;/h3&gt;

&lt;p&gt;A strong understanding of system design and networking concepts is essential for a DevOps engineer. Be prepared to discuss topics such as load balancing, horizontal scaling, caching strategies, fault tolerance, and network protocols during the interview.&lt;/p&gt;

&lt;h4&gt;
  
  
  Resources to learn system design and networking concepts:
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;System Design Primer (&lt;a href="https://github.com/donnemartin/system-design-primer" rel="noopener noreferrer"&gt;https://github.com/donnemartin/system-design-primer&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Scalability, Availability, and Stability Patterns (&lt;a href="http://www.slideshare.net/jboner/scalability-availability-stability-patterns" rel="noopener noreferrer"&gt;http://www.slideshare.net/jboner/scalability-availability-stability-patterns&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;High Scalability Blog (&lt;a href="http://highscalability.com/" rel="noopener noreferrer"&gt;http://highscalability.com/&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Computer Networking: A Top-Down Approach (&lt;a href="https://www.amazon.com/Computer-Networking-Top-Down-Approach-7th/dp/0133594149" rel="noopener noreferrer"&gt;https://www.amazon.com/Computer-Networking-Top-Down-Approach-7th/dp/0133594149&lt;/a&gt;)&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  5. Prepare for Behavioral Questions
&lt;/h3&gt;

&lt;p&gt;In addition to technical questions, you should also be prepared for behavioral questions that assess your ability to work in a team, handle challenging situations, and adapt to change. Use the STAR (Situation, Task, Action, and Result) method to structure your answers and provide concrete examples from your past experiences.&lt;/p&gt;

&lt;h4&gt;
  
  
  Resources for behavioral questions:
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;25 Behavioral Interview Questions (&lt;a href="https://www.amtec.us.com/blog/behavioral-interview-questions" rel="noopener noreferrer"&gt;https://www.amtec.us.com/blog/behavioral-interview-questions&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;How to Answer Behavioral Interview Questions (&lt;a href="https://www.indeed.com/career-advice/interviewing/how-to-answer-behavioral-interview-questions" rel="noopener noreferrer"&gt;https://www.indeed.com/career-advice/interviewing/how-to-answer-behavioral-interview-questions&lt;/a&gt;)&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  6. Review Sample DevOps Interview Questions
&lt;/h3&gt;

&lt;p&gt;Familiarize yourself with common DevOps interview questions to help you practice and refine your answers.&lt;/p&gt;

&lt;h4&gt;
  
  
  Resources for sample DevOps interview questions:
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Top DevOps Interview Questions (&lt;a href="https://www.edureka.co/blog/interview-questions/top-devops-interview-questions/" rel="noopener noreferrer"&gt;https://www.edureka.co/blog/interview-questions/top-devops-interview-questions/&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;DevOps Interview Questions and Answers (&lt;a href="https://www.simplilearn.com/tutorials/devops-tutorial/devops-interview-questions" rel="noopener noreferrer"&gt;https://www.simplilearn.com/tutorials/devops-tutorial/devops-interview-questions&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;DevOps Interview Questions (&lt;a href="https://www.interviewbit.com/devops-interview-questions/" rel="noopener noreferrer"&gt;https://www.interviewbit.com/devops-interview-questions/&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Top 50+ DevOps Interview Questions in 2023 (&lt;a href="https://intellipaat.com/blog/interview-question/devops-interview-questions/?US" rel="noopener noreferrer"&gt;https://intellipaat.com/blog/interview-question/devops-interview-questions/?US&lt;/a&gt;) &lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  7. Practice Coding and Problem Solving
&lt;/h3&gt;

&lt;p&gt;Although DevOps interviews may not focus heavily on coding and algorithms, you may still be asked to solve problems or write code during the interview. Practice your problem-solving skills using platforms like LeetCode, HackerRank, and Codewars.&lt;/p&gt;
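&lt;p&gt;To get a feel for the level involved, here is a typical warm-up of the kind these platforms pose (my own illustrative example, not taken from any specific site):&lt;/p&gt;

```python
def two_sum(nums, target):
    """Return indices of the two numbers that add up to target.

    Classic interview warm-up: one pass with a hash map, O(n) time.
    """
    seen = {}  # value -> index of where we saw it
    for i, value in enumerate(nums):
        complement = target - value
        if complement in seen:
            return [seen[complement], i]
        seen[value] = i
    return None  # no pair found


print(two_sum([2, 7, 11, 15], 9))  # -> [0, 1]
```

&lt;p&gt;Being able to explain the trade-off here (hash map lookup versus a nested loop) matters as much in the interview as the code itself.&lt;/p&gt;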

&lt;h4&gt;
  
  
  Resources for practicing coding and problem-solving:
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;LeetCode (&lt;a href="https://leetcode.com/" rel="noopener noreferrer"&gt;https://leetcode.com/&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;HackerRank (&lt;a href="https://www.hackerrank.com/" rel="noopener noreferrer"&gt;https://www.hackerrank.com/&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Codewars (&lt;a href="https://www.codewars.com/" rel="noopener noreferrer"&gt;https://www.codewars.com/&lt;/a&gt;)&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  8. Participate in Mock Interviews
&lt;/h3&gt;

&lt;p&gt;Mock interviews are an excellent way to build confidence and improve your interview skills. Enlist the help of friends, colleagues, or online platforms like Pramp to practice answering questions in a realistic interview setting.&lt;/p&gt;

&lt;h4&gt;
  
  
  Resources for mock interviews:
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Pramp (&lt;a href="https://www.pramp.com/" rel="noopener noreferrer"&gt;https://www.pramp.com/&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Gainlo (&lt;a href="http://www.gainlo.co/" rel="noopener noreferrer"&gt;http://www.gainlo.co/&lt;/a&gt;)&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Preparing for a DevOps engineer interview takes time and dedication. By following this comprehensive guide, you'll be well-equipped to tackle the technical and behavioral aspects of the interview process. Remember to review the key DevOps principles, master relevant tools and technologies, and practice your problem-solving and coding skills. Finally, be prepared to demonstrate your knowledge of system design and networking concepts during the interview. Good luck!&lt;/p&gt;

</description>
      <category>devops</category>
      <category>interview</category>
    </item>
    <item>
      <title>Migrate Molecule for Ansible pipeline from Travis.ci to Github Actions</title>
      <dc:creator>Dmitrii Kotov</dc:creator>
      <pubDate>Sun, 07 Feb 2021 13:09:58 +0000</pubDate>
      <link>https://dev.to/tutunak/migrate-molecule-for-ansible-pipeline-to-github-actions-from-travis-ci-e41</link>
      <guid>https://dev.to/tutunak/migrate-molecule-for-ansible-pipeline-to-github-actions-from-travis-ci-e41</guid>
      <description>&lt;p&gt;I decided to migrate from Travis.ci to GitHub actions because I had spent all credits and couldn't renew them. &lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F8332pfj9k82j9r07c2rm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F8332pfj9k82j9r07c2rm.png" alt="Alt Text" width="693" height="402"&gt;&lt;/a&gt;&lt;br&gt;
Before the migration, my CI/CD configuration looked like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;language&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;python&lt;/span&gt;
&lt;span class="na"&gt;python&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;3.6'&lt;/span&gt;

&lt;span class="na"&gt;env&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;global&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;MOLECULEW_USE_SYSTEM=true&lt;/span&gt;
  &lt;span class="na"&gt;matrix&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="c1"&gt;# Spin off separate builds for each of the following versions of Ansible&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;MOLECULEW_ANSIBLE=2.8.18&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;MOLECULEW_ANSIBLE=2.9.16&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;MOLECULEW_ANSIBLE=2.10.4&lt;/span&gt;

&lt;span class="c1"&gt;# Require Ubuntu 20.04&lt;/span&gt;
&lt;span class="na"&gt;dist&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;focal&lt;/span&gt;

&lt;span class="c1"&gt;# Require Docker&lt;/span&gt;
&lt;span class="na"&gt;services&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;docker&lt;/span&gt;

&lt;span class="na"&gt;install&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="c1"&gt;# Install dependencies&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;./moleculew wrapper-install&lt;/span&gt;

  &lt;span class="c1"&gt;# Display versions&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;./moleculew wrapper-versions&lt;/span&gt;

&lt;span class="na"&gt;script&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;./moleculew test&lt;/span&gt;

&lt;span class="na"&gt;branches&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;only&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;master&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;/^(0|[1-9][0-9]*)\.(0|[1-9][0-9]*)([\.\-].*)?$/&lt;/span&gt;

&lt;span class="na"&gt;notifications&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;webhooks&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;https://galaxy.ansible.com/api/v1/notifications/&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And the molecule configuration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;dependency&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;galaxy&lt;/span&gt;

&lt;span class="na"&gt;driver&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;docker&lt;/span&gt;

&lt;span class="na"&gt;lint&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
  &lt;span class="s"&gt;set -e&lt;/span&gt;
  &lt;span class="s"&gt;yamllint .&lt;/span&gt;
  &lt;span class="s"&gt;ansible-lint&lt;/span&gt;
  &lt;span class="s"&gt;flake8&lt;/span&gt;
&lt;span class="na"&gt;platforms&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ansible-role-oh-my-zsh-debian-9&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;debian:9&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ansible-role-oh-my-zsh-debian-10&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;debian:10&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ansible-role-oh-my-zsh-ubuntu-16.04&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ubuntu:16.04&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ansible-role-oh-my-zsh-ubuntu-18.04&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ubuntu:18.04&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ansible-role-oh-my-zsh-ubuntu-20.04&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ubuntu:20.04&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ansible-role-oh-my-zsh-centos-7&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;centos:7&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ansible-role-oh-my-zsh-centos-8&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;centos:8&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ansible-role-oh-my-zsh-fedora-32&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;fedora:32&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ansible-role-oh-my-zsh-fedora-33&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;fedora:33&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ansible-role-oh-my-zsh-opensuse-15.1&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;opensuse/leap:15.1&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ansible-role-oh-my-zsh-opensuse-15.2&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;opensuse/leap:15.2&lt;/span&gt;

&lt;span class="na"&gt;provisioner&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ansible&lt;/span&gt;

&lt;span class="na"&gt;verifier&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;testinfra&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;These files are available in my repository and correspond to release version &lt;a href="https://github.com/tutunak/ansible-role-oh-my-zsh/releases/tag/2.4.1" rel="noopener noreferrer"&gt;2.4.1&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Following the GitHub Actions setup guide, I created the file structure for my future pipeline:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  .github/
    workflows/
      ci.yml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
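
&lt;p&gt;This layout can be created with a couple of commands (a convenience sketch, not part of the setup guide itself):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;mkdir -p .github/workflows
touch .github/workflows/ci.yml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;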



&lt;p&gt;Then, taking one of &lt;a href="https://github.com/geerlingguy" rel="noopener noreferrer"&gt;geerlingguy&lt;/a&gt;'s repositories as an example, I wrote the Actions CI file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;CI&lt;/span&gt;
&lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;on'&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;pull_request&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;push&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;branches&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;master&lt;/span&gt;
&lt;span class="na"&gt;jobs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;test&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Molecule&lt;/span&gt;
    &lt;span class="na"&gt;runs-on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ubuntu-latest&lt;/span&gt;
    &lt;span class="na"&gt;strategy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;matrix&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;ansible&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;2.8.18&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;2.9.16&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;2.10.4&lt;/span&gt;
    &lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Check out the codebase.&lt;/span&gt;
        &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/checkout@v2&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Set up Python 3.&lt;/span&gt;
        &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/setup-python@v2&lt;/span&gt;
        &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;python-version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;3.x'&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Install test dependencies.&lt;/span&gt;
        &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;pip3 install ansible==${{ matrix.ansible }}  molecule[docker] docker testinfra yamllint ansible-lint flake8&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Run Molecule tests.&lt;/span&gt;
        &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;molecule test&lt;/span&gt;
        &lt;span class="na"&gt;env&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;PY_COLORS&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;1'&lt;/span&gt;
          &lt;span class="na"&gt;ANSIBLE_FORCE_COLOR&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;1'&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let's take a closer look at each part of this file.&lt;/p&gt;

&lt;p&gt;First is the configuration that describes when the pipeline will run: on every pull request, and on pushes to the master branch.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;CI&lt;/span&gt;
&lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;on'&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;pull_request&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;push&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;branches&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;master&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next is the jobs description. In my case, there is only one job, &lt;code&gt;test&lt;/code&gt;. I gave this job a name and set the runner OS to &lt;code&gt;ubuntu-latest&lt;/code&gt;. The &lt;code&gt;matrix&lt;/code&gt; part of the strategy is similar to the Travis CI syntax.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;jobs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;test&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Molecule&lt;/span&gt;
    &lt;span class="na"&gt;runs-on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ubuntu-latest&lt;/span&gt;
    &lt;span class="na"&gt;strategy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;matrix&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;ansible&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;2.8.18&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;2.9.16&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;2.10.4&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The steps:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;    &lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Check out the codebase.&lt;/span&gt;
        &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/checkout@v2&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Set up Python 3.&lt;/span&gt;
        &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/setup-python@v2&lt;/span&gt;
        &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;python-version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;3.x'&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Install test dependencies.&lt;/span&gt;
        &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;pip3 install ansible==${{ matrix.ansible }}  molecule[docker] docker testinfra yamllint ansible-lint flake8&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Run Molecule tests.&lt;/span&gt;
        &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;molecule test&lt;/span&gt;
        &lt;span class="na"&gt;env&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;PY_COLORS&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;1'&lt;/span&gt;
          &lt;span class="na"&gt;ANSIBLE_FORCE_COLOR&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;1'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The first step executes the GitHub action that checks out the repository.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Check out the codebase.&lt;/span&gt;
        &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/checkout@v2&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The next one sets up Python 3.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Set up Python 3.&lt;/span&gt;
        &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/setup-python@v2&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And this step installs the test dependencies.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Install test dependencies.&lt;/span&gt;
        &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;pip3 install ansible==${{ matrix.ansible }}  molecule[docker] docker testinfra yamllint ansible-lint flake8&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;ansible==${{ matrix.ansible }}&lt;/code&gt; installs a different Ansible version for each matrix environment.&lt;/p&gt;
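
&lt;p&gt;For the &lt;code&gt;2.9.16&lt;/code&gt; matrix entry, for example, that templated command expands to (shown here purely for illustration):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip3 install ansible==2.9.16 molecule[docker] docker testinfra yamllint ansible-lint flake8
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;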

&lt;p&gt;The final step runs Molecule.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Run Molecule tests.&lt;/span&gt;
        &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;molecule test&lt;/span&gt;
        &lt;span class="na"&gt;env&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;PY_COLORS&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;1'&lt;/span&gt;
          &lt;span class="na"&gt;ANSIBLE_FORCE_COLOR&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;1'&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;PY_COLORS: '1'&lt;/code&gt;: This forces Molecule to use colorized output in the CI environment. Without it, all the output would be plain white text on a black background.&lt;br&gt;
&lt;code&gt;ANSIBLE_FORCE_COLOR: '1'&lt;/code&gt;: This does the same thing as &lt;code&gt;PY_COLORS&lt;/code&gt;, but for Ansible's playbook output.&lt;/p&gt;
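
&lt;p&gt;For local debugging, the same test can be run outside of CI with these variables set inline (a convenience sketch; Molecule and Docker must already be installed):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;PY_COLORS=1 ANSIBLE_FORCE_COLOR=1 molecule test
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;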

</description>
      <category>ansible</category>
      <category>molecule</category>
      <category>github</category>
      <category>actions</category>
    </item>
  </channel>
</rss>
