<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: EverythingDevOps</title>
    <description>The latest articles on DEV Community by EverythingDevOps (@everythingdevops).</description>
    <link>https://dev.to/everythingdevops</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Forganization%2Fprofile_image%2F5656%2F885fe398-a767-443b-abe1-03632cf53f39.png</url>
      <title>DEV Community: EverythingDevOps</title>
      <link>https://dev.to/everythingdevops</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/everythingdevops"/>
    <language>en</language>
    <item>
      <title>Introduction to Multi-Cluster Deployment in Kubernetes</title>
      <dc:creator>Onyeanuna prince</dc:creator>
      <pubDate>Fri, 09 Aug 2024 09:05:36 +0000</pubDate>
      <link>https://dev.to/everythingdevops/introduction-to-multi-cluster-deployment-in-kubernetes-3ak3</link>
      <guid>https://dev.to/everythingdevops/introduction-to-multi-cluster-deployment-in-kubernetes-3ak3</guid>
      <description>&lt;p&gt;When starting out with &lt;a href="https://everythingdevops.dev/choosing-the-right-tool-for-your-local-kubernetes-development-environment/" rel="noopener noreferrer"&gt;Kubernetes&lt;/a&gt;, you've probably used a single cluster architecture. This architecture is easy to set up and allows you to enjoy the benefits of scaling and deploying your applications. If you've used this setup for a long time and on much larger projects, then you must have started to notice the cracks in this architecture.&lt;/p&gt;

&lt;p&gt;As your project grows, you'll start to notice specific limitations of a single Kubernetes cluster: resource constraints, poor fault isolation, lack of geographic distribution, and more. Fortunately, there's a solution: multi-cluster architecture.&lt;/p&gt;

&lt;p&gt;In this article, you'll learn how multi-cluster architecture in Kubernetes solves these bottlenecks and why it matters. By the end, you'll have enough information to decide whether a single or multi-cluster architecture is right for your project.&lt;/p&gt;

&lt;h2&gt;
  
  
  What are Kubernetes multi-clusters?
&lt;/h2&gt;

&lt;p&gt;A multi-cluster setup in Kubernetes is the deployment of several clusters across different data centres or cloud regions. For your application, it means using more than one Kubernetes cluster to deploy and manage your product.&lt;/p&gt;

&lt;p&gt;A multi-cluster setup comes in handy when you need to maximize uptime. By distributing your application across multiple clusters, you ensure that your services remain available even if one cluster experiences a failure.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why do you need multiple clusters?
&lt;/h2&gt;

&lt;p&gt;There are several reasons why engineers consider a multi-cluster architecture. Below are some of the most common:&lt;/p&gt;

&lt;h3&gt;
  
  
  Scalability
&lt;/h3&gt;

&lt;p&gt;With a multi-cluster setup, you can better distribute your resources across several clusters. This lets you scale your application to handle higher traffic and prevents any single cluster from becoming a bottleneck.&lt;/p&gt;

&lt;h3&gt;
  
  
  Reduced blast radius
&lt;/h3&gt;

&lt;p&gt;Blast radius refers to the extent of impact that a failure in one part of a system can have on the rest of the system. It measures how far-reaching the consequences of an incident can be. By relying on a multi-cluster architecture, you ensure that only a limited part of your system gets affected in the event of an outage or security breach.&lt;/p&gt;

&lt;h3&gt;
  
  
  Geographical redundancy and disaster recovery
&lt;/h3&gt;

&lt;p&gt;Deploying your clusters in different geographical regions ensures that your application can withstand regional failures, such as natural disasters or network outages. This geographical spread also ensures quick failover and data recovery from unaffected clusters.&lt;/p&gt;

&lt;h3&gt;
  
  
  Reduced latency
&lt;/h3&gt;

&lt;p&gt;When you place several clusters close to end-users, you greatly reduce the time it takes to process requests on your applications. This is particularly beneficial for global applications that serve users from various locations around the world.&lt;/p&gt;

&lt;h3&gt;
  
  
  Isolation
&lt;/h3&gt;

&lt;p&gt;Rather than using namespaces in a single Kubernetes cluster, multi-clusters can provide better isolation for various development stages (development, testing, production). This isolation method enhances security by reducing the risk of cross-environment contamination and unauthorized access.&lt;/p&gt;

&lt;h3&gt;
  
  
  Operational flexibility
&lt;/h3&gt;

&lt;p&gt;When performing maintenance, updates, or scaling operations, you can carry them out on one cluster at a time, leaving the rest of the system unaffected. This flexibility means smoother operations and less disruption to services.&lt;/p&gt;

&lt;h2&gt;
  
  
  What does a multi-cluster architecture look like?
&lt;/h2&gt;

&lt;p&gt;In practice, let's say you're working on a project, and you want to use a multi-cluster architecture for it. You can use any method of provisioning your Kubernetes clusters, but in this case, you're relying on &lt;a href="https://docs.aws.amazon.com/whitepapers/latest/overview-deployment-options/amazon-elastic-kubernetes-service.html" rel="noopener noreferrer"&gt;Amazon Elastic Kubernetes Service (EKS)&lt;/a&gt; and &lt;a href="https://cloud.google.com/kubernetes-engine?hl=en" rel="noopener noreferrer"&gt;Google Kubernetes Engine (GKE)&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdnqjy5bt6101mfrmfwbe.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdnqjy5bt6101mfrmfwbe.png" alt="Multi-cluster architecture between AWS and GCP" width="800" height="446"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Figure 1:&lt;/em&gt;&lt;/strong&gt; &lt;em&gt;Multi-cluster architecture between AWS and GCP&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;First, you start by provisioning the EKS cluster in &lt;a href="https://everythingdevops.dev/what-is-amazon-resource-name-arn/" rel="noopener noreferrer"&gt;AWS&lt;/a&gt; and the GKE cluster in Google Cloud. Both clusters are fully functional Kubernetes environments set up in their respective cloud providers.&lt;/p&gt;

&lt;p&gt;Next, to enable communication between the clusters, you can set up VPN connections between the AWS VPC and the Google Cloud VPC. This establishes a secure link that allows the clusters to communicate with each other.&lt;/p&gt;

&lt;p&gt;After this, you'll install Cilium as the networking plugin (CNI) on both the EKS and GKE clusters. This involves installing Cilium's agents and configuring the clusters to use Cilium for networking. At this point, Cilium doesn't care how the clusters are connected, only that their endpoints are reachable.&lt;/p&gt;

&lt;p&gt;You can then configure Cilium's cluster mesh feature, which will integrate the clusters into a single, unified network. This involves configuring Cilium to recognize the other cluster and allowing services and pods in the EKS cluster to communicate seamlessly with those in the GKE cluster.&lt;/p&gt;
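
&lt;p&gt;For example, once the cluster mesh is running, a service deployed with the same name and namespace in both clusters can be made global with Cilium's global-service annotation. The sketch below is illustrative: the service name, namespace, and ports are placeholders, and it assumes a recent Cilium release with cluster mesh enabled on both sides.&lt;/p&gt;

```yaml
# Apply this manifest in BOTH the EKS and GKE clusters (same name/namespace).
# The annotation tells Cilium to load-balance requests across the healthy
# endpoints of every cluster in the mesh.
apiVersion: v1
kind: Service
metadata:
  name: checkout            # placeholder service name
  namespace: default
  annotations:
    service.cilium.io/global: "true"
spec:
  type: ClusterIP
  selector:
    app: checkout           # placeholder pod label
  ports:
    - port: 80
      targetPort: 8080
```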

&lt;p&gt;Finally, you'll deploy applications or services across the clusters and verify that they can interact as expected. With Cilium's cluster mesh, your EKS and GKE clusters are now interconnected, and your multi-cluster setup should work perfectly.&lt;/p&gt;

&lt;h2&gt;
  
  
  When to use a multi-cluster architecture
&lt;/h2&gt;

&lt;p&gt;Depending on your organization's goals and your application's requirements, there are certain scenarios to weigh when choosing your product's architecture.&lt;/p&gt;

&lt;h3&gt;
  
  
  High availability
&lt;/h3&gt;

&lt;p&gt;Let's say the primary goal of your application, as with virtually every product, is to offer maximum uptime and remain operational even during regional outages or disasters.&lt;/p&gt;

&lt;p&gt;In this case, you can consider deploying clusters in multiple geographical regions to ensure that if one cluster goes down, others can take over, providing uninterrupted service.&lt;/p&gt;
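
&lt;p&gt;The failover decision itself can be sketched in a few lines. The example below is a toy illustration, not a real load-balancer API: the cluster names, regions, and health fields are invented, and in practice this logic would live in your DNS or global load-balancing layer.&lt;/p&gt;

```python
# Illustrative failover sketch: route traffic to a healthy cluster,
# preferring the user's home region. All cluster data is hypothetical.

def pick_cluster(clusters, preferred_region):
    """Return the name of a healthy cluster, preferring one in preferred_region."""
    healthy = [c for c in clusters if c["healthy"]]
    if not healthy:
        raise RuntimeError("no healthy clusters available")
    # Prefer a healthy cluster in the user's region; otherwise fail over
    # to any other healthy cluster.
    for c in healthy:
        if c["region"] == preferred_region:
            return c["name"]
    return healthy[0]["name"]

clusters = [
    {"name": "eks-us-east-1", "region": "us-east-1", "healthy": False},  # outage
    {"name": "gke-europe-west1", "region": "europe-west1", "healthy": True},
]
print(pick_cluster(clusters, "us-east-1"))  # fails over to the healthy cluster
```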

&lt;h3&gt;
  
  
  Geographical distribution
&lt;/h3&gt;

&lt;p&gt;If you already have a large organization with a global user base or you're growing quickly, you can minimize latency by serving users based on location.&lt;/p&gt;

&lt;p&gt;By setting up clusters in various regions much closer to your users, you reduce latency and improve the user experience by routing traffic to the cluster closest to them.&lt;/p&gt;

&lt;h3&gt;
  
  
  Workload segmentation
&lt;/h3&gt;

&lt;p&gt;If your application has unique requirements for its resources, security, or infrastructure, you can use distinct clusters to cater to the specific needs of different workloads. For instance, you can separate machine learning workloads from standard web applications.&lt;/p&gt;

&lt;h3&gt;
  
  
  Cross-cloud strategy
&lt;/h3&gt;

&lt;p&gt;Let's say your organization currently relies on multiple cloud providers for specific reasons, such as preventing vendor lock-in, utilizing specific resources, or ensuring redundancy. You can consider deploying your clusters across the different cloud providers to leverage each provider's best features.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In this article, we defined multi-cluster architecture and showed how it works in practice. We also discussed some reasons why you might consider using it and what to look out for when making your choice.&lt;/p&gt;

&lt;p&gt;Always remember that a multi-cluster architecture is handy when you need to ensure high availability, minimize latency, and perform maintenance without downtime.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>beginners</category>
      <category>kubernetes</category>
      <category>cloud</category>
    </item>
    <item>
      <title>Monitoring, Observability, and Telemetry Explained</title>
      <dc:creator>Onyeanuna prince</dc:creator>
      <pubDate>Tue, 02 Apr 2024 19:39:44 +0000</pubDate>
      <link>https://dev.to/everythingdevops/monitoring-observability-and-telemetry-explained-220</link>
      <guid>https://dev.to/everythingdevops/monitoring-observability-and-telemetry-explained-220</guid>
      <description>&lt;p&gt;This article was originally posted on &lt;a href="https://everythingdevops.dev/monitoring-observability-and-telemetry-explained/"&gt;EverythingDevOps&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Whenever something goes wrong in a system or application, it affects the customers, the business performance, and of course, the Site Reliability Engineers (SREs) who stay up late to fix it.&lt;/p&gt;

&lt;p&gt;While engineers often use the terms monitoring, observability, and telemetry interchangeably, they aren't the same thing. Monitoring sets triggers and alerts that fire when an event occurs, telemetry gathers system data, and &lt;a href="https://everythingdevops.dev/what-is-observability/"&gt;observability&lt;/a&gt; lets you know what's going on inside the system.&lt;/p&gt;

&lt;p&gt;Together, the three help organizations recover quickly from downtime; however, to use them effectively, you need to understand their significant differences and how they work together. By the end of this article, you will have a clear understanding of how each concept works and how they can be combined to keep your system reliable.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;What is Monitoring?&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;After deploying an application, several post-deployment practices are put in place to ensure there is little to no downtime. One such practice is monitoring.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.bmc.com/blogs/it-monitoring/"&gt;Monitoring&lt;/a&gt; can be defined as the continuous assessment of a system's health and performance to ensure that it is running as expected. With monitoring, you don't have to wait for a customer to report an issue before you know that something is wrong. Its triggers and alerts are set to go off when an event occurs.&lt;/p&gt;

&lt;p&gt;In monitoring, you define what "normal" means for your system and set up alerts to notify you when the system deviates from that normal state. For example, you could set up an alert to notify you when the CPU usage of your server exceeds 80%. This way, you can take action before the server crashes.&lt;/p&gt;
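
&lt;p&gt;As a sketch, that 80% CPU alert could be expressed as a Prometheus alerting rule like the one below. The expression assumes the standard node_exporter metric node_cpu_seconds_total; the group name, durations, and labels are arbitrary choices for illustration.&lt;/p&gt;

```yaml
groups:
  - name: node-alerts        # arbitrary group name
    rules:
      - alert: HighCpuUsage
        # Percentage of non-idle CPU time, averaged per instance over 5 minutes.
        expr: 100 - (avg by (instance) (rate(node_cpu_seconds_total{mode="idle"}[5m])) * 100) > 80
        for: 10m             # must stay above 80% for 10 minutes before firing
        labels:
          severity: warning
        annotations:
          summary: "CPU usage above 80% on {{ $labels.instance }}"
```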

&lt;p&gt;Monitoring not only gives you an insight into the health of your system but also helps you to identify trends and patterns in the system. This helps you know how the application has been used over time and makes it easier to predict future issues.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Types of Monitoring&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Although the concept of monitoring stays the same, its application differs. This difference is due to the diverse nature of IT environments, the complexity of systems, varied use cases, and emerging technologies and trends.&lt;/p&gt;

&lt;p&gt;There are different types of monitoring, each designed to monitor a specific aspect of the system. Some of these types include:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Application Performance Monitoring (APM):&lt;/strong&gt; APM focuses on monitoring the performance and availability of software applications. It involves tracking metrics related to response times, throughput, error rates, and resource utilization within the application. &lt;a href="https://newrelic.com/lp/developersignup?utm_medium=cpc&amp;amp;utm_source=google&amp;amp;utm_campaign=EVER-GREEN_BRAND_SEARCH_BRAND_EMEA_MENA_EN&amp;amp;utm_network=g&amp;amp;utm_keyword=new%20relic&amp;amp;utm_device=c&amp;amp;_bt=665547843535&amp;amp;_bm=e&amp;amp;_bn=g&amp;amp;cq_cmp=12301448307&amp;amp;cq_con=117946549135&amp;amp;cq_plac=&amp;amp;gad_source=1&amp;amp;gclid=Cj0KCQiAz8GuBhCxARIsAOpzk8wmVY4Vs4zLCDpkqoN0HX0MoSXvEv2IHgkmY6dA18Gbi6emHTIecCEaAiItEALw_wcB"&gt;APM tools&lt;/a&gt; provide real time insights into application bottlenecks, code inefficiencies, and user experience issues.&lt;/p&gt;
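
&lt;p&gt;The core APM metrics are simple aggregates over request data. The toy sketch below shows how error rate, throughput, and a p95 response time might be computed; the sample requests are invented, and real APM tools do this continuously over streaming data.&lt;/p&gt;

```python
import math

# Toy APM aggregation: throughput, error rate, and p95 response time
# computed from a list of (duration_ms, status_code) request samples.

def apm_summary(requests):
    durations = sorted(d for d, _ in requests)
    errors = sum(1 for _, status in requests if status >= 500)
    # p95 via the nearest-rank method: the smallest value such that at least
    # 95% of observations are less than or equal to it.
    rank = max(1, math.ceil(0.95 * len(durations)))
    return {
        "throughput": len(requests),
        "error_rate": errors / len(requests),
        "p95_ms": durations[rank - 1],
    }

samples = [(120, 200), (80, 200), (950, 500), (110, 200), (95, 200)]
print(apm_summary(samples))
```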

&lt;p&gt;&lt;strong&gt;Infrastructure Monitoring:&lt;/strong&gt; This type of monitoring focuses on the health and performance of physical and virtual infrastructure such as servers, networks, and storage. It includes monitoring the CPU utilization, memory usage, disk space, and network bandwidth to ensure optimal performance.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Network Monitoring:&lt;/strong&gt; It involves monitoring the performance of network infrastructure and traffic. It includes the monitoring of devices such as routers, switches, firewalls, and load balancers, as well as the analysis of network traffic patterns, packet loss, latency, and bandwidth utilization. Network monitoring helps ensure optimal network performance, troubleshoot connectivity issues, and detect security threats.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Log Monitoring:&lt;/strong&gt; Log monitoring involves collecting, analyzing, and correlating log data generated by systems, applications, and devices. It helps organizations track system events, troubleshoot issues, and ensure compliance with regulatory requirements. Log monitoring tools provide centralized log management, real-time alerting, and advanced analytics capabilities to facilitate log analysis and investigation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Security Monitoring:&lt;/strong&gt; This type of monitoring focuses on detecting and responding to security threats and vulnerabilities. It involves monitoring network traffic, system logs, and user activities to identify suspicious behaviour, unauthorized access, and security breaches. Security monitoring tools provide real-time threat detection, incident response, and security analytics capabilities to help organizations protect their sensitive data.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Cloud Monitoring:&lt;/strong&gt; Cloud monitoring involves monitoring the performance and availability of cloud-based infrastructure and services. It includes monitoring cloud resources such as virtual machines, storage, databases, and containers, as well as tracking cloud service metrics provided by cloud service providers (CSPs). Cloud monitoring helps organizations optimize cloud resource utilization, manage costs, and ensure service uptime and reliability.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;End-User Monitoring (EUM):&lt;/strong&gt; This type of monitoring focuses on tracking the experience of end-users interacting with applications and services. It involves measuring metrics such as page load times, transaction completion rates, and user interactions to assess application performance from the end-user perspective. EUM tools provide insights into user behaviour, application performance, and service availability to help organizations deliver a seamless end-user experience.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;What's the Problem with Legacy Monitoring?&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;The term "legacy" refers to outdated systems, technologies, or practices that are no longer efficient or effective. Legacy monitoring is the traditional approach to monitoring that is no longer sufficient for modern IT environments.&lt;/p&gt;

&lt;p&gt;The problem with legacy monitoring is that it is not designed to handle the complexity, scale, and dynamism of modern systems and applications. Legacy monitoring tools are often siloed, inflexible, and unable to provide the level of visibility and insights required to manage modern IT environments effectively.&lt;/p&gt;

&lt;p&gt;Some of the challenges associated with legacy monitoring include:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Lack of Scalability&lt;/strong&gt;: Legacy monitoring systems often lack scalability to handle the growing volume, variety, and velocity of data generated by modern IT infrastructure and applications. As organizations scale their operations and adopt new technologies, legacy systems may struggle to keep up, leading to performance degradation and gaps in monitoring coverage.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Limited Visibility&lt;/strong&gt;: Legacy monitoring tools provide limited visibility into the complex interactions and dependencies within modern IT ecosystems. They often focus on monitoring individual components in isolation, rather than providing holistic insights into system behavior and performance across the entire technology stack. This limited visibility hampers the ability to detect and diagnose issues that span multiple layers of the infrastructure.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Limited Flexibility&lt;/strong&gt;: Legacy monitoring systems are often inflexible and difficult to adapt to changing business requirements, technologies, and use cases. They may lack support for modern deployment models such as cloud computing, containerization, and microservices, making it challenging to monitor dynamically orchestrated environments effectively.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Inadequate Security Monitoring&lt;/strong&gt;: Legacy monitoring systems may lack robust security monitoring capabilities to detect and respond to cybersecurity threats effectively. As cyberattacks become more sophisticated and targeted, organizations require advanced security monitoring tools that can analyze large volumes of security data in real time and provide actionable insights to mitigate risks.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;A Historical Overview of Monitoring and Observability&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Monitoring and observability have evolved alongside computing technology and IT management practices over the past several decades. During the early days of computing, primitive monitoring tools focused on hardware components, paving the way for more sophisticated monitoring tools during the mainframe era.&lt;/p&gt;

&lt;p&gt;With the popularity of computer networks, network management protocols like &lt;a href="https://en.wikipedia.org/wiki/Simple_Network_Management_Protocol"&gt;SNMP&lt;/a&gt; became available, enabling administrators to monitor and manage network resources. As enterprise IT environments grew, platforms like &lt;a href="https://en.wikipedia.org/wiki/HP_OpenView"&gt;HP OpenView&lt;/a&gt; and &lt;a href="https://www.ibm.com/docs/en/tivoli-monitoring/6.3.0?topic=introduction-overview-tivoli-monitoring"&gt;IBM Tivoli&lt;/a&gt; emerged, providing centralized monitoring and management capabilities for diverse infrastructures.&lt;/p&gt;

&lt;p&gt;In the 2000s, as web-based applications and distributed architectures became increasingly popular, &lt;a href="https://www.dynatrace.com/news/blog/what-is-apm-2/"&gt;Application Performance Management (APM)&lt;/a&gt; solutions emerged, monitoring software applications and user experiences.&lt;/p&gt;

&lt;p&gt;In the 2010s, cloud computing and DevOps practices revolutionized IT operations, introducing new challenges and opportunities for monitoring and observability. The development of cloud-native monitoring and DevOps tools enabled organizations to monitor dynamic environments using features such as infrastructure-as-code and containerization. Additionally, observability emerged, emphasizing the importance of understanding system behaviour from rich telemetry and contextual information.&lt;/p&gt;

&lt;p&gt;In this way, observability platforms provided insights into complex systems that made troubleshooting, debugging, and optimization far easier. From early hardware monitoring to modern observability platforms, the goal has remained the same: greater visibility, insight, and control over IT infrastructure and applications.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Why Observability?&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Observability refers to the ability to understand and analyze the internal state of a system based on its external outputs or telemetry data. Observable systems allow users to find the root cause of a performance issue by inspecting the data they generate.&lt;/p&gt;

&lt;p&gt;By implementing telemetry-producing tools in systems, you can gather comprehensive data about the system's interactions, dependencies, and performance. Through correlation, analysis, visualization, and automation, observability platforms provide meaningful insights based on this data.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Differences between Observability and Monitoring&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;The distinction between observability and monitoring lies in their focus, approach, and objectives within IT operations.&lt;/p&gt;

&lt;p&gt;Monitoring primarily involves the systematic continuous collection and analysis of predefined metrics and thresholds to track the health, performance, and availability of systems, applications, or services. It operates on the principle of setting triggers and alerts to signal deviations from expected behaviour, enabling reactive management and troubleshooting.&lt;/p&gt;

&lt;p&gt;Observability, on the other hand, shifts the focus from predefined metrics to gaining a deeper understanding of system behaviour and interactions. It emphasizes the need for rich, contextual data and insights into the internal state of a system, captured through telemetry data. Observability enables organizations to explore and analyze system behaviour in real time, facilitating troubleshooting, debugging, and optimization efforts.&lt;/p&gt;

&lt;p&gt;While monitoring focuses on predefined metrics to track system health and performance, observability emphasizes the need for telemetry data of the system's behaviour to facilitate deeper understanding and analysis. Monitoring is reactive, while observability is proactive, empowering organizations to detect, diagnose, and resolve issues more effectively in complex distributed systems.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;A Brief Intro to Telemetry Data&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://www.splunk.com/en_us/blog/learn/what-is-telemetry.html"&gt;Telemetry&lt;/a&gt; data refers to the collection of measurements or data points obtained from remote or inaccessible systems, devices, or processes. This data typically includes information about the operational status, performance metrics, logs, and behaviour of these systems.&lt;/p&gt;

&lt;p&gt;The role of telemetry data is to provide insights into how systems operate, how they interact with each other, and how they perform over time. By analyzing telemetry data, organizations can gain a deeper understanding of their systems, identify patterns, detect anomalies, and make informed decisions to optimize performance, enhance reliability, and troubleshoot issues effectively.&lt;/p&gt;
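
&lt;p&gt;At its simplest, telemetry is structured, timestamped events emitted by a system for a collector to ship elsewhere. The minimal sketch below shows what such an emitter might produce; the field names and the hypothetical host "web-01" are illustrative, not tied to any real telemetry standard.&lt;/p&gt;

```python
import json
import time

# Minimal telemetry emitter sketch: each measurement becomes a structured,
# timestamped JSON event. Field names are illustrative.

def telemetry_event(source, metric, value, **attrs):
    event = {
        "timestamp": time.time(),  # when the measurement was taken
        "source": source,          # which system or device produced it
        "metric": metric,
        "value": value,
        "attributes": attrs,       # extra context for later correlation
    }
    return json.dumps(event)

print(telemetry_event("web-01", "cpu.utilization", 0.82, region="us-east-1"))
```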

&lt;h2&gt;
  
  
  &lt;strong&gt;Comparisons and Relationships&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Monitoring focuses on tracking the health and performance of systems through predefined thresholds. It typically involves collecting metrics related to system resource utilization, response times, error rates, and other key performance indicators (KPIs). Monitoring provides a high-level view of system health and performance, alerting operators to potential issues or deviations from expected behaviour.&lt;/p&gt;

&lt;p&gt;Telemetry focuses on collecting data such as logs, metrics, and traces from systems, especially in dynamic cloud environments. While telemetry provides robust data collection and standardization, it lacks the deep insights needed for quick issue resolution.&lt;/p&gt;

&lt;p&gt;Observability goes beyond telemetry by offering analysis and insights into why issues occur. It provides detailed, granular views of events in an IT environment, enabling custom troubleshooting and root cause analysis.&lt;/p&gt;

&lt;p&gt;APM, while similar to observability, focuses specifically on tracking end-to-end transactions within applications. It offers high-level monitoring of application performance but may not provide the technical details needed for deep analysis.&lt;/p&gt;

&lt;p&gt;Overall, telemetry and APM provide monitoring capabilities while observability offers deeper insights into system behaviour and performance, enabling effective troubleshooting and analysis across IT infrastructure.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;The Transition from Monitoring to Observability&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Imagine a company that operates an e-commerce platform. In the past, they relied solely on traditional monitoring tools to track the health and performance of their system. These tools provided basic metrics such as CPU usage, memory utilization, and response times for their web servers and databases.&lt;/p&gt;

&lt;p&gt;However, as their platform grows in complexity and scale, they might frequently encounter issues that traditional monitoring tools can no longer address adequately. For example, during a high-traffic event like a flash sale, they could notice occasional spikes in latency and occasional errors occurring in their checkout process. While they can see these issues occurring, they will struggle to pinpoint the root cause using traditional monitoring alone.&lt;/p&gt;

&lt;p&gt;To address these challenges, the company must adopt observability practices by implementing telemetry solutions that collect a broader range of data, including detailed logs, &lt;a href="https://aws.amazon.com/what-is/distributed-tracing/#:~:text=A%20distributed%20trace%20represents%20the,the%20initial%20request%20interacts%20with."&gt;distributed traces&lt;/a&gt;, and custom application metrics. With this richer dataset, they'll gain deeper insights into their system's behaviour and performance.&lt;/p&gt;

&lt;p&gt;They can now trace individual requests as they flow through their microservices architecture, allowing them to identify bottlenecks and latency issues more effectively. They can also correlate application logs with system metrics to understand how the changes in application code affect the overall system health.&lt;/p&gt;
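
&lt;p&gt;The key mechanism behind tracing a request through microservices is propagating a shared trace ID, so that every service's log lines for one request can be correlated afterwards. The toy sketch below illustrates the idea; the service names and in-memory log list are invented, and real systems use tracing libraries and headers instead.&lt;/p&gt;

```python
import uuid

# Toy distributed-tracing sketch: a trace ID generated at the edge is passed
# to every downstream call, so all log lines for one request share an ID.

LOGS = []

def log(service, trace_id, message):
    LOGS.append({"service": service, "trace_id": trace_id, "msg": message})

def checkout(trace_id):
    log("checkout", trace_id, "starting checkout")
    payment(trace_id)          # the trace ID travels with the call

def payment(trace_id):
    log("payment", trace_id, "charging card")

trace_id = str(uuid.uuid4())   # created once, at the entry point
checkout(trace_id)

# Every log line from this request can now be correlated by its trace ID.
assert all(entry["trace_id"] == trace_id for entry in LOGS)
```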

&lt;p&gt;By transitioning to observability, this company gains a more comprehensive understanding of its system's behaviour and performance. They can proactively identify and resolve issues, leading to improved reliability and a better user experience for their customers.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Choosing the Right Tool&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Choosing the right observability tool is crucial for ensuring the reliability of modern software systems. Below are key factors to consider when selecting an observability tool for your organization:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Data Collection:&lt;/strong&gt; Look for a tool capable of collecting and aggregating data from various sources like logs, metrics, and traces - &lt;a href="https://www.techtarget.com/searchitoperations/tip/The-3-pillars-of-observability-Logs-metrics-and-traces"&gt;the three pillars of observability&lt;/a&gt;. Before choosing a tool, ask yourself, "Can this tool handle the diverse data sources present in our infrastructure, and can it process high volumes of data in real time?" Example tools include &lt;a href="https://www.datadoghq.com/"&gt;Datadog&lt;/a&gt;, &lt;a href="https://www.splunk.com/"&gt;Splunk&lt;/a&gt;, and &lt;a href="https://newrelic.com/lp/developersignup?utm_medium=cpc&amp;amp;utm_source=google&amp;amp;utm_campaign=EVER-GREEN_BRAND_SEARCH_BRAND_EMEA_MENA_EN&amp;amp;utm_network=g&amp;amp;utm_keyword=new%20relic&amp;amp;utm_device=c&amp;amp;_bt=665547843535&amp;amp;_bm=e&amp;amp;_bn=g&amp;amp;cq_cmp=12301448307&amp;amp;cq_con=117946549135&amp;amp;cq_plac=&amp;amp;gad_source=1&amp;amp;gclid=CjwKCAiAibeuBhAAEiwAiXBoJG1KiG6xQarGQP4FS2TsxQjkF-c1EyG2FJ9Jp4hQMwR04LepDRoExxoCvEgQAvD_BwE"&gt;New Relic&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Visualization and Analysis:&lt;/strong&gt; Choose a tool with intuitive and customizable dashboards, charts, and visualizations. A question to ask is, "Are the visualization features of this tool user-friendly and adaptable to our team's specific needs?" Tools like &lt;a href="https://grafana.com/"&gt;Grafana&lt;/a&gt; and &lt;a href="https://www.elastic.co/kibana"&gt;Kibana&lt;/a&gt; provide powerful visualization capabilities.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Alerting and Notification:&lt;/strong&gt; Select a tool with flexible alerting mechanisms to proactively detect anomalies or deviations from defined thresholds. Consider asking questions like "Does this tool offer customizable alerting options and support notification channels that suit our team's communication preferences?" A tool like &lt;a href="https://prometheus.io/"&gt;Prometheus&lt;/a&gt; provides robust alerting capabilities.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Scalability and Performance:&lt;/strong&gt; Consider the scalability and performance capabilities, especially if your organization handles large volumes of data or has a growing user base. Ask yourself, "Is this tool capable of scaling with our organization's growth and maintaining performance under increased data loads?" You can utilize a tool like &lt;a href="https://www.influxdata.com/"&gt;InfluxDB&lt;/a&gt; for scalable time-series data storage.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Integration and Compatibility:&lt;/strong&gt; Assess compatibility with your existing software stack, including infrastructure, frameworks, and programming languages. To make this assessment, ask yourself, "Does this tool seamlessly integrate with our current technologies and support the platforms we use, such as Kubernetes or cloud providers?" Most observability tools are open-source and support a wide range of integrations.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Extensibility and Customization:&lt;/strong&gt; Evaluate the tool's extensibility to incorporate custom instrumentation and adapt to specific application requirements. Ask yourself, "Can we easily extend the functionality of this tool through custom integrations and configurations to meet our unique monitoring needs?"&lt;/p&gt;

&lt;p&gt;By considering these questions, you can make a more informed decision when selecting an observability tool for your organization.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Conclusion&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;This article has explored the vital aspects of observability and monitoring in modern IT operations. It covered the role of telemetry data, compared observability with related concepts, illustrated the transition from traditional monitoring to observability, and discussed key factors in selecting observability tools.&lt;/p&gt;

&lt;p&gt;Throughout, we've emphasized the importance of balancing monitoring and observability for efficient IT operations, highlighting how a comprehensive approach can provide deep insights and enhance system reliability in today's dynamic digital landscape.&lt;/p&gt;

&lt;p&gt;By understanding the differences between monitoring, observability, and telemetry, and by leveraging the right tools and practices, DevOps teams can gain a deeper understanding of their systems, proactively identify and resolve issues, and optimize performance to deliver a seamless user experience.&lt;/p&gt;

</description>
      <category>monitoring</category>
      <category>devops</category>
      <category>observability</category>
    </item>
    <item>
      <title>Taking Backup of your Kubernetes etcd Data: A step-by-step guide</title>
      <dc:creator>omkar kulkarni</dc:creator>
      <pubDate>Wed, 01 Nov 2023 14:35:32 +0000</pubDate>
      <link>https://dev.to/everythingdevops/taking-backup-of-your-kubernetes-etcd-data-a-step-by-step-guide-4dpa</link>
      <guid>https://dev.to/everythingdevops/taking-backup-of-your-kubernetes-etcd-data-a-step-by-step-guide-4dpa</guid>
      <description>&lt;p&gt;This article was originally posted on &lt;a href="https://everythingdevops.dev/backup-kubernetes-etcd-data/" rel="noopener noreferrer"&gt;Everything DevOps&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;In the ever-evolving landscape of container orchestration, Kubernetes (K8s) has emerged as the gold standard for managing and scaling containerized applications. At the heart of every K8s cluster lies a critical component known as &lt;a href="https://etcd.io/" rel="noopener noreferrer"&gt;etcd&lt;/a&gt;. etcd is a distributed key-value store that stores and manages all of the K8s' configuration data, ensuring the system's reliability and consistency.&lt;/p&gt;

&lt;p&gt;While K8s provides a robust platform for deploying and managing applications, the need to safeguard the etcd data cannot be overstated. This is where the importance of taking regular backups comes into play.&lt;/p&gt;

&lt;p&gt;In this article, we'll dive into the essential part of etcd backup in Kubernetes, understanding why it's crucial for the stability and recoverability of your cluster.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;The Relationship between Kubernetes and etcd&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;At the core of Kubernetes lies etcd, an open-source distributed key-value store that acts as Kubernetes' primary database, storing configuration data and ensuring cluster consistency.&lt;br&gt;
etcd serves as the single source of truth, storing information about the cluster's state, configuration, and secrets. Kubernetes components, including the API server, controller manager, and scheduler, rely heavily on etcd to synchronize and manage containerized workloads across the cluster.&lt;/p&gt;

&lt;p&gt;This tight integration makes etcd indispensable in maintaining the stability and reliability of a Kubernetes cluster, underlining the need for regular backups to safeguard this vital component.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Why is it crucial to take a backup of your Kubernetes cluster?&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Taking regular backups of etcd in the Kubernetes cluster is crucial for several reasons, as it ensures the reliability, recoverability, and security of your K8s cluster. Here are key points explaining why regular etcd backups are essential:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Data Recovery&lt;/strong&gt;: In the event of data loss or cluster-wide failures, etcd backups serve as a lifeline to restore your K8s cluster to a previously known state. This minimizes downtime and ensures business continuity.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Configuration History&lt;/strong&gt;: Etcd stores the entire configuration history of your K8s cluster. Regular backups provide a historical record of changes, enabling you to trace and understand configuration modifications and troubleshoot issues over time.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Rollback and Versioning&lt;/strong&gt;: Etcd backups enable you to roll back to previous cluster configurations or versions, which is essential for testing new configurations or reverting to a stable state in case of issues with updates or changes.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Prerequisites&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Before you learn how to take a backup of the etcd cluster, ensure you have the following prerequisites:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A Kubernetes Cluster using &lt;a href="https://kubernetes.io/docs/reference/setup-tools/kubeadm/" rel="noopener noreferrer"&gt;Kubeadm&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;An &lt;code&gt;etcd&lt;/code&gt; server&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For demo purposes, I used the &lt;a href="https://killercoda.com/playgrounds/scenario/kubernetes" rel="noopener noreferrer"&gt;Killercoda Kubernetes playground&lt;/a&gt;.&lt;br&gt;
To communicate with etcd, you'll need &lt;a href="https://etcd.io/docs/v3.5/dev-guide/interacting_v3/" rel="noopener noreferrer"&gt;etcdctl&lt;/a&gt;, a command-line utility for interacting with the etcd database; it ships with a Kubeadm cluster by default.&lt;br&gt;
etcdctl supports two versions of the etcd server's API. When making server calls, it defaults to version 2 of the API. In version 2, some operations are either undefined or have different arguments.&lt;br&gt;
Next, you will tell &lt;code&gt;etcdctl&lt;/code&gt; to use the V3 API, which is required for the snapshot functionality.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Setting&lt;/strong&gt; &lt;code&gt;ETCDCTL_API&lt;/code&gt; &lt;strong&gt;to version 3&lt;/strong&gt;&lt;br&gt;
To make &lt;code&gt;etcdctl&lt;/code&gt; use the V3 API, you can either set the environment variable with each call, as in the following commands:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ ETCDCTL_API=3 etcdctl snapshot save ...  
$ ETCDCTL_API=3 etcdctl snapshot restore ...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;or export it once for the entire terminal session:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ export ETCDCTL_API=3
$ etcdctl snapshot save ...
$ etcdctl snapshot restore ...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  &lt;strong&gt;How to Backup your Kubernetes etcd Data&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;To take a backup of the etcd database, you run the following command:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ etcdctl snapshot save
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;To execute this operation, you'll need to pass a few certificate-related flags (arguments), which are mandatory for verifying your identity to the etcd server. This is because you must authenticate with the etcd server before it will expose its sensitive data. The authentication scheme is called &lt;a href="https://www.cloudflare.com/en-gb/learning/access-management/what-is-mutual-tls/" rel="noopener noreferrer"&gt;Mutual TLS (mTLS)&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;To learn more about the flags, run:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ etcdctl snapshot save -h
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The output of the above command should look like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fpaper-attachments.dropboxusercontent.com%2Fs_5A155DA51EAA199DDD1E46BA3F8D568BE2AB2C2BAED904AC0571647D68BA185D_1694419531046_image.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fpaper-attachments.dropboxusercontent.com%2Fs_5A155DA51EAA199DDD1E46BA3F8D568BE2AB2C2BAED904AC0571647D68BA185D_1694419531046_image.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You’ll need 4 important arguments to successfully back up etcd:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;--cacert&lt;/li&gt;
&lt;li&gt;--cert&lt;/li&gt;
&lt;li&gt;--key&lt;/li&gt;
&lt;li&gt;--endpoints (Optional)&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Let’s look into these arguments: what they are and why you should pass them.&lt;br&gt;
&lt;strong&gt;1. --cacert&lt;/strong&gt;&lt;br&gt;
This provides the path to the Certificate Authority (CA) certificate. The CA certificate is used to verify the authenticity of the TLS certificate sent to &lt;code&gt;etcdctl&lt;/code&gt; by the etcd server. The server's certificate must be signed by this CA. Creating the CA is one of the tasks you need to do when building a cluster; Kubeadm does it automatically.&lt;br&gt;
&lt;strong&gt;2. --cert&lt;/strong&gt;&lt;br&gt;
This is the path to the TLS certificate that &lt;code&gt;etcdctl&lt;/code&gt; sends to the etcd server. The etcd server will verify that this certificate is also signed by the same CA. Certificates of this type contain a &lt;strong&gt;&lt;em&gt;public key&lt;/em&gt;&lt;/strong&gt; that can be used to encrypt data. The server uses this public key to encrypt data sent back to &lt;code&gt;etcdctl&lt;/code&gt; during the authentication steps.&lt;br&gt;
&lt;strong&gt;3. --key&lt;/strong&gt;&lt;br&gt;
This is the path to the private key used to decrypt data sent to &lt;code&gt;etcdctl&lt;/code&gt; by the etcd server during the authentication steps. The key is &lt;em&gt;only&lt;/em&gt; used by the &lt;code&gt;etcdctl&lt;/code&gt; process; it is never sent to the server.&lt;br&gt;
&lt;strong&gt;4. --endpoints (optional)&lt;/strong&gt;&lt;br&gt;
The &lt;code&gt;--endpoints&lt;/code&gt; argument tells &lt;code&gt;etcdctl&lt;/code&gt; where to find the etcd server. If you are running the command on the same host where the etcd service is running &lt;em&gt;and&lt;/em&gt; there is only one instance of etcd, you do not need to provide this argument, as it has a default value of &lt;code&gt;https://127.0.0.1:2379&lt;/code&gt;.&lt;br&gt;
If your etcd service is listening on a different port, replace &lt;code&gt;2379&lt;/code&gt; with that port number: &lt;code&gt;https://127.0.0.1:port&lt;/code&gt;.&lt;br&gt;
If your etcd service is running on a remote host, pass &lt;code&gt;--endpoints https://host-ip:port&lt;/code&gt;.&lt;/p&gt;
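&lt;p&gt;As a quick sketch (the host address below is a hypothetical placeholder, not from this guide), the endpoint value is simply assembled from the scheme, host, and port:&lt;/p&gt;

```shell
# Hypothetical remote etcd server; substitute your own host and port.
ETCD_HOST=10.0.0.5
ETCD_PORT=2379
ENDPOINT="https://${ETCD_HOST}:${ETCD_PORT}"
echo "$ENDPOINT"
# then: ETCDCTL_API=3 etcdctl snapshot save --endpoints "$ENDPOINT" plus the
# certificate flags described above
```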

&lt;p&gt;&lt;strong&gt;Where do you find the values of these arguments?&lt;/strong&gt;&lt;br&gt;
etcd runs as a pod in the Kubernetes namespace called &lt;code&gt;kube-system&lt;/code&gt;. You can describe that pod to see all the arguments and their values.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl describe -n kube-system pod etcd-controlplane
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fpaper-attachments.dropboxusercontent.com%2Fs_5A155DA51EAA199DDD1E46BA3F8D568BE2AB2C2BAED904AC0571647D68BA185D_1694420994895_image.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fpaper-attachments.dropboxusercontent.com%2Fs_5A155DA51EAA199DDD1E46BA3F8D568BE2AB2C2BAED904AC0571647D68BA185D_1694420994895_image.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As this contains a lot of information that we don't need right now, we can use the &lt;code&gt;grep&lt;/code&gt; command to extract only what we need.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl describe -n kube-system pod etcd-controlplane | grep -i file 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fpaper-attachments.dropboxusercontent.com%2Fs_5A155DA51EAA199DDD1E46BA3F8D568BE2AB2C2BAED904AC0571647D68BA185D_1694421202409_image.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fpaper-attachments.dropboxusercontent.com%2Fs_5A155DA51EAA199DDD1E46BA3F8D568BE2AB2C2BAED904AC0571647D68BA185D_1694421202409_image.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As you can see, all of these certificates are located under &lt;code&gt;/etc/kubernetes/pki/etcd&lt;/code&gt;, so you can also find them directly on the control plane node.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Final backup command will be:&lt;/strong&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ ETCDCTL_API=3 etcdctl snapshot save \
      --cacert /etc/kubernetes/pki/etcd/ca.crt \
      --cert /etc/kubernetes/pki/etcd/server.crt \
      --key /etc/kubernetes/pki/etcd/server.key \
      /opt/etcd-backup.db
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;code&gt;/opt/etcd-backup.db&lt;/code&gt; is the path where the etcd backup will be stored.&lt;br&gt;
You should see output similar to this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fpaper-attachments.dropboxusercontent.com%2Fs_5A155DA51EAA199DDD1E46BA3F8D568BE2AB2C2BAED904AC0571647D68BA185D_1694422240643_image.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fpaper-attachments.dropboxusercontent.com%2Fs_5A155DA51EAA199DDD1E46BA3F8D568BE2AB2C2BAED904AC0571647D68BA185D_1694422240643_image.png"&gt;&lt;/a&gt;&lt;/p&gt;
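&lt;p&gt;In practice, you'll want to run backups on a schedule rather than by hand. The following is a minimal sketch (the &lt;code&gt;/opt/etcd-backups&lt;/code&gt; directory is an assumption; the certificate paths are the Kubeadm defaults shown above) that gives each snapshot a timestamped name so older backups aren't overwritten:&lt;/p&gt;

```shell
#!/bin/sh
# Sketch: build a timestamped target path for each etcd snapshot.
BACKUP_DIR=/opt/etcd-backups      # assumed backup location; adjust as needed
STAMP=$(date +%Y%m%d-%H%M%S)
TARGET="$BACKUP_DIR/etcd-$STAMP.db"
echo "$TARGET"
# mkdir -p "$BACKUP_DIR"
# ETCDCTL_API=3 etcdctl snapshot save \
#     --cacert /etc/kubernetes/pki/etcd/ca.crt \
#     --cert /etc/kubernetes/pki/etcd/server.crt \
#     --key /etc/kubernetes/pki/etcd/server.key \
#     "$TARGET"
```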

&lt;h2&gt;
  
  
  &lt;strong&gt;Restoring from a backup&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Normally you will restore this to another directory and then point the &lt;code&gt;etcd&lt;/code&gt; service at the new location. For restores, the certificate and endpoints arguments are not required, as we are only creating files in a directory, not talking to the &lt;code&gt;etcd&lt;/code&gt; API. The only argument required is &lt;code&gt;--data-dir&lt;/code&gt;, which tells &lt;code&gt;etcdctl&lt;/code&gt; where to put the restored files.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ etcdctl snapshot restore -h
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fpaper-attachments.dropboxusercontent.com%2Fs_5A155DA51EAA199DDD1E46BA3F8D568BE2AB2C2BAED904AC0571647D68BA185D_1694422587581_image.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fpaper-attachments.dropboxusercontent.com%2Fs_5A155DA51EAA199DDD1E46BA3F8D568BE2AB2C2BAED904AC0571647D68BA185D_1694422587581_image.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can pass any path as the value of the &lt;code&gt;--data-dir&lt;/code&gt; argument.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;The final restore command will be:&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ ETCDCTL_API=3 etcdctl snapshot restore \
      --data-dir /var/lib/etcd-from-backup \
      /opt/etcd-backup.db
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The above command will output the following:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fpaper-attachments.dropboxusercontent.com%2Fs_5A155DA51EAA199DDD1E46BA3F8D568BE2AB2C2BAED904AC0571647D68BA185D_1694422840813_image.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fpaper-attachments.dropboxusercontent.com%2Fs_5A155DA51EAA199DDD1E46BA3F8D568BE2AB2C2BAED904AC0571647D68BA185D_1694422840813_image.png"&gt;&lt;/a&gt;&lt;/p&gt;
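&lt;p&gt;Restoring the files alone doesn't change what etcd uses; on a Kubeadm cluster you would then point etcd at the new directory by editing the static pod manifest at &lt;code&gt;/etc/kubernetes/manifests/etcd.yaml&lt;/code&gt;. A sketch of the relevant fragment is shown below; only the &lt;code&gt;hostPath&lt;/code&gt; path changes, and the rest of the manifest stays as Kubeadm generated it:&lt;/p&gt;

```yaml
# Fragment of /etc/kubernetes/manifests/etcd.yaml (kubeadm-generated).
# Point the etcd-data volume at the restored directory:
volumes:
  - hostPath:
      path: /var/lib/etcd-from-backup   # previously /var/lib/etcd
      type: DirectoryOrCreate
    name: etcd-data
```

&lt;p&gt;The kubelet notices the manifest change and restarts the etcd pod with the restored data.&lt;/p&gt;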

&lt;h2&gt;
  
  
  &lt;strong&gt;Conclusion&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;This article described how to take a backup of etcd in a Kubernetes cluster and restore it safely, helping you avoid data loss and recover from cluster-wide failures.&lt;br&gt;
There is much more to learn about Kubernetes and etcd. Check out the following resources to explore more:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://etcd.io/docs" rel="noopener noreferrer"&gt;etcd documentation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://kubernetes.io/search/?q=etcd" rel="noopener noreferrer"&gt;Kubernetes resources on etcd&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>kubernetes</category>
      <category>etcd</category>
      <category>database</category>
    </item>
    <item>
      <title>Excited to share my latest article on Kubernetes Server and Clients Certificate</title>
      <dc:creator>Iribhogbe ehis</dc:creator>
      <pubDate>Thu, 26 Oct 2023 09:30:39 +0000</pubDate>
      <link>https://dev.to/everythingdevops/excited-to-share-my-latest-article-on-kubernetes-server-and-clients-certificate-cem</link>
      <guid>https://dev.to/everythingdevops/excited-to-share-my-latest-article-on-kubernetes-server-and-clients-certificate-cem</guid>
      <description>&lt;p&gt;I am very excited to share my latest article on Kubernetes Server and Clients Certificate! 🌟&lt;/p&gt;

&lt;p&gt;In this comprehensive guide, I delve into the intricate world of securing your Kubernetes environment. From understanding the fundamental concepts to implementing robust server and client certificates, this article serves as your ultimate go-to resource for enhancing the security of your Kubernetes setup.&lt;/p&gt;

&lt;p&gt;Additionally, you'll gain insights into the various server and client certificates utilized by Kubernetes components and you will also learn how these certificates are generated, signed, and distributed.&lt;/p&gt;

&lt;p&gt;Check it out here: &lt;br&gt;
&lt;a href="https://everythingdevops.dev/securing-your-kubernetes-environment-a-comprehensive-guide-to-server-and-client-certificates-in-kubernetes/"&gt;https://everythingdevops.dev/securing-your-kubernetes-environment-a-comprehensive-guide-to-server-and-client-certificates-in-kubernetes/&lt;/a&gt;&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>cloudnative</category>
      <category>devops</category>
      <category>cloudcomputing</category>
    </item>
    <item>
      <title>Deploying a Database Cluster on DigitalOcean using Pulumi</title>
      <dc:creator>s1ntaxe770r</dc:creator>
      <pubDate>Sat, 19 Aug 2023 00:24:15 +0000</pubDate>
      <link>https://dev.to/everythingdevops/deploying-a-database-cluster-on-digitalocean-using-pulumi-4dgc</link>
      <guid>https://dev.to/everythingdevops/deploying-a-database-cluster-on-digitalocean-using-pulumi-4dgc</guid>
      <description>&lt;p&gt;&lt;a href="https://everythingdevops.dev/deploying-a-database-cluster-on-digitalocean-using-pulumi/"&gt;This article was originally posted on EverythingDevOps&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Pulumi is an open source infrastructure as code (IaC) tool that allows you to define and manage cloud resources using popular languages such as Golang, Python, Typescript, and a &lt;a href="https://www.pulumi.com/docs/intro/languages/"&gt;few others&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Pulumi is often compared to Terraform, another infrastructure as code tool, which allows users to &lt;a href="https://www.techtarget.com/searchitoperations/news/2240187079/Declarative-vs-imperative-The-DevOps-automation-debate"&gt;declaratively&lt;/a&gt; manage infrastructure using the &lt;a href="https://www.terraform.io/language"&gt;HashiCorp Configuration Language&lt;/a&gt; (HCL). The key difference is that Pulumi lets you manage your infrastructure using one of its supported SDKs in your language of choice.&lt;/p&gt;

&lt;p&gt;In this guide, you will use TypeScript to deploy a PostgreSQL database cluster on DigitalOcean; as such, this guide assumes some familiarity with TypeScript.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Pulumi?
&lt;/h2&gt;

&lt;p&gt;Unlike most infrastructure as code tools, Pulumi allows you to define your infrastructure using a general-purpose programming language, which makes your code much easier to test. If you are familiar with Terraform, you'd agree that few testing frameworks exist for it, in contrast to a language like Python, where numerous testing frameworks exist.&lt;/p&gt;

&lt;p&gt;Another advantage lies in the use of a general-purpose programming language: most developers find their favorite language more intuitive than a DSL (Domain-Specific Language) such as HCL.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;To follow along with this tutorial, you will need the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://cloud.digitalocean.com/registrations/new"&gt;A DigitalOcean account&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.digitalocean.com/reference/api/create-personal-access-token/"&gt;DigitalOcean API token&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.pulumi.com/docs/get-started/install/"&gt;Pulumi CLI&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://nodejs.org/en/download/"&gt;Node.js&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Initializing a Pulumi project
&lt;/h2&gt;

&lt;p&gt;The Pulumi CLI provides a command for scaffolding new projects. To do this, run the following commands:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mkdir postgres-db &amp;amp;&amp;amp; cd postgres-db
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: It’s important that the directory is empty; otherwise, the next command will return an error.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pulumi new typescript -y
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The above command will show the output of the Pulumi initialization, as shown in the image below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--tSSPjG3a--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://paper-attachments.dropboxusercontent.com/s_13391A74EB1C8BB5898EFC17337547BBBA4C512C7A4333E17FDBD333AABCE71A_1690045028112_image.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--tSSPjG3a--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://paper-attachments.dropboxusercontent.com/s_13391A74EB1C8BB5898EFC17337547BBBA4C512C7A4333E17FDBD333AABCE71A_1690045028112_image.png" alt="" width="800" height="321"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This generates a new Pulumi project along with a &lt;a href="https://www.pulumi.com/docs/intro/concepts/stack/"&gt;stack&lt;/a&gt;. A stack is an independent instance of a Pulumi program, and each stack is usually used to represent a separate environment (production, development, or staging). This is similar to &lt;a href="https://www.terraform.io/language/state/workspaces"&gt;workspaces&lt;/a&gt; if you are familiar with Terraform.&lt;/p&gt;

&lt;h2&gt;
  
  
  Installing Dependencies
&lt;/h2&gt;

&lt;p&gt;To interact with DigitalOcean resources, you need to install the DigitalOcean Pulumi package:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ npm install @pulumi/digitalocean
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--xWJjlddS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://paper-attachments.dropboxusercontent.com/s_13391A74EB1C8BB5898EFC17337547BBBA4C512C7A4333E17FDBD333AABCE71A_1690045124168_image.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--xWJjlddS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://paper-attachments.dropboxusercontent.com/s_13391A74EB1C8BB5898EFC17337547BBBA4C512C7A4333E17FDBD333AABCE71A_1690045124168_image.png" alt="" width="648" height="183"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, set your DigitalOcean API key:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ pulumi config set digitalocean:token YOUR_DO_API_KEY --secret
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--AOIpASXm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://paper-attachments.dropboxusercontent.com/s_13391A74EB1C8BB5898EFC17337547BBBA4C512C7A4333E17FDBD333AABCE71A_1690045233381_image.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--AOIpASXm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://paper-attachments.dropboxusercontent.com/s_13391A74EB1C8BB5898EFC17337547BBBA4C512C7A4333E17FDBD333AABCE71A_1690045233381_image.png" alt="" width="782" height="97"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This stores your API key so Pulumi can authenticate against the DigitalOcean API; the &lt;code&gt;--secret&lt;/code&gt; flag ensures the value passed in is encrypted.&lt;/p&gt;

&lt;p&gt;You could also set your token using an environment variable:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ export DIGITALOCEAN_TOKEN=YOUR-DO-API-KEY 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  Provisioning a Database Cluster
&lt;/h2&gt;

&lt;p&gt;To get started, open &lt;code&gt;index.ts&lt;/code&gt; and follow along with the code below:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import * as pulumi from "@pulumi/pulumi";
import * as digitalocean from "@pulumi/digitalocean";

const pg_cluster = new digitalocean.DatabaseCluster("pulumi-experiments", {
    engine: "pg",
    nodeCount: 2,
    region: "nyc1",
    size: "db-s-2vcpu-4gb",
    version: "12",
});

export const db_uri = pg_cluster.uri;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The above code creates a PostgreSQL database cluster with 2 nodes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;engine&lt;/code&gt; allows you to specify the type of database cluster you want to create. In this case, it’s PostgreSQL. However, DigitalOcean supports a few other database types; see &lt;a href="https://www.pulumi.com/registry/packages/digitalocean/api-docs/databasecluster/"&gt;this&lt;/a&gt; section of the Pulumi documentation for more information.&lt;/li&gt;
&lt;li&gt;Another important field to note is &lt;code&gt;size&lt;/code&gt;, which lets you configure the size of each node; see this section of the Pulumi documentation for all valid &lt;a href="https://www.pulumi.com/registry/packages/digitalocean/api-docs/databasecluster/#databaseslug"&gt;database slugs&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;version&lt;/code&gt; allows you to specify what version of PostgreSQL you would like to run. &lt;a href="https://docs.digitalocean.com/products/databases/postgresql/#postgresql-limits"&gt;Here&lt;/a&gt; is a list of all supported PostgreSQL versions on DigitalOcean.&lt;/li&gt;
&lt;li&gt;Finally, you export the database connection URI so you can connect to it using your client of choice.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To deploy the cluster, run the following:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ pulumi up
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;In a few minutes, you should have a cluster up and running.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--vRwa5O5M--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://paper-attachments.dropboxusercontent.com/s_13391A74EB1C8BB5898EFC17337547BBBA4C512C7A4333E17FDBD333AABCE71A_1690048863254_Screenshot%2B2023-07-22%2Bat%2B6.30.04%2BPM.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--vRwa5O5M--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://paper-attachments.dropboxusercontent.com/s_13391A74EB1C8BB5898EFC17337547BBBA4C512C7A4333E17FDBD333AABCE71A_1690048863254_Screenshot%2B2023-07-22%2Bat%2B6.30.04%2BPM.png" alt="" width="800" height="405"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Notice how &lt;code&gt;db_uri&lt;/code&gt; is marked as “secret”; this is because it is a sensitive value. To output your database URI, run the following command:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ pulumi stack output db_uri --show-secrets &amp;gt; pass.db
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The above would write the database URI to a file called &lt;code&gt;pass.db&lt;/code&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Updating your Infrastructure
&lt;/h2&gt;

&lt;p&gt;Now that you have a database cluster, chances are you won't stop here; you'll want to update your infrastructure. You can do this by adding a user to the newly created cluster.&lt;/p&gt;

&lt;p&gt;It's good practice to create a separate user so you avoid using the admin user. Update &lt;code&gt;index.ts&lt;/code&gt; with the following code:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import * as pulumi from "@pulumi/pulumi";
import * as digitalocean from "@pulumi/digitalocean";

const pg_cluster = new digitalocean.DatabaseCluster("pulumi-experiments", {
    engine: "pg",
    nodeCount: 2,
    region: "nyc1",
    size: "db-s-2vcpu-4gb",
    version: "12",
});

const pg_user = new digitalocean.DatabaseUser("non-admin",{clusterId:pg_cluster.id})

export const db_uri = pg_cluster.uri;
export const pg_user_pass = pg_user.password
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Here you create a new instance of &lt;code&gt;digitalocean.DatabaseUser&lt;/code&gt;, pass in the &lt;code&gt;clusterId&lt;/code&gt; of &lt;code&gt;pg_cluster&lt;/code&gt;, and export the new user’s password as you did with the database URI.&lt;/p&gt;

&lt;p&gt;To see what changes would be applied, run the following command:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ pulumi preview 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The output should look something like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--kGA5v57t--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://paper-attachments.dropboxusercontent.com/s_13391A74EB1C8BB5898EFC17337547BBBA4C512C7A4333E17FDBD333AABCE71A_1690048975818_image.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--kGA5v57t--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://paper-attachments.dropboxusercontent.com/s_13391A74EB1C8BB5898EFC17337547BBBA4C512C7A4333E17FDBD333AABCE71A_1690048975818_image.png" alt="" width="800" height="213"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once you are satisfied with the changes, go ahead and apply them.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ pulumi up 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Once this completes, you should have a new database user called &lt;code&gt;non-admin&lt;/code&gt;. You can output the password by running:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ pulumi stack output pg_user_pass --show-secrets
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  Clean up (Optional)
&lt;/h2&gt;

&lt;p&gt;To tear down the resources you just created, run the following command:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ pulumi destroy
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--FCGuT6au--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://paper-attachments.dropboxusercontent.com/s_13391A74EB1C8BB5898EFC17337547BBBA4C512C7A4333E17FDBD333AABCE71A_1690049845201_Screenshot%2B2023-07-22%2Bat%2B7.15.22%2BPM.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--FCGuT6au--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://paper-attachments.dropboxusercontent.com/s_13391A74EB1C8BB5898EFC17337547BBBA4C512C7A4333E17FDBD333AABCE71A_1690049845201_Screenshot%2B2023-07-22%2Bat%2B7.15.22%2BPM.png" alt="" width="800" height="441"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In this post, you deployed a PostgreSQL database cluster using Pulumi and updated it with a non-admin user. This is just one of the many services you can deploy with Pulumi; hopefully, it was enough to get you started.&lt;/p&gt;

&lt;h2&gt;
  
  
  Next Steps
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Check out &lt;a href="https://www.civo.com/learn/kubernetes-clusters-using-the-civo-pulumi-provider"&gt;this guide&lt;/a&gt; on how to deploy a Kubernetes cluster using Pulumi&lt;/li&gt;
&lt;li&gt;Take a look at this section of the &lt;a href="https://www.pulumi.com/registry/packages/digitalocean/api-docs/"&gt;Pulumi registry&lt;/a&gt; for a full list of supported services&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>infrastructureascode</category>
      <category>pulumi</category>
      <category>cloud</category>
    </item>
    <item>
      <title>Understanding Docker Architecture: A Beginner's Guide to How Docker Works</title>
      <dc:creator>Onyeanuna prince</dc:creator>
      <pubDate>Thu, 08 Jun 2023 22:28:38 +0000</pubDate>
      <link>https://dev.to/everythingdevops/understanding-docker-architecture-a-beginners-guide-to-how-docker-works-37a9</link>
      <guid>https://dev.to/everythingdevops/understanding-docker-architecture-a-beginners-guide-to-how-docker-works-37a9</guid>
      <description>&lt;p&gt;This article was originally posted on &lt;a href="https://everythingdevops.dev/understanding-docker-architecture-a-beginners-guide-to-how-docker-works/"&gt;Everything DevOps.&lt;br&gt;
&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Using Docker, developers can package all their code and its dependencies into an isolated environment called an image and then run that image as a container. With Docker, deploying an application from a development server to a production server without worrying about compatibility issues is easy.&lt;/p&gt;

&lt;p&gt;Aside from knowing basic Docker commands, it is important to understand how those commands work while learning Docker. Therefore, in this article, you will learn about the fundamental components that make up Docker and how they work.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is a Docker container?
&lt;/h2&gt;

&lt;p&gt;Containers are lightweight, standalone units containing all the code and libraries needed to run an application. Unlike a virtual machine, a Docker container runs directly on the host operating system. This means it shares the host operating system kernel with other containers.&lt;/p&gt;

&lt;p&gt;Docker containers are designed to be moved around easily between different environments without changing the application or its dependencies.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is a Docker engine?
&lt;/h2&gt;

&lt;p&gt;The Docker engine is the core of the Docker platform. It manages the entire container lifecycle, including creating, running, and shipping containers. When you install Docker, the Docker engine is installed as well. This engine is the primary client-server technology that manages containers using all of Docker's services and components.&lt;/p&gt;

&lt;h2&gt;
  
  
  The 3 fundamental components of the Docker engine
&lt;/h2&gt;

&lt;p&gt;The Docker engine consists of three fundamental components: the Docker daemon, the Docker API, and the Docker client.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Docker daemon&lt;/strong&gt;&lt;br&gt;
The Docker daemon is a fundamental Docker component: a background process that listens for requests from the Docker client and manages the creation and running of Docker containers. It can be considered the engine that powers the Docker environment, allowing developers to build, run, and manage containerized applications.&lt;/p&gt;

&lt;p&gt;The Docker daemon pulls Docker images from registries and manages the resources needed to run Docker containers. Docker daemon functions include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Image management&lt;/strong&gt;: The Docker daemon manages images, including pulling and caching images for fast and efficient container creation.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Volume management&lt;/strong&gt;: Persisting data in containers is possible due to the Docker daemon. It enables the creation and management of volumes, which ensures data is saved when containers are deleted.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Network management&lt;/strong&gt;: The Docker daemon manages communication between containers and the outside world. It manages container network interfaces, ensuring they are isolated from each other and the host machine.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Container management&lt;/strong&gt;: The Docker daemon manages starting, stopping, and deleting containers.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Docker API&lt;/strong&gt;&lt;br&gt;
The Docker API is a programmatic interface for communicating with the Docker daemon. With the Docker API, you can tell the Docker daemon to perform tasks such as starting, stopping, and deleting containers, or downloading and uploading Docker images. The Docker API also exposes operations for managing networks, volumes, and user permissions and access.&lt;/p&gt;

&lt;p&gt;All Docker daemon tasks are possible due to the Docker API. Without the Docker API, communicating with the Docker daemon programmatically wouldn't be possible.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Docker client&lt;/strong&gt;&lt;br&gt;
This is the primary mode of communicating with the Docker daemon. The Docker client is a command-line interface (CLI) developers use to interact with the Docker daemon from their computers. When a user runs a command such as &lt;code&gt;docker run&lt;/code&gt;, the Docker client sends this request to the Docker daemon. &lt;/p&gt;

&lt;p&gt;With the Docker client, developers can manage Docker containers, images, networks, and volumes from the command line. Below are the key features of the Docker client:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Command line interface&lt;/strong&gt;: The Docker client provides developers a command line interface to execute Docker commands.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Integration with development tools&lt;/strong&gt;: With the Docker client, it is possible to manage Docker containers from popular development environments, including Visual Studio Code and IntelliJ IDEA.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: Although the Docker API and Docker client may seem similar, since both are ways to interact with the Docker daemon, they differ: the Docker client is a command-line tool that issues requests on your behalf, while the Docker API is the RESTful HTTP interface those requests travel over, which the daemon exposes via a Unix socket or a network interface.&lt;/p&gt;

&lt;h2&gt;
  
  
  Docker workflow
&lt;/h2&gt;

&lt;p&gt;The Docker client, Docker API, and Docker daemon work together to form a complete Docker workflow. A practical way to see all these parts in action is to create a container.&lt;/p&gt;

&lt;p&gt;To create a Docker container, you would follow the steps below:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1&lt;/strong&gt;&lt;br&gt;
Using the Docker client, you can pull an image from a registry (such as &lt;a href="https://hub.docker.com/"&gt;Docker Hub&lt;/a&gt;) or build an image from a &lt;code&gt;Dockerfile&lt;/code&gt;.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ docker pull nginx:latest //This will pull the nginx image from Docker Hub
$ docker build -t &amp;lt;image_tag&amp;gt; . // When you run this in a directory with a Dockerfile, it builds and tags it.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
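
&lt;p&gt;If you are building rather than pulling, the &lt;code&gt;Dockerfile&lt;/code&gt; describes the image. A minimal, illustrative example (the copied &lt;code&gt;html/&lt;/code&gt; directory is hypothetical) might look like this:&lt;/p&gt;

```dockerfile
# Start from the official nginx image
FROM nginx:latest
# Copy a local (hypothetical) html/ directory into the web root
COPY html/ /usr/share/nginx/html/
# Document the port nginx listens on
EXPOSE 80
```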

&lt;p&gt;&lt;strong&gt;Step 2&lt;/strong&gt;&lt;br&gt;
Once you have an image, create a container using the Docker client.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ docker run --name mycontainer -d nginx:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The above command requests the Docker API to create a container. The Docker API communicates with the Docker daemon to create the container. The Docker daemon sets up a network interface that allows the container to communicate with other containers or the host system. It also sets up volumes that the container can use to persist data.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step&lt;/strong&gt; &lt;strong&gt;3&lt;/strong&gt;&lt;br&gt;
You can interact with the container once it's running using the Docker client. You can use the following commands to do this.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ docker exec -it mycontainer bash # Execute a command inside a running container
$ docker logs mycontainer # View the logs of a container
$ docker stop mycontainer # Stop a container
$ docker start mycontainer # Start a container   
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Step&lt;/strong&gt; &lt;strong&gt;4&lt;/strong&gt;&lt;br&gt;
You can also use the Docker client to stop or delete the container.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ docker stop mycontainer # Stop a container
$ docker rm mycontainer # Remove a container
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The Docker client sends commands or requests to the Docker API, which communicates with the Docker daemon to create and manage containers and their resources.&lt;/p&gt;

&lt;h2&gt;
  
  
  Alternatives to Docker
&lt;/h2&gt;

&lt;p&gt;Although Docker isn't the first containerization platform, it played a significant role in popularizing containers with its simplified container-creation process and user-friendly interface.&lt;/p&gt;

&lt;p&gt;However, there are alternatives to Docker, including container runtimes such as &lt;strong&gt;containerd&lt;/strong&gt; and &lt;strong&gt;CRI-O&lt;/strong&gt;, and image-building tools such as &lt;strong&gt;Buildah&lt;/strong&gt;.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://containerd.io/"&gt;&lt;strong&gt;Containerd&lt;/strong&gt;&lt;/a&gt;: This is an open-source container runtime to manage a container's lifecycle. Docker and Kubernetes can use Containerd by providing a high-level API for managing containers and a low-level runtime for container orchestration.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://cri-o.io/"&gt;&lt;strong&gt;CRI-O&lt;/strong&gt;&lt;/a&gt;: This is an open-source container runtime designed for use with Kubernetes. It is a lightweight and stable environment for containers. It also complies with the Kubernetes Container Runtime Interface (CRI), making it easy to integrate with Kubernetes.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://buildah.io/"&gt;&lt;strong&gt;Buildah&lt;/strong&gt;&lt;/a&gt;: This lightweight, open-source command-line tool for building and managing container images. It is an efficient alternative to Docker. With Buildah, you can build images in various ways, including using a &lt;code&gt;Dockerfile&lt;/code&gt;, a &lt;code&gt;podmanfile&lt;/code&gt; or by running commands in a container. Buildah is a flexible, secure and powerful tool for building container images.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Containerization is rapidly becoming the new standard for application deployment, and Docker is leading the way in this area. With its robust and flexible architecture, Docker makes it easy to build, deploy, and manage containerized applications on any platform.&lt;/p&gt;

&lt;p&gt;If you're looking to improve your application deployment process, or you want to explore the exciting world of containerization, dive in and explore Docker.&lt;/p&gt;

&lt;p&gt;To learn more about Docker, check out the following resources:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.docker.com/"&gt;The Docker documentation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.youtube.com/watch?v=9zUHg7xjIqQ&amp;amp;t=0s"&gt;Learn Docker - DevOps with Node.js &amp;amp; Express&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://everythingdevops.dev/automating-dependency-updates-for-docker-projects/"&gt;Automating Dependency Updates for Docker Projects&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://everythingdevops.dev/how-to-deploy-a-multi-container-docker-compose-application-on-amazon-ec2/"&gt;How to Deploy a Multi Container Docker Compose Application On Amazon EC2&lt;/a&gt;
&lt;a href="https://everythingdevops.dev/how-to-deploy-a-multi-container-docker-compose-application-on-amazon-ec2/"&gt;&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>docker</category>
      <category>containers</category>
    </item>
    <item>
      <title>How to Set Up a Linux OS (Ubuntu) on Windows using VirtualBox and Vagrant</title>
      <dc:creator>Angel Onyebuchi</dc:creator>
      <pubDate>Tue, 21 Mar 2023 00:54:33 +0000</pubDate>
      <link>https://dev.to/everythingdevops/how-to-set-up-a-linux-os-ubuntu-on-windows-using-virtualbox-and-vagrant-5996</link>
      <guid>https://dev.to/everythingdevops/how-to-set-up-a-linux-os-ubuntu-on-windows-using-virtualbox-and-vagrant-5996</guid>
      <description>&lt;p&gt;Ubuntu is a popular Linux distribution that offers users a wide range of features and applications. Ubuntu is a great choice for those new to Linux who want to explore its capabilities. However, it can be difficult to set up and configure on a Windows desktop. &lt;/p&gt;

&lt;p&gt;Fortunately, there is a way to get Ubuntu up and running quickly and easily with VirtualBox and Vagrant. This tutorial will guide you through setting up Ubuntu with VirtualBox and Vagrant on a Windows desktop, creating a secure and isolated virtual environment for testing and experimenting.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Vagrant?
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://www.vagrantup.com/"&gt;Vagrant&lt;/a&gt; is an open-source software for building and managing virtual machine environments in a single workflow. With Vagrant, you can define a configuration file that specifies the details of the virtual machine you want to create and then use a single command to create and configure the virtual machine. This makes it easy to set up and maintain a consistent development environment across multiple machines and to share that environment with others.&lt;/p&gt;

&lt;p&gt;Vagrant works with various virtualization software, including VirtualBox, VMware, and Hyper-V, and can manage both Linux and Windows virtual machines.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is VirtualBox?
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://www.virtualbox.org/"&gt;VirtualBox&lt;/a&gt; is a free, open-source virtualization software platform developed by Oracle that allows you to run multiple operating systems on a single physical machine. With VirtualBox, you can create and run virtual machines (VMs) on your computer, each of which runs a separate operating system and can be configured with its virtual hardware.&lt;/p&gt;

&lt;p&gt;VirtualBox is designed to be easy to use, with a simple graphical user interface (GUI) that allows you to create and manage virtual machines. It supports a wide range of operating systems, including Windows, Linux, macOS, and many others, and can be used for various purposes, such as testing software, running legacy applications, and more.&lt;/p&gt;

&lt;h2&gt;
  
  
  How do Vagrant and VirtualBox work together?
&lt;/h2&gt;

&lt;p&gt;When you use Vagrant with VirtualBox, Vagrant creates and manages virtual machines in the VirtualBox environment. You define the operating system type, the amount of memory, and other resources the virtual machine should have in the Vagrant configuration file. Vagrant then uses this configuration to create and configure the virtual machine in VirtualBox.&lt;/p&gt;
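
&lt;p&gt;For a concrete sense of what that configuration file looks like, here is a minimal, illustrative &lt;code&gt;Vagrantfile&lt;/code&gt; (the box name and resource values are just examples):&lt;/p&gt;

```ruby
# Vagrantfile: declares the VM Vagrant should build in VirtualBox
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/focal64"          # Ubuntu 20.04 base box
  config.vm.provider "virtualbox" do |vb|
    vb.memory = 2048                        # RAM in MB
    vb.cpus   = 2
  end
end
```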

&lt;p&gt;Once the virtual machine is up and running, you can use Vagrant to manage it by &lt;a href="https://www.ssh.com/academy/ssh"&gt;SSHing&lt;/a&gt; into it or running provisioning scripts to set up the environment. &lt;/p&gt;

&lt;h1&gt;
  
  
  Prerequisite
&lt;/h1&gt;

&lt;p&gt;To follow along with this article, you need to have the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A computer with 

&lt;ul&gt;
&lt;li&gt;at least 8 GB of RAM&lt;/li&gt;
&lt;li&gt;a 64-bit Windows 10 operating system&lt;/li&gt;
&lt;li&gt;a modern multi-core Intel/AMD CPU&lt;/li&gt;
&lt;li&gt;virtualization enabled in its BIOS settings (find out how &lt;a href="https://www.sony-asia.com/electronics/support/articles/S500016173"&gt;here&lt;/a&gt;)&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;Basic knowledge of &lt;a href="https://apps.microsoft.com/store/detail/powershell/9MZ1SNWT0N5D?hl=en-ng&amp;amp;gl=ng"&gt;PowerShell&lt;/a&gt; or &lt;a href="https://gitforwindows.org/"&gt;Git Bash&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Installing VirtualBox
&lt;/h2&gt;

&lt;p&gt;To install VirtualBox, you will need to follow these steps:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1:&lt;/strong&gt; Visit the VirtualBox website using this &lt;a href="https://www.virtualbox.org/"&gt;link&lt;/a&gt;, and you will see a page similar to the one below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--106-TaNO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://paper-attachments.dropboxusercontent.com/s_2DCFA656469FFD130FB3D773237FA24752F8702F0014DBA6ABDE2D130B9D5B78_1668818921446_Virtualbox.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--106-TaNO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://paper-attachments.dropboxusercontent.com/s_2DCFA656469FFD130FB3D773237FA24752F8702F0014DBA6ABDE2D130B9D5B78_1668818921446_Virtualbox.png" alt="" width="800" height="339"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2:&lt;/strong&gt; Click on “&lt;strong&gt;Download VirtualBox&lt;/strong&gt;” and then on “Windows hosts” to download VirtualBox and its extension pack.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--4r7JNK80--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://paper-attachments.dropboxusercontent.com/s_2DCFA656469FFD130FB3D773237FA24752F8702F0014DBA6ABDE2D130B9D5B78_1668820341056_Box%2B2%2Bedited.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--4r7JNK80--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://paper-attachments.dropboxusercontent.com/s_2DCFA656469FFD130FB3D773237FA24752F8702F0014DBA6ABDE2D130B9D5B78_1668820341056_Box%2B2%2Bedited.png" alt="" width="800" height="283"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3:&lt;/strong&gt; After downloading and installing it, open VirtualBox and click the “&lt;strong&gt;New&lt;/strong&gt;” icon in the top right-hand corner to create a new virtual machine. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--MjhoYf8K--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://paper-attachments.dropboxusercontent.com/s_2DCFA656469FFD130FB3D773237FA24752F8702F0014DBA6ABDE2D130B9D5B78_1673055823119_image.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--MjhoYf8K--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://paper-attachments.dropboxusercontent.com/s_2DCFA656469FFD130FB3D773237FA24752F8702F0014DBA6ABDE2D130B9D5B78_1673055823119_image.png" alt="" width="800" height="514"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You will see a prompt like this.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--0MkewLba--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://paper-attachments.dropboxusercontent.com/s_2DCFA656469FFD130FB3D773237FA24752F8702F0014DBA6ABDE2D130B9D5B78_1668821589916_Annotation%2B2022-09-15%2B020655.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--0MkewLba--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://paper-attachments.dropboxusercontent.com/s_2DCFA656469FFD130FB3D773237FA24752F8702F0014DBA6ABDE2D130B9D5B78_1668821589916_Annotation%2B2022-09-15%2B020655.png" alt="" width="412" height="378"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 4:&lt;/strong&gt; Name the new machine you want to create and choose the type and version that suit your needs. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--guokkYKC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://paper-attachments.dropboxusercontent.com/s_2DCFA656469FFD130FB3D773237FA24752F8702F0014DBA6ABDE2D130B9D5B78_1668822156938_step%2B3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--guokkYKC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://paper-attachments.dropboxusercontent.com/s_2DCFA656469FFD130FB3D773237FA24752F8702F0014DBA6ABDE2D130B9D5B78_1668822156938_step%2B3.png" alt="" width="410" height="386"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 5:&lt;/strong&gt; Click next and assign the memory size you want to allocate to the virtual machine using the up and down arrow keys.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--lR4cbneP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://paper-attachments.dropboxusercontent.com/s_2DCFA656469FFD130FB3D773237FA24752F8702F0014DBA6ABDE2D130B9D5B78_1668822777512_step%2B4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--lR4cbneP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://paper-attachments.dropboxusercontent.com/s_2DCFA656469FFD130FB3D773237FA24752F8702F0014DBA6ABDE2D130B9D5B78_1668822777512_step%2B4.png" alt="" width="410" height="386"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 6:&lt;/strong&gt; Click Next, select “Create a virtual hard disk now”, and click Create.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--q_w1odGT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://paper-attachments.dropboxusercontent.com/s_2DCFA656469FFD130FB3D773237FA24752F8702F0014DBA6ABDE2D130B9D5B78_1668896899834_image.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--q_w1odGT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://paper-attachments.dropboxusercontent.com/s_2DCFA656469FFD130FB3D773237FA24752F8702F0014DBA6ABDE2D130B9D5B78_1668896899834_image.png" alt="" width="410" height="386"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 7:&lt;/strong&gt; Choose the hard disk file type (using the default setting is recommended unless you have other preferences).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--gaCj5pMw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://paper-attachments.dropboxusercontent.com/s_2DCFA656469FFD130FB3D773237FA24752F8702F0014DBA6ABDE2D130B9D5B78_1669029744634_image.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--gaCj5pMw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://paper-attachments.dropboxusercontent.com/s_2DCFA656469FFD130FB3D773237FA24752F8702F0014DBA6ABDE2D130B9D5B78_1669029744634_image.png" alt="" width="428" height="454"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 8:&lt;/strong&gt; Next is to allocate storage space for your hard disk. Choose if you want a flexible or fixed space.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--mFD3_Mod--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://paper-attachments.dropboxusercontent.com/s_2DCFA656469FFD130FB3D773237FA24752F8702F0014DBA6ABDE2D130B9D5B78_1669030062425_image.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--mFD3_Mod--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://paper-attachments.dropboxusercontent.com/s_2DCFA656469FFD130FB3D773237FA24752F8702F0014DBA6ABDE2D130B9D5B78_1669030062425_image.png" alt="" width="428" height="454"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 9:&lt;/strong&gt; Choose the file location and size and create.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--zeDvspHP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://paper-attachments.dropboxusercontent.com/s_2DCFA656469FFD130FB3D773237FA24752F8702F0014DBA6ABDE2D130B9D5B78_1669030385817_image.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--zeDvspHP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://paper-attachments.dropboxusercontent.com/s_2DCFA656469FFD130FB3D773237FA24752F8702F0014DBA6ABDE2D130B9D5B78_1669030385817_image.png" alt="" width="428" height="454"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 10&lt;/strong&gt;: Click &lt;strong&gt;Create&lt;/strong&gt;, and you’ve successfully set up your virtual machine.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Installing Vagrant&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;To install Vagrant using a graphical user interface (GUI), download the Vagrant installer from the Vagrant website and run it. Here are the steps:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1&lt;/strong&gt;: Go to the Vagrant download page at &lt;a href="https://www.vagrantup.com/downloads.html"&gt;https://www.vagrantup.com/downloads.html&lt;/a&gt;, and under the "Operating System" heading, click the appropriate binary for your computer. The installer will be downloaded to your computer.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--y2WzO5Vu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://paper-attachments.dropboxusercontent.com/s_5DE95FD98FD0B780A74FF64FC1C345D98C98860B3B683916DA26C2627A3A521D_1673554631523_Screenshot%2B2023-01-12%2Bat%2B21.17.08.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--y2WzO5Vu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://paper-attachments.dropboxusercontent.com/s_5DE95FD98FD0B780A74FF64FC1C345D98C98860B3B683916DA26C2627A3A521D_1673554631523_Screenshot%2B2023-01-12%2Bat%2B21.17.08.png" alt="" width="800" height="342"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2&lt;/strong&gt;: Locate the installer file on your computer and double-click it to start the installation process.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--sf-M_87b--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://paper-attachments.dropboxusercontent.com/s_2DCFA656469FFD130FB3D773237FA24752F8702F0014DBA6ABDE2D130B9D5B78_1673056675610_image.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--sf-M_87b--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://paper-attachments.dropboxusercontent.com/s_2DCFA656469FFD130FB3D773237FA24752F8702F0014DBA6ABDE2D130B9D5B78_1673056675610_image.png" alt="" width="676" height="593"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3:&lt;/strong&gt; Follow the prompts in the installer to complete the installation.&lt;br&gt;
&lt;strong&gt;Step 4:&lt;/strong&gt; Once the installation is complete, you can start using Vagrant by opening a terminal or command prompt and typing &lt;code&gt;vagrant&lt;/code&gt;.&lt;/p&gt;

&lt;h1&gt;
  
  
  Setting up the machine
&lt;/h1&gt;

&lt;p&gt;After downloading Vagrant, confirm that it was successfully installed: open the terminal/cmd of your choice and head to the home directory using &lt;code&gt;cd ~&lt;/code&gt;, as in the image below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--WsLbJNEr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://paper-attachments.dropbox.com/s_2DCFA656469FFD130FB3D773237FA24752F8702F0014DBA6ABDE2D130B9D5B78_1664370079085_image%2B1%2B2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--WsLbJNEr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://paper-attachments.dropbox.com/s_2DCFA656469FFD130FB3D773237FA24752F8702F0014DBA6ABDE2D130B9D5B78_1664370079085_image%2B1%2B2.png" alt="" width="800" height="640"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: &lt;code&gt;~&lt;/code&gt; refers to the home directory on the command line.&lt;/p&gt;

&lt;p&gt;List the files in that directory using the &lt;code&gt;ls&lt;/code&gt; command; you can also run &lt;code&gt;vagrant --version&lt;/code&gt; to confirm that Vagrant was successfully installed.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ ls
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--LWtJgKFT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://paper-attachments.dropboxusercontent.com/s_2DCFA656469FFD130FB3D773237FA24752F8702F0014DBA6ABDE2D130B9D5B78_1673057101857_image.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--LWtJgKFT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://paper-attachments.dropboxusercontent.com/s_2DCFA656469FFD130FB3D773237FA24752F8702F0014DBA6ABDE2D130B9D5B78_1673057101857_image.png" alt="" width="800" height="390"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After confirming the installation, create a directory for the Ubuntu setup using the &lt;code&gt;mkdir&lt;/code&gt; command below.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ mkdir Ubuntu_20.04
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--gKXsKn39--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://paper-attachments.dropbox.com/s_2DCFA656469FFD130FB3D773237FA24752F8702F0014DBA6ABDE2D130B9D5B78_1664374222904_image.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--gKXsKn39--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://paper-attachments.dropbox.com/s_2DCFA656469FFD130FB3D773237FA24752F8702F0014DBA6ABDE2D130B9D5B78_1664374222904_image.png" alt="" width="800" height="260"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Change into the directory that you created using the following command:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ cd Ubuntu_20.04
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--qyRZVwri--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://paper-attachments.dropbox.com/s_2DCFA656469FFD130FB3D773237FA24752F8702F0014DBA6ABDE2D130B9D5B78_1664386246017_image.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--qyRZVwri--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://paper-attachments.dropbox.com/s_2DCFA656469FFD130FB3D773237FA24752F8702F0014DBA6ABDE2D130B9D5B78_1664386246017_image.png" alt="" width="800" height="148"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Run the &lt;code&gt;vagrant init ubuntu/focal64&lt;/code&gt; command. Running this command automatically places a Vagrantfile in the directory created above. A Vagrantfile is a file that instructs Vagrant on how to create new Vagrant machines or boxes. &lt;a href="https://app.vagrantup.com/ubuntu/boxes/focal64"&gt;ubuntu/focal64&lt;/a&gt; is an existing Vagrant box for the Ubuntu Linux distribution.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ vagrant init ubuntu/focal64
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--QbzHpFRR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://paper-attachments.dropbox.com/s_2DCFA656469FFD130FB3D773237FA24752F8702F0014DBA6ABDE2D130B9D5B78_1664389358001_image.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--QbzHpFRR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://paper-attachments.dropbox.com/s_2DCFA656469FFD130FB3D773237FA24752F8702F0014DBA6ABDE2D130B9D5B78_1664389358001_image.png" alt="" width="800" height="266"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Start the virtual machine using &lt;code&gt;vagrant up&lt;/code&gt; and watch it spin up.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ vagrant up
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--IFCohMDL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://paper-attachments.dropbox.com/s_2DCFA656469FFD130FB3D773237FA24752F8702F0014DBA6ABDE2D130B9D5B78_1664389793248_image.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--IFCohMDL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://paper-attachments.dropbox.com/s_2DCFA656469FFD130FB3D773237FA24752F8702F0014DBA6ABDE2D130B9D5B78_1664389793248_image.png" alt="" width="800" height="456"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Ensure your VirtualBox looks like this: &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--m3-Q9Qsy--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://paper-attachments.dropbox.com/s_2DCFA656469FFD130FB3D773237FA24752F8702F0014DBA6ABDE2D130B9D5B78_1664390986724_image.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--m3-Q9Qsy--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://paper-attachments.dropbox.com/s_2DCFA656469FFD130FB3D773237FA24752F8702F0014DBA6ABDE2D130B9D5B78_1664390986724_image.png" alt="" width="723" height="362"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Connect to the VM using &lt;code&gt;vagrant ssh&lt;/code&gt;.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ vagrant ssh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--v4mmSvT4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://paper-attachments.dropbox.com/s_2DCFA656469FFD130FB3D773237FA24752F8702F0014DBA6ABDE2D130B9D5B78_1664391948468_image%2B7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--v4mmSvT4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://paper-attachments.dropbox.com/s_2DCFA656469FFD130FB3D773237FA24752F8702F0014DBA6ABDE2D130B9D5B78_1664391948468_image%2B7.png" alt="" width="800" height="620"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Conclusion
&lt;/h1&gt;

&lt;p&gt;This tutorial has shown you how to set up your Linux OS (Ubuntu) on Windows using Vagrant and VirtualBox. Vagrant is a great tool and a very easy way of using Ubuntu on Windows rather than having to dual boot. To learn more about Vagrant and VirtualBox, check out these resources: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.youtube.com/watch?v=krDU3BtJNpk"&gt;Vagrant and VirtualBox Simplified&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.taniarascia.com/what-are-vagrant-and-virtualbox-and-how-do-i-use-them/"&gt;Vagrant and VirtualBox Tutorial&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://blog.learncodeonline.in/automating-linux-installation-using-vagrant-and-virtualbox"&gt;Automating Linux Installation using Vagrant and VirtualBox&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>devops</category>
      <category>cloud</category>
      <category>linux</category>
    </item>
    <item>
      <title>A Brief History of DevOps and Its Impact on Software Development</title>
      <dc:creator>Amarachi Iheanacho</dc:creator>
      <pubDate>Mon, 20 Mar 2023 19:36:06 +0000</pubDate>
      <link>https://dev.to/everythingdevops/a-brief-history-of-devops-and-its-impact-on-software-development-de4</link>
      <guid>https://dev.to/everythingdevops/a-brief-history-of-devops-and-its-impact-on-software-development-de4</guid>
      <description>&lt;p&gt;This article was originally posted on &lt;a href="https://everythingdevops.dev/a-brief-history-of-devops-and-its-impact-on-software-development/"&gt;Everything DevOps&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Software development has been, and continues to be, one of our society's most important building blocks. It has given us the mobile phones we use to stay connected, the rockets we send to space, and a host of other great innovations. &lt;/p&gt;

&lt;p&gt;As these innovations grow more complex, so does the software that drives them, demanding ever more time to build. In an attempt to make the entire software development process as efficient as possible, different approaches have been introduced. One example is DevOps. &lt;/p&gt;

&lt;p&gt;This article will share a brief history of DevOps, from its early days to its impact on software development today. By the end of this article, you will also learn what to expect next in the field of DevOps. &lt;/p&gt;

&lt;h2&gt;
  
  
  Before DevOps: The early days of software development
&lt;/h2&gt;

&lt;p&gt;In the early days of computing, software was developed and maintained by a small group of specialists, often working in isolation from the rest of an organization. The development process known as “Waterfall” was slow, with teams spending months or even years building and testing software before it was ready to be deployed. &lt;/p&gt;

&lt;p&gt;The Waterfall model breaks project activities into pre-defined phases, passing the project down from one phase to another. In the waterfall model, each phase depends on the deliverables of the previous one; as a result, each phase must be completed before the next phase can begin with no overlap. &lt;/p&gt;

&lt;p&gt;The Waterfall model is typically divided into the following distinct stages:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--h-QvzaHQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://paper-attachments.dropboxusercontent.com/s_49C834E1111AFDD79E92B6B822DF812C7E8237A52B88602FDCD7C45B1744DAA6_1677154121591_Screenshot%2B2023-02-23%2Bat%2B13.08.37.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--h-QvzaHQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://paper-attachments.dropboxusercontent.com/s_49C834E1111AFDD79E92B6B822DF812C7E8237A52B88602FDCD7C45B1744DAA6_1677154121591_Screenshot%2B2023-02-23%2Bat%2B13.08.37.png" alt="" width="647" height="308"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Image source: &lt;a href="https://business.adobe.com/blog/basics/waterfall"&gt;Adobe&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Requirements gathering and analysis&lt;/strong&gt;: This phase involves gathering and documenting the requirements for the project. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Design&lt;/strong&gt;: In this phase, the system architecture and design are developed based on the requirements gathered. The system design helps specify the hardware and software requirements, like what programming language would be the best, while defining the overall system architecture.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Implementation&lt;/strong&gt;: This phase involves the actual coding and development of the system or software. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Testing&lt;/strong&gt;: The system is tested in this phase to ensure it meets the requirements and is error-free.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Deployment&lt;/strong&gt;: The system is deployed and available to the end-users in this phase.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Maintenance&lt;/strong&gt;: This phase involves ongoing maintenance, bug fixes, and updates to the system. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Challenges with the Waterfall model&lt;/strong&gt;&lt;br&gt;
Although the Waterfall model has pros, like its ease of use and management, it is inefficient and carries high risk and uncertainty. Some of its many disadvantages are: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Lack of flexibility&lt;/li&gt;
&lt;li&gt;No room for errors&lt;/li&gt;
&lt;li&gt;Difficulty in dealing with changes&lt;/li&gt;
&lt;li&gt;No customer involvement&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  The rise of the Agile methodology
&lt;/h2&gt;

&lt;p&gt;Recognizing the disadvantages of the Waterfall approach in building software, in the late 1990s, a new approach to software development, known as Agile, emerged to improve the software creation process.&lt;/p&gt;

&lt;p&gt;Agile combats the rigidity of the Waterfall model by focusing on the clean delivery of individual pieces or parts of the software and not the entire application. &lt;/p&gt;

&lt;p&gt;Launched formally in 2001, Agile looked to get new features into the hands of users quickly, focusing on rapid iteration and delivery and emphasizing collaboration, flexibility, and continuous improvement. &lt;/p&gt;

&lt;p&gt;Teams were encouraged to work in short, iterative cycles, known as “Sprints”, and to continuously refine and improve the software they were building. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--bmVkW8uS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://paper-attachments.dropboxusercontent.com/s_49C834E1111AFDD79E92B6B822DF812C7E8237A52B88602FDCD7C45B1744DAA6_1677154238354_Screenshot%2B2023-02-23%2Bat%2B13.10.34.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--bmVkW8uS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://paper-attachments.dropboxusercontent.com/s_49C834E1111AFDD79E92B6B822DF812C7E8237A52B88602FDCD7C45B1744DAA6_1677154238354_Screenshot%2B2023-02-23%2Bat%2B13.10.34.png" alt="" width="800" height="310"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Image source: &lt;a href="https://kruschecompany.com/agile-software-development/"&gt;Krusche &amp;amp; Company GmbH&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To practice Agile effectively, its creators outlined the following values in the &lt;a href="https://agilemanifesto.org/"&gt;Agile Manifesto&lt;/a&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Individuals and interactions&lt;/strong&gt; over processes and tools&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Working software&lt;/strong&gt; over comprehensive documentation&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Customer collaboration&lt;/strong&gt; over contract negotiation&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Responding to change&lt;/strong&gt; over following a plan&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;While there is value in the items on the right, we should value the items on the left more.&lt;/p&gt;

&lt;p&gt;Agile significantly improves how we create software and offers advantages like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;faster time-to-market&lt;/li&gt;
&lt;li&gt;testing and superior quality&lt;/li&gt;
&lt;li&gt;flexible priorities&lt;/li&gt;
&lt;li&gt;risk reduction&lt;/li&gt;
&lt;li&gt;project visibility and transparency&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It soon began to replace the need for the Waterfall model in the software development life cycle.&lt;/p&gt;

&lt;h2&gt;
  
  
  The History of DevOps
&lt;/h2&gt;

&lt;p&gt;As Agile development gained popularity, a new term emerged to describe the practices and principles used to streamline the development and deployment process: &lt;strong&gt;DevOps&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;To share the history of DevOps, below is a detailed timeline listing the defining moments and summarizing the contributions of key influential people from 2007 to 2019. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--k-kPs7uF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://paper-attachments.dropboxusercontent.com/s_49C834E1111AFDD79E92B6B822DF812C7E8237A52B88602FDCD7C45B1744DAA6_1677363073897_Beige%2BMinimal%2BBusiness%2BTimeline%2BDiagram%2BGraph%2B1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--k-kPs7uF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://paper-attachments.dropboxusercontent.com/s_49C834E1111AFDD79E92B6B822DF812C7E8237A52B88602FDCD7C45B1744DAA6_1677363073897_Beige%2BMinimal%2BBusiness%2BTimeline%2BDiagram%2BGraph%2B1.png" alt="" width="800" height="600"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2007&lt;/strong&gt; &lt;br&gt;
DevOps started in 2007 when &lt;a href="https://www.jedi.be/"&gt;Patrick Debois&lt;/a&gt;, an IT consultant, recognized that development (&lt;strong&gt;Dev&lt;/strong&gt;) and operations (&lt;strong&gt;Ops&lt;/strong&gt;) teams were not working well together. While the gaps and conflicts between Dev and Ops had always been unsettling to him, the constant back and forth on a large data center migration project, where he was responsible for testing, particularly frustrated him.&lt;/p&gt;

&lt;p&gt;One day he was deep in the rhythm of Agile development; the next, he was firefighting and living the unpredictability of traditional operations. He knew there had to be a better way. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2008&lt;/strong&gt; &lt;br&gt;
The following year, at the 2008 Agile Conference, Andrew Shafer proposed a birds-of-a-feather (BoF) meeting to discuss “Agile Infrastructure”. Andrew didn't think anybody would come, so he didn't show up to his own meeting. Patrick Debois did show up and went looking for Andrew, because he wanted to talk about Agile infrastructure as the way to make operations as agile as the developers were. This was where DevOps got started.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2009&lt;/strong&gt;&lt;br&gt;
In 2009, at the Velocity conference, John Allspaw and Paul Hammond gave the talk “&lt;a href="https://www.youtube.com/watch?v=LdOe18KhtT4"&gt;10+ deploys per day - Dev and Ops Cooperation at Flickr&lt;/a&gt;,” and the idea started gaining traction. This talk made people notice what was possible by adopting these early DevOps practices. Also, in October 2009, Patrick held the first &lt;a href="https://devopsdays.org/"&gt;DevOpsDays&lt;/a&gt; conference in Ghent, Belgium. It was described as “The conference that brings development and operations together.” This is where the term "DevOps" was first used. DevOpsDays is now a local conference held internationally several times a year in different cities.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2010&lt;/strong&gt;&lt;br&gt;
In 2010, Jez Humble and David Farley wrote a groundbreaking book called &lt;a href="https://continuousdelivery.com/"&gt;Continuous Delivery&lt;/a&gt; that sets out the principles and technical practices that enable rapid, incremental delivery of high-quality, valuable new functionality to users using a technique called Continuous Delivery. &lt;/p&gt;

&lt;p&gt;Through automation of the build, deploy, and test processes, along with improved collaboration between developers, testers, and operations, delivery teams can release changes in a matter of hours—sometimes even minutes—no matter the size or complexity of a project. The book is over 13 years old, but it still has a lot of great concepts that helped change a lot of people's thinking about how to perform software delivery in a continuous fashion.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2013&lt;/strong&gt;&lt;br&gt;
Two years later, in 2013, Gene Kim, Kevin Behr, and George Spafford published &lt;a href="https://www.amazon.com/Phoenix-Project-DevOps-Helping-Business/dp/0988262592"&gt;The Phoenix Project&lt;/a&gt;, based on Eliyahu Goldratt’s book, &lt;a href="https://en.wikipedia.org/wiki/The_Goal_(novel)"&gt;The Goal&lt;/a&gt;. The Goal is about a manufacturing plant about to go under and what they had to do to bring it back to life. It is a story about lean manufacturing principles. The Phoenix Project is about an information technology (IT) shop in a company about to go under and what it took to bring it back to life. This story is about applying lean manufacturing principles to software development and delivery. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2015&lt;/strong&gt;&lt;br&gt;
In 2015, Dr. Nicole Forsgren, Gene Kim, and Jez Humble founded a startup called DORA (DevOps Research and Assessment) that produced what are now the largest DevOps studies to date called the &lt;a href="https://www.devops-research.com/research.html#reports"&gt;State of DevOps Reports&lt;/a&gt;. Nicole was the CEO and is an incredible statistician. &lt;/p&gt;

&lt;p&gt;Through this research, she found that taking an experimental approach to product development can improve your IT and organizational performance and that high-performing organizations are decisively outperforming their lower-performing peers in terms of throughput. The research shows that undertaking a technology transformation initiative can produce sizeable cost savings in any organization. If you haven't read the most recent State of DevOps report, I strongly urge you to do so.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2016&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://www.amazon.com/DevOps-Handbook-World-Class-Reliability-Organizations/dp/1942788002"&gt;The DevOps Handbook&lt;/a&gt; was published in 2016. It was written by Gene Kim, Jez Humble, Patrick Debois, and John Willis as a follow-on to The Phoenix Project and serves as a practical guide on implementing the concepts introduced in that book. John Willis, by the way, worked at &lt;a href="https://www.docker.com/"&gt;Docker&lt;/a&gt; and &lt;a href="https://www.chef.io/"&gt;Chef&lt;/a&gt; back then, and is a DevOpsDays coordinator after being at the original DevOpsDays in Ghent 2009 with Patrick Debois. If you only read one DevOps book, this is the book to read. They looked at companies that have adopted DevOps and document what did work and what did not work. It's a great read.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2019 — 10 years of DevOpsDays&lt;/strong&gt;&lt;br&gt;
Come 2019, 10 years after the first DevOpsDays in Ghent, Belgium, 60+ DevOpsDays events were held in 21 countries. &lt;/p&gt;

&lt;p&gt;Patrick Debois led DevOpsDays from its inception in 2009 until 2014, and then Bridget Kromhout became the lead in 2015. She is also the co-host of the very popular podcast &lt;a href="https://www.arresteddevops.com/"&gt;Arrested DevOps&lt;/a&gt;. If you don't listen to it, you should. She stepped down in 2020 but stayed on the advisory board of DevOpsDays with Patrick. &lt;/p&gt;

&lt;p&gt;The individuals mentioned above are some of the major influential people in the early DevOps movement. They weren’t the only ones, but they went out and made a difference. They showed us how DevOps can be impactful. &lt;/p&gt;

&lt;h2&gt;
  
  
  Why is the DevOps History Important?
&lt;/h2&gt;

&lt;p&gt;Knowing the history of DevOps is important because DevOps was a grassroots effort started by people like Patrick Debois and Andrew Shafer, who just wanted to eliminate the roadblocks in software delivery and figure out how development and operations could work better together.&lt;/p&gt;

&lt;p&gt;As Damon Edwards, who co-hosted the 'DevOps Cafe' podcast with John Willis, put it best in his talk on “&lt;a href="https://www.youtube.com/watch?v=o7-IuYS0iSE"&gt;The (Short) History of DevOps&lt;/a&gt;”:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;DevOps is from the practitioners, by practitioners. &lt;/li&gt;
&lt;li&gt;It’s not a product, a specification, or a job title. &lt;/li&gt;
&lt;li&gt;It is an experience-based movement that is decentralized and open to all.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  DevOps Impact on Software Development
&lt;/h2&gt;

&lt;p&gt;The impact of DevOps on modern software development has been significant. The following are some of the key ways that DevOps is changing the way software is developed and deployed:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Faster Time-to-Market&lt;/strong&gt;: DevOps enables teams to deliver software quickly and efficiently, which reduces the time it takes to bring new features or products to market.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Improved Collaboration&lt;/strong&gt;: DevOps encourages collaboration between developers, testers, and IT operations teams, which leads to better communication and a faster resolution of issues.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Increased Quality&lt;/strong&gt;: By automating testing and deployment processes, DevOps reduces the chance of errors and increases the overall software quality.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Greater Efficiency&lt;/strong&gt;: DevOps automates repetitive tasks and streamlines processes, freeing time and resources for more important tasks.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Better Customer Satisfaction&lt;/strong&gt;: DevOps allows teams to respond quickly to customer feedback and make necessary changes, resulting in better customer satisfaction.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  What next in DevOps?
&lt;/h2&gt;

&lt;p&gt;DevOps is a constantly evolving field, and several trends and areas of focus are likely to shape its future. Here are a few things to keep an eye on in the world of DevOps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;DevSecOps&lt;/strong&gt;: The integration of security into DevOps practices, also known as DevSecOps, is becoming increasingly important. As software security threats continue to grow, organizations realize the need to prioritize security throughout the entire software development lifecycle.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Artificial Intelligence and Machine Learning&lt;/strong&gt;: The use of AI and machine learning in DevOps is on the rise, with the potential to automate more tasks and improve the efficiency of software development and deployment.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Serverless Computing&lt;/strong&gt;: Serverless computing is a growing trend in DevOps, allowing for code deployment without needing to manage infrastructure. This can help reduce costs and improve scalability.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Site Reliability Engineering&lt;/strong&gt;: Site Reliability Engineering (SRE) is a discipline that focuses on the reliability and scalability of software systems. It is becoming increasingly important in DevOps as organizations seek to improve the reliability and availability of their applications.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;p&gt;The following are the references used in creating this article:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.coursera.org/lecture/intro-to-devops/brief-history-of-devops-vBgDl"&gt;Brief History of DevOps&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.youtube.com/watch?v=o7-IuYS0iSE"&gt;The (Short) History of DevOps&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://devopsdays.org/about"&gt;DevOpsDays&lt;/a&gt; &lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>devops</category>
      <category>programming</category>
      <category>cloud</category>
      <category>computerscience</category>
    </item>
    <item>
      <title>How to restart Kubernetes Pods with kubectl</title>
      <dc:creator>Divine Odazie</dc:creator>
      <pubDate>Thu, 16 Mar 2023 01:26:46 +0000</pubDate>
      <link>https://dev.to/everythingdevops/how-to-restart-kubernetes-pods-with-kubectl-4g5h</link>
      <guid>https://dev.to/everythingdevops/how-to-restart-kubernetes-pods-with-kubectl-4g5h</guid>
      <description>&lt;p&gt;This article was originally posted on &lt;a href="https://everythingdevops.dev/how-to-restart-kubernetes-pods-with-kubectl/" rel="noopener noreferrer"&gt;Everything DevOps&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Anyone who has used Kubernetes for an extended period of time will know that things don’t always go as smoothly as you’d like. In production, unexpected things happen, and Pods can crash or fail in some unforeseen way. When this happens, you need a reliable way to restart the Pods. &lt;/p&gt;

&lt;p&gt;Restarting a pod is not the same as restarting a container, as a Pod is not a process but an environment for running container(s). A Pod persists until it finishes execution, is deleted, is &lt;em&gt;evicted&lt;/em&gt; for lack of resources, or its host node fails.&lt;/p&gt;

&lt;p&gt;This article will list 4 scenarios where you might want to restart a Kubernetes Pod and walk you through methods to restart Pods with kubectl.&lt;/p&gt;

&lt;h1&gt;
  
  
  4 scenarios where you might want to restart a Pod
&lt;/h1&gt;

&lt;p&gt;There are several scenarios where you need to restart a Pod. The following are 4 of them:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Unexpected errors such as “&lt;strong&gt;Pods stuck in an inactive state&lt;/strong&gt;” (e.g., pending), “&lt;strong&gt;Out of Memory&lt;/strong&gt;” (occurs when Pods try to exceed the memory limits set in your manifest file), etc.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;To easily upgrade a Pod with a newly-pushed container image if you previously set the PodSpec &lt;code&gt;imagePullPolicy&lt;/code&gt; to &lt;code&gt;Always&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;To update configurations and secrets. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;To clear a corrupted internal state in the application running in the Pod.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Now you’ve seen some scenarios where you might want to restart a Pod. Next, you will learn how to restart Pods with kubectl.&lt;/p&gt;

&lt;h1&gt;
  
  
  Restarting Kubernetes pods with kubectl
&lt;/h1&gt;

&lt;p&gt;kubectl, by design, doesn’t have a direct command for restarting Pods. Because of this, to restart Pods with kubectl, you have to use one of the following methods:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Restarting Kubernetes Pods by changing the number of replicas with &lt;code&gt;kubectl scale&lt;/code&gt; command&lt;/li&gt;
&lt;li&gt;Downtimeless restarts with &lt;code&gt;kubectl rollout restart&lt;/code&gt; command&lt;/li&gt;
&lt;li&gt;Automatic restarts by updating the Pod’s environment variable&lt;/li&gt;
&lt;li&gt;Restarting Pods by deleting them&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;Before you learn how to use each of the above methods, ensure you have the following prerequisites:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A Kubernetes cluster. The demo in this article was done using &lt;a href="https://minikube.sigs.k8s.io/docs/" rel="noopener noreferrer"&gt;minikube&lt;/a&gt; — a single Node Kubernetes cluster.&lt;/li&gt;
&lt;li&gt;The kubectl command-line tool configured to communicate with the cluster.&lt;/li&gt;
&lt;/ul&gt;
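&lt;p&gt;If you don't have a cluster yet, and assuming minikube is installed, you can start a local one and confirm that kubectl can reach it:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ minikube start
$ kubectl cluster-info
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;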

&lt;p&gt;For demo purposes, in any desired directory, create a &lt;code&gt;httpd-deployment.yaml&lt;/code&gt; file with &lt;code&gt;replicas&lt;/code&gt; set to &lt;code&gt;2&lt;/code&gt; using the following YAML configurations:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apps/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deployment&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;httpd-deployment&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;httpd&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;replicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;2&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;matchLabels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;httpd&lt;/span&gt;
  &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;httpd&lt;/span&gt;
    &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;httpd-pod&lt;/span&gt;
        &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;httpd:latest&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In your terminal, change to the directory where you saved the deployment file, and run:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl apply -f httpd-deployment.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The above command will create the httpd deployment with two Pods. To verify the number of Pods, run the &lt;code&gt;$ kubectl get pods&lt;/code&gt; command.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe6mci31igxffhj5rd4le.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe6mci31igxffhj5rd4le.png" alt="Creating and verifying an httpd deployment with kubectl" width="800" height="100"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now you have the Pods of the httpd deployment running. Next, you will use each of the methods outlined earlier to restart the Pods. &lt;/p&gt;

&lt;h2&gt;
  
  
  Restarting Kubernetes Pods by changing the number of replicas
&lt;/h2&gt;

&lt;p&gt;In this method of restarting Kubernetes Pods, you scale the number of deployment replicas down to zero, which stops and terminates all the Pods. Then you scale the deployment back up to the desired state, which initializes new Pods. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: When you set the number of replicas to zero, all the Pods stop running, so there will be some application downtime until you scale back up. &lt;/p&gt;

&lt;p&gt;To scale down the httpd deployment replicas you created, run the following &lt;code&gt;kubectl scale&lt;/code&gt; command:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl scale deployment httpd-deployment --replicas=0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The above command will show an output indicating that Pods have been scaled, as shown in the image below. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8h9tpqt1fxcsh1gylttv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8h9tpqt1fxcsh1gylttv.png" alt="Scaling Pods down" width="800" height="37"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To confirm that the Pods were stopped and terminated, run &lt;code&gt;$ kubectl get pods&lt;/code&gt;, and you should get the “&lt;strong&gt;No resources found in default namespace&lt;/strong&gt;” message.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkujkhrbpuodmwbw6c1ab.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkujkhrbpuodmwbw6c1ab.png" alt="Showing Pods" width="800" height="37"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To scale the replicas back up, run the same &lt;code&gt;kubectl scale&lt;/code&gt; command, but this time with &lt;code&gt;--replicas=2&lt;/code&gt;. &lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl scale deployment httpd-deployment --replicas=2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;After running the above command, to verify the number of pods running, run:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl get pods
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;And you should see each Pod back up and running after restarting, as in the image below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff595jefd8f9sln27g7ci.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff595jefd8f9sln27g7ci.png" alt="Scaling Pods up" width="800" height="98"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Downtime-free restarts with rollout restart
&lt;/h2&gt;

&lt;p&gt;In the previous method, you scaled down the number of replicas to zero to restart the Pods; doing so caused an outage and downtime of the application. To restart without any outage and downtime, use the &lt;code&gt;kubectl rollout restart&lt;/code&gt; command, which restarts the Pods one by one without impacting the deployment.&lt;/p&gt;

&lt;p&gt;To use &lt;code&gt;rollout restart&lt;/code&gt; on your httpd deployment, run:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl rollout restart deployment httpd-deployment
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Now to view the Pods restarting, run:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl get pods
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Notice in the image below that Kubernetes creates a new Pod and waits for it to reach &lt;code&gt;Running&lt;/code&gt; status before &lt;code&gt;Terminating&lt;/code&gt; the previous one. Because of this approach, there is no downtime with this restart method. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F95lb0e11j3f137f2obzb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F95lb0e11j3f137f2obzb.png" alt="Using kubectl rollout restart" width="800" height="311"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Automatic restarts by updating the Pod’s environment variable
&lt;/h2&gt;

&lt;p&gt;So far, you’ve learned two ways of restarting Pods in Kubernetes: one by changing the number of replicas and the other by rollout restart. Both methods work, but both require you to restart the Pods explicitly. &lt;/p&gt;

&lt;p&gt;In this method, once you update the Pod’s &lt;a href="https://kubebyexample.com/en/concept/environment-variables" rel="noopener noreferrer"&gt;environment variable&lt;/a&gt;, the change will automatically restart the Pods.&lt;/p&gt;

&lt;p&gt;To update the environment variables of the Pods in your httpd deployment, run:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl set env deployment httpd-deployment DATE=$()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The above command adds a &lt;code&gt;DATE&lt;/code&gt; environment variable with an empty value, since &lt;code&gt;$()&lt;/code&gt; is an empty command substitution. After running it, run &lt;code&gt;$ kubectl get pods&lt;/code&gt; and you will see the Pods restarting, similar to the rollout restart method. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh7oxvt6h8emchs0ik3c8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh7oxvt6h8emchs0ik3c8.png" alt="Adding an environment variable to Pods" width="800" height="116"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can verify that each Pod’s &lt;code&gt;DATE&lt;/code&gt; environment variable is null with the &lt;code&gt;kubectl describe&lt;/code&gt; command.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl describe pod &amp;lt;pod_name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;After running the above command, you will see that the &lt;code&gt;DATE&lt;/code&gt; variable is empty, as in the image below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7r0f0r1t9ftg8qtsc70v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7r0f0r1t9ftg8qtsc70v.png" alt="Verifying environment variable addition" width="800" height="426"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Restarting Pods by deleting them
&lt;/h2&gt;

&lt;p&gt;Because the Kubernetes API is declarative, it automatically creates a replacement when you delete a Pod that’s part of a ReplicaSet or Deployment. The ReplicaSet notices that the number of running Pods has dropped below the desired replica count and spins up a replacement.&lt;/p&gt;

&lt;p&gt;To delete a Pod, use the following command:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl delete pod &amp;lt;pod_name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Though this method works quickly, it is not recommended unless you have a failed or misbehaving Pod or set of Pods. For routine restarts, such as applying configuration updates, it is better to use the &lt;code&gt;kubectl scale&lt;/code&gt; or &lt;code&gt;kubectl rollout&lt;/code&gt; commands designed for that use case.&lt;/p&gt;

&lt;p&gt;To delete all failed Pods for this restart technique, use this command:&lt;/p&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl delete pods --field-selector=status.phase=Failed&lt;br&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  Cleaning up
&lt;/h2&gt;

&lt;p&gt;Clean up the entire setup by deleting the deployment with the command below:&lt;/p&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl delete deployment httpd-deployment&lt;br&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h1&gt;
  
  
  Conclusion
&lt;/h1&gt;

&lt;p&gt;This article discussed five scenarios where you might want to restart Kubernetes Pods and walked you through four methods of doing so with kubectl. There is a lot more to learn about kubectl; to dig deeper, check out the &lt;a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands" rel="noopener noreferrer"&gt;kubectl commands reference&lt;/a&gt;. &lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>tutorial</category>
      <category>cloudnative</category>
      <category>devops</category>
    </item>
    <item>
      <title>How to Set Environment Variables on a Linux Machine</title>
      <dc:creator>Divine Odazie</dc:creator>
      <pubDate>Wed, 14 Sep 2022 15:41:38 +0000</pubDate>
      <link>https://dev.to/everythingdevops/how-to-set-environment-variables-on-a-linux-machine-1ojc</link>
      <guid>https://dev.to/everythingdevops/how-to-set-environment-variables-on-a-linux-machine-1ojc</guid>
      <description>&lt;p&gt;This article was originally posted on &lt;a href="https://everythingdevops.dev/how-to-set-environment-variables-on-a-linux-machine/" rel="noopener noreferrer"&gt;Everything DevOps&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;When building software, you start in a development environment (your local computer). You then move through one or more intermediate environments (staging, QA, etc.), and finally to the production environment where users can use the application. &lt;/p&gt;

&lt;p&gt;While moving through each of these environments, some configuration options may differ. For example, in development, you may want to test &lt;a href="https://en.wikipedia.org/wiki/Create,_read,_update_and_delete" rel="noopener noreferrer"&gt;CRUD&lt;/a&gt; operations against a dummy database whose configuration values differ from those of the live database with real user data. &lt;/p&gt;

&lt;p&gt;To ensure a seamless workflow and not have to regularly change the database configurations in code when moving to different environments, you can set environment variables for each. &lt;/p&gt;

&lt;p&gt;In this tutorial, you will learn:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What environment variables are, and&lt;/li&gt;
&lt;li&gt;How to set environment variables on a Linux machine. &lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  Prerequisite
&lt;/h1&gt;

&lt;p&gt;To follow along in this tutorial, you must have the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Basic knowledge of the terminal.&lt;/li&gt;
&lt;li&gt;Access to a Linux machine — This article uses &lt;a href="https://ubuntu.com/blog/ubuntu-22-04-lts-released" rel="noopener noreferrer"&gt;Ubuntu 22.04 (LTS) x64&lt;/a&gt; distribution.&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  What are environment variables?
&lt;/h1&gt;

&lt;p&gt;Environment variables are variables whose values are set outside the code of an application. They are typically set through the built-in functionality of an operating system. Environment variables are made up of name and value pairs, and you can create as many as you wish to be available for reference at a point in time.&lt;/p&gt;
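&lt;p&gt;As a quick illustration (the variable name here is made up), exporting a name/value pair makes it visible to any child process of your shell:&lt;/p&gt;

```shell
# Export a name/value pair; child processes inherit exported variables
export GREETING="hello"

# A child shell reads the value from its environment
sh -c 'echo "$GREETING"'   # prints: hello
```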

&lt;h1&gt;
  
  
  Setting environment variables on a Linux machine
&lt;/h1&gt;

&lt;p&gt;To set environment variables on a Linux machine, normally, in the shell session of your terminal, you would run the &lt;code&gt;export&lt;/code&gt; command on each environment variable’s name and value like the following:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;export ENVIRONMENT_VARIABLE_NAME = &amp;lt;value&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;But if and when that shell session ends, all the environment variables will be lost, because the &lt;code&gt;export&lt;/code&gt; command exports variables to the shell session’s environment, not to the Linux machine’s environment.  &lt;/p&gt;

&lt;p&gt;To persist environment variables on a Linux machine, in any directory aside from your application’s directory, create an environment file with the vi editor using the following command:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ vi .env
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The above command will create and open a &lt;code&gt;.env&lt;/code&gt; file, then to edit the file using the vi editor, press &lt;code&gt;i&lt;/code&gt; and add your environment variables like in the image below. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fpaper-attachments.dropbox.com%2Fs_85BD5A817F94A5B2D9B888E1F8B318AAFBA3F079552683D255DD8AB9C1448DA1_1660603799811_Screenshot%2B2022-08-15%2Bat%2B23.49.56.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fpaper-attachments.dropbox.com%2Fs_85BD5A817F94A5B2D9B888E1F8B318AAFBA3F079552683D255DD8AB9C1448DA1_1660603799811_Screenshot%2B2022-08-15%2Bat%2B23.49.56.png" width="800" height="379"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After adding your environment variables, to save the file, press &lt;code&gt;esc&lt;/code&gt;, then type &lt;code&gt;:wq&lt;/code&gt; and press &lt;code&gt;enter&lt;/code&gt;. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fpaper-attachments.dropbox.com%2Fs_85BD5A817F94A5B2D9B888E1F8B318AAFBA3F079552683D255DD8AB9C1448DA1_1660603956339_annotely_image%2B48.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fpaper-attachments.dropbox.com%2Fs_85BD5A817F94A5B2D9B888E1F8B318AAFBA3F079552683D255DD8AB9C1448DA1_1660603956339_annotely_image%2B48.png" width="800" height="379"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After saving the file, in your home directory, run &lt;code&gt;$ ls -la&lt;/code&gt; to view all files, including hidden ones, which should show you a &lt;code&gt;.profile&lt;/code&gt; file as in the image below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fpaper-attachments.dropbox.com%2Fs_85BD5A817F94A5B2D9B888E1F8B318AAFBA3F079552683D255DD8AB9C1448DA1_1660602926799_annotely_image%2B45.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fpaper-attachments.dropbox.com%2Fs_85BD5A817F94A5B2D9B888E1F8B318AAFBA3F079552683D255DD8AB9C1448DA1_1660602926799_annotely_image%2B45.png" width="800" height="200"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Open up the profile file with &lt;code&gt;$ vi .profile&lt;/code&gt;,  press &lt;code&gt;i&lt;/code&gt; to edit the file, and at the end of the file, add the following configuration:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;set -o allexport; source /&amp;lt;path_to_the_directory_of_.env_file&amp;gt;/.env; set +o allexport
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The above configuration sources the &lt;code&gt;.env&lt;/code&gt; file with &lt;code&gt;allexport&lt;/code&gt; enabled, so every variable defined in the file is exported to the environment each time you log in.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fpaper-attachments.dropbox.com%2Fs_85BD5A817F94A5B2D9B888E1F8B318AAFBA3F079552683D255DD8AB9C1448DA1_1660603418687_annotely_image%2B47.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fpaper-attachments.dropbox.com%2Fs_85BD5A817F94A5B2D9B888E1F8B318AAFBA3F079552683D255DD8AB9C1448DA1_1660603418687_annotely_image%2B47.png" width="800" height="381"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To save the configuration, press &lt;code&gt;esc&lt;/code&gt;, then type &lt;code&gt;:wq&lt;/code&gt; and press &lt;code&gt;enter&lt;/code&gt; as you did previously.&lt;/p&gt;

&lt;p&gt;To confirm that the configuration took effect and your environment variables have been set, log out of your current shell session, log back in and then run:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ printenv 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;After running the above command, you should see your environment variables, as shown in the image below. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fpaper-attachments.dropbox.com%2Fs_85BD5A817F94A5B2D9B888E1F8B318AAFBA3F079552683D255DD8AB9C1448DA1_1660607459982_annotely_image%2B50.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fpaper-attachments.dropbox.com%2Fs_85BD5A817F94A5B2D9B888E1F8B318AAFBA3F079552683D255DD8AB9C1448DA1_1660607459982_annotely_image%2B50.png" width="800" height="407"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Conclusion
&lt;/h1&gt;

&lt;p&gt;This tutorial explained environment variables and taught how to set them on a Linux machine. There is more to learn about environment variables in Linux. To learn more, check out the following resources:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.geeksforgeeks.org/environment-variables-in-linux-unix/" rel="noopener noreferrer"&gt;Environment Variables in Linux/Unix&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://linuxize.com/post/how-to-set-and-list-environment-variables-in-linux/" rel="noopener noreferrer"&gt;How to Set and List Environment Variables in Linux&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.guru99.com/linux-environment-variables.html" rel="noopener noreferrer"&gt;List of Environment Variables in Linux/Unix&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>linux</category>
      <category>beginners</category>
      <category>tutorial</category>
      <category>cloud</category>
    </item>
    <item>
      <title>How to avoid merge commits when syncing a fork</title>
      <dc:creator>Divine Odazie</dc:creator>
      <pubDate>Mon, 22 Aug 2022 01:19:00 +0000</pubDate>
      <link>https://dev.to/everythingdevops/how-to-avoid-merge-commits-when-syncing-a-fork-3b6f</link>
      <guid>https://dev.to/everythingdevops/how-to-avoid-merge-commits-when-syncing-a-fork-3b6f</guid>
      <description>&lt;p&gt;This article was originally posted on &lt;a href="https://everythingdevops.dev/how-to-avoid-merge-commits-when-syncing-a-fork/" rel="noopener noreferrer"&gt;Everything DevOps&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Whenever you work on open source projects, you usually maintain your copy (a &lt;a href="https://docs.github.com/en/get-started/quickstart/fork-a-repo" rel="noopener noreferrer"&gt;fork&lt;/a&gt;) of the original codebase. To propose changes, you open up a &lt;a href="https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/proposing-changes-to-your-work-with-pull-requests/about-pull-requests" rel="noopener noreferrer"&gt;Pull Request (PR)&lt;/a&gt;. After you create a PR, there are chances that during its review process, commits will be made to the original codebase, which will require you to sync your fork. &lt;/p&gt;

&lt;p&gt;To sync your fork with the original codebase, ideally, you would use the web UI provided by your Git hosting service or run a &lt;code&gt;git fetch&lt;/code&gt; and &lt;code&gt;git merge&lt;/code&gt; in your terminal, as indicated in this &lt;a href="https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork" rel="noopener noreferrer"&gt;Github tutorial&lt;/a&gt;. But with a PR open, syncing your fork that way will introduce an unwanted merge commit to your PR. &lt;/p&gt;

&lt;p&gt;In this article, you will learn what merge commits are and how to avoid them with &lt;code&gt;git rebase&lt;/code&gt; when syncing a fork with an original codebase. &lt;/p&gt;

&lt;h2&gt;
  
  
  What is a merge commit?
&lt;/h2&gt;

&lt;p&gt;A merge commit is just like any other commit: it records the state of a repository at a point in time, plus the history it evolved from. But one thing is unique about a merge commit: it has at least two parent commits. &lt;/p&gt;

&lt;p&gt;When you create a merge commit, Git combines the histories of two separate lines of development. If such a commit ends up in a merged PR, it can cause conflicts and clutter the project’s Git history.&lt;/p&gt;
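&lt;p&gt;You can see the two parents for yourself with a throwaway repository (a minimal sketch; the branch and file names are arbitrary):&lt;/p&gt;

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q repo && cd repo
git config user.email demo@example.com && git config user.name demo

echo a > base.txt && git add base.txt && git commit -qm "base"

# Diverge: one commit on a feature branch, one on the original branch
git checkout -qb feature
echo b > feature.txt && git add feature.txt && git commit -qm "feature work"
git checkout -q -
echo c > main.txt && git add main.txt && git commit -qm "mainline work"

# Merging the diverged branches creates a merge commit...
git merge -q --no-edit feature

# ...whose %P format shows its two parent commit hashes
git log -1 --pretty=%P
```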

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fpaper-attachments.dropbox.com%2Fs_4038E27EBB83FC06153F31425FE835219A7D0419F41A63A1B27836B12EF94A13_1659630585735_annotely_image%2B42.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fpaper-attachments.dropbox.com%2Fs_4038E27EBB83FC06153F31425FE835219A7D0419F41A63A1B27836B12EF94A13_1659630585735_annotely_image%2B42.png" width="800" height="60"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The annotated section in the image above shows a merge commit of two parent commits. Here is the &lt;a href="https://github.com/cilium/cilium/commit/fcc7837499b630df4e576396fd28d44007f77db1" rel="noopener noreferrer"&gt;link to the merge commit&lt;/a&gt; to the image.&lt;/p&gt;

&lt;p&gt;Now you know what a merge commit is. Next, you will learn how to avoid it when syncing your fork with an original codebase.  &lt;/p&gt;

&lt;h2&gt;
  
  
  How to avoid merge commits when syncing a fork in Git
&lt;/h2&gt;

&lt;p&gt;To avoid merge commits, you need to &lt;a href="http://git-scm.com/book/en/Git-Branching-Rebasing" rel="noopener noreferrer"&gt;rebase&lt;/a&gt; the changes from the original remote codebase in your local fork before pushing them to your remote fork, by following the steps below.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1:&lt;/strong&gt;&lt;br&gt;
Create a link with the original remote repository to track and get the changes from the codebase with the command below:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git remote add upstream https://github.com/com/original/original.git
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;After running the above command, you will now have two remotes. One for your fork and one for the original codebase. If you run &lt;code&gt;$ git remote -v&lt;/code&gt;, you will see the following:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;origin https://github.com/your_username/your_fork.git (fetch)
origin https://github.com/your_username/your_fork.git (push)
upstream https://github.com/original/original.git (fetch)
upstream https://github.com/original/original.git (push)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;In the above, &lt;code&gt;upstream&lt;/code&gt; refers to the original repository from which you created the fork.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2:&lt;/strong&gt;&lt;br&gt;
In this step, you fetch all the branches of the remote upstream with:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git fetch upstream
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Step 3:&lt;/strong&gt;&lt;br&gt;
Next, you replay your local commits on top of the upstream’s master branch using &lt;code&gt;git rebase&lt;/code&gt;.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git rebase upstream/master
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Step 4:&lt;/strong&gt;&lt;br&gt;
Then finally, push the updates to your remote fork. Because the rebase rewrote your branch’s history, you will likely need to force the push with &lt;code&gt;--force&lt;/code&gt;. &lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git push origin master --force
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; &lt;br&gt;
You can combine the fetch and rebase steps by using &lt;code&gt;git pull&lt;/code&gt;, which is &lt;code&gt;git fetch&lt;/code&gt; + &lt;code&gt;git merge&lt;/code&gt;, with the &lt;code&gt;--rebase&lt;/code&gt; flag to replace the &lt;code&gt;git merge&lt;/code&gt; with a rebase. The pull command will be:&lt;/p&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git pull --rebase upstream master&lt;br&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
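&lt;p&gt;To convince yourself that this keeps your history linear, here is a self-contained sketch you can run locally (the repository and file names are made up; the local &lt;code&gt;upstream&lt;/code&gt; directory stands in for the original remote):&lt;/p&gt;

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"

# "upstream" stands in for the original repository
git init -q upstream && cd upstream
git config user.email demo@example.com && git config user.name demo
echo one > file.txt && git add file.txt && git commit -qm "initial"
cd ..

# "fork" stands in for your fork, with one local commit (your PR)
git clone -q upstream fork && cd fork
git config user.email demo@example.com && git config user.name demo
echo mine > mine.txt && git add mine.txt && git commit -qm "my PR commit"

# Meanwhile, the original codebase moves ahead
cd ../upstream
echo two >> file.txt && git commit -qam "upstream change"
cd ../fork

# Pull with --rebase: your commit is replayed on top; no merge commit appears
git pull --rebase -q
git log --merges --oneline   # prints nothing: history is linear
```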
&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In this article, you learned about merge commits and how you can avoid them when syncing your fork in Git using &lt;code&gt;git rebase&lt;/code&gt;. There is a lot more to learn about &lt;code&gt;git rebase&lt;/code&gt;. To learn more, check out the following resources:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://git-scm.com/docs/git-rebase" rel="noopener noreferrer"&gt;Git rebase official documentation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.simplilearn.com/what-is-git-rebase-command-article" rel="noopener noreferrer"&gt;What is Git Rebase, and How Do You Use It?&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.baeldung.com/git-merge-vs-rebase" rel="noopener noreferrer"&gt;Difference Between git merge and rebase&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>git</category>
      <category>github</category>
      <category>beginners</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Building x86 Images on an Apple M1 Chip</title>
      <dc:creator>Divine Odazie</dc:creator>
      <pubDate>Tue, 16 Aug 2022 15:00:00 +0000</pubDate>
      <link>https://dev.to/everythingdevops/building-x86-images-on-an-apple-m1-chip-3eac</link>
      <guid>https://dev.to/everythingdevops/building-x86-images-on-an-apple-m1-chip-3eac</guid>
      <description>&lt;p&gt;This article was originally posted on &lt;a href="https://everythingdevops.dev/building-x86-images-on-an-apple-m1-chip/" rel="noopener noreferrer"&gt;Everything DevOps&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;A few months ago, while deploying an application in Amazon Elastic Kubernetes Service (EKS), my pods crashed with a  &lt;code&gt;standard_init_linux.go:228: exec user process caused: exec format error&lt;/code&gt; error.&lt;/p&gt;

&lt;p&gt;After a bit of research, I found out that the error tends to happen when the architecture an image is built on differs from the architecture it is running on. I then remembered that I was building the image on a MacBook with Apple M1 Chip which is based on &lt;a href="https://en.wikipedia.org/wiki/ARM_architecture_family" rel="noopener noreferrer"&gt;ARM64&lt;/a&gt; architecture, and the worker nodes in the EKS cluster I deployed on are based on &lt;a href="https://en.wikipedia.org/wiki/X86" rel="noopener noreferrer"&gt;x86&lt;/a&gt; architecture. &lt;/p&gt;

&lt;p&gt;I had two options to fix the error: create new ARM-based worker nodes or build the image on x86 architecture. I couldn’t create new worker nodes for obvious reasons, so I had to figure out how to build x86 images on my Apple M1 chip.&lt;/p&gt;

&lt;p&gt;In this article, I will walk you through how I built my application’s Docker image with x86 architecture on an Apple M1 chip using &lt;a href="https://docs.docker.com/build/buildx/" rel="noopener noreferrer"&gt;Docker Buildx&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Docker Buildx?
&lt;/h2&gt;

&lt;p&gt;Docker Buildx is a CLI plugin that extends the docker command. Docker Buildx provides the same user experience as &lt;code&gt;docker build&lt;/code&gt; with many new features like the ability to specify the target architecture for which Docker should build the image. These new features are made possible with the help of the &lt;a href="https://github.com/moby/buildkit" rel="noopener noreferrer"&gt;Moby BuildKit&lt;/a&gt; builder toolkit.&lt;/p&gt;

&lt;p&gt;Before you can build x86-64 images on an Apple M1 chip with Docker Buildx, you first need to install Docker Buildx.&lt;/p&gt;

&lt;h2&gt;
  
  
  Installing Docker Buildx
&lt;/h2&gt;

&lt;p&gt;If you use &lt;a href="https://docs.docker.com/desktop/" rel="noopener noreferrer"&gt;Docker Desktop&lt;/a&gt; or have Docker version 20.x, Docker Buildx is already included in it, and you don’t need a separate installation. Verify that you have Docker Buildx with:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker buildx version
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;But if, like me, you use another tool to provide the Docker runtime, install Docker Buildx from its release binary with the following commands:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ ARCH=arm64
$ VERSION=v0.8.2 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The above commands set temporary environment variables for the architecture and version of the Docker Buildx binary you will download. See the Docker Buildx &lt;a href="https://github.com/docker/buildx/releases/latest" rel="noopener noreferrer"&gt;releases page on GitHub&lt;/a&gt; for the latest version. &lt;/p&gt;

&lt;p&gt;After setting the temporary environment variables, download the binary with:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ curl -LO https://github.com/docker/buildx/releases/download/${VERSION}/buildx-${VERSION}.darwin-${ARCH}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;After downloading the binary, create a folder in your home directory to hold Docker CLI plugins with:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ mkdir -p ~/.docker/cli-plugins
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Then move the binary to the Docker CLI plugins folder with:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ mv buildx-${VERSION}.darwin-${ARCH} ~/.docker/cli-plugins/docker-buildx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;After that, make the binary executable with:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ chmod +x ~/.docker/cli-plugins/docker-buildx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;To verify the installation, run:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ docker buildx version 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
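
&lt;p&gt;Once Buildx is installed, you can also list the available builders and the platforms each one can target. On Docker Desktop for Apple silicon, the default builder typically reports both &lt;code&gt;linux/arm64&lt;/code&gt; and &lt;code&gt;linux/amd64&lt;/code&gt; (the latter via emulation):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Show builders and the platforms they support
$ docker buildx ls
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;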
&lt;h2&gt;
  Building x86-64 images on an Apple M1 chip with Docker Buildx
&lt;/h2&gt;

&lt;p&gt;After installing Docker Buildx, you can build your application’s image for x86-64 on an Apple M1 chip with this command:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ docker buildx build --platform=linux/amd64 -t &amp;lt;image-name&amp;gt; .
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;In the above command:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;buildx&lt;/code&gt; builds the image using the BuildKit engine and does not require the &lt;code&gt;DOCKER_BUILDKIT=1&lt;/code&gt; environment variable to enable BuildKit.&lt;/li&gt;
&lt;li&gt;The &lt;code&gt;--platform&lt;/code&gt; flag specifies the target platform to build the image for; in this case, &lt;code&gt;linux/amd64&lt;/code&gt;, which is the x86-64 architecture.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;&amp;lt;image-name&amp;gt;&lt;/code&gt; is a placeholder for the image name and tag.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo3hpbniki02cp6f9tp66.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo3hpbniki02cp6f9tp66.png" alt="building the image with buildx" width="800" height="500"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To verify that Docker built the image for &lt;code&gt;linux/amd64&lt;/code&gt;, use the &lt;code&gt;docker image inspect &amp;lt;image-name&amp;gt;&lt;/code&gt; command, as shown in the annotated screenshot above.&lt;/p&gt;

&lt;p&gt;The inspect command will display detailed information about the image in JSON format. Scroll down, and you should see the &lt;code&gt;Architecture&lt;/code&gt; and &lt;code&gt;Os&lt;/code&gt; information as in the image below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftkh2gxr7kfxommrdob1w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftkh2gxr7kfxommrdob1w.png" alt="Showing image architecture" width="800" height="500"&gt;&lt;/a&gt;&lt;/p&gt;
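
&lt;p&gt;If you prefer not to scroll through the full JSON output, &lt;code&gt;docker image inspect&lt;/code&gt; also accepts a Go template via the &lt;code&gt;--format&lt;/code&gt; flag to print just those two fields:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Print only the OS and architecture of the image
$ docker image inspect --format '{{.Os}}/{{.Architecture}}' &amp;lt;image-name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;For the image built above, this should print &lt;code&gt;linux/amd64&lt;/code&gt;.&lt;/p&gt;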

&lt;h2&gt;
  Conclusion
&lt;/h2&gt;

&lt;p&gt;This article explored building images based on x86 architecture on an Apple M1 chip using Docker Buildx. There is so much more to learn about Docker Buildx. To learn more, check out the following resources:&lt;/p&gt;
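
&lt;p&gt;For example, Buildx can build for several platforms in a single invocation and push the result as one multi-arch image. Note that multi-platform builds need a builder using the &lt;code&gt;docker-container&lt;/code&gt; driver, which you can create with &lt;code&gt;docker buildx create&lt;/code&gt;:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# One-time setup: create and select a container-driver builder
$ docker buildx create --name multiarch --use

# Build for both architectures and push a single multi-arch manifest
$ docker buildx build --platform=linux/amd64,linux/arm64 -t &amp;lt;image-name&amp;gt; --push .
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;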

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://kubesimplify.com/the-secret-gems-behind-building-container-images-enter-buildkit-and-docker-buildx" rel="noopener noreferrer"&gt;The secret gems behind building container images, Enter: BuildKit &amp;amp; Docker Buildx&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://medium.com/@artur.klauser/building-multi-architecture-docker-images-with-buildx-27d80f7e2408" rel="noopener noreferrer"&gt;Building Multi-Architecture Docker Images With Buildx&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://support.circleci.com/hc/en-us/articles/360058095471-How-To-Use-Docker-Buildx-in-Remote-Docker-" rel="noopener noreferrer"&gt;How To Use Docker Buildx in Remote Docker?&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>devops</category>
      <category>tutorial</category>
      <category>docker</category>
      <category>cloud</category>
    </item>
  </channel>
</rss>
