<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Ștefănescu Liviu</title>
    <description>The latest articles on DEV Community by Ștefănescu Liviu (@liviux).</description>
    <link>https://dev.to/liviux</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F714865%2F842e42bd-e6d3-46eb-962d-c53482896672.png</url>
      <title>DEV Community: Ștefănescu Liviu</title>
      <link>https://dev.to/liviux</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/liviux"/>
    <language>en</language>
    <item>
      <title>DevOps and SRE: The Dynamic Duo Transforming the Software Development Landscape</title>
      <dc:creator>Ștefănescu Liviu</dc:creator>
      <pubDate>Thu, 13 Apr 2023 08:33:40 +0000</pubDate>
      <link>https://dev.to/liviux/devops-and-sre-the-dynamic-duo-transforming-the-software-development-landscape-5109</link>
      <guid>https://dev.to/liviux/devops-and-sre-the-dynamic-duo-transforming-the-software-development-landscape-5109</guid>
      <description>&lt;h2&gt;
  
  
  An easy-to-understand introduction to DevOps and Site Reliability Engineering for a general audience
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Introduction
&lt;/h3&gt;

&lt;p&gt;In the ever-evolving world of software development, two concepts have emerged as vital components for delivering high-quality, reliable software: DevOps and Site Reliability Engineering (SRE). These approaches have revolutionized the way software is built, deployed, and maintained, and their adoption has led to increased efficiency and collaboration across organizations. In this article, we'll explore the main concepts of DevOps and SRE, and explain their significance in a way that's easy for a general audience to understand.&lt;/p&gt;

&lt;h3&gt;
  
  
  What is DevOps?
&lt;/h3&gt;

&lt;p&gt;DevOps, a combination of the words "development" and "operations," is a set of practices and cultural philosophies that bridge the gap between software development and IT operations teams. The goal of DevOps is to create a seamless, collaborative environment where developers and operations teams can work together to deliver high-quality software rapidly and reliably.&lt;/p&gt;

&lt;h3&gt;
  
  
  Key Concepts of DevOps
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Collaboration: DevOps encourages increased communication and cooperation between development and operations teams, fostering a shared understanding of goals and breaking down silos.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Continuous Integration (CI): This practice involves regularly merging code changes into a central repository, followed by automated building and testing. CI helps detect integration issues early and speeds up the development process.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Continuous Delivery (CD): CD is the process of automatically deploying code changes to production-like environments after they pass testing, making it easier to release new features and bug fixes quickly.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Infrastructure as Code (IaC): IaC is the management of infrastructure (such as networks, servers, and storage) through code, which allows for version control, easy rollbacks, and collaboration between team members.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Monitoring and Feedback: DevOps emphasizes the importance of monitoring applications and infrastructure to gather insights and feedback, enabling teams to continuously improve processes and address issues proactively.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
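&lt;p&gt;The continuous integration idea above can be sketched in a few lines of code. This is a toy illustration only: real CI systems such as Jenkins or GitHub Actions are configured declaratively, and the stage names below are made-up examples standing in for real lint/test/build commands.&lt;/p&gt;

```python
# Minimal sketch of a CI pipeline runner (illustrative only).
# The stage names and lambdas below are hypothetical stand-ins
# for real build and test commands.

def run_pipeline(stages):
    """Run each (name, step) pair in order; stop at the first failure."""
    for name, step in stages:
        ok = step()
        print(f"{name}: {'passed' if ok else 'FAILED'}")
        if not ok:
            return False  # fail fast, as a CI server would
    return True

# Hypothetical stages standing in for real commands.
stages = [
    ("lint",  lambda: True),
    ("tests", lambda: True),
    ("build", lambda: True),
]

print("pipeline succeeded" if run_pipeline(stages) else "pipeline failed")
```

&lt;p&gt;The fail-fast behavior is the key point: a broken stage stops the pipeline before bad code reaches later stages or production.&lt;/p&gt;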

&lt;h3&gt;
  
  
  What is Site Reliability Engineering (SRE)?
&lt;/h3&gt;

&lt;p&gt;Site Reliability Engineering (SRE) is a discipline that combines aspects of software engineering and IT operations to ensure the reliability, availability, and performance of software systems. SREs are responsible for defining service level objectives (SLOs), monitoring system performance, and implementing automated solutions to improve the reliability and efficiency of software systems.&lt;/p&gt;

&lt;h3&gt;
  
  
  Key Concepts of SRE
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Service Level Objectives (SLOs): SLOs are measurable goals that represent the desired level of system reliability, such as uptime, latency, and error rates. SREs work closely with development teams to establish and maintain these objectives.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Error Budgets: An error budget is a defined tolerance for system failures or performance issues. By allocating an error budget, SREs can balance the need for system reliability with the desire to innovate and release new features.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Automation: SREs focus on automating tasks that are repetitive, error-prone, or time-consuming, freeing up resources to work on more valuable tasks and improving overall system reliability.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Monitoring and Observability: SREs use monitoring and observability tools to gain insights into the performance and health of software systems, enabling them to identify potential issues and proactively address them.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Blameless Postmortems: When incidents occur, SREs conduct blameless postmortems to review the event, identify the root cause, and implement improvements to prevent future occurrences, fostering a culture of learning and continuous improvement.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
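&lt;p&gt;SLOs and error budgets are easiest to grasp with a little arithmetic. A quick worked example, assuming a 30-day month and an illustrative 99.9% availability target:&lt;/p&gt;

```python
# Worked example: turning an SLO into an error budget.
# Assumes a 30-day month; the 99.9% target is illustrative.

slo = 0.999                       # availability target (99.9%)
minutes_per_month = 30 * 24 * 60  # 43,200 minutes in a 30-day month

# The error budget is whatever unreliability the SLO leaves over.
error_budget_minutes = (1 - slo) * minutes_per_month
print(f"Allowed downtime: {error_budget_minutes:.1f} minutes/month")
# Roughly 43.2 minutes of downtime per month fits within a 99.9% SLO.
```

&lt;p&gt;While the budget lasts, the team can ship features aggressively; once it is spent, the priority shifts to reliability work. That is the balance error budgets are designed to enforce.&lt;/p&gt;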

&lt;h3&gt;
  
  
  The Benefits of DevOps and SRE
&lt;/h3&gt;

&lt;p&gt;The adoption of DevOps and SRE practices offers numerous advantages, including:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Faster Time-to-Market: By streamlining the development and deployment processes, organizations can bring new features and products to market more quickly.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Improved Collaboration: DevOps and SRE foster better communication and collaboration between development and operations teams, breaking down silos and resulting in more efficient problem-solving and decision-making.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Enhanced Reliability: By focusing on system reliability and implementing automated solutions, SREs can ensure that software systems are more stable, secure, and resilient.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Higher Quality Software: DevOps practices such as CI/CD and automated testing help to catch issues early, leading to higher quality software and fewer defects in production.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Cost Efficiency: By automating tasks and optimizing resource usage, DevOps and SRE can help organizations save time and money, while also reducing the risk of human errors.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Continuous Improvement: Both DevOps and SRE promote a culture of learning, feedback, and continuous improvement, enabling teams to learn from mistakes and proactively address potential issues.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;DevOps and Site Reliability Engineering have transformed the way software is developed, deployed, and maintained. By fostering collaboration, streamlining processes, and focusing on reliability, these practices have helped organizations deliver high-quality software more quickly and efficiently. With a better understanding of the main concepts of DevOps and SRE, anyone can appreciate the profound impact these approaches have on the software development landscape.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>sre</category>
      <category>sitereliabilityengineering</category>
    </item>
    <item>
      <title>DevOps &amp; SRE Roadmap explained by AI - part 3</title>
      <dc:creator>Ștefănescu Liviu</dc:creator>
      <pubDate>Thu, 06 Apr 2023 16:37:06 +0000</pubDate>
      <link>https://dev.to/liviux/devops-sre-roadmap-explained-by-ai-part-3-11co</link>
      <guid>https://dev.to/liviux/devops-sre-roadmap-explained-by-ai-part-3-11co</guid>
      <description>&lt;p&gt;This part is for what you need to know about Cloud Native tools and principles.&lt;br&gt;
&lt;strong&gt;Cloud native&lt;/strong&gt; is essential for DevOps and SRE roles ☁️🌎 It allows you to build and run scalable applications in modern, dynamic environments 🚀 Cloud native technologies are highly customizable and can be used across a wide range of systems 🌐&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--xubuJIxI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fxgqgti4nhk95148glte.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--xubuJIxI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fxgqgti4nhk95148glte.jpg" alt="Image description" width="880" height="289"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--h-wgE_qD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hv9f7n6glc3r0mqvcajz.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--h-wgE_qD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hv9f7n6glc3r0mqvcajz.jpg" alt="Image description" width="880" height="788"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--i-WnN1AQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/n6qr2kgxxzu7quhzebpj.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--i-WnN1AQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/n6qr2kgxxzu7quhzebpj.jpg" alt="Image description" width="610" height="449"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ib_cOL65--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/scsgy1x0yd2gtdngy1av.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ib_cOL65--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/scsgy1x0yd2gtdngy1av.jpg" alt="Image description" width="456" height="806"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;👷‍♂️👷‍♀️ &lt;strong&gt;Infrastructure as code&lt;/strong&gt; (IaC) is a DevOps methodology that uses versioning with a descriptive model to define and deploy infrastructure such as networks, virtual machines, load balancers, and connection topologies.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--WEh88_7g--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4nqux7gi2r095w46ao0k.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--WEh88_7g--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4nqux7gi2r095w46ao0k.jpg" alt="Image description" width="880" height="366"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--gGAaacrK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/f6o4uy892kh2v8u2n2bo.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--gGAaacrK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/f6o4uy892kh2v8u2n2bo.jpg" alt="Image description" width="880" height="412"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--zPMxW9Sf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/sh9x161kq5l9iku8e1fb.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--zPMxW9Sf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/sh9x161kq5l9iku8e1fb.jpg" alt="Image description" width="880" height="425"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s---2I1c7r1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kt58mx3w2ohqynqfsy2g.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s---2I1c7r1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kt58mx3w2ohqynqfsy2g.jpg" alt="Image description" width="880" height="1616"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;👩‍💻👨‍💻 &lt;strong&gt;Virtual machines&lt;/strong&gt; are software-based emulations of hardware that run their own operating systems. They are useful in DevOps and SRE roles because they let you test, deploy, and scale applications across different environments.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--t5kTEMTs--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/a0y8bghf0goh9bze4uep.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--t5kTEMTs--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/a0y8bghf0goh9bze4uep.jpg" alt="Image description" width="880" height="352"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--v7kJqWxt--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/b2dkw30trlr3g17mr8gf.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--v7kJqWxt--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/b2dkw30trlr3g17mr8gf.jpg" alt="Image description" width="880" height="540"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--edbkD6nT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/w46t2lp0u09n66drcw23.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--edbkD6nT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/w46t2lp0u09n66drcw23.jpg" alt="Image description" width="880" height="1441"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--6uZ1n25C--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/h4xeckaqwgbm51se5wpi.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--6uZ1n25C--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/h4xeckaqwgbm51se5wpi.jpg" alt="Image description" width="880" height="420"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;🌩️🌐👩‍💻 &lt;strong&gt;Cloud Computing&lt;/strong&gt;: a private cloud provides dedicated resources for a single organization, with more control and security but higher cost and maintenance; a public cloud shares resources across multiple organizations, with less control and security but lower cost and maintenance. Choose wisely!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ylZCkF06--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fcw34kl02w0r7zfou9u4.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ylZCkF06--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fcw34kl02w0r7zfou9u4.jpg" alt="Image description" width="880" height="705"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--B7efRWX3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5lbbndz1hl3vwiyixy57.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--B7efRWX3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5lbbndz1hl3vwiyixy57.jpg" alt="Image description" width="880" height="237"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--wN5yeEUS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8mx5rzv7bjryhdxpfmqd.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--wN5yeEUS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8mx5rzv7bjryhdxpfmqd.jpg" alt="Image description" width="880" height="716"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ndT8jvsJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5awch37s4fx8rj6d12a7.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ndT8jvsJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5awch37s4fx8rj6d12a7.jpg" alt="Image description" width="880" height="1385"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;🤖🔧👨‍💻  &lt;strong&gt;Automation&lt;/strong&gt;: using software to perform tasks that are repetitive, error-prone, or time-consuming, such as testing, deployment, and monitoring. &lt;strong&gt;Configuration management&lt;/strong&gt;: using tools such as Ansible, Puppet, or Chef to manage the state and behavior of systems and applications. Work smarter, not harder!&lt;/p&gt;
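&lt;p&gt;The core idea behind configuration management tools is reconciling actual state with desired state. A tiny sketch of that idea, loosely in the spirit of tools like Ansible (this is not their API; the keys and values below are made-up examples):&lt;/p&gt;

```python
# Desired-state reconciliation in miniature. The dictionaries below are
# hypothetical examples, not a real tool's data model.

desired = {"ntp": "installed", "firewall": "enabled"}
actual  = {"ntp": "installed", "firewall": "disabled"}

def reconcile(desired, actual):
    """Return only the changes needed to bring actual in line with desired."""
    return {k: v for k, v in desired.items() if actual.get(k) != v}

changes = reconcile(desired, actual)
print(changes)  # prints {'firewall': 'enabled'}: only the drifted item needs action
```

&lt;p&gt;Because already-correct items produce no changes, repeated runs are idempotent, which is exactly the property that makes configuration management safe to automate.&lt;/p&gt;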

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Ef_ZdI6g--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ydnyrpzvjk5zy6x763ul.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Ef_ZdI6g--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ydnyrpzvjk5zy6x763ul.jpg" alt="Image description" width="880" height="736"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--faPF7cMZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/t05o43dkrbyxovbdawh6.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--faPF7cMZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/t05o43dkrbyxovbdawh6.jpg" alt="Image description" width="880" height="613"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--P5Z6sqR0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1u0gtxilqsft5yto69qr.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--P5Z6sqR0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1u0gtxilqsft5yto69qr.jpg" alt="Image description" width="880" height="1581"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--lJCII--a--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ebtyf1b60uorvhl1r8v4.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--lJCII--a--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ebtyf1b60uorvhl1r8v4.jpg" alt="Image description" width="880" height="297"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;🐳🚀👩‍💻 &lt;strong&gt;Container Runtime&lt;/strong&gt; is the software layer that enables containers to run on a host machine. It provides an interface between the container engine and the operating system. It also manages the container lifecycle and resources.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--JXFvI5Ck--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cn2deh4g21htj6lphc0p.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--JXFvI5Ck--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cn2deh4g21htj6lphc0p.jpg" alt="Image description" width="880" height="518"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--pYd-NVgt--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7w2pxcvlbk5x57b2eqz5.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--pYd-NVgt--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7w2pxcvlbk5x57b2eqz5.jpg" alt="Image description" width="880" height="1336"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--AqtP4-ZP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/a1nf8g9xh288itnm48ny.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--AqtP4-ZP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/a1nf8g9xh288itnm48ny.jpg" alt="Image description" width="880" height="315"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Lm923T_k--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2e1w2joip32oyb4id2u9.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Lm923T_k--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2e1w2joip32oyb4id2u9.jpg" alt="Image description" width="880" height="676"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;🌐🔌👩‍💻 Modern &lt;strong&gt;API Technologies&lt;/strong&gt; are the tools and methods that enable developers to create, test, document, and deploy APIs. They include frameworks, protocols, standards, and platforms that facilitate API development and integration.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--rWO5CANl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8fz0t5mluul8521zeuty.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--rWO5CANl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8fz0t5mluul8521zeuty.jpg" alt="Image description" width="880" height="366"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--qIHag60L--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ubymry1vroyl038dmxw8.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--qIHag60L--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ubymry1vroyl038dmxw8.jpg" alt="Image description" width="880" height="769"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--t7uzaqC9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fynz239bjn17oxk6cv5m.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--t7uzaqC9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fynz239bjn17oxk6cv5m.jpg" alt="Image description" width="880" height="782"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--8-eSnwGB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/89220dmmhmhmtqttopgt.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--8-eSnwGB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/89220dmmhmhmtqttopgt.jpg" alt="Image description" width="880" height="1480"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;🐙🛠👩‍💻 &lt;strong&gt;Kubernetes&lt;/strong&gt; is an open-source system for automating deployment, scaling, and management of containerized applications. It orchestrates clusters of nodes and pods that run containers across different environments.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--u4SLUwoR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/m4t9qn6pn8ntz7fjarks.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--u4SLUwoR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/m4t9qn6pn8ntz7fjarks.jpg" alt="Image description" width="880" height="405"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--GPnKJydf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2uwddi84za5v2a8zjrvv.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--GPnKJydf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2uwddi84za5v2a8zjrvv.jpg" alt="Image description" width="880" height="448"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--DhSzqHLt--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/d6ckfw4lzemqqphvl27b.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--DhSzqHLt--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/d6ckfw4lzemqqphvl27b.jpg" alt="Image description" width="880" height="783"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ly2qPhVV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fzow1fdck22ytmu81on2.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ly2qPhVV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fzow1fdck22ytmu81on2.jpg" alt="Image description" width="880" height="1206"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;🕸🛡👩‍💻 &lt;strong&gt;Service Proxy&lt;/strong&gt; and &lt;strong&gt;Service Mesh&lt;/strong&gt; are technologies that enable communication and security between microservices. A service proxy is a software agent that intercepts and handles network requests. A service mesh is a network of service proxies that manage traffic and policies.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Y03il68D--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ry2tljxo1aye11yde4y1.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Y03il68D--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ry2tljxo1aye11yde4y1.jpg" alt="Image description" width="880" height="730"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ws8dy2XJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/np4rm8edogo7gl502p4y.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ws8dy2XJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/np4rm8edogo7gl502p4y.jpg" alt="Image description" width="880" height="365"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--FcDjRMcj--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vo9yuvyimbkhbjgetaen.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--FcDjRMcj--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vo9yuvyimbkhbjgetaen.jpg" alt="Image description" width="880" height="913"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--PXFITat2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/95yy7awfwenb9a4qgutb.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--PXFITat2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/95yy7awfwenb9a4qgutb.jpg" alt="Image description" width="880" height="1614"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;🔄🚀👩‍💻 &lt;strong&gt;CI/CD&lt;/strong&gt; is a set of practices that enables faster, more reliable delivery of software. CI stands for &lt;strong&gt;continuous integration&lt;/strong&gt;, which means merging code changes frequently and testing them automatically. CD stands for &lt;strong&gt;continuous delivery or deployment&lt;/strong&gt;, which means releasing software to production with minimal manual intervention.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--I9GZsMyw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jg3takgyd3wkasgouy5w.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--I9GZsMyw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jg3takgyd3wkasgouy5w.jpg" alt="Image description" width="880" height="457"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--i8vves6O--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/z2fnvw1on2fqt31bne8o.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--i8vves6O--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/z2fnvw1on2fqt31bne8o.jpg" alt="Image description" width="880" height="646"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--T1YwXkr5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/t5spfvc5m33z76li7wus.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--T1YwXkr5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/t5spfvc5m33z76li7wus.jpg" alt="Image description" width="880" height="910"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--t5p7umlh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/33ebcaoyt4ejzrrsniv6.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--t5p7umlh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/33ebcaoyt4ejzrrsniv6.jpg" alt="Image description" width="880" height="1654"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;📺📨👩‍💻 &lt;strong&gt;Streaming and Messaging&lt;/strong&gt; are techniques that enable asynchronous and real-time data processing for app development. Streaming is the continuous ingestion and analysis of data from various sources. Messaging is the exchange of data between applications or services via a broker or a queue. &lt;/p&gt;
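&lt;p&gt;Messaging via a broker or queue can be sketched with Python's standard library; the in-process queue below stands in for a real broker such as Kafka or RabbitMQ, and the event names are made up:&lt;/p&gt;

```python
import queue
import threading

# Sketch of broker-style messaging: a producer publishes events to a
# queue and a consumer processes them asynchronously, decoupling the two.

broker = queue.Queue()  # stands in for a message broker / queue
processed = []

def producer():
    for event in ["signup", "payment", "logout"]:
        broker.put(event)  # publish a message
    broker.put(None)       # sentinel: no more messages

def consumer():
    while True:
        event = broker.get()  # blocks until a message arrives
        if event is None:
            break
        processed.append(event.upper())  # "process" the message

t = threading.Thread(target=consumer)
t.start()
producer()
t.join()
print(processed)  # ['SIGNUP', 'PAYMENT', 'LOGOUT']
```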

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--G4Ar-Ewm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yuy1s4w96su8jb55i702.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--G4Ar-Ewm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yuy1s4w96su8jb55i702.jpg" alt="Image description" width="880" height="452"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--dZqb84xH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/c8ueutlsr06wtzy4s5g2.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--dZqb84xH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/c8ueutlsr06wtzy4s5g2.jpg" alt="Image description" width="880" height="749"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--NtoG29c6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1xy0nre4wthwfj8jeswj.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--NtoG29c6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1xy0nre4wthwfj8jeswj.jpg" alt="Image description" width="880" height="878"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--b564moTz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/p09e9ra2m7yn0ppukrgf.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--b564moTz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/p09e9ra2m7yn0ppukrgf.jpg" alt="Image description" width="880" height="1515"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;👀📊👩‍💻 &lt;strong&gt;Observability&lt;/strong&gt; is the ability to monitor and understand the internal state and behavior of a system from its external outputs. It involves collecting and analyzing metrics, logs, and traces from various components and sources. Observability helps you identify and troubleshoot issues, optimize performance, and improve reliability.&lt;/p&gt;
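&lt;p&gt;A toy sketch of those three signals (metrics, logs, traces) in Python; the metric names and trace id below are illustrative, not from any real monitoring stack:&lt;/p&gt;

```python
import time
from collections import defaultdict

# Toy observability: a decorator records a call-count metric, writes a
# log line per call, and stamps each line with a trace id that ties
# related events together.

metrics = defaultdict(int)
logs = []

def observed(func):
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = func(*args, **kwargs)
        elapsed_ms = (time.perf_counter() - start) * 1000
        metrics[f"{func.__name__}.calls"] += 1  # metric: a counter
        logs.append(f"trace=abc123 {func.__name__} took {elapsed_ms:.2f}ms")
        return result
    return wrapper

@observed
def handle_request(user):
    return f"hello {user}"

handle_request("liviu")
handle_request("liviu")
print(metrics["handle_request.calls"])  # 2
```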

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ChO9dB6r--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5h3wmtzwwjnegeub3bc0.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ChO9dB6r--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5h3wmtzwwjnegeub3bc0.jpg" alt="Image description" width="880" height="412"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--pFvhpKkX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vyt4gvsdqfslplciutpb.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--pFvhpKkX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vyt4gvsdqfslplciutpb.jpg" alt="Image description" width="880" height="599"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Khsa4zrk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0sdwc8yj9wdyeo2uelcb.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Khsa4zrk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0sdwc8yj9wdyeo2uelcb.jpg" alt="Image description" width="880" height="640"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--z6g-fIAf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bmtzxd8srx9bbfz85881.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--z6g-fIAf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bmtzxd8srx9bbfz85881.jpg" alt="Image description" width="880" height="1374"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;🚀🌩️🔥 &lt;strong&gt;Serverless&lt;/strong&gt; is a cloud computing model that allows developers to run code without provisioning or managing servers. It is ideal for DevOps and SRE engineers who want to focus on business logic, scalability, and cost efficiency.&lt;/p&gt;
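&lt;p&gt;A minimal sketch of the serverless model in Python; the handler signature mirrors the AWS Lambda style, and the event shape is a made-up example:&lt;/p&gt;

```python
# Serverless in a nutshell: you write only the function body; the cloud
# platform provisions the servers, invokes the handler once per event,
# and scales it automatically.

def handler(event, context=None):
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"hello {name}"}

# Locally we can simulate what the platform does on an incoming event:
print(handler({"name": "liviu"}))  # {'statusCode': 200, 'body': 'hello liviu'}
```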

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--BloUKgBH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/taqi9ivzxbjd7lhats2g.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--BloUKgBH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/taqi9ivzxbjd7lhats2g.jpg" alt="Image description" width="880" height="1291"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ClG3Nz0q--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ixuzctcil4x3ijgbuixm.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ClG3Nz0q--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ixuzctcil4x3ijgbuixm.jpg" alt="Image description" width="880" height="346"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s----gl1TDH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dy6xgklg7csqareua8ed.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s----gl1TDH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dy6xgklg7csqareua8ed.jpg" alt="Image description" width="880" height="1350"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--y75J4cfu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yxpci84nhh212mo21kal.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--y75J4cfu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yxpci84nhh212mo21kal.jpg" alt="Image description" width="880" height="623"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;That's all for this part, Cloud Native.&lt;br&gt;
The next part, NON DevOps/SRE, is a small one, so it follows below.&lt;/p&gt;

&lt;h2&gt;
  
  
  NON DevOps/SRE
&lt;/h2&gt;

&lt;p&gt;There is no such thing as a NON-DevOps/SRE part in a DevOps/SRE guide; what I mean is additional knowledge that you may or may not need, depending on your situation.&lt;br&gt;
I'm a telecommunications engineer, so I know this area pretty well: you definitely need to learn some &lt;strong&gt;networking&lt;/strong&gt;.&lt;br&gt;
Extra things that I (and maybe you) know or want to understand better are &lt;strong&gt;cryptography&lt;/strong&gt;, project management (mostly &lt;strong&gt;Agile&lt;/strong&gt;), licensing, and important organizations (like the CNCF, the FSF, etc.). As I'm a huge Bitcoin-only fan, I know a few things about blockchains too.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>sre</category>
      <category>ai</category>
    </item>
    <item>
      <title>DevOps &amp; SRE Roadmap explained by AI - part 2</title>
      <dc:creator>Ștefănescu Liviu</dc:creator>
      <pubDate>Wed, 05 Apr 2023 16:33:28 +0000</pubDate>
      <link>https://dev.to/liviux/devops-sre-roadmap-explained-by-ai-part-2-12gd</link>
      <guid>https://dev.to/liviux/devops-sre-roadmap-explained-by-ai-part-2-12gd</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--wxZywfie--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tztg0hlese4f8uyustgo.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--wxZywfie--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tztg0hlese4f8uyustgo.jpg" alt="Image description" width="880" height="501"&gt;&lt;/a&gt;&lt;br&gt;
This part is for what you need to know about &lt;strong&gt;Operating Systems&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--r3q6xDQb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/os14fz8v8itq13h0wrce.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--r3q6xDQb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/os14fz8v8itq13h0wrce.jpg" alt="Image description" width="626" height="650"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s---9MF4GQc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mbmksmsr90zrn7nr5g09.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s---9MF4GQc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mbmksmsr90zrn7nr5g09.jpg" alt="Image description" width="462" height="781"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--viNb_m5D--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/r1honmn44lh8q9geiayt.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--viNb_m5D--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/r1honmn44lh8q9geiayt.jpg" alt="Image description" width="880" height="608"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--q_7o3Tas--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bed5jehys8f6vfet8ozt.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--q_7o3Tas--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bed5jehys8f6vfet8ozt.jpg" alt="Image description" width="880" height="242"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You will need to understand some &lt;strong&gt;OS fundamentals&lt;/strong&gt;. Unless you're in the very rare case of a company that is Windows-exclusive, you will live in a Linux terminal 99% of your time. So learn only Linux, and learn it well.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Linux&lt;/strong&gt; is a powerful open-source operating system that’s essential for DevOps and SRE roles 🐧🔧 It provides a stable, secure, and customizable platform for software development, deployment, and maintenance 💻 Linux is highly scalable and can be used across a wide range of systems 🚀&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--MRE57xpR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tq5zhwjob5s1omzgfo32.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--MRE57xpR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tq5zhwjob5s1omzgfo32.jpg" alt="Image description" width="880" height="523"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--a69EhKJ0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6ncw7vm1t1g4mc71dv81.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--a69EhKJ0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6ncw7vm1t1g4mc71dv81.jpg" alt="Image description" width="880" height="273"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--_GlK1vEj--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3ky35xj9gqa3wiqlkssr.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--_GlK1vEj--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3ky35xj9gqa3wiqlkssr.jpg" alt="Image description" width="614" height="599"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Sl91V6fA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4llloolrcmnb87qtkngs.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Sl91V6fA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4llloolrcmnb87qtkngs.jpg" alt="Image description" width="444" height="630"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Linux commands&lt;/strong&gt; are essential for DevOps and SRE roles 🔧💻 They allow you to manage hardware and software resources, automate tasks, and deploy software with confidence 🚀 Linux commands are highly customizable and can be used across a wide range of systems 🌎&lt;/p&gt;
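&lt;p&gt;Linux commands can also be driven from Python; the sketch below pipes &lt;code&gt;echo&lt;/code&gt; output through &lt;code&gt;grep&lt;/code&gt;, the same filter you would build in a shell (it assumes a Linux box with both commands available):&lt;/p&gt;

```python
import subprocess

# Run `echo` to produce two lines of output, then feed that output to
# `grep` to keep only the lines containing "error".

echo = subprocess.run(["echo", "error: disk full\nok: all good"],
                      capture_output=True, text=True)
grep = subprocess.run(["grep", "error"], input=echo.stdout,
                      capture_output=True, text=True)
print(grep.stdout.strip())  # error: disk full
```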

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Ncdl5Mlx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yrz46g2clalrnrq1twyv.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Ncdl5Mlx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yrz46g2clalrnrq1twyv.jpg" alt="Image description" width="880" height="356"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--gUwBg599--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kexel5vx4zn51trfo1j8.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--gUwBg599--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kexel5vx4zn51trfo1j8.jpg" alt="Image description" width="880" height="865"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--5v_9TUFP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8vx20t3elafjike2ays0.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--5v_9TUFP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8vx20t3elafjike2ays0.jpg" alt="Image description" width="456" height="646"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--jozSpmER--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wfqzoxf6jtkjhpsqr908.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--jozSpmER--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wfqzoxf6jtkjhpsqr908.jpg" alt="Image description" width="622" height="860"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Bash scripting&lt;/strong&gt; is essential for DevOps and SRE roles 📜💻 It allows you to automate tasks, manage systems, and deploy software with ease 🚀 Bash scripts are highly customizable and can be used across a wide range of systems 🌎 &lt;br&gt;
(lol AI really likes to write &lt;em&gt;can be used across a wide range of systems 🌎&lt;/em&gt;)&lt;/p&gt;
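&lt;p&gt;The kind of chore a bash script usually automates can be sketched in Python too (keeping one language for the examples here); the file names and the temporary directory below are just for the demo, where a real script would target something like /var/log:&lt;/p&gt;

```python
import os
import tempfile

# Typical automation chore: find the *.log files in a directory and
# rotate them to *.log.bak, leaving everything else untouched.

workdir = tempfile.mkdtemp()
for name in ["app.log", "db.log", "notes.txt"]:
    open(os.path.join(workdir, name), "w").close()

rotated = []
for name in sorted(os.listdir(workdir)):
    if name.endswith(".log"):
        os.rename(os.path.join(workdir, name),
                  os.path.join(workdir, name + ".bak"))
        rotated.append(name)

print(rotated)  # ['app.log', 'db.log']
```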

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--XmKlKeHn--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/od0gt97gti6wafhhy9l1.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--XmKlKeHn--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/od0gt97gti6wafhhy9l1.jpg" alt="Image description" width="880" height="694"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--VRhX-7z1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/r3kiq5am1r7h79895zl4.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--VRhX-7z1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/r3kiq5am1r7h79895zl4.jpg" alt="Image description" width="880" height="428"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--EWJNxrOB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ondwqrhlbkeh1rce13za.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--EWJNxrOB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ondwqrhlbkeh1rce13za.jpg" alt="Image description" width="629" height="883"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--tokohpoX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/y4by9hlq1r8cx0hfqfpm.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--tokohpoX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/y4by9hlq1r8cx0hfqfpm.jpg" alt="Image description" width="452" height="677"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Essentially, you need to know Linux very well!&lt;/p&gt;

&lt;p&gt;This is all for Operating Systems.&lt;br&gt;
Next part is the biggest one - Cloud Native.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>k8</category>
      <category>ai</category>
    </item>
    <item>
      <title>DevOps &amp; SRE Roadmap explained by AI - part 1</title>
      <dc:creator>Ștefănescu Liviu</dc:creator>
      <pubDate>Wed, 05 Apr 2023 16:04:54 +0000</pubDate>
      <link>https://dev.to/liviux/devops-sre-roadmap-explained-by-ai-11p6</link>
      <guid>https://dev.to/liviux/devops-sre-roadmap-explained-by-ai-11p6</guid>
      <description>&lt;h2&gt;
  
  
  &lt;em&gt;Introduction&lt;/em&gt;
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;(you can view the screenshots in a better resolution on the twitter thread &lt;a href="https://twitter.com/liviusa/status/1643662028266000384"&gt;here&lt;/a&gt;)&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Hey there, budding tech enthusiasts! Welcome to the exciting world of DevOps! In this article, we'll walk you through a roadmap to kickstart your DevOps journey, and with a little help from our AI friend, we'll demystify key concepts and techniques that you'll need along the way. So, buckle up, and let's dive into the realm where development and operations join hands to deliver the best software experience possible!&lt;/p&gt;

&lt;p&gt;No one knows everything on a full roadmap, including the one I made. I created it a couple of years ago when starting my new #DevOps role, and it's still a work in progress. You only need to understand most of the notions and to really know a few of them.&lt;br&gt;
My roadmap has 4 parts, and every part and sub-part has something below it. The full circles are a must; the dotted ones are optional.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--E8cz1lq9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/i3tj4fq639rcs8i03mkq.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--E8cz1lq9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/i3tj4fq639rcs8i03mkq.jpg" alt="roadmap" width="880" height="501"&gt;&lt;/a&gt;&lt;br&gt;
Higher resolution &lt;a&gt;here&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I'll use:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;ChatGPT from OpenAI&lt;/li&gt;
&lt;li&gt;Claude from Anthropic&lt;/li&gt;
&lt;li&gt;Bing Chat from Microsoft&lt;/li&gt;
&lt;li&gt;Bard from Google&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;to explain some DevOps and SRE concepts, both in an easy way and in a more complex one, plus which tools to know for each. So let's start with the first part.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;em&gt;DEVELOPMENT&lt;/em&gt;
&lt;/h2&gt;

&lt;p&gt;While it's not a bad thing to know a programming language, it's not a must. You will need to know how to write some scripts, but now with #GPT4 and #Copilot you can create simple scripts in seconds.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--lzu6ILtT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4vdgsrfqdz1bwn8ml6rb.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--lzu6ILtT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4vdgsrfqdz1bwn8ml6rb.jpg" alt="Image description" width="880" height="357"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--8mFIGvxM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/22xx9ijw4vhyt33e5tlz.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--8mFIGvxM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/22xx9ijw4vhyt33e5tlz.jpg" alt="Image description" width="880" height="510"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--gihHRMxf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/oa37l6z29wj54qfctyth.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--gihHRMxf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/oa37l6z29wj54qfctyth.jpg" alt="Image description" width="630" height="682"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--G0we4cxi--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yg9sg5j31t2c7scamh73.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--G0we4cxi--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yg9sg5j31t2c7scamh73.jpg" alt="Image description" width="451" height="754"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;VCS&lt;/strong&gt;&lt;br&gt;
Version control is the practice of tracking and managing changes to software code; it helps high-performing development and DevOps teams prosper 🚀👨‍💻👩‍💻. It allows developers to move faster and preserve efficiency as the team scales 📈👥&lt;/p&gt;
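&lt;p&gt;Git's core idea can be sketched in a few lines of Python: content-addressed snapshots plus a branch pointer. This is a toy model for intuition, not how you would use git itself:&lt;/p&gt;

```python
import hashlib

# Toy model of a version control system: every version of a file is
# stored as an object addressed by the hash of its content, and a
# branch is just an ordered list of commits. Real git adds trees,
# parent links, merges, and much more.

objects = {}
branch = []  # commit hashes, newest last

def commit(content):
    digest = hashlib.sha1(content.encode()).hexdigest()
    objects[digest] = content  # content-addressed storage
    branch.append(digest)      # branch now points at this commit
    return digest

first = commit("print('hello')")
second = commit("print('hello, world')")

# History is preserved: we can read back any earlier version.
print(objects[first])  # print('hello')
print(len(branch))     # 2
```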

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--xFsa5Zo4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/u2wyctko8wnql01c6xmu.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--xFsa5Zo4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/u2wyctko8wnql01c6xmu.jpg" alt="Image description" width="457" height="827"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--3td_msaQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ae1cv34lhnd0pz0rzt1k.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--3td_msaQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ae1cv34lhnd0pz0rzt1k.jpg" alt="Image description" width="626" height="634"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--IGMZVv5g--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3bbh05tl7wjijdohv1df.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--IGMZVv5g--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3bbh05tl7wjijdohv1df.jpg" alt="Image description" width="880" height="374"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--zt6SZQFa--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/n3jn0huv59k5t0jdemso.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--zt6SZQFa--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/n3jn0huv59k5t0jdemso.jpg" alt="Image description" width="880" height="487"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Deployments&lt;/strong&gt; put applications into production environments in a consistent and reliable way 🚀👨‍💻👩‍💻. Good deployment practices enable faster delivery of new products and easier maintenance of existing systems 📈👥 &lt;/p&gt;
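&lt;p&gt;One consistent, reliable deployment strategy, blue-green, can be sketched in Python; the environment names and version strings below are hypothetical:&lt;/p&gt;

```python
# Blue-green deployment sketch: two identical environments exist,
# traffic points at one ("blue"), the new release goes to the idle one
# ("green"), and the switch is a single, instantly reversible step.

environments = {"blue": "v1.0", "green": None}
live = "blue"

def deploy(new_version):
    global live
    idle = "green" if live == "blue" else "blue"
    environments[idle] = new_version  # install on the idle environment
    live = idle                       # flip traffic over in one step
    return live

deploy("v1.1")
print(live, environments[live])  # green v1.1
```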

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--QKleb123--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/x7nuw4k248w17ohr0nyp.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--QKleb123--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/x7nuw4k248w17ohr0nyp.jpg" alt="Image description" width="630" height="619"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--iM2lCxZt--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ze2tcm3t9b1j12z7c614.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--iM2lCxZt--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ze2tcm3t9b1j12z7c614.jpg" alt="Image description" width="880" height="549"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--mHE3pWAK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bd7o8qldwyhydxwuqonm.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--mHE3pWAK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bd7o8qldwyhydxwuqonm.jpg" alt="Image description" width="880" height="295"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Eo-GwQOE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/v33b481ft8e9xeyo21ov.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Eo-GwQOE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/v33b481ft8e9xeyo21ov.jpg" alt="Image description" width="469" height="817"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Architectural patterns&lt;/strong&gt; are a set of best practices that help you design and build reliable, scalable, and secure applications in the cloud 🌥️🚀 They are essential in a DevOps or SRE role as they enable you to automate multistage DevOps pipelines and achieve continuous delivery 🔁 &lt;/p&gt;
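&lt;p&gt;One such pattern, retry with backoff, sketched in Python; &lt;code&gt;flaky_service&lt;/code&gt; is a made-up stand-in for any dependency with transient failures:&lt;/p&gt;

```python
# Retry-with-backoff sketch: transient failures are retried a bounded
# number of times instead of failing the whole request.

attempts = {"count": 0}

def flaky_service():
    attempts["count"] += 1
    if attempts["count"] >= 3:  # succeeds on the third try
        return "ok"
    raise ConnectionError("transient glitch")

def call_with_retry(func, max_tries=5):
    for attempt in range(1, max_tries + 1):
        try:
            return func()
        except ConnectionError:
            if attempt == max_tries:
                raise  # out of retries: surface the error
            # a real system would sleep about 2 ** attempt seconds here

print(call_with_retry(flaky_service))  # ok
```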

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--z6hjsNfO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kn9b4cufjok54z0oxyiu.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--z6hjsNfO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kn9b4cufjok54z0oxyiu.jpg" alt="Image description" width="880" height="330"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Go3nh0Z5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/sv77ib5crta4b3urn6fh.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Go3nh0Z5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/sv77ib5crta4b3urn6fh.jpg" alt="Image description" width="880" height="572"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--IMxZ2ejh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/avchq7p5mejss2ug700l.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--IMxZ2ejh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/avchq7p5mejss2ug700l.jpg" alt="Image description" width="629" height="679"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--nFPl8ei4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ji3d6m2y39pvgf8e8ji0.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--nFPl8ei4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ji3d6m2y39pvgf8e8ji0.jpg" alt="Image description" width="454" height="791"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Python scripting&lt;/strong&gt; is a powerful tool for DevOps teams 🔧🐍 It’s used for automating repetitive tasks, infrastructure provisioning, and API-driven deployments 🚀 Python’s flexibility and accessibility make it a great fit for these jobs, and it also lets teams build web applications and data visualizations 🌐 &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--tCZ_Zugz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/52mskahk8os1mek2h7u3.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--tCZ_Zugz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/52mskahk8os1mek2h7u3.jpg" alt="Image description" width="880" height="623"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--PsWxm3Gw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/u736scahvku4o4oiqc8f.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--PsWxm3Gw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/u736scahvku4o4oiqc8f.jpg" alt="Image description" width="880" height="617"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--5askiDl1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zoa8rogl05ci7jr2py1r.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--5askiDl1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zoa8rogl05ci7jr2py1r.jpg" alt="Image description" width="628" height="706"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--yLu7Nvmq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nvhsw15ztwbdeme3ke27.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--yLu7Nvmq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nvhsw15ztwbdeme3ke27.jpg" alt="Image description" width="476" height="714"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;That's all for the first part - &lt;strong&gt;Development&lt;/strong&gt;.&lt;br&gt;
Next part is &lt;strong&gt;Operating systems&lt;/strong&gt;.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>sre</category>
      <category>ai</category>
    </item>
    <item>
      <title>Kubernetes: The Revolution in Managing Digital Applications Made Simple</title>
      <dc:creator>Ștefănescu Liviu</dc:creator>
      <pubDate>Wed, 05 Apr 2023 07:14:53 +0000</pubDate>
      <link>https://dev.to/liviux/kubernetes-the-revolution-in-managing-digital-applications-made-simple-f49</link>
      <guid>https://dev.to/liviux/kubernetes-the-revolution-in-managing-digital-applications-made-simple-f49</guid>
      <description>&lt;p&gt;&lt;em&gt;A beginner's guide to understanding Kubernetes and its impact on the digital world&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Introduction&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In the fast-paced world of technology, it's essential to understand the driving forces behind some of the most cutting-edge innovations. One such force, Kubernetes, has revolutionized the way digital applications are managed and deployed. But what exactly is Kubernetes, and how does it benefit both tech giants and small businesses alike? In this article, we'll break down the main concepts of Kubernetes in a way that's easy for a general audience to understand.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What is Kubernetes?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Kubernetes (often abbreviated as K8s) is an open-source platform that automates the deployment, scaling, and management of containerized applications. It was originally developed by Google and later donated to the Cloud Native Computing Foundation (CNCF). Containers, which are lightweight, self-contained software packages, enable developers to build, test, and deploy applications more efficiently and reliably.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s---yFxjUKW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/00sym5r2pv9pwkbwrk91.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s---yFxjUKW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/00sym5r2pv9pwkbwrk91.jpg" alt="kubernetes" width="880" height="880"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key Concepts of Kubernetes&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Containers: As mentioned earlier, containers are the fundamental building blocks of Kubernetes. They package an application and its dependencies into a single unit, ensuring that the application runs consistently across different computing environments.&lt;/p&gt;

&lt;p&gt;Nodes: In Kubernetes, a node is a physical or virtual machine that hosts one or more containers. Nodes can be easily added or removed, depending on the required computing resources, making it possible to scale applications quickly and efficiently.&lt;/p&gt;

&lt;p&gt;Clusters: A cluster is a group of nodes working together to run containerized applications. Kubernetes uses clusters to distribute workloads evenly, ensuring that applications are highly available and can recover quickly from failures.&lt;/p&gt;

&lt;p&gt;Pods: A pod is the smallest and most basic unit in the Kubernetes architecture. It represents a single instance of a running application and can contain one or more containers. Pods are designed to be ephemeral, which means they can be easily replaced if they fail or need to be updated.&lt;/p&gt;

&lt;p&gt;Services: A service is a stable network endpoint that provides access to one or more pods running an application. It allows users to interact with the application without needing to know the specific details of the underlying pods or nodes.&lt;/p&gt;

&lt;p&gt;Controllers: Controllers are responsible for maintaining the desired state of the Kubernetes system. They continuously monitor the system and make necessary adjustments to ensure that the actual state matches the desired state. Examples of controllers include the ReplicaSet controller, which ensures that a specified number of replicas of an application are running at all times, and the Deployment controller, which manages updates and rollbacks of applications.&lt;/p&gt;
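&lt;p&gt;To make these concepts concrete, here is a minimal, hypothetical manifest sketch that ties several of them together - a Deployment controller that keeps three replica pods running, each holding one container (all names here are illustrative):&lt;/p&gt;

```yaml
# Sketch of a Deployment: the Deployment controller keeps three
# replicas of a one-container pod running at all times.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                 # hypothetical name, for illustration only
spec:
  replicas: 3               # desired state: three pod replicas
  selector:
    matchLabels:
      app: web
  template:                 # the pod template
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: nginx       # one container per pod in this example
          image: nginx:1.25
          ports:
            - containerPort: 80
```

&lt;p&gt;Once applied, the controller continuously reconciles the cluster toward this desired state - if a pod dies, a replacement is created automatically.&lt;/p&gt;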

&lt;p&gt;&lt;strong&gt;The Benefits of Kubernetes&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Kubernetes offers numerous advantages to businesses and developers, including:&lt;/p&gt;

&lt;p&gt;Scalability: Kubernetes allows applications to scale up or down quickly and easily, depending on demand. This enables businesses to save resources and respond swiftly to changes in the market.&lt;/p&gt;

&lt;p&gt;Portability: Because containers can run consistently across different environments, Kubernetes applications can be easily moved between on-premises, public cloud, or hybrid environments without requiring significant changes.&lt;/p&gt;

&lt;p&gt;High availability: Kubernetes automatically distributes workloads and ensures that applications remain available even if individual components fail, improving overall reliability and uptime.&lt;/p&gt;

&lt;p&gt;Streamlined deployment: Kubernetes simplifies the deployment process, allowing developers to focus on building and improving applications rather than managing complex infrastructure.&lt;/p&gt;

&lt;p&gt;Cost efficiency: By optimizing resource usage and reducing manual intervention, Kubernetes can help organizations save both time and money.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Kubernetes has quickly become the industry standard for container orchestration and application management. Its ability to simplify deployment, ensure high availability, and enable seamless scaling has made it an invaluable tool for businesses and developers alike. By understanding the main concepts of Kubernetes, anyone can appreciate the remarkable impact this technology has on the digital world.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>k8s</category>
    </item>
    <item>
      <title>K8s cluster with OCI free-tier and Raspberry Pi4 (part 4)</title>
      <dc:creator>Ștefănescu Liviu</dc:creator>
      <pubDate>Wed, 15 Feb 2023 19:31:21 +0000</pubDate>
      <link>https://dev.to/liviux/k8s-cluster-with-oci-free-tier-and-raspberry-pi4-part-4-1jg7</link>
      <guid>https://dev.to/liviux/k8s-cluster-with-oci-free-tier-and-raspberry-pi4-part-4-1jg7</guid>
      <description>&lt;p&gt;This long read is a multiple part tutorial for building a Kubernetes cluster (using k3s) with 4 x OCI free-tier ARM instances and 4 x Raspberry Pi 4. Plus some applications needed for installation (Terraform and Ansible) and a lot of things installed on the cluster.&lt;br&gt;
&lt;strong&gt;Part 4 is adding applications to the cluster.&lt;br&gt;
GitHub repository is &lt;a href="https://github.com/liviux/k8s-cluster-oci-rpi4" rel="noopener noreferrer"&gt;here&lt;/a&gt;&lt;/strong&gt;&lt;br&gt;
I'll just install the apps and access them, but you'll need to configure each of them yourself to use them fully.&lt;/p&gt;
&lt;h2&gt;
  
  
  Lens
&lt;/h2&gt;

&lt;p&gt;The first thing to install is a beautiful dashboard - &lt;a href="https://k8slens.dev/" rel="noopener noreferrer"&gt;Lens&lt;/a&gt;. Install the desktop app on your PC, then go to &lt;em&gt;File &amp;gt; Add Cluster&lt;/em&gt;. Paste in everything you get from running &lt;code&gt;kubectl config view --minify --raw&lt;/code&gt; on the server, but change &lt;em&gt;127.0.0.1:6443&lt;/em&gt; in that output to your server's IP - in my case &lt;em&gt;10.20.30.1:6443&lt;/em&gt;.&lt;/p&gt;
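&lt;p&gt;Hand-editing the address is error-prone, so here is a minimal sketch of doing the swap with &lt;code&gt;sed&lt;/code&gt; instead; the one-line kubeconfig string and the IP &lt;em&gt;10.20.30.1&lt;/em&gt; are stand-ins for your real values:&lt;/p&gt;

```shell
# Sketch: replace the loopback API address from 'kubectl config view
# --minify --raw' with the server's IP (10.20.30.1 is a stand-in).
kubeconfig='server: https://127.0.0.1:6443'
echo "$kubeconfig" | sed 's/127\.0\.0\.1/10.20.30.1/'
```

&lt;p&gt;On the real output you would run the same &lt;code&gt;sed&lt;/code&gt; over the saved kubeconfig before pasting it into Lens.&lt;/p&gt;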
&lt;h2&gt;
  
  
  MetalLB
&lt;/h2&gt;

&lt;p&gt;MetalLB will work as our load balancer; it will give an external IP to every service of type &lt;em&gt;LoadBalancer&lt;/em&gt;. Install it with &lt;code&gt;kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.7/config/manifests/metallb-native.yaml&lt;/code&gt;. Create a file &lt;em&gt;config-metallb.yaml&lt;/em&gt; with this content:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;---
apiVersion: metallb.io/v1beta1

kind: IPAddressPool
metadata:
  name: default
  namespace: metallb-system
spec:
  addresses:
  - 192.168.0.150-192.168.0.250

---
apiVersion: metallb.io/v1beta1

kind: L2Advertisement
metadata:
  name: default
  namespace: metallb-system
spec:
  ipAddressPools:
  - default
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Edit the IP range with values from your home network, then apply it with &lt;code&gt;kubectl apply -f config-metallb.yaml&lt;/code&gt;. From now on, whenever the rest of this guide accesses a service through a node IP (10.20.30.1-10.20.30.8:port), you can instead use the external IP assigned by MetalLB.&lt;/p&gt;

&lt;h2&gt;
  
  
  Helm &amp;amp; Arkade
&lt;/h2&gt;

&lt;p&gt;On the server it seems git is not installed, so run &lt;code&gt;sudo apt install git&lt;/code&gt; first. The KUBECONFIG environment variable wasn't configured until now either, so add this line to &lt;em&gt;~/.bashrc&lt;/em&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
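&lt;p&gt;A small sketch of making that &lt;em&gt;~/.bashrc&lt;/em&gt; addition idempotent - the line is appended only when missing (a temp file stands in for the real &lt;em&gt;~/.bashrc&lt;/em&gt; so the sketch has no side effects):&lt;/p&gt;

```shell
# Append the KUBECONFIG export only if it is not already present;
# a temp file stands in for ~/.bashrc, so this sketch is side-effect free.
rc=$(mktemp)
line='export KUBECONFIG=/etc/rancher/k3s/k3s.yaml'
grep -qxF "$line" "$rc" || echo "$line" >> "$rc"
grep -qxF "$line" "$rc" || echo "$line" >> "$rc"   # re-running is a no-op
grep -c 'KUBECONFIG' "$rc"                         # the line appears once
```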



&lt;p&gt;Helm is the package manager for Kubernetes. Installing Helm is very easy - just run the following commands on the server:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Arkade is an open-source marketplace for Kubernetes apps. Installation is just &lt;code&gt;curl -sLS https://get.arkade.dev | sudo sh&lt;/code&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Bash completion
&lt;/h2&gt;

&lt;p&gt;To run commands faster and more easily you can use auto-completion for kubectl. Run &lt;code&gt;apt install bash-completion&lt;/code&gt; and then &lt;code&gt;echo 'source &amp;lt;(kubectl completion bash)' &amp;gt;&amp;gt;~/.bashrc&lt;/code&gt;. Now after every kubectl command you can hit TAB and it will autocomplete for you. Try it with a running pod: just write &lt;code&gt;kubectl get pod&lt;/code&gt; and hit TAB.&lt;/p&gt;

&lt;h2&gt;
  
  
  Traefik dashboard
&lt;/h2&gt;

&lt;p&gt;First write a yaml file &lt;em&gt;traefik-crd.yaml&lt;/em&gt; and apply it. It should have this content:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: traefik
  namespace: kube-system
spec:
  valuesContent: |-
    additionalArguments:
      - "--api"
      - "--api.dashboard=true"
      - "--api.insecure=true"
      - "--log.level=DEBUG"
    ports:
      traefik:
        expose: true
    providers:
      kubernetesCRD:
        allowCrossNamespace: true
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This should be available at &lt;em&gt;&lt;a href="http://10.20.30.4:9000/dashboard/#/" rel="noopener noreferrer"&gt;http://10.20.30.4:9000/dashboard/#/&lt;/a&gt;&lt;/em&gt; in your browser (or any of the 8 node IPs).&lt;/p&gt;

&lt;h2&gt;
  
  
  Longhorn
&lt;/h2&gt;

&lt;p&gt;Longhorn is cloud-native distributed block storage for Kubernetes. First create a new &lt;em&gt;install-longhorn.yml&lt;/em&gt; file for an Ansible playbook and paste this into it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;---
- hosts: all
  tasks:
  - name: Install some packages for Longhorn
    apt:
      name:
        - nfs-common
        - open-iscsi
        - util-linux
...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Run it with &lt;code&gt;ansible-playbook install-longhorn.yml -K -b&lt;/code&gt; to install some extra components on the nodes. Now run &lt;code&gt;ansible -a "lsblk -f" all&lt;/code&gt; to find the names of the drives.&lt;br&gt;
Move to the server. Run &lt;code&gt;helm repo add longhorn https://charts.longhorn.io&lt;/code&gt;, then &lt;code&gt;helm repo update&lt;/code&gt;,&lt;br&gt;
and then &lt;code&gt;helm install longhorn longhorn/longhorn --namespace longhorn-system --create-namespace&lt;/code&gt;. It will take a while, ~7 minutes. Now create a &lt;em&gt;longhorn-service.yaml&lt;/em&gt; file and paste this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Service
metadata:
  name: longhorn-ingress-lb
  namespace: longhorn-system
spec:
  selector:
    app: longhorn-ui
  type: LoadBalancer
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: http
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Apply it with &lt;code&gt;kubectl apply -f longhorn-service.yaml&lt;/code&gt;. Now you can access the Longhorn dashboard in your browser at &lt;em&gt;10.20.30.1:port&lt;/em&gt; (or any of your node IPs). You get the port by running &lt;code&gt;kubectl describe svc longhorn-ingress-lb -n longhorn-system | grep NodePort&lt;/code&gt;.&lt;br&gt;
Now to make Longhorn the default StorageClass, run &lt;code&gt;kubectl patch storageclass local-path -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'&lt;/code&gt; to mark local-path as non-default; you should now have only one default in &lt;code&gt;kubectl get storageclass&lt;/code&gt;.&lt;/p&gt;
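&lt;p&gt;The &lt;code&gt;| grep NodePort&lt;/code&gt; trick used here (and again further on) just filters the describe output down to the port line; a sketch with a canned, hypothetical output standing in for a live cluster:&lt;/p&gt;

```shell
# grep keeps only the NodePort line of 'kubectl describe svc' output;
# the canned text below is a hypothetical stand-in for a live cluster.
describe_output='Type:      LoadBalancer
Port:      http  80/TCP
NodePort:  http  31234/TCP'
echo "$describe_output" | grep NodePort
```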

&lt;h2&gt;
  
  
  Portainer
&lt;/h2&gt;

&lt;p&gt;This app is a container management platform. We'll install it using Helm again. Run &lt;code&gt;helm repo add portainer https://portainer.github.io/k8s/&lt;/code&gt; and then &lt;code&gt;helm repo update&lt;/code&gt;.&lt;br&gt;
Now just run &lt;code&gt;helm install --create-namespace -n portainer portainer portainer/portainer&lt;/code&gt;. You can access the Portainer UI at 10.20.30.1:30777 (or any other IP from your Wireguard network, 10.20.30.1-10.20.30.8, on port 30777). It uses 10 GB of storage; check with &lt;code&gt;kubectl get pvc -n portainer&lt;/code&gt; (you can also see the newly created volume in your Longhorn dashboard). There you will create a user and password. This is what I have after clicking on &lt;em&gt;Get Started&lt;/em&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv4ggrycif2pqk6n4a1o5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv4ggrycif2pqk6n4a1o5.png" alt="portainer"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  ArgoCD
&lt;/h2&gt;

&lt;p&gt;Argo CD is a declarative, GitOps continuous delivery tool for Kubernetes.&lt;br&gt;
Installation is very easy: &lt;code&gt;kubectl create namespace argocd&lt;/code&gt;, then &lt;code&gt;kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml&lt;/code&gt;, and wait a few minutes (or check progress with &lt;code&gt;kubectl get all -n argocd&lt;/code&gt;). To access the UI, run &lt;code&gt;kubectl patch svc argocd-server -n argocd -p '{"spec": {"type": "LoadBalancer"}}'&lt;/code&gt;. Find your port with &lt;code&gt;kubectl describe service/argocd-server -n argocd | grep NodePort&lt;/code&gt; and access the UI at 10.20.30.1:port (or another IP from your network). The user is &lt;em&gt;admin&lt;/em&gt; and the password is stored in &lt;code&gt;kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d; echo&lt;/code&gt;.&lt;/p&gt;
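&lt;p&gt;That final &lt;code&gt;| base64 -d&lt;/code&gt; is there because Kubernetes stores secret data base64-encoded (encoded, not encrypted). A tiny sketch with a toy value in place of the real password:&lt;/p&gt;

```shell
# Secret values come out of the API base64-encoded; decoding is just
# 'base64 -d'. 'hunter2' is a toy stand-in for the real admin password.
encoded=$(printf 'hunter2' | base64)
echo "$encoded"                            # aHVudGVyMg==
printf '%s' "$encoded" | base64 -d; echo   # hunter2
```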

&lt;h2&gt;
  
  
  Prometheus &amp;amp; Grafana
&lt;/h2&gt;

&lt;p&gt;Prometheus is probably the best monitoring system out there, and Grafana will be used as its dashboard. They will be installed from the ArgoCD UI using the official Helm chart &lt;em&gt;kube-prometheus-stack&lt;/em&gt;. Open &lt;em&gt;Applications&lt;/em&gt; and click &lt;em&gt;New App&lt;/em&gt;. Edit:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;App Name : &lt;em&gt;kube-prometheus-stack&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;Project Name : &lt;em&gt;default&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;Sync Policy : &lt;em&gt;Automatic&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;check &lt;em&gt;Auto Create Namespace&lt;/em&gt; and &lt;em&gt;Server Side Apply&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;Repository URL : &lt;em&gt;&lt;a href="https://prometheus-community.github.io/helm-charts" rel="noopener noreferrer"&gt;https://prometheus-community.github.io/helm-charts&lt;/a&gt;&lt;/em&gt; (select &lt;em&gt;Helm&lt;/em&gt;)&lt;/li&gt;
&lt;li&gt;Chart : &lt;em&gt;kube-prometheus-stack&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;Cluster URL : &lt;em&gt;&lt;a href="https://kubernetes.default.svc" rel="noopener noreferrer"&gt;https://kubernetes.default.svc&lt;/a&gt;&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;Namespace : &lt;em&gt;kube-prometheus-stack&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;alertmanager.service.type : &lt;em&gt;LoadBalancer&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;prometheus.service.type : &lt;em&gt;LoadBalancer&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;prometheusOperator.service.type : &lt;em&gt;LoadBalancer&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;grafana.ingress.enabled : &lt;em&gt;true&lt;/em&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now hit &lt;em&gt;Create&lt;/em&gt;. We configured the services as &lt;em&gt;LoadBalancer&lt;/em&gt; because by default they are &lt;em&gt;ClusterIP&lt;/em&gt;, and if you wanted to access them you would have to port-forward every time. You can access the Prometheus UI at node:port (get the port with &lt;code&gt;kubectl describe svc kube-prometheus-stack-prometheus -n kube-prometheus-stack | grep NodePort&lt;/code&gt;). The same goes for the Prometheus Alertmanager (get the port with &lt;code&gt;kubectl describe svc kube-prometheus-stack-alertmanager -n kube-prometheus-stack | grep NodePort&lt;/code&gt;). For the Grafana dashboard you need to go to any &lt;em&gt;nodeIP/kube-prometheus-stack-grafana:80&lt;/em&gt;. The user is admin and you get the password with &lt;code&gt;kubectl get secret --namespace kube-prometheus-stack kube-prometheus-stack-grafana -o jsonpath='{.data.admin-password}' | base64 -d&lt;/code&gt;.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>k8s</category>
      <category>raspberrypi</category>
      <category>oci</category>
    </item>
    <item>
      <title>K8s cluster with OCI free-tier and Raspberry Pi4 (part 3)</title>
      <dc:creator>Ștefănescu Liviu</dc:creator>
      <pubDate>Wed, 15 Feb 2023 19:30:55 +0000</pubDate>
      <link>https://dev.to/liviux/k8s-cluster-with-oci-free-tier-and-raspberry-pi4-part-3-4hid</link>
      <guid>https://dev.to/liviux/k8s-cluster-with-oci-free-tier-and-raspberry-pi4-part-3-4hid</guid>
      <description>&lt;p&gt;This long read is a multiple part tutorial for building a Kubernetes cluster (using k3s) with 4 x OCI free-tier ARM instances and 4 x Raspberry Pi 4. Plus some applications needed for installation (Terraform and Ansible) and a lot of things installed on the cluster.&lt;br&gt;
&lt;strong&gt;Part 3 is linking the RPi4 to the k3s cluster on OCI.&lt;br&gt;
GitHub repository is &lt;a href="https://github.com/liviux/k8s-cluster-oci-rpi4" rel="noopener noreferrer"&gt;here&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Preparing
&lt;/h2&gt;

&lt;p&gt;At this point I added the OCI machines to the &lt;em&gt;C:\Windows\System32\drivers\etc\hosts&lt;/em&gt; file (WSL reads this file and mirrors it in its own &lt;em&gt;/etc/hosts&lt;/em&gt;). Now my hosts file looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;...
192.168.0.201   rpi4-1
192.168.0.202   rpi4-2
192.168.0.203   rpi4-3
192.168.0.204   rpi4-4
140.111.111.213 oci1
140.112.112.35  oci2
152.113.113.23  oci3
140.114.114.22  oci4
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And I added them to the Ansible file too (&lt;em&gt;/etc/ansible/hosts&lt;/em&gt;). Now this file looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[big]
rpi4-1  ansible_connection=ssh
[small]
rpi4-2  ansible_connection=ssh
rpi4-3  ansible_connection=ssh
rpi4-4  ansible_connection=ssh
[home:children]
big
small
[ocis]
oci1    ansible_connection=ssh ansible_user=ubuntu
[ociw]
oci2   ansible_connection=ssh ansible_user=ubuntu
oci3   ansible_connection=ssh ansible_user=ubuntu
oci4   ansible_connection=ssh ansible_user=ubuntu
[oci:children]
ocis
ociw
[workers:children]
big
small
ociw
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It is not the best naming convention, but it works. Ansible reserves the name &lt;em&gt;all&lt;/em&gt;, so if I want to interact with every host I can always use &lt;code&gt;ansible -m command all&lt;/code&gt;. Test it using &lt;code&gt;ansible -a "uname -a" all&lt;/code&gt;; you should receive 8 responses, one from each installed Linux. Now you can even re-run the update playbook created last part, to update the OCI instances too.&lt;/p&gt;

&lt;p&gt;K3s can work in multiple ways (&lt;a href="https://docs.k3s.io/architecture" rel="noopener noreferrer"&gt;here&lt;/a&gt;), but for this tutorial we picked the &lt;em&gt;High Availability with Embedded DB&lt;/em&gt; architecture. This one runs etcd instead of the default SQLite, so it's important to have an odd number of server nodes (from the official documentation: "&lt;em&gt;An etcd cluster needs a majority of nodes, a quorum, to agree on updates to the cluster state. For a cluster with n members, quorum is (n/2)+1.&lt;/em&gt;").&lt;br&gt;
Initially this cluster was planned with 3 server nodes, 2 from OCI and 1 from the RPi4s. But after reading issues &lt;a href="https://github.com/k3s-io/k3s/issues/2850" rel="noopener noreferrer"&gt;1&lt;/a&gt; and &lt;a href="https://github.com/k3s-io/k3s/issues/6297" rel="noopener noreferrer"&gt;2&lt;/a&gt; on GitHub, there are problems with etcd when server nodes sit on different networks. So this cluster will have &lt;strong&gt;1 server node&lt;/strong&gt; (this is what k3s calls its master nodes) from OCI and &lt;strong&gt;7 agent nodes&lt;/strong&gt; (this is what k3s calls its worker nodes): 3 from OCI and 4 from RPi4.&lt;br&gt;
First we need to open some ports so the OCI cluster can communicate with the RPi cluster. Go to &lt;em&gt;VCN &amp;gt; Security List&lt;/em&gt; and click &lt;em&gt;Add Ingress Rule&lt;/em&gt;. While I could have opened only the ports k3s needs for networking (listed &lt;a href="https://docs.k3s.io/installation/requirements#networking" rel="noopener noreferrer"&gt;here&lt;/a&gt;), I decided to open all OCI ports toward my public IP only, as there is little risk involved. So in &lt;em&gt;IP Protocol&lt;/em&gt; select &lt;em&gt;All Protocols&lt;/em&gt;. Now test that it worked: SSH to any RPi4 and try to ping an OCI machine, SSH to it, or try another port.&lt;/p&gt;
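&lt;p&gt;The quorum formula quoted above is easy to sanity-check with shell arithmetic; note that 3 and 4 nodes both tolerate only one failure, which is why odd cluster sizes are preferred:&lt;/p&gt;

```shell
# Quorum for an n-member etcd cluster is (n/2)+1 (integer division);
# fault tolerance is n minus quorum.
for n in 1 2 3 4 5; do
  q=$(( n / 2 + 1 ))
  echo "$n nodes: quorum $q, tolerates $(( n - q )) failure(s)"
done
```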
&lt;h2&gt;
  
  
  Netmaker
&lt;/h2&gt;

&lt;p&gt;Now to link all of them together.&lt;br&gt;
We will create a VPN between all of them (plus, if you want, your local machine and your VPS) using &lt;strong&gt;Wireguard&lt;/strong&gt;. While Wireguard is not the hardest app to install and configure, there's a wonderful app that does almost everything by itself - Netmaker.&lt;br&gt;
On your VPS, or your local machine (if it has a static IP), run &lt;code&gt;sudo wget -qO /root/nm-quick-interactive.sh https://raw.githubusercontent.com/gravitl/netmaker/master/scripts/nm-quick-interactive.sh &amp;amp;&amp;amp; sudo chmod +x /root/nm-quick-interactive.sh &amp;amp;&amp;amp; sudo /root/nm-quick-interactive.sh&lt;/code&gt; and follow all the steps. Select Community Edition (for a max of 50 nodes) and for the rest pick auto.&lt;br&gt;
Now you will have a dashboard at an auto-generated domain. Open the link you received at the end of the installation in a browser and create a user and password.&lt;br&gt;
It should have created a network for you. Open the &lt;em&gt;Network&lt;/em&gt; tab, then open the newly created network. If you're OK with it, that's great. I changed the CIDR to something fancier, &lt;em&gt;10.20.30.0/24&lt;/em&gt;, and activated &lt;em&gt;UDP Hole Punching&lt;/em&gt; for better connectivity over NAT. Now go to the &lt;em&gt;Access Key&lt;/em&gt; tab and select your network; there you should have all the keys needed to connect.&lt;br&gt;
Netclient, the client for every machine, needs &lt;em&gt;wireguard&lt;/em&gt; and &lt;em&gt;systemd&lt;/em&gt; installed. Create a new Ansible playbook &lt;em&gt;wireguard_install.yml&lt;/em&gt; and paste this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;---
- hosts: all
  tasks:
  - name: Install wireguard
    apt:
      name:
        - wireguard
...

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now run &lt;code&gt;ansible-playbook wireguard_install.yml -K -b&lt;/code&gt;. To check everything is ok until now run &lt;code&gt;ansible -a "wg --version" all&lt;/code&gt; and then &lt;code&gt;ansible -a "systemd --version" all&lt;/code&gt;.&lt;br&gt;
Create a new file &lt;em&gt;netclient_install.yml&lt;/em&gt; and add this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;---
- hosts: all
  tasks:
  - name: Add the Netmaker GPG key
    shell: curl -sL 'https://apt.netmaker.org/gpg.key' | sudo tee /etc/apt/trusted.gpg.d/netclient.asc

  - name: Add the Netmaker repository
    shell: curl -sL 'https://apt.netmaker.org/debian.deb.txt' | sudo tee /etc/apt/sources.list.d/netclient.list

  - name: Update the package list
    shell: apt update

  - name: Install netclient
    shell: apt install netclient
...

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now run it as usual: &lt;code&gt;ansible-playbook netclient_install.yml -K -b&lt;/code&gt;. This will install &lt;em&gt;netclient&lt;/em&gt; on all hosts. To check, run &lt;code&gt;ansible -a "netclient --version" all&lt;/code&gt;.&lt;br&gt;
The last step is easy: just run &lt;code&gt;ansible -a "netclient join -t YOURTOKEN" all -b -K&lt;/code&gt;, replacing YOURTOKEN with the token from your &lt;em&gt;Join Command&lt;/em&gt; in &lt;em&gt;Netmaker Dashboard &amp;gt; Access Key&lt;/em&gt;. Now all hosts will share a network. This is mine, 11 machines (4 RPi4, 4 OCI instances, my VPS, my WSL and my Windows machine; the last 3 are not needed).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F587lg95xqlle3e2i573n.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F587lg95xqlle3e2i573n.png" alt="netmaker network"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;SSH to the OCI server and run: first &lt;code&gt;sudo systemctl stop k3s&lt;/code&gt;, then &lt;code&gt;sudo rm -rf /var/lib/rancher/k3s/server/db/etcd&lt;/code&gt;, and then reinstall, this time with &lt;code&gt;curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--flannel-iface=nm-netmaker" INSTALL_K3S_CHANNEL=latest sh -&lt;/code&gt;.&lt;br&gt;
For the agents we'll make an Ansible playbook, &lt;em&gt;workers_link.yml&lt;/em&gt;, with the following content:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;---
- hosts: workers
  tasks:
  - name: Install k3s on workers and link to server node
    shell: curl -sfL https://get.k3s.io | K3S_URL=https://10.20.30.1:6443 K3S_TOKEN=MYTOKEN INSTALL_K3S_EXEC="--flannel-iface=nm-netmaker" INSTALL_K3S_CHANNEL=latest sh -
...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As MYTOKEN, paste the content of &lt;code&gt;sudo cat /var/lib/rancher/k3s/server/node-token&lt;/code&gt; from the server, and change the server's IP address if yours is different. Now run it with &lt;code&gt;ansible-playbook ~/ansible/link/workers_link.yml -K -b&lt;/code&gt;.&lt;br&gt;
Finally, we're done. Go back to the server node, run &lt;code&gt;sudo kubectl get nodes -o wide&lt;/code&gt; and you should have 8 results there: 1 server node and 7 agent nodes.&lt;/p&gt;

&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;p&gt;Netmaker is from &lt;a href="https://github.com/gravitl/netmaker" rel="noopener noreferrer"&gt;here&lt;/a&gt;, with documentation &lt;a href="https://netmaker.readthedocs.io/en/master/install.html" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>raspberrypi</category>
      <category>k8</category>
      <category>oci</category>
    </item>
    <item>
      <title>K8s cluster with OCI free-tier and Raspberry Pi4 (part 2)</title>
      <dc:creator>Ștefănescu Liviu</dc:creator>
      <pubDate>Wed, 15 Feb 2023 19:30:34 +0000</pubDate>
      <link>https://dev.to/liviux/k8s-cluster-with-oci-free-tier-and-raspberry-pi4-part-2-1n8d</link>
      <guid>https://dev.to/liviux/k8s-cluster-with-oci-free-tier-and-raspberry-pi4-part-2-1n8d</guid>
      <description>&lt;p&gt;This long read is a multiple part tutorial for building a Kubernetes cluster (using k3s) with 4 x OCI free-tier ARM instances and 4 x Raspberry Pi 4. Plus some applications needed for installation (Terraform and Ansible) and a lot of things installed on the cluster.&lt;br&gt;
&lt;strong&gt;Part 2 is running a Kubernetes cluster on 4 Raspberry Pi 4 using Ansible.&lt;br&gt;
GitHub repository is &lt;a href="https://github.com/liviux/k8s-cluster-oci-rpi4" rel="noopener noreferrer"&gt;here&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Requirements
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;At least 2 Raspberry Pi boards. I've got 4 of them, 3 with 4GB and 1 with 8GB. Each one needs an SD card and a power adapter, plus network cables (and an optional switch and 4 cases);&lt;/li&gt;
&lt;li&gt;The same setup as in part 1: Windows 11 with WSL2 running Ubuntu 20.04 (though this will work on any Linux &amp;amp; Windows machine) and Terraform installed (tested with v1.3.7) - how to &lt;a href="https://developer.hashicorp.com/terraform/tutorials/aws-get-started/install-cli" rel="noopener noreferrer"&gt;here&lt;/a&gt;;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0edbmj9h7vt1ls8805nd.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0edbmj9h7vt1ls8805nd.jpg" alt="my setup" width="686" height="875"&gt;&lt;/a&gt;My Pis&lt;/p&gt;
&lt;h2&gt;
  
  
  Preparing
&lt;/h2&gt;

&lt;p&gt;Installing an OS on a Pi is very easy. Just insert the SD card into your PC and use Imager from the &lt;a href="https://www.raspberrypi.com/software/" rel="noopener noreferrer"&gt;official website&lt;/a&gt;. I chose the same OS as in the OCI cluster, that is, Ubuntu Server 22.04.1 64-bit. In the advanced settings (bottom right of Imager) pick &lt;em&gt;Set Hostname&lt;/em&gt; and write your own; I have rpi4-1, rpi4-2, rpi4-3 and rpi4-4. Pick &lt;em&gt;Enable SSH&lt;/em&gt; and &lt;em&gt;Set username and password&lt;/em&gt;; this way you can connect to each Pi immediately, without a monitor and keyboard. Then hit &lt;em&gt;Write&lt;/em&gt; and repeat this step for every card.&lt;/p&gt;

&lt;p&gt;From my home router I found the devices' IPs. While there, you can configure &lt;em&gt;Address reservation&lt;/em&gt; so they keep their IPs every time. I did some port forwarding too, so I can access them from anywhere using DDNS from my ISP, as I don't have a static IPv4. All of these settings are configured from your home router, so search for your model's instructions if you're interested.&lt;/p&gt;

&lt;p&gt;I added every Pi to the local machine's &lt;em&gt;C:\Windows\System32\drivers\etc\hosts&lt;/em&gt; file to manage them more easily.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;192.168.0.201 rpi4-1
192.168.0.202 rpi4-2
192.168.0.203 rpi4-3
192.168.0.204 rpi4-4
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We can check that everything is OK by running &lt;code&gt;ping rpi4-1&lt;/code&gt; or &lt;code&gt;ssh user@rpi4-1&lt;/code&gt;.&lt;/p&gt;
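&lt;p&gt;To check all four at once, a small shell loop helps (a sketch, using the hostnames configured above):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Ping each Pi once and report reachability
for h in rpi4-1 rpi4-2 rpi4-3 rpi4-4; do
  if ping -c 1 -W 1 "$h" &gt; /dev/null 2&gt;&amp;1; then
    echo "$h reachable"
  else
    echo "$h unreachable"
  fi
done
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;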

&lt;h2&gt;
  
  
  Ansible
&lt;/h2&gt;

&lt;p&gt;For configuration management I picked Ansible, as it is agentless and not so difficult (spoiler alert: it is, though). We can control all the Raspberry Pis from the local machine. Install Ansible first: &lt;code&gt;sudo apt install ansible&lt;/code&gt;.&lt;br&gt;
Now from your PC run the following commands (assuming you already generated SSH keys in Part 1 using ssh-keygen):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ssh-copy-id -i ~/.ssh/key.pub user@rpi4-1
ssh-copy-id -i ~/.ssh/key.pub user@rpi4-2
ssh-copy-id -i ~/.ssh/key.pub user@rpi4-3
ssh-copy-id -i ~/.ssh/key.pub user@rpi4-4
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This allows Ansible to connect to every Pi without asking for the password each time.&lt;br&gt;
I had to edit the config with &lt;code&gt;sudo vi /etc/ansible/ansible.cfg&lt;/code&gt; and uncomment the line &lt;code&gt;private_key_file = ~/.ssh/key&lt;/code&gt;. And in /etc/ansible/hosts I added the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[big]
rpi4-1  ansible_connection=ssh

[small]
rpi4-2  ansible_connection=ssh
rpi4-3  ansible_connection=ssh
rpi4-4  ansible_connection=ssh

[home:children]
big
small
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This adds the big Pi to a &lt;em&gt;big&lt;/em&gt; group and the rest of the workers to a &lt;em&gt;small&lt;/em&gt; group, plus a &lt;em&gt;home&lt;/em&gt; group containing them all. My PC user and the Pis' user are the same, but if yours differ you have to add this to the same file (note: on recent Ansible versions the inventory variable for this is &lt;code&gt;ansible_user&lt;/code&gt;):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[all:vars]
remote_user = user
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now test that everything is OK with &lt;code&gt;ansible home -m ping&lt;/code&gt;. Green is good.&lt;br&gt;
I like to keep all my systems updated to the latest version, especially this one, used for testing. So we'll create a new file &lt;em&gt;update.yml&lt;/em&gt; and paste the block below into it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;---
- hosts: home
  tasks:
    - name: Update apt repo and cache
      apt: update_cache=yes force_apt_get=yes cache_valid_time=3600
    - name: Upgrade all packages
      apt: upgrade=yes force_apt_get=yes
    - name: Check if a reboot is needed
      register: reboot_required_file
      stat: path=/var/run/reboot-required get_md5=no
    - name: Reboot the box if kernel updated
      reboot:
        msg: "Reboot initiated by Ansible for kernel updates"
        connect_timeout: 5
        reboot_timeout: 90
        pre_reboot_delay: 0
        post_reboot_delay: 30
        test_command: uptime
      when: reboot_required_file.stat.exists
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is an Ansible playbook that updates Ubuntu and then reboots the Pis. Save it and run &lt;code&gt;ansible-playbook update.yml -K -b&lt;/code&gt;, which will ask for the sudo password and then run the playbook. It took ~10 minutes for me. In another shell you can SSH to any of the Pis and run &lt;code&gt;htop&lt;/code&gt; to see the activity.&lt;br&gt;
Now try &lt;code&gt;ansible home -a "rpi-eeprom-update -a" -b -K&lt;/code&gt; to see if there's any firmware update for your Raspberry Pi 4.&lt;br&gt;
The next step is to enable cgroups on every Pi. Create a new playbook &lt;em&gt;append-cmd.yml&lt;/em&gt; and add:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;---
- hosts: home
  tasks:
  - name: Append cgroup to cmdline.txt
    lineinfile:
      path: /boot/firmware/cmdline.txt
      backrefs: yes
      regexp: "^(.*)$"
      line: '\1 cgroup_enable=cpuset cgroup_enable=memory cgroup_memory=1'
...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
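&lt;p&gt;The &lt;em&gt;lineinfile&lt;/em&gt; task above is essentially this sed one-liner, shown here against a local copy of the file (like the playbook task, it is not idempotent and appends the flags again on every run):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Append the cgroup flags to the (single-line) kernel command line;
# on a Pi itself this would target /boot/firmware/cmdline.txt with sudo
sed -i 's/$/ cgroup_enable=cpuset cgroup_enable=memory cgroup_memory=1/' cmdline.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;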



&lt;p&gt;This appends the string &lt;em&gt;cgroup_enable=cpuset cgroup_enable=memory cgroup_memory=1&lt;/em&gt; to the end of &lt;em&gt;/boot/firmware/cmdline.txt&lt;/em&gt;. Run it with &lt;code&gt;ansible-playbook append-cmd.yml -K -b&lt;/code&gt;. Note that the task is not idempotent: since the regexp matches any line, running the playbook again appends the flags a second time.&lt;br&gt;
I won't add a graphical interface and won't use Wi-Fi or Bluetooth, so we can reclaim some memory from the GPU, Wi-Fi and BT to make it available to the Kubernetes cluster. We need to add a few lines to &lt;em&gt;/boot/firmware/config.txt&lt;/em&gt; using a new Ansible playbook, append-cfg.yml, with the following content:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;---
- hosts: home
  tasks:
  - name: Append new lines to config.txt
    blockinfile:
      path: /boot/firmware/config.txt
      block: |
       gpu_mem=16
       dtoverlay=disable-bt
       dtoverlay=disable-wifi
...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Run it again with &lt;code&gt;ansible-playbook append-cfg.yml -K -b&lt;/code&gt;.&lt;br&gt;
Next, two kernel modules must be loaded. Note that repeating &lt;code&gt;-a&lt;/code&gt; in one ad-hoc command keeps only the last argument, so run them separately: &lt;code&gt;ansible home -a "modprobe overlay" -K -b&lt;/code&gt; and &lt;code&gt;ansible home -a "modprobe br_netfilter" -K -b&lt;/code&gt;.&lt;br&gt;
The next playbook creates 2 files and adds some lines to them. Let's call it &lt;em&gt;iptable.yml&lt;/em&gt; and add:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;---
- hosts: home
  tasks:
  - name: Create a file and write in it 1.
    blockinfile:
      path: /etc/modules-load.d/k8s.conf
      block: |
        overlay
        br_netfilter
      create: yes
  - name: Create a file and write in it 2.
    blockinfile:
      path: /etc/sysctl.d/k8s.conf
      block: |
        net.bridge.bridge-nf-call-ip6tables = 1
        net.bridge.bridge-nf-call-iptables = 1
        net.ipv4.ip_forward = 1
      create: yes
...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Run it with &lt;code&gt;ansible-playbook iptable.yml -K -b&lt;/code&gt;. After that run &lt;code&gt;ansible home -a "sysctl --system" -K -b&lt;/code&gt;. My Pis started to get laggy, so I rebooted here with &lt;code&gt;ansible home -a "reboot" -K -b&lt;/code&gt;.&lt;br&gt;
Now the last step: installing some packages on the Pis. I'm still unsure whether this is needed, but I'm pretty sure it won't harm the cluster. Create a new file &lt;em&gt;install.yml&lt;/em&gt; and copy this into it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;---
- hosts: home
  tasks:
  - name: Install some packages
    apt:
      name:
        - curl
        - gnupg2
        - software-properties-common
        - apt-transport-https
        - ca-certificates
        - linux-modules-extra-raspi
...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now run it with &lt;code&gt;ansible-playbook install.yml -K -b&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;That's all, folks, for this part. If you are here only for the Raspberry Pi part, you can now install k3s very easily; just follow the steps from the &lt;a href="https://docs.k3s.io/quick-start" rel="noopener noreferrer"&gt;official documentation&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;p&gt;One of my old articles on installing k3s on a Raspberry Pi, on LinkedIn - &lt;a href="https://www.linkedin.com/pulse/creating-arm-kubernetes-cluster-raspberry-pi-oracle-liviu-alexandru" rel="noopener noreferrer"&gt;here&lt;/a&gt; - inspired by &lt;a href="https://braindose.blog/2021/12/31/install-kubernetes-raspberry-pi/" rel="noopener noreferrer"&gt;braindose.blog&lt;/a&gt;;&lt;br&gt;
Official Ansible documentation - &lt;a href="https://docs.ansible.com/" rel="noopener noreferrer"&gt;here&lt;/a&gt;;&lt;br&gt;
ChatGPT helped me a lot of time, use it &lt;a href="https://chat.openai.com/chat" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>education</category>
      <category>career</category>
      <category>discuss</category>
    </item>
    <item>
      <title>K8s cluster with OCI free-tier and Raspberry Pi4 (part 1)</title>
      <dc:creator>Ștefănescu Liviu</dc:creator>
      <pubDate>Wed, 15 Feb 2023 19:30:09 +0000</pubDate>
      <link>https://dev.to/liviux/k8s-cluster-with-oci-free-tier-and-raspberry-pi4-part-1-28k0</link>
      <guid>https://dev.to/liviux/k8s-cluster-with-oci-free-tier-and-raspberry-pi4-part-1-28k0</guid>
      <description>&lt;p&gt;This long read is a multiple part tutorial for building a Kubernetes cluster (using k3s) with 4 x OCI free-tier ARM instances and 4 x Raspberry Pi 4. Plus some applications needed for installation (Terraform and Ansible) and a lot of things installed on the cluster.&lt;br&gt;
&lt;strong&gt;Part 1 is running a Kubernetes cluster on OCI free-tier resources using Terraform.&lt;br&gt;
GitHub repository is &lt;a href="https://github.com/liviux/k8s-cluster-oci-rpi4" rel="noopener noreferrer"&gt;here&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Requirements
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Obviously, an OCI account - get it from &lt;a href="https://www.oracle.com/cloud/" rel="noopener noreferrer"&gt;oracle.com/cloud&lt;/a&gt;. If you already have an account, be careful not to have any resources provisioned already (even for other users or compartments), as this tutorial will use all the free-tier ones. Also be extra careful to pick a region with enough ARM instances available, as popular regions may have none left. If during the final steps Terraform is stuck, you can check in &lt;em&gt;OCI &amp;gt; Compute &amp;gt; Instance Pools&lt;/em&gt; &amp;gt; select your own &amp;gt; &lt;em&gt;Work requests&lt;/em&gt;: if there is a &lt;em&gt;Failure&lt;/em&gt; and in that log file there's an &lt;em&gt;Out of host capacity&lt;/em&gt; error, then you must wait, even days, until resources are freed. You can run a script from &lt;a href="https://github.com/hitrov/oci-arm-host-capacity" rel="noopener noreferrer"&gt;here&lt;/a&gt; which will try to create instances until something becomes available. When that happens, quickly go to your OCI, delete everything that was created, and then run the Terraform scripts;&lt;/li&gt;
&lt;li&gt;I used Windows 11 with WSL2 running Ubuntu 20.04, but this will work on any Linux machine;&lt;/li&gt;
&lt;li&gt;Terraform installed (tested with v1.3.7 - and OCI provider v4.105)- how to &lt;a href="https://developer.hashicorp.com/terraform/tutorials/aws-get-started/install-cli" rel="noopener noreferrer"&gt;here&lt;/a&gt;;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  Preparing
&lt;/h2&gt;

&lt;p&gt;&lt;em&gt;(following official guidelines from &lt;a href="https://docs.oracle.com/en-us/iaas/developer-tutorials/tutorials/tf-provider/01-summary.htm" rel="noopener noreferrer"&gt;Oracle&lt;/a&gt;)&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;For safety we should use a separate compartment and user for our OCI configuration, not the root ones. Now is a good time to create a notes file and add some values to it; you will need them later. Mostly you will add three things for each item (e.g. user + the user name you created + its OCID; the same for group, etc.).&lt;br&gt;
Go to &lt;strong&gt;Identity &amp;amp; Security &amp;gt; Compartments&lt;/strong&gt; and click on &lt;strong&gt;Create Compartment&lt;/strong&gt;. Open it and copy the &lt;strong&gt;OCID&lt;/strong&gt; to your notes file. Then in &lt;strong&gt;Identity &amp;amp; Security &amp;gt; Users&lt;/strong&gt; click on &lt;strong&gt;Create User&lt;/strong&gt;. Open it and copy the &lt;strong&gt;OCID&lt;/strong&gt; to your notes file.&lt;br&gt;
Then in &lt;strong&gt;Identity &amp;amp; Security &amp;gt; Groups&lt;/strong&gt; click on &lt;strong&gt;Create Group&lt;/strong&gt;. The same as above with the &lt;strong&gt;OCID&lt;/strong&gt;. Here click on &lt;strong&gt;Add User to Group&lt;/strong&gt; and add the newly created user.&lt;br&gt;
In &lt;strong&gt;Identity &amp;amp; Security &amp;gt; Policies&lt;/strong&gt; click on &lt;strong&gt;Create Policy&lt;/strong&gt;, &lt;strong&gt;Show manual editor&lt;/strong&gt; and add the following&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;allow group group_you_created to read all-resources in &amp;lt;compartment compartment_you_created&amp;gt;
allow group group_you_created to manage virtual-network-family  in compartment &amp;lt;compartment_you_created&amp;gt;
allow group group_you_created to manage instance-family  in compartment &amp;lt;compartment_you_created&amp;gt;
allow group group_you_created to manage compute-management-family  in compartment &amp;lt;compartment_you_created&amp;gt;
allow group group_you_created to manage volume-family  in compartment &amp;lt;compartment_you_created&amp;gt;
allow group group_you_created to manage load-balancers  in compartment &amp;lt;compartment_you_created&amp;gt;
allow group group_you_created to manage network-load-balancers  in compartment &amp;lt;compartment_you_created&amp;gt;
allow group group_you_created to manage dynamic-groups in compartment &amp;lt;compartment_you_created&amp;gt;
allow group group_you_created to manage policies in compartment &amp;lt;compartment_you_created&amp;gt;
allow group group_you_created to manage dynamic-groups in tenancy
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then you need access to OCI from your machine. So, create a new folder in the HOME directory&lt;br&gt;
&lt;code&gt;mkdir ~/.oci&lt;/code&gt;&lt;br&gt;
Generate a private key there&lt;br&gt;
&lt;code&gt;openssl genrsa -out ~/.oci/key.pem 2048&lt;/code&gt;&lt;br&gt;
Change its permissions&lt;br&gt;
&lt;code&gt;chmod 600 ~/.oci/key.pem&lt;/code&gt;&lt;br&gt;
Then generate your public key&lt;br&gt;
&lt;code&gt;openssl rsa -pubout -in ~/.oci/key.pem -out $HOME/.oci/key_public.pem&lt;/code&gt;&lt;br&gt;
And then copy that public key - everything inside the file&lt;br&gt;
&lt;code&gt;cat ~/.oci/key_public.pem&lt;/code&gt;&lt;br&gt;
This key has to be added to your new OCI user. Go to &lt;strong&gt;OCI &amp;gt; Identity &amp;amp; Security &amp;gt; Users&lt;/strong&gt;, select the new user, open &lt;strong&gt;API Keys&lt;/strong&gt;, click on &lt;strong&gt;Add API Key&lt;/strong&gt;, select &lt;strong&gt;Paste Public Key&lt;/strong&gt;, and paste the whole copied key there.&lt;br&gt;
After you've done that, copy the fingerprint to your notes too. Save the path to the private key as well.&lt;br&gt;
&lt;em&gt;*note: Use ~ and not $HOME; that's the only way it worked for me.&lt;/em&gt;&lt;/p&gt;
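&lt;p&gt;You can also compute the fingerprint locally and compare it with the one OCI displays after uploading the key; this is the standard OpenSSL recipe for an MD5, colon-separated fingerprint (assuming the key generated above):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Print the API key fingerprint in the colon-separated form OCI shows
openssl rsa -pubout -outform DER -in ~/.oci/key.pem | openssl md5 -c
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;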

&lt;p&gt;To your notes file copy the following too:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Tenancy &lt;strong&gt;OCID&lt;/strong&gt;. Click on your avatar (from top-right), and select &lt;strong&gt;Tenancy&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Region. In the top right there is the name of your region too. Now find it &lt;a href="https://docs.oracle.com/en-us/iaas/Content/General/Concepts/regions.htm" rel="noopener noreferrer"&gt;here&lt;/a&gt; and copy its identifier (e.g. eu-paris-1).&lt;/li&gt;
&lt;li&gt;The path to the private key. In our case - &lt;code&gt;~/.oci/key.pem&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now create a new folder to test that Terraform is OK and linked to OCI. In that folder create a file &lt;strong&gt;main.tf&lt;/strong&gt; and add this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform {
  required_providers {
    oci = {
      source  = "oracle/oci"
      version = "4.105.0"
    }
  }
}


# Configure the OCI provider with an API Key

provider "oci" {
  tenancy_ocid     = "ocid1.tenancy.oc1..aaaaaaYOURTENANCY3uzx4a"
  user_ocid        = "ocid1.user.oc1..aaaaaaYOURUSER4s5ga"
  private_key_path = "~/.oci/key.pem"
  fingerprint      = "2a:d8:YOURFINGERPRINT:a1:cd:06"
  region           = "eu-YOURREGION"
}

#Get a list of Availability Domains
data "oci_identity_availability_domains" "ads" {
  compartment_id = "ocid1.tenancy.oc1..aaaaaaYOURTENANCYuzx4a"
}

#Output the result
output "all-availability-domains-in-your-compartment" {
  value = data.oci_identity_availability_domains.ads.availability_domains
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now run &lt;code&gt;terraform init&lt;/code&gt; to download the OCI provider, then &lt;code&gt;terraform plan&lt;/code&gt; to see what will happen, and &lt;code&gt;terraform apply&lt;/code&gt; to get the final results. This small demo configuration file should return the names of the availability domains in that region. If you receive something like &lt;strong&gt;"name" = "pmkj:EU-YOURREGION-1-AD-1"&lt;/strong&gt; and no errors, then everything is OK so far. This file can be deleted now.&lt;/p&gt;

&lt;h2&gt;
  
  
  Provisioning
&lt;/h2&gt;

&lt;p&gt;You will need to add some new values to the notes file:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;From inside OCI, in the top right corner, click on &lt;strong&gt;Developer Tools &amp;gt; Cloud Shell&lt;/strong&gt; and there run &lt;code&gt;oci iam availability-domain list&lt;/code&gt;. Save the name, not the id (if there's more than one, pick one). This is your &lt;em&gt;availability_domain&lt;/em&gt; variable;&lt;/li&gt;
&lt;li&gt;Again from this console, type &lt;code&gt;oci compute image list --compartment-id &amp;lt;YOUR-COMPARTMENT&amp;gt; --operating-system "Canonical Ubuntu" --operating-system-version "22.04 Minimal aarch64" --shape "VM.Standard.A1.Flex"&lt;/code&gt; to find the OS image ID. The first result probably has the latest build; in my case it was &lt;em&gt;Canonical-Ubuntu-22.04-Minimal-aarch64-2022.11.05-0&lt;/em&gt;. From there save the id. This is your &lt;em&gt;os_image_id&lt;/em&gt; variable;&lt;/li&gt;
&lt;li&gt;Now just google "&lt;em&gt;my ip&lt;/em&gt;" and you will find your public IP. Save it in CIDR format, e.g. &lt;em&gt;111.222.111.99/32&lt;/em&gt;. This is your &lt;em&gt;my_public_ip_cidr&lt;/em&gt; variable. I use a cheap VPS just to have a static IP. If you don't have a static IPv4 from your ISP, I don't know a quick solution for you; maybe someone can comment one. You can set up DDNS, but that can't be used in a Security List afaik. The only workaround is, every time your IP changes, to go to &lt;strong&gt;VCN &amp;gt; Security List&lt;/strong&gt; and modify the &lt;strong&gt;Ingress rule&lt;/strong&gt; with the new IP;&lt;/li&gt;
&lt;li&gt;Your &lt;em&gt;public_key_path&lt;/em&gt; is the path to your public SSH key. If you don't have one, quickly generate it with &lt;code&gt;ssh-keygen&lt;/code&gt;. You should now have one in &lt;em&gt;~/.ssh/key.pub&lt;/em&gt; (I copied the private key to the VPS using &lt;code&gt;scp&lt;/code&gt;, so I can connect to OCI from both the local machine and the VPS);&lt;/li&gt;
&lt;li&gt;Last is your email address, which will be used to install a certificate manager. That will be your &lt;em&gt;certmanager_email_address&lt;/em&gt; variable. I didn't set one up, as this is just a personal project for testing.&lt;/li&gt;
&lt;/ul&gt;
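&lt;p&gt;With the notes file complete, &lt;em&gt;oci/terraform.tfvars&lt;/em&gt; ends up looking roughly like this. All values below are placeholders, and the exact variable names are defined by the repo's &lt;em&gt;variables.tf&lt;/em&gt;, so check them against your clone:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;availability_domain       = "pmkj:EU-PARIS-1-AD-1"
os_image_id               = "ocid1.image.oc1..aaaa..."
my_public_ip_cidr         = "111.222.111.99/32"
public_key_path           = "~/.ssh/key.pub"
certmanager_email_address = "you@example.com"
# plus the API access values collected earlier:
tenancy_ocid              = "ocid1.tenancy.oc1..aaaa..."
user_ocid                 = "ocid1.user.oc1..aaaa..."
compartment_ocid          = "ocid1.compartment.oc1..aaaa..."
fingerprint               = "2a:d8:11:22:33:44:55:66:77:88:99:aa:bb:cc:cd:06"
private_key_path          = "~/.oci/key.pem"
region                    = "eu-paris-1"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;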

&lt;p&gt;After you've cloned the repo, go to oci/terraform.tfvars and edit all the values with the ones from your notes file.&lt;br&gt;
This build uses the great Terraform configuration files from the &lt;a href="https://github.com/garutilorenzo/k3s-oci-cluster" rel="noopener noreferrer"&gt;garutilorenzo&lt;/a&gt; repo (using version 2.2; if you get errors running any of this, check what has changed in that repo since v2.2, i.e. since 01.02.23). You can read &lt;a href="https://github.com/garutilorenzo/k3s-oci-cluster#pre-flight-checklist" rel="noopener noreferrer"&gt;here&lt;/a&gt; how to customize your configuration by editing the &lt;em&gt;main.tf&lt;/em&gt; file. This is the diagram garutilorenzo made, showing how your deployment will look (this tutorial is without Longhorn and ArgoCD, with 1 server node + 3 worker nodes, and with the ingress controller set to Traefik):&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj55a7depvbo0yz0n03tp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj55a7depvbo0yz0n03tp.png" alt="diagram" width="791" height="1129"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;*note&lt;/em&gt; - I had some problems with the WSL2 clock not being synced to the Windows clock, and provisioning didn't work. If you receive clock errors too, verify your time with the &lt;code&gt;date&lt;/code&gt; command; if it's out of sync, just run &lt;code&gt;sudo hwclock -s&lt;/code&gt; or &lt;code&gt;sudo ntpdate time.windows.com&lt;/code&gt;.&lt;br&gt;
Now just run &lt;code&gt;terraform plan&lt;/code&gt; and then &lt;code&gt;terraform apply&lt;/code&gt;. If everything went OK, you should have your resources created.&lt;/p&gt;

&lt;p&gt;When the script finishes, save the outputs (you can also find the values in OCI later):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Outputs:
k3s_servers_ips = [
  "152.x.x.115",
]
k3s_workers_ips = [
  "140.x.x.158",
  "140.x.x.226",
]
public_lb_ip = tolist([
  {
    "ip_address" = "140.x.x.159"
    "ip_version" = "IPV4"
    "is_public" = true
    "reserved_ip" = tolist([])
  },
  {
    "ip_address" = "10.0.1.96"
    "ip_version" = "IPV4"
    "is_public" = false
    "reserved_ip" = tolist([])
  },
])
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now you can connect to any worker or server IP using &lt;code&gt;ssh -i ~/.ssh/key ubuntu@152.x.x.115&lt;/code&gt;. Connect to the server IP and run &lt;code&gt;sudo kubectl get nodes&lt;/code&gt; to check all nodes.&lt;/p&gt;

&lt;p&gt;That's all for now.&lt;/p&gt;

&lt;h3&gt;
  
  
  References
&lt;/h3&gt;

&lt;p&gt;Official OCI provider documentation from Terraform - &lt;a href="https://registry.terraform.io/providers/oracle/oci/latest/docs" rel="noopener noreferrer"&gt;here&lt;/a&gt;. &lt;br&gt;
Official OCI Oracle documentation with Tutorials - &lt;a href="https://docs.oracle.com/en-us/iaas/developer-tutorials/tutorials/tf-provider/01-summary.htm" rel="noopener noreferrer"&gt;here&lt;/a&gt; and Guides - &lt;a href="https://docs.oracle.com/en-us/iaas/Content/API/SDKDocs/terraform.htm" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;br&gt;
Great GitHub repo of garutilorenzo - &lt;a href="https://github.com/garutilorenzo/k3s-oci-cluster" rel="noopener noreferrer"&gt;here&lt;/a&gt;. There are a few others who can help you with k8s on OCI too &lt;a href="https://arnoldgalovics.com/free-kubernetes-oracle-cloud/" rel="noopener noreferrer"&gt;1&lt;/a&gt; with &lt;a href="https://github.com/galovics/free-kubernetes-oracle-cloud-terraform" rel="noopener noreferrer"&gt;repo&lt;/a&gt;, &lt;a href="https://github.com/r0b2g1t/k3s-cluster-on-oracle-cloud-infrastructure" rel="noopener noreferrer"&gt;2&lt;/a&gt;, &lt;a href="https://github.com/solamarpreet/kubernetes-on-oci" rel="noopener noreferrer"&gt;3&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>discuss</category>
      <category>softwaredevelopment</category>
      <category>productivity</category>
      <category>career</category>
    </item>
  </channel>
</rss>
