<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Sheina_techiue</title>
    <description>The latest articles on DEV Community by Sheina_techiue (@nancy_muriithi).</description>
    <link>https://dev.to/nancy_muriithi</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F805079%2F4766aec3-5707-47fe-b1ad-b1101be87460.png</url>
      <title>DEV Community: Sheina_techiue</title>
      <link>https://dev.to/nancy_muriithi</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/nancy_muriithi"/>
    <language>en</language>
    <item>
      <title>What is Kubernetes?</title>
      <dc:creator>Sheina_techiue</dc:creator>
      <pubDate>Thu, 01 Sep 2022 10:58:25 +0000</pubDate>
      <link>https://dev.to/nancy_muriithi/what-is-kubernetes-bk9</link>
      <guid>https://dev.to/nancy_muriithi/what-is-kubernetes-bk9</guid>
      <description>&lt;p&gt;** So What is Kubernetes?**&lt;/p&gt;

&lt;p&gt;Modern software is increasingly run as fleets of containers, sometimes called microservices. A complete application may comprise many containers, all needing to work together in specific ways. Kubernetes is software that turns a collection of physical or virtual hosts (servers) into a platform that:&lt;/p&gt;

&lt;p&gt;Hosts containerized workloads, providing them with compute, storage, and network resources, and&lt;br&gt;
Automatically manages large numbers of containerized applications — keeping them healthy and available by adapting to changes and challenges&lt;br&gt;
Kubernetes is a powerful open-source orchestration tool, designed to help you manage microservices and containerized applications across a distributed cluster of computing nodes. Kubernetes aims to hide the complexity of managing containers through the use of several key capabilities, such as REST APIs and declarative templates that can manage the entire lifecycle.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How does Kubernetes work?&lt;/strong&gt;&lt;br&gt;
When developers create a multi-container application, they plan out how all the parts fit and work together, how many of each component should run, and roughly what should happen when challenges (e.g., lots of users logging in at once) are encountered.&lt;br&gt;
They store their containerized application components in a container registry (local or remote) and capture this thinking in one or several text files comprising configuration. To start the application, they “apply” the configuration to Kubernetes.&lt;br&gt;
Kubernetes’ job is to evaluate and implement this configuration and maintain it until told otherwise. It:&lt;br&gt;
Analyzes the configuration, aligning its requirements with those of all the other application configurations running on the system&lt;br&gt;
Finds resources appropriate for running the new containers (e.g., some containers might need resources like GPUs that aren’t present on every host)&lt;br&gt;
Grabs container images from the registry, starts up the new containers, and helps them connect to one another and to system resources (e.g., persistent storage), so the application works as a whole&lt;/p&gt;
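&lt;p&gt;The configuration “applied” in this workflow is typically a YAML manifest. As a rough sketch (the name, labels, and image below are hypothetical), a Deployment asking Kubernetes to keep three replicas of a web container running might look like:&lt;/p&gt;

```yaml
# Hypothetical Deployment manifest: desired state for three replicas
# of a containerized web service, pulled from a container registry.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: registry.example.com/web:1.0  # image reference is an assumption
        ports:
        - containerPort: 8080
```

&lt;p&gt;Applying it with &lt;code&gt;kubectl apply -f web.yaml&lt;/code&gt; hands this desired state to Kubernetes, which then works to make reality match it.&lt;/p&gt;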

&lt;p&gt;Then Kubernetes monitors everything, and when real events diverge from desired states, Kubernetes tries to fix things and adapt. For example, if a container crashes, Kubernetes restarts it. If an underlying server fails, Kubernetes finds resources elsewhere to run the containers that node was hosting. If traffic to an application suddenly spikes, Kubernetes can scale out containers to handle the additional load, in conformance with rules and limits stated in the configuration.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why use Kubernetes?&lt;/strong&gt;&lt;br&gt;
One of the benefits of Kubernetes is that it makes building and running complex applications much simpler. Here’s a handful of the many Kubernetes features:&lt;/p&gt;

&lt;p&gt;Standard services, like local DNS and basic load balancing, that most applications need and that are easy to use.&lt;br&gt;
Standard behaviors (e.g., restart this container if it dies) that are easy to invoke, and do most of the work of keeping applications running, available, and performant.&lt;br&gt;
A standard set of abstract “objects” (called things like “pods,” “replicasets,” and “deployments”) that wrap around containers and make it easy to build configurations around collections of containers.&lt;br&gt;
A standard API that applications can call to easily enable more sophisticated behaviors, making it much easier to create applications that manage other applications.&lt;br&gt;
The simple answer to “what is Kubernetes used for” is that it saves developers and operators a great deal of time and effort, and lets them focus on building features for their applications, instead of figuring out and implementing ways to keep their applications running well, at scale.&lt;/p&gt;
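&lt;p&gt;For instance, the local DNS and load balancing mentioned above are usually reached through a Service object; a minimal, hypothetical sketch (names and ports are assumptions):&lt;/p&gt;

```yaml
# Hypothetical Service manifest: gives pods labeled app: web a stable
# in-cluster DNS name ("web") and load-balances traffic across them.
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web
  ports:
  - port: 80         # port the Service listens on
    targetPort: 8080 # port the pods' containers listen on
```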

&lt;p&gt;By keeping applications running despite challenges (e.g., failed servers, crashed containers, traffic spikes, etc.) Kubernetes also reduces business impacts, reduces the need for fire drills to bring broken applications back online, and protects against other liabilities, like the costs of failing to comply with Service Level Agreements (SLAs).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Where can I run Kubernetes?&lt;/strong&gt;&lt;br&gt;
Kubernetes runs almost anywhere, on a wide range of Linux operating systems (worker nodes can also run on Windows Server). A single Kubernetes cluster can span hundreds of bare-metal or virtual machines in a datacenter, a private cloud, or any public cloud. Kubernetes can also run on developer desktops, edge servers, microservers like Raspberry Pis, or very small mobile and IoT devices and appliances.&lt;/p&gt;

&lt;p&gt;With some forethought (and the right product and architectural choices) Kubernetes can even provide a functionally-consistent platform across all these infrastructures. This means that applications and configurations composed and initially tested on a desktop Kubernetes can move seamlessly and quickly to more-formal testing, large-scale production, edge, or IoT deployments. In principle, this means that enterprises and organizations can build “hybrid” and “multi-clouds” across a range of platforms, quickly and economically solving capacity problems without lock-in.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What is a Kubernetes cluster?&lt;/strong&gt;&lt;br&gt;
The K8s architecture is straightforward. Only the control plane, which exposes an API and is in charge of scheduling and replicating groups of containers known as Pods, interacts directly with the nodes hosting your application. Kubectl is a command-line interface for interacting with the API to exchange desired application states or obtain detailed information on the present state of the infrastructure.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Kubernetes Control Plane Components&lt;/strong&gt;&lt;br&gt;
Below are the main components found on the control plane node:&lt;/p&gt;

&lt;p&gt;&lt;em&gt;etcd server&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;A simple, distributed key-value store which is used to store the Kubernetes cluster data (such as the number of pods, their state, namespace, etc.), API objects, and service discovery details. It should only be accessible from the API server for security reasons. etcd enables notifications to the cluster about configuration changes with the help of watchers. Notifications are API requests on each etcd cluster node to trigger the update of information in the node’s storage.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;kube-apiserver&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The Kubernetes API server is the central management entity that receives all REST requests for modifications (to pods, services, replication sets/controllers, and others), serving as a frontend to the cluster. Also, this is the only component that communicates with the etcd cluster, making sure data is stored in etcd and is in agreement with the service details of the deployed pods.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;kube-controller-manager&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Runs a number of distinct controller processes in the background (for example, the replication controller controls the number of replicas in a pod; the endpoints controller populates endpoint objects like services and pods) to regulate the shared state of the cluster and perform routine tasks.&lt;/p&gt;

&lt;p&gt;When a change in a service configuration occurs (for example, replacing the image from which the pods are running or changing parameters in the configuration YAML file), the controller spots the change and starts working towards the new desired state.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;cloud-controller-manager&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Responsible for managing controller processes with dependencies on the underlying cloud provider (if applicable). For example, when a controller needs to check if a node was terminated or set up routes, load balancers or volumes in the cloud infrastructure, all that is handled by the cloud-controller-manager.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;kube-scheduler&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Helps schedule the pods (a co-located group of containers inside which our application processes are running) on the various nodes based on resource utilization. It reads the service’s operational requirements and schedules it on the best-fit node.&lt;/p&gt;

&lt;p&gt;For example, if the application needs 1GB of memory and 2 CPU cores, then the pods for that application will be scheduled on a node with at least those resources. The scheduler runs each time there is a need to schedule pods. The scheduler must know the total resources available as well as resources allocated to existing workloads on each node.&lt;/p&gt;
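&lt;p&gt;Those operational requirements are stated as resource requests in the pod specification. A sketch matching the example above (the pod name and image are hypothetical):&lt;/p&gt;

```yaml
# Hypothetical pod spec: the scheduler will only place this pod on a
# node with at least 1Gi of memory and 2 CPU cores unallocated.
apiVersion: v1
kind: Pod
metadata:
  name: app-pod
spec:
  containers:
  - name: app
    image: registry.example.com/app:1.0  # image reference is an assumption
    resources:
      requests:
        memory: "1Gi"
        cpu: "2"
```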

&lt;p&gt;&lt;em&gt;kubectl&lt;/em&gt;&lt;br&gt;
kubectl is a command-line tool that interacts with kube-apiserver and sends commands to the master node. Each command is converted into an API call.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Kubernetes Nodes&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A node is a Kubernetes worker machine managed by the control plane, which can run one or more pods. The Kubernetes control plane automatically handles the scheduling of pods between nodes in the cluster. Automatic scheduling in the control plane takes into account the resources available on each node, and other constraints, such as affinity and taints, which define the desired running environment for different types of pods.&lt;/p&gt;

&lt;p&gt;Below are the main components found on a Kubernetes worker node:&lt;/p&gt;

&lt;p&gt;&lt;em&gt;kubelet&lt;/em&gt; — the main service on a node, which manages the container runtime (such as containerd or CRI-O). The kubelet regularly takes in new or modified pod specifications (primarily through the kube-apiserver) and ensures that pods and their containers are healthy and running in the desired state. This component also reports to the master on the health of the host where it is running.&lt;br&gt;
&lt;em&gt;kube-proxy&lt;/em&gt; — a proxy service that runs on each worker node to deal with individual host subnetting and expose services to the external world. It performs request forwarding to the correct pods/containers across the various isolated networks in a cluster.&lt;br&gt;
&lt;strong&gt;3. Kubernetes Pods&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A pod is the smallest unit of management in a Kubernetes cluster. It represents one or more containers that constitute a functional component of an application. Pods encapsulate containers, storage resources, unique network IDs, and other configurations defining how containers should run.&lt;/p&gt;
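&lt;p&gt;A pod with more than one container typically groups containers that must share resources. As an illustrative, hypothetical sketch (names and images are assumptions), an application container and a log-shipping sidecar sharing a volume:&lt;/p&gt;

```yaml
# Hypothetical two-container pod: both containers share one volume,
# forming a single functional component of an application.
apiVersion: v1
kind: Pod
metadata:
  name: app-with-logger
spec:
  volumes:
  - name: shared-logs
    emptyDir: {}  # scratch volume that lives as long as the pod
  containers:
  - name: app
    image: registry.example.com/app:1.0      # assumption
    volumeMounts:
    - name: shared-logs
      mountPath: /var/log/app
  - name: log-shipper
    image: registry.example.com/shipper:1.0  # assumption
    volumeMounts:
    - name: shared-logs
      mountPath: /logs
```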

&lt;p&gt;Docker is not really a Kubernetes alternative, but newcomers to the space often ask what the difference between them is. The primary difference is that Docker is a container runtime, while Kubernetes is a platform for running and managing containers across multiple container runtimes.&lt;/p&gt;

&lt;p&gt;Docker is one of many container runtimes supported by Kubernetes. You can think of Kubernetes as an “operating system” and Docker containers as one type of application that can run on the operating system.&lt;/p&gt;

&lt;p&gt;Docker is hugely popular and was a major driver for the adoption of containerized architecture. Docker solved the classic “works on my computer” problem, and is extremely useful for developers, but is not sufficient to manage large-scale containerized applications.&lt;/p&gt;

&lt;p&gt;If you need to handle the deployment of a large number of containers, networking, security, and resource provisioning become important concerns. Standalone Docker was not designed to address these concerns, and this is where Kubernetes comes in.&lt;/p&gt;

&lt;p&gt;The primary strength of Kubernetes is its modularity and generality. Nearly every kind of application that you might want to deploy can fit within Kubernetes, and no matter what kind of adjustments or tuning you need to make to your system, they’re generally possible.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Infrastructure as Code - Everything You Need to Know</title>
      <dc:creator>Sheina_techiue</dc:creator>
      <pubDate>Wed, 02 Feb 2022 10:34:11 +0000</pubDate>
      <link>https://dev.to/gitguardian/infrastructure-as-code-everything-you-need-to-know-4iom</link>
      <guid>https://dev.to/gitguardian/infrastructure-as-code-everything-you-need-to-know-4iom</guid>
      <description>&lt;p&gt;Infrastructure is one of the core tenets of a software development process—it is directly responsible for the stable operation of a software application. This infrastructure can range from servers, load balancers, firewalls, and databases all the way to complex container clusters.&lt;/p&gt;

&lt;p&gt;Infrastructure considerations are valid beyond production environments, as they spread across the entire development process. They include tools and platforms such as CI/CD platforms, staging environments, and testing tools. These infrastructure considerations grow as the complexity of the software product increases. &lt;strong&gt;Very quickly, the traditional approach of manually managing infrastructure becomes unscalable against the demands of modern, rapid DevOps software development cycles.&lt;/strong&gt; And that’s how Infrastructure as Code (IaC) has become the de facto solution in development today.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Infrastructure as Code?
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Infrastructure as Code is the process of provisioning and managing infrastructure defined through code, instead of doing so with a manual process.&lt;/strong&gt; IaC takes away the majority of provisioning work from developers, who can execute a script to have their infrastructure ready to go. That way, application deployments aren’t held up waiting for the infrastructure, and sysadmins aren’t managing time-consuming manual processes.&lt;/p&gt;

&lt;p&gt;Here is a step-by-step explanation of how creating an IaC environment works:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A developer defines the configuration parameters in a domain-specific language (DSL).&lt;/li&gt;
&lt;li&gt;The instruction files are sent to a master server, a management API, or a code repository.&lt;/li&gt;
&lt;li&gt;The IaC platform follows the developer’s instructions to create and configure the infrastructure.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;With Infrastructure as Code, users don’t need to configure an environment every time they want to develop, test, or deploy software. All infrastructure parameters are saved in the form of files called manifests.&lt;/p&gt;

&lt;p&gt;Like all code files, manifests are easy to reuse, edit, copy, and share. Manifests make building, testing, staging, and deploying infrastructure quicker and more consistent.&lt;/p&gt;

&lt;p&gt;Developers codify the configuration files and store them in version control. If someone edits a file, pull requests and code review workflows can check the correctness of the changes.&lt;/p&gt;

&lt;h2&gt;
  
  
  Infrastructure as Code Best Practices
&lt;/h2&gt;

&lt;p&gt;Implementing infrastructure automation will require numerous changes and refactoring, which can make the process strenuous for your organization. To avoid most of the constraints and ease the transition, follow the Infrastructure as Code best practices below.&lt;/p&gt;

&lt;h3&gt;
  
  
  Utilize CI/CD and quality control for your IaC repository
&lt;/h3&gt;

&lt;p&gt;This will help you maintain good code quality and get fast feedback loops from your DevOps teammates or developers after changes are applied. Luckily, there are test frameworks, such as Terratest for Terraform, that let you write actual tests; the earlier you cover your infrastructure with them, the more you benefit and the fewer unexpected problems you will encounter. You can’t cover application errors this way, but at least you can be more confident in your infrastructure.&lt;/p&gt;

&lt;h3&gt;
  
  
  Make your infrastructure as code modular
&lt;/h3&gt;

&lt;p&gt;Microservices architecture, where software is built by developing smaller, modular units of code that can be deployed independently of the rest of a product’s components, is a popular trend in the software development world.&lt;/p&gt;

&lt;p&gt;The same concept can be applied to IaC. You can break down your infrastructure into separate modules or stacks then combine them in an automated fashion.&lt;/p&gt;

&lt;p&gt;There are a few benefits to this approach:&lt;/p&gt;

&lt;p&gt;First, you can have greater control over who has access to which parts of your infrastructure code. For example, you may have junior engineers who aren’t familiar with or don’t have expertise in certain parts of your infrastructure configuration. By modularizing your infrastructure code, you can deny access to these components until the junior engineers get up to speed.&lt;/p&gt;

&lt;p&gt;Also, modular infrastructure naturally limits the number of changes that can be made to the configuration. Smaller changes make bugs easier to detect and allow your team to be more agile.&lt;br&gt;
And if you’re using IaC to support a microservices architecture, a configuration template should be used to ensure consistency as your infrastructure scales to become a large cluster of servers. This will be highly valuable for configuring the servers and specifying how they should interact.&lt;/p&gt;

&lt;h3&gt;
  
  
  Continuously test, integrate, and deploy
&lt;/h3&gt;

&lt;p&gt;Continuous testing, integration, and deployment processes are a great way to manage all the changes that may be made to your infrastructure code.&lt;/p&gt;

&lt;p&gt;Testing should be rigorously applied to your infrastructure configurations to ensure that there are no post-deployment issues. Depending on your needs, an array of tests should be performed. Automated tests can be set up to run each time a change is made to your configuration code.&lt;/p&gt;

&lt;p&gt;Security of your infrastructure should also be continuously monitored and tested. DevSecOps is an emerging practice where security professionals work alongside developers to continuously incorporate threat detection and security testing throughout the software development life cycle instead of just throwing it in at the end.&lt;br&gt;
By increasing collaboration between testing, security, and development, bugs and threats can be identified earlier in the development life cycle and thus minimized upon going live.&lt;/p&gt;

&lt;p&gt;With a sound continuous integration process, these configuration templates can be provisioned multiple times in different environments such as Dev, Test, and QA. Developers can then work within each of these environments with consistent infrastructure configurations. This continuous integration will mitigate the risk of errors being present that may be detrimental to your application when the infrastructure is deployed to production.&lt;/p&gt;

&lt;h3&gt;
  
  
  Maintain version control
&lt;/h3&gt;

&lt;p&gt;Your configuration files should be version-controlled. Because all configuration details are written in code, any changes to the codebase can be managed, tracked, and reconciled.&lt;/p&gt;

&lt;p&gt;Just like with application code, source control tools like Git, Mercurial, Subversion, or others should be used to maintain versions of your IaC codebase. Not only will this provide an audit trail for code changes, it will also provide the ability to collaborate, peer-review, and test IaC code before it goes live.&lt;/p&gt;

&lt;p&gt;Code branching and merging best practices should also be used to further increase developer collaboration and ensure that updates to your IaC code are properly managed.&lt;/p&gt;

&lt;p&gt;I highly recommend that if you are just getting started with IaC, do not try to automate everything at the outset. The reason for this is a high pace of changes. Once your platform becomes more or less stable, you will be able to automate its provisioning and maintenance.&lt;/p&gt;

&lt;h2&gt;
  
  
  IaC tools
&lt;/h2&gt;

&lt;p&gt;As organizations aggressively embrace the IaC revolution, the market is getting flooded with Infrastructure as Code tools. So, choosing the right cloud infrastructure automation tool for your organization is key.&lt;/p&gt;

&lt;h3&gt;
  
  
  Terraform
&lt;/h3&gt;

&lt;p&gt;One of the most widely adopted tools is &lt;strong&gt;Terraform&lt;/strong&gt;, a declarative provisioning tool from HashiCorp that supports a wide range of cloud providers.&lt;/p&gt;

&lt;h3&gt;
  
  
  Ansible
&lt;/h3&gt;

&lt;p&gt;Another well-renowned tool in the DevOps world is &lt;strong&gt;Ansible&lt;/strong&gt;. It is a configuration management tool that lets you automate the provisioning of infrastructure. In the early days of network infrastructure, Linux servers dominated the network landscape. Ansible began providing infrastructure automation solutions to Linux environments but now has evolved to support Windows, IBM OSS, virtualization platforms, containers, etc.&lt;/p&gt;

&lt;p&gt;Ansible is written in Python and uses YAML syntax for its configuration. It manages infrastructure in a procedural style, in which the step-by-step tasks leading to the desired state are coded. The tool maps these tasks to defined groups of hosts in units called ‘plays’; a list of plays is called a playbook.&lt;/p&gt;
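&lt;p&gt;A play maps tasks onto a host group. A minimal, hypothetical playbook (the host group and package are assumptions; the module names are real Ansible builtins):&lt;/p&gt;

```yaml
# Hypothetical Ansible playbook: one play, executed step by step in
# order, driving the "webservers" host group toward the desired state.
- name: Configure web servers
  hosts: webservers
  become: true
  tasks:
    - name: Install nginx
      ansible.builtin.package:
        name: nginx
        state: present
    - name: Ensure nginx is running and enabled at boot
      ansible.builtin.service:
        name: nginx
        state: started
        enabled: true
```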

&lt;p&gt;Ansible uses push mode to deliver change instructions to nodes in the network and deployments are instantly done. The agentless master architecture makes it simple to install and use. When compared with Puppet and other CM tools, the Ansible community is smaller but offers good support. It best suits short-lived environments.&lt;/p&gt;

&lt;h3&gt;
  
  
  Pulumi
&lt;/h3&gt;

&lt;p&gt;The third IaC tool is &lt;strong&gt;Pulumi&lt;/strong&gt;. It is a new multi-language and multi-cloud development platform founded in 2017 and is quickly evolving to become one of the best Infrastructure as Code tools. Pulumi is more or less similar to Terraform, allowing you to create and deploy infrastructure as code to every type of cloud environment. It is open-source and free to use and is available on Github.&lt;/p&gt;

&lt;p&gt;To express the desired state, you can use general-purpose languages such as Go, JavaScript, C#, TypeScript, Python, etc. As such, you don’t have to create new modules but simply use existing ones. It is quickly getting popular because it allows you to write applications in your desired language and manage the infrastructure implementing DevOps best practices.&lt;/p&gt;

&lt;p&gt;Pulumi supports all major cloud providers, such as AWS, Azure, and GCP, as well as other cloud services, and is highly flexible. It takes some getting used to, but the learning curve is short. It offers deep support for Kubernetes. With reusable components and stacks, it makes your infrastructure management job simpler.&lt;/p&gt;

&lt;h3&gt;
  
  
  CloudFormation
&lt;/h3&gt;

&lt;p&gt;The fourth tool is &lt;strong&gt;AWS CloudFormation&lt;/strong&gt;. It is a popular cloud infrastructure automation tool coming from the IaaS giant AWS. It enables organizations to easily create, deploy and manage the AWS resource stack using a template or a text file that acts as a single source of truth.&lt;/p&gt;

&lt;p&gt;CloudFormation uses YAML or JSON. As it runs on the AWS infrastructure, you don’t have to worry about how it stores the infrastructure configuration. Templates are used to customize an AWS stack and to replicate and deploy apps in multiple environments.&lt;/p&gt;
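&lt;p&gt;A CloudFormation template in YAML declares the stack’s resources as data. A minimal, hypothetical sketch (the logical name and bucket name are assumptions):&lt;/p&gt;

```yaml
# Hypothetical CloudFormation template: a stack containing a single
# S3 bucket, declared rather than scripted.
AWSTemplateFormatVersion: "2010-09-09"
Description: Minimal example stack with one S3 bucket
Resources:
  ArtifactBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: example-artifact-bucket  # must be globally unique; assumption
```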

&lt;p&gt;Change Sets is an important feature that lets you preview what will change before applying an updated template, which means you don’t have to compare old and new templates by hand. Nested Stacks is another important feature that lets you manage complex stacks by encapsulating functional logic, groups, databases, etc. in their own templates.&lt;/p&gt;

&lt;p&gt;Coming from Amazon AWS, CloudFormation enjoys certain benefits. AWS keeps updating its features and services and CloudFormation gets these updates as well. Moreover, AWS keeps improving CloudFormation which means users will get the latest features and best services.&lt;/p&gt;

&lt;h2&gt;
  
  
  Infrastructure as Code Challenges
&lt;/h2&gt;

&lt;p&gt;The IaC benefits are numerous, but the model does have certain challenges that you should understand before implementation so you can properly address them.&lt;/p&gt;

&lt;h3&gt;
  
  
  Configuration drift
&lt;/h3&gt;

&lt;p&gt;Regardless of how consistently or frequently you configure your servers, drifts in configuration may occur in the long run. This is why you should make sure there is no outside interference after you’ve established your IaC workflow. Every time you need to modify your infrastructure, you must ensure it is done according to your pre-established maintenance workflow. We refer to this principle as infrastructure immutability - the idea that your infrastructure should stay exactly as specified and that if a change is required, then a whole new set is provisioned and completely replaces the outdated one.&lt;/p&gt;

&lt;p&gt;Should you still end up making non-uniform changes to a similar group of systems, some of them will inevitably differ from the others, potentially resulting in configuration drift.&lt;/p&gt;

&lt;h3&gt;
  
  
  Potential duplication of errors
&lt;/h3&gt;

&lt;p&gt;Though the IaC implementation process and machine creation rely heavily on automation, there are still certain segments of the entire process that need to be done manually. Writing the parent code is one of those processes, and, naturally, there’s always potential for error when there’s human work involved, even within an environment where QA checks are regular and consistent.&lt;/p&gt;

&lt;p&gt;These errors can propagate to multiple machines as a side effect of automation, and each one can potentially open a security hole. Remember that many cloud vulnerabilities stem from misconfiguration. To stay on the safe side, we highly recommend double-checking the code that generates your IaC architecture, through strict, consistent testing and thorough auditing. These additional efforts, however, often lead to increased overhead.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Infrastructure as Code is slowly but surely becoming the norm for organizations that seek automation and faster delivery. Developing applications at a faster rate is only possible through a streamlined workflow and an improved development environment.&lt;/p&gt;

&lt;p&gt;However, coming up with optimal IaC solutions for your unique IT architecture isn’t something to approach lightly, with insufficient resources, or without guidance. But once you’ve set up your IaC environment the right way, your development process will immediately start yielding results.&lt;/p&gt;

&lt;p&gt;You can learn more about Infrastructure as code by visiting the links below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.bmc.com/blogs/infrastructure-as-code/"&gt;https://www.bmc.com/blogs/infrastructure-as-code/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://geekflare.com/infrastructure-as-code-intro/"&gt;https://geekflare.com/infrastructure-as-code-intro/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.digitalocean.com/community/conceptual_articles/infrastructure-as-code-explained"&gt;https://www.digitalocean.com/community/conceptual_articles/infrastructure-as-code-explained&lt;/a&gt;&lt;/p&gt;

</description>
      <category>iac</category>
      <category>cloud</category>
      <category>terraform</category>
      <category>pulumi</category>
    </item>
  </channel>
</rss>
