<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Regis Wilson</title>
    <description>The latest articles on DEV Community by Regis Wilson (@rwilsonrelease).</description>
    <link>https://dev.to/rwilsonrelease</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F536482%2F46484a1a-e730-4d2a-a92d-3f9d21727be7.jpeg</url>
      <title>DEV Community: Regis Wilson</title>
      <link>https://dev.to/rwilsonrelease</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/rwilsonrelease"/>
    <language>en</language>
    <item>
      <title>Ephemeral Environments: 9 Tips for Seamless Deployment</title>
      <dc:creator>Regis Wilson</dc:creator>
      <pubDate>Mon, 26 Feb 2024 17:59:42 +0000</pubDate>
      <link>https://dev.to/rwilsonrelease/ephemeral-environments-9-tips-for-seamless-deployment-4cpn</link>
      <guid>https://dev.to/rwilsonrelease/ephemeral-environments-9-tips-for-seamless-deployment-4cpn</guid>
      <description>&lt;p&gt;Ephemeral environments became a game-changer in modern software development. They are temporary, short-lived, and created as needed. These environments are perfect for specific tasks like testing new features or fixing bugs. Their main purpose is to give developers a safe space to try out and validate changes without affecting the main codebase or ongoing operations.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key benefits of ephemeral environments are:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Risk Reduction:&lt;/strong&gt; Isolating changes in temporary environments minimizes the potential for disruptions in the production environment.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Resource Efficiency:&lt;/strong&gt; These on-demand environments require resources only when active, freeing up computational power and reducing costs when not in use.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Speed and Flexibility:&lt;/strong&gt; On-demand creation allows for rapid testing cycles and quick pivots based on real-time results.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These advantages are just the beginning. As we explore further, we'll see how ephemeral environments not only improve development workflows but also align with broader goals like continuous integration and deployment, ultimately fostering a culture of innovation and efficiency. We will go over 9 areas you need to understand to successfully implement ephemeral environments in your organization. Let’s get started.&lt;/p&gt;

&lt;h2&gt;
  
  
  1. Understand the Key Characteristics of Ephemeral Environments
&lt;/h2&gt;

&lt;p&gt;Ephemeral environments are catalysts in software development, closely mirroring production environments to provide a realistic testing ground for new features and updates. These dynamic setups are designed to be short-lived, with several key characteristics that make them a valuable asset for today’s development teams:&lt;/p&gt;

&lt;h3&gt;
  
  
  Resemblance to Production
&lt;/h3&gt;

&lt;p&gt;By closely emulating the production environment, ephemeral environments allow developers and testers to interact with applications under conditions that are nearly identical to the live production setup. This similarity ensures that any functionality, behaviors, or issues observed during testing will likely hold true after deployment.&lt;/p&gt;

&lt;h3&gt;
  
  
  Automated Creation and Fast Provisioning
&lt;/h3&gt;

&lt;p&gt;Speed is of the essence in modern development workflows. Ephemeral environments thrive on automation for their creation and provisioning, which allows them to be spun up quickly as needed. This rapid availability is essential for maintaining their temporary nature while supporting continuous integration and delivery practices.&lt;/p&gt;
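&lt;p&gt;As a hedged illustration, automated provisioning can be a single authenticated API call. The endpoint, payload fields, and response shape below are assumptions for a generic Environments-as-a-Service API, not a documented interface:&lt;/p&gt;

```python
# Hypothetical sketch: requesting an ephemeral environment from an
# Environments-as-a-Service API. Endpoint and payload are illustrative
# assumptions, not a real product API.
import json
import urllib.request

def build_provision_request(branch, ttl_hours=8):
    """Build the JSON payload asking for a short-lived environment."""
    return {
        "branch": branch,
        "ttl_hours": ttl_hours,        # auto-teardown keeps it ephemeral
        "profile": "production-like",  # mirror the production topology
    }

def provision(api_url, token, branch):
    """POST the request; returns the new environment's unique URL."""
    payload = json.dumps(build_provision_request(branch)).encode()
    req = urllib.request.Request(
        api_url,
        data=payload,
        headers={"Authorization": "Bearer " + token,
                 "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["environment_url"]
```

&lt;p&gt;Because the time-to-live is part of the request, teardown is scheduled the moment the environment is created, preserving its temporary nature.&lt;/p&gt;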

&lt;h3&gt;
  
  
  Replicated Data Consistency
&lt;/h3&gt;

&lt;p&gt;Data plays a crucial role in testing and validating application behavior. Ephemeral environments often include mechanisms for replicating data from production or using synthetic data sets that maintain consistency across test cases. This replication ensures that tests are not only relevant but also reliable.&lt;/p&gt;
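&lt;p&gt;One way to keep replicated data both realistic and safe is deterministic pseudonymization: hash each sensitive value so the same input always maps to the same stand-in, keeping test data consistent across runs. A minimal sketch, where the field names are assumptions:&lt;/p&gt;

```python
# Hedged sketch: scrubbing replicated production data before loading it
# into an ephemeral environment. SENSITIVE_FIELDS is an assumption.
import hashlib

SENSITIVE_FIELDS = {"email", "phone", "ssn"}

def scrub_record(record):
    """Replace sensitive values with stable pseudonyms so test data stays
    consistent across runs without exposing real user information."""
    clean = {}
    for key, value in record.items():
        if key in SENSITIVE_FIELDS:
            # sha256 is deterministic, so joins across tables still line up
            digest = hashlib.sha256(str(value).encode()).hexdigest()[:12]
            clean[key] = key + "-" + digest
        else:
            clean[key] = value
    return clean
```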

&lt;h3&gt;
  
  
  Accessibility via Unique URLs
&lt;/h3&gt;

&lt;p&gt;Stakeholders from developers to product managers require easy access to these environments. Unique URLs enable this accessibility, allowing for seamless sharing and review processes. Whether it's for internal reviews or external stakeholder demonstrations, these URLs provide direct entry points into the temporary world where the latest features reside.&lt;/p&gt;
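&lt;p&gt;A minimal sketch of how such unique URLs can be derived, assuming a wildcard DNS record (e.g. *.preview.example.com, a placeholder domain) routes to the cluster:&lt;/p&gt;

```python
# Illustrative sketch: derive a unique, shareable URL per branch.
import re

def environment_url(branch, base_domain="preview.example.com"):
    """Slugify the branch name into a DNS-safe subdomain (max 63 chars)."""
    slug = re.sub(r"[^a-z0-9]+", "-", branch.lower()).strip("-")[:63]
    return "https://" + slug + "." + base_domain
```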

&lt;p&gt;For teams looking to leverage on-demand ephemeral staging environments, exploring services like &lt;a href="https://release.com"&gt;Release&lt;/a&gt; can offer insight into how these environments streamline development and deployment processes.&lt;/p&gt;

&lt;p&gt;By understanding these foundational elements of ephemeral environments, organizations equip themselves with the tools necessary for efficient and effective software development cycles. Moving forward, embracing these characteristics can significantly transform how teams approach development challenges.&lt;/p&gt;

&lt;h2&gt;
  
  
  2. Embrace the Benefits of Using Ephemeral Environments in Your Development Workflow
&lt;/h2&gt;

&lt;p&gt;Ephemeral environments offer numerous benefits that can transform your development workflow. By embracing these advantages, you can streamline your development process, improve code quality, and foster a more collaborative working environment.&lt;/p&gt;

&lt;h3&gt;
  
  
  Reducing Rework and Decreasing Cycle Time
&lt;/h3&gt;

&lt;p&gt;The first advantage is reduced rework, a key lever for enhancing productivity and minimizing errors. A related advantage is getting results early in development, before changes reach staging or production. Because these environments provide a near-identical replica of production, developers can identify and fix issues prior to deployment, saving time and resources and reducing the likelihood of recurring problems.&lt;/p&gt;

&lt;h3&gt;
  
  
  Self-Service Capabilities
&lt;/h3&gt;

&lt;p&gt;Developers often require access to different environments at various stages of their workflow. Ephemeral environments empower them with self-service capabilities on internal platforms, facilitating faster iterations. With automated creation and provisioning, developers can spin up as many environments as needed without waiting for manual provisioning or risking conflicts in shared spaces.&lt;/p&gt;

&lt;h3&gt;
  
  
  Running Production Workloads with Aligned Data
&lt;/h3&gt;

&lt;p&gt;Another significant benefit is the capacity to run production workloads with aligned data. This feature allows you to validate system behavior under realistic conditions, mitigating risks associated with deploying untested code into production. With data consistency ensured through mechanisms like replicated and scrubbed data, you can confidently assess how new features or changes will perform when actually deployed.&lt;/p&gt;

&lt;h3&gt;
  
  
  Improving Collaboration
&lt;/h3&gt;

&lt;p&gt;Lastly, ephemeral environments play a vital role in improving collaboration and gathering early feedback from stakeholders. Through the use of automated preview environments that facilitate &lt;a href="https://release.com/blog/improve-developer-velocity-with-ephemeral-environments"&gt;measuring and improving developer velocity&lt;/a&gt;, stakeholders can easily access and review changes via unique URLs. This real-time collaboration fosters transparency, accelerates decision-making, and keeps everyone informed about development progress.&lt;/p&gt;

&lt;h2&gt;
  
  
  3. Leveraging Ephemeral Environments for Different Use Cases
&lt;/h2&gt;

&lt;p&gt;Ephemeral environments have many practical uses in different situations, each with its own advantages. Here are two common examples:&lt;/p&gt;

&lt;h3&gt;
  
  
  Development and Testing of New Features
&lt;/h3&gt;

&lt;p&gt;Think of ephemeral environments as sandboxes that provide a controlled yet realistic setup. Developers can build features with confidence, knowing they are working in an environment that closely mirrors production conditions. This practice not only enhances code reliability but also minimizes surprises during the deployment phase.&lt;/p&gt;

&lt;p&gt;A perfect example of this is creating a new feature for an e-commerce site, like a personalized recommendation engine. An ephemeral environment allows developers to assess the impact of this feature in isolation from the rest of the application, ensuring it performs as expected when integrated into the larger system.&lt;/p&gt;

&lt;h3&gt;
  
  
  Running Performance-Intensive or Distributed Applications
&lt;/h3&gt;

&lt;p&gt;This use case applies to applications that require significant computing resources or need to handle high volumes of data. Ephemeral environments excel in situations where you need to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Test how well your application scales under heavy load.&lt;/li&gt;
&lt;li&gt;Evaluate the performance of individual components or services.&lt;/li&gt;
&lt;li&gt;Validate the behavior of distributed systems.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For instance, consider a microservices-based application that needs to scale up rapidly during peak traffic hours. In an ephemeral environment, you can simulate this scenario and assess how well your application scales under load, well before deploying it into production. Once the tests are completed, the whole environment can be torn down automatically to free up resources that would otherwise be expensive to build, maintain, or configure.&lt;/p&gt;

&lt;p&gt;As you can see, ephemeral environments offer flexibility and control while providing a realistic preview of production conditions. They are undoubtedly a powerful tool in any developer's toolbox.&lt;/p&gt;

&lt;p&gt;To delve deeper into ephemeral environments, check out Release's insightful article on &lt;a href="https://release.com/blog/beyond-k8s-introduction-to-ephemeral-environments"&gt;Beyond K8s: Introduction to Ephemeral Environments&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  4. Integration Possibilities with Collaboration Tools like GitHub and Jira
&lt;/h2&gt;

&lt;p&gt;In the realm of software development, GitHub and Jira stand as titans of collaboration, offering robust platforms for code management and issue tracking, respectively. Ephemeral environments gain an added layer of efficiency when integrated with these tools, streamlining workflows and enhancing productivity.&lt;/p&gt;

&lt;h3&gt;
  
  
  Seamless Integration with GitHub
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Automated Environment Spin-up:&lt;/strong&gt; Upon a new pull request in GitHub, an ephemeral environment can be automatically created. This provides immediate feedback on how code changes will perform in a live setting.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Status Checks:&lt;/strong&gt; Integrating ephemeral environments into GitHub's status checks allows developers to see if their environment is ready for review directly from the pull request, ensuring that only fully provisioned environments are tested.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Bot Notifications:&lt;/strong&gt; Custom bots can comment on pull requests with ephemeral environment URLs and deployment statuses, making it effortless for reviewers to access the latest version of the application.&lt;/li&gt;
&lt;/ul&gt;
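&lt;p&gt;As a sketch of the bot-notification idea, GitHub's REST API lets you comment on a pull request via its issue-comments endpoint. The endpoint path is real; the helper names and message format below are illustrative:&lt;/p&gt;

```python
# Illustrative sketch: a bot posting the ephemeral environment URL on a
# pull request via GitHub's REST API
# (POST /repos/{owner}/{repo}/issues/{number}/comments).
# Token handling and error checks are elided.
import json
import urllib.request

def pr_comment_body(env_url, status="ready"):
    """Compose the comment reviewers will see on the pull request."""
    return {"body": "Ephemeral environment is " + status + ": " + env_url}

def post_pr_comment(owner, repo, pr_number, token, env_url):
    api = ("https://api.github.com/repos/" + owner + "/" + repo +
           "/issues/" + str(pr_number) + "/comments")
    data = json.dumps(pr_comment_body(env_url)).encode()
    req = urllib.request.Request(api, data=data, headers={
        "Authorization": "Bearer " + token,
        "Accept": "application/vnd.github+json",
    })
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```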

&lt;h3&gt;
  
  
  Streamlining Workflows with Jira
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Linking Environments to Issues:&lt;/strong&gt; Attach ephemeral environment URLs to relevant Jira tickets. This encourages a clear association between task progress and the actual environment where the feature is implemented.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Transition Automation:&lt;/strong&gt; Trigger the creation or teardown of ephemeral environments based on issue status transitions within Jira. For example, an environment can be spun up when an issue moves to "In Progress" and torn down once it reaches "Done."&lt;/li&gt;
&lt;/ul&gt;
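&lt;p&gt;The transition automation can be as small as a status-to-action lookup inside a webhook handler. The payload field paths below loosely follow Jira's issue-updated webhook event; verify them against your own instance:&lt;/p&gt;

```python
# Hedged sketch: mapping Jira issue transitions to environment actions.
# Payload shape is an assumption based on Jira's issue webhook events.
def action_for_transition(payload):
    """Map an issue's new status to an environment lifecycle action."""
    status = payload["issue"]["fields"]["status"]["name"]
    if status == "In Progress":
        return "create"      # spin up an ephemeral environment
    if status == "Done":
        return "teardown"    # free the resources
    return "ignore"
```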

&lt;p&gt;By weaving ephemeral environments into the fabric of GitHub and Jira workflows, teams harness easy sharing capabilities that complement Agile practices. The result is a streamlined process where code merges and feature developments are transparently connected to dynamic testing environments, fostering an ecosystem where sharing becomes second nature to development processes.&lt;/p&gt;

&lt;h2&gt;
  
  
  5. Ensuring Quality in Ephemeral Environments through Effective Testing Strategies
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Unit tests&lt;/strong&gt; are the backbone of software testing, but they often fall short in evaluating &lt;strong&gt;system behavior&lt;/strong&gt; beyond individual components. The complexity of modern applications necessitates comprehensive testing strategies that cover more ground. Enter &lt;strong&gt;smoke and integration tests&lt;/strong&gt; -- essential tools that probe the interactions between various components and ensure seamless deployments.&lt;/p&gt;

&lt;p&gt;When applied to &lt;strong&gt;live ephemeral environments&lt;/strong&gt;, these tests do more than just verify code correctness; they simulate real-world usage to expose issues that would otherwise remain hidden until production. This is crucial because while unit tests validate individual pieces, smoke and integration tests examine the assembled puzzle, catching errors that occur when all pieces work together.&lt;/p&gt;

&lt;h3&gt;
  
  
  Key Strategies for Effective Testing in Ephemeral Environments:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Parallel Testing:&lt;/strong&gt; Managing multiple ephemeral environments allows teams to run concurrent tests for different features or branches, significantly reducing the time to release.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Automated Test Suites:&lt;/strong&gt; By automating smoke and integration tests within ephemeral environments, developers can quickly identify defects early in the development cycle.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Dynamic Resource Allocation:&lt;/strong&gt; Allocating resources on-the-fly to handle a large number of parallel environments ensures that testing is not bottlenecked by infrastructure limitations.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Continuous Monitoring:&lt;/strong&gt; Integrating monitoring tools to track the health and performance of ephemeral environments during testing can provide immediate feedback on system stability.&lt;/li&gt;
&lt;/ul&gt;
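&lt;p&gt;The parallel-testing strategy above can be sketched with nothing but the standard library: run the same smoke check against every active environment concurrently. The health-check path is an assumption:&lt;/p&gt;

```python
# Sketch: run one smoke check against several ephemeral environments in
# parallel. URLs and the /healthz path are illustrative assumptions.
from concurrent.futures import ThreadPoolExecutor
import urllib.request

def smoke_check(env_url, path="/healthz", timeout=10):
    """Return (url, ok) for one environment's health endpoint."""
    try:
        with urllib.request.urlopen(env_url + path, timeout=timeout) as r:
            return env_url, r.status == 200
    except OSError:
        # unreachable or unhealthy environments count as failures
        return env_url, False

def run_parallel_smoke(env_urls, workers=8):
    """Check all environments concurrently; map each URL to pass/fail."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return dict(pool.map(smoke_check, env_urls))
```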

&lt;p&gt;Incorporating these strategies into your development workflow can transform the quality assurance process. Teams become equipped to deliver robust software at a faster pace by leveraging the unique benefits of ephemeral environments for comprehensive testing. For insights into how this approach can increase developer velocity, consider exploring Release's whitepaper on &lt;a href="https://release.com/blog/increase-developer-velocity-by-removing-environment-bottlenecks"&gt;increasing developer velocity by removing environment bottlenecks using Environments as a Service&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;By ensuring thorough testing in environments that mimic production closely, software teams can confidently push new features, knowing they've been vetted in conditions that match what users will encounter.&lt;/p&gt;

&lt;h2&gt;
  
  
  6. Realizing the Agile Potential of Ephemeral Environments in Software Development
&lt;/h2&gt;

&lt;p&gt;Ephemeral environments play a significant role in fostering Agile/Scrum practices within software development teams. With their dynamic and transient nature, they align perfectly with the iterative and adaptive nature of Agile methodologies.&lt;/p&gt;

&lt;h3&gt;
  
  
  Supporting Continuous Delivery with Ephemeral Environments
&lt;/h3&gt;

&lt;p&gt;One of the key principles of Agile is continuous delivery, and ephemeral environments are instrumental in supporting this. They allow constant production-like testing and validation, enabling software updates to be developed, tested, and released rapidly and frequently. As such, developers can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Test code changes immediately in a production-like environment.&lt;/li&gt;
&lt;li&gt;Detect and resolve issues early before they reach production.&lt;/li&gt;
&lt;li&gt;Accelerate the feedback loop with stakeholders for quicker iterations.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In essence, ephemeral environments serve as an enabler for continuous delivery – one of the cornerstones of Agile.&lt;/p&gt;

&lt;h3&gt;
  
  
  Facilitating Iterative Software Development with Ephemeral Environments
&lt;/h3&gt;

&lt;p&gt;Another attribute of Agile is its emphasis on iterative software development. Here, ephemeral environments shine by facilitating rapid iterations and feedback loops. For instance, developers can share unique URLs of these temporary environments with stakeholders to gather early feedback. The possibility to quickly set up, test, and tear down these environments aligns perfectly with the iterative cycles of Agile development.&lt;/p&gt;

&lt;p&gt;Incorporating ephemeral environments into an Agile workflow thus enhances efficiency while maintaining high quality standards – a win-win for any modern software development team.&lt;/p&gt;

&lt;h2&gt;
  
  
  7. The DevOps Connection: Ephemeral Environments as a Catalyst for Collaboration and Efficiency
&lt;/h2&gt;

&lt;p&gt;Ephemeral environments are a perfect fit for DevOps and Platform Engineering, where teams prioritize automation and collaboration. These dynamic setups are specifically designed to work within a DevOps or PE framework, &lt;a href="https://release.com/blog/extend-your-idp-with-environments-for-every-developer-and-every-change"&gt;bridging the gap between software development and IT operations&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  How Ephemeral Environments Benefit DevOps and Platform Engineering
&lt;/h3&gt;

&lt;p&gt;Here's how ephemeral environments contribute to the success of DevOps and PE:&lt;/p&gt;

&lt;h3&gt;
  
  
  Automation Aligned with DevOps
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Ephemeral environments automate the process of creating and tearing down environments, aligning with the DevOps principle of streamlining the software development pipeline.&lt;/li&gt;
&lt;li&gt;This automation reduces the manual effort required for environment setup, allowing teams to focus on more important tasks.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Collaboration Across Teams for Platform Engineering
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Ephemeral environments can be spun up at any stage of the development process for various purposes, such as development or testing.&lt;/li&gt;
&lt;li&gt;This shared access promotes collaboration between the different teams involved in the software lifecycle, breaking down silos and fostering a culture of teamwork. The platform provides a common place where all self-service environments can be tested, shared, and reviewed.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  The Role of Ephemeral Environments in CI/CD Pipelines
&lt;/h3&gt;

&lt;p&gt;Integrating ephemeral environment provisioning into continuous integration (CI) and continuous delivery (CD) pipelines can revolutionize the deployment process. Here's how it works:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A new ephemeral environment is automatically created by the CI/CD tool/platform whenever there's a code commit or pull request.&lt;/li&gt;
&lt;li&gt;Developers receive immediate feedback on their changes in an environment that closely resembles the production setup.&lt;/li&gt;
&lt;li&gt;The team can perform tests and quality assurance processes in real-time, ensuring that only thoroughly tested code moves forward in the pipeline.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This approach allows organizations to make the most out of their DevOps investment by speeding up deployment cycles while maintaining high standards of quality and collaboration.&lt;/p&gt;

&lt;h2&gt;
  
  
  8. Configurability for Rapid Application Development and Testing in Ephemeral Environments
&lt;/h2&gt;

&lt;p&gt;Rapid application development and &lt;a href="https://release.com/blog/test-environment-a-definition-and-how-to-guide"&gt;testing&lt;/a&gt; thrive on the ability to quickly adapt to different requirements and scenarios. Ephemeral environments extend this flexibility with their inherently dynamic nature. The key to harnessing this potential lies in the configurability of these temporary spaces, which can be tailored to match a myriad of production setups.&lt;/p&gt;

&lt;h3&gt;
  
  
  How Configurability Enhances Ephemeral Environments
&lt;/h3&gt;

&lt;p&gt;Here are some ways configurability enhances ephemeral environments for rapid application development and testing:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Customization of Infrastructure Components:&lt;/strong&gt; Teams can customize OS, servers, memory, and storage parameters to simulate various target environments. This customization ensures that applications are tested under conditions that closely replicate those they will encounter in real-world deployments.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Utilization of Deployable Artifacts:&lt;/strong&gt; An essential aspect is the use of deployable artifacts, which are pre-built versions of software ready to be launched into the environment. These artifacts are essential for replicating the software deployment process and can range from binary executables to Docker containers, depending on the technology stack utilized.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Automated Deployment Processes:&lt;/strong&gt; Automation is at the core of ephemeral environments, with pipelines designed to provision infrastructure, deploy applications, and tear down resources without manual intervention. Automated processes not only ensure efficiency but also contribute significantly to consistency across testing scenarios.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The streamlined deployment process not only saves time but also reduces errors by minimizing manual setup steps. By integrating these capabilities into ephemeral environments, teams can focus on developing and testing rather than managing infrastructure details.&lt;/p&gt;

&lt;h3&gt;
  
  
  Benefits of Configurability in Ephemeral Environments
&lt;/h3&gt;

&lt;p&gt;By optimizing these elements within ephemeral environments, organizations can achieve a significant competitive edge—accelerating time-to-market while ensuring high-quality standards are met before any release.&lt;/p&gt;

&lt;h2&gt;
  
  
  9. Advantages of Ephemeral Environments over Traditional Staging Approaches
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Asynchronous Collaboration Across Time Zones
&lt;/h3&gt;

&lt;p&gt;Ephemeral environments facilitate asynchronous collaboration across distributed teams by providing on-demand access to consistent testing and development environments. This feature is a game-changer for global teams working across different time zones, enabling them to work together seamlessly.&lt;/p&gt;

&lt;h3&gt;
  
  
  Cost-Effective Infrastructure
&lt;/h3&gt;

&lt;p&gt;Compared to traditional staging setups that require dedicated infrastructure and maintenance, ephemeral environments offer a more cost-effective solution. Since these environments are only activated when needed and decommissioned after use, they significantly reduce the overhead costs associated with maintaining permanent staging servers.&lt;/p&gt;

&lt;h3&gt;
  
  
  Agile and Scalable
&lt;/h3&gt;

&lt;p&gt;Ephemeral environments provide unmatched agility and scalability. Teams can quickly set up, modify, or tear down environments as required, thus facilitating flexible scaling and testing processes. This capability enables companies to adapt rapidly to changing requirements without incurring additional costs or delays.&lt;/p&gt;

&lt;p&gt;One key benefit of decreasing cycle time and per-use costs is that productivity and utilization actually increase. As an example, a single shared environment running around the clock costs 24 environment-hours per day; for the same spend, 24 teams or individuals can each use a one-hour ephemeral environment. With appropriate auto-scaling, resource costs can drop to nearly zero when environments sit unused after hours or on weekends, while utilization and productivity during normal work hours can skyrocket.&lt;/p&gt;
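&lt;p&gt;The cost arithmetic above can be checked directly (the hourly rate is a stand-in unit):&lt;/p&gt;

```python
# Back-of-the-envelope check of the sharing argument: one always-on
# shared environment vs. many short-lived ones at the same hourly rate.
hourly_rate = 1.0                      # cost units per environment-hour

shared_cost = 24 * hourly_rate         # one env running all day
ephemeral_cost = 24 * 1 * hourly_rate  # 24 teams, one hour each

assert shared_cost == ephemeral_cost   # same spend, 24x the parallelism
```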

&lt;h3&gt;
  
  
  Increased Security and Reliability
&lt;/h3&gt;

&lt;p&gt;Another advantage of ephemeral environments over traditional staging approaches is enhanced security and reliability. Since each environment is isolated and short-lived, the risk of lingering vulnerabilities or data breaches is minimized. Moreover, these dynamic environments can be replicated to match production standards exactly, ensuring reliable testing outcomes. Security tests, penetration tests, and destructive testing can also run without affecting the live production site, enabling the security posture to be verified before changes reach production. This is a significant boost in confidence that teams relying only on production environments miss out on.&lt;/p&gt;

&lt;p&gt;For a deeper dive into the benefits of ephemeral environments as part of &lt;a href="https://release.com/blog/environments-as-a-service-eaas-top-3-benefits"&gt;Environments as a Service (EaaS) offerings&lt;/a&gt;, you might find this article helpful.&lt;/p&gt;

&lt;p&gt;With these advantages in mind, it's clear why ephemeral environments are becoming an integral part of modern software development workflows.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why should you care?
&lt;/h2&gt;

&lt;p&gt;Ephemeral environments are an innovative approach to software development that can greatly benefit your team. By creating temporary environments that closely resemble your production settings, you can streamline your development workflow, improve collaboration among team members, and stay competitive in your industry.&lt;/p&gt;

&lt;p&gt;Here are some key takeaways from this article:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Streamline your development workflow:&lt;/strong&gt; Ephemeral environments allow for faster iteration cycles, as you can quickly spin up new environments for testing and debugging.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Enhance collaboration:&lt;/strong&gt; With on-demand setups, developers, QA teams, and stakeholders can easily access and work in the same environment, reducing communication barriers.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Improve testing strategies:&lt;/strong&gt; Ephemeral environments provide an isolated space for thorough validation of system behavior before deploying to production.&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>devops</category>
      <category>development</category>
      <category>deployment</category>
      <category>environments</category>
    </item>
    <item>
      <title>Development vs Staging vs Production: What's the Difference?</title>
      <dc:creator>Regis Wilson</dc:creator>
      <pubDate>Mon, 21 Aug 2023 22:25:52 +0000</pubDate>
      <link>https://dev.to/rwilsonrelease/development-vs-staging-vs-production-whats-the-difference-43g2</link>
      <guid>https://dev.to/rwilsonrelease/development-vs-staging-vs-production-whats-the-difference-43g2</guid>
      <description>&lt;p&gt;The lines between development, staging, and production environments are often blurred. The distinctions may vary depending on many factors, including:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;the scale of the organization,&lt;/li&gt;
&lt;li&gt;the codebase, or&lt;/li&gt;
&lt;li&gt;whether you're viewing the environment from a product, unit testing, or security standpoint.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In this post, I use interviews with fellow developers to understand each environment's purpose and how it's distinct from the others. It's particularly challenging to differentiate between the development and staging environments, and some organizations forgo the staging environment altogether. Let’s find out why, and when it makes sense.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Is a Development Environment?
&lt;/h2&gt;

&lt;p&gt;Generally, the development environment is the first environment developers use to check that all code changes work well with each other. It's known as a “sandbox” for developers to work in. Examples of commonly used integrated development environments (IDEs) are &lt;a href="https://code.visualstudio.com/"&gt;Visual Studio Code&lt;/a&gt;, &lt;a href="https://www.eclipse.org/ide/"&gt;Eclipse&lt;/a&gt;, &lt;a href="https://www.jetbrains.com/"&gt;JetBrains tools&lt;/a&gt;, and many others. Note that historically development environments were based on a developer's laptop (a local machine), but with the emergence of cloud-based, on-demand computing and &lt;a href="https://releasehub.com/ephemeral-environments"&gt;ephemeral environments&lt;/a&gt;, those environments are now being deployed in the cloud.&lt;/p&gt;

&lt;p&gt;The IDE is where developers’ day-to-day work with code takes place; reloading and debugging aids are enabled here. This environment is also where developers make necessary code changes, and where approved code can merge with code from other developers working on the same project.&lt;/p&gt;

&lt;p&gt;Developers commonly use this space to experiment and receive feedback on improvements they can make to their work. Consequently, this environment is the most unstable and the most susceptible to bugs and potentially broken code. On the upside, by allowing mistakes to happen, it is the most conducive environment for learning collaboratively and creating a standardized process.&lt;/p&gt;

&lt;p&gt;Besides the most commonly known local machine, there are virtual and cloud-based development environments. Your team might use the virtual and cloud-based environments mainly depending on whether multiple platforms and machines are needed to effectively test and run the code they are writing.&lt;/p&gt;

&lt;p&gt;Development environments historically include only a small subset of the entire application and often lack elements like security, third-party APIs, and cloud-native services. Those are typically introduced later in the development process and tested in staging, which results in frequent rollbacks and bottlenecks there. To enable better code quality in development and more frequent release cycles, companies like &lt;a href="https://release.com/"&gt;Release&lt;/a&gt; came up with &lt;a href="https://release.com/ephemeral-environments"&gt;ephemeral environments&lt;/a&gt;: a production-like replica that allows developers to properly test their code (i.e., shift left) and isolate bugs to a single branch, while ensuring a smooth merge to staging and production.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Is a Staging Environment?
&lt;/h2&gt;

&lt;p&gt;A &lt;a href="https://release.com/staging-environments"&gt;staging environment&lt;/a&gt; is where your code is 'staged' prior to being run in front of users, so you can ensure it works as designed. The staging environment should mirror production as much as possible: it reflects a production environment not yet exposed to clients, customers, and the general public.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--puSnKiww--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/orsxvcao7nm56i64ji04.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--puSnKiww--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/orsxvcao7nm56i64ji04.png" alt="The staging environment should mirror production as much as possible" width="800" height="160"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This environment is primarily used for System Integration Testing (SIT) and in-depth manual testing conducted before the client receives code changes. Developers also &lt;a href="https://www.unitrends.com/blog/development-test-environments"&gt;perform&lt;/a&gt; quality assurance (QA), security testing, chaos testing, alpha testing, beta testing, and end-to-end (E2E) testing in this environment.&lt;/p&gt;

&lt;p&gt;Additionally, &lt;a href="https://release.com/user-acceptance-testing-with-ephemeral-environments"&gt;User acceptance testing (UAT)&lt;/a&gt; often happens here. In UAT, users can test changes they requested before the new code goes to a production environment.&lt;/p&gt;

&lt;p&gt;How you carry out testing in the staging environment can depend on the framework you're using. For example, Ruby on Rails doesn't ship with a staging mode; Rails developers instead switch to the test environment, which they use to run testing tools and debug failures. The &lt;a href="https://guides.rubyonrails.org/configuring.html"&gt;Rails Guide&lt;/a&gt; explains how to customize configuration and initialization, including defining additional environments.&lt;/p&gt;

&lt;h2&gt;
  
  
  Development vs. Staging Environments
&lt;/h2&gt;

&lt;p&gt;So, now that you know what development and staging environments are, you're probably wondering if you need both. Ultimately, the answer depends on the size of your organization, appetite for risk and speed of change, and your position on making a tradeoff between slowing down the process for quality and testing versus launching new features quickly.&lt;/p&gt;

&lt;p&gt;Sometimes smaller companies start out with fewer environments. One developer shared, “You just end up with multiple environments as the organization scales up.”  &lt;/p&gt;

&lt;p&gt;In some cases organizations with fewer users don't have staging environments. As another developer elaborated: &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"Instead, we can deploy in a way that 1% of the traffic will go to each one branch and main branch. Then, we can check the monitoring to see if there are differences between the two. When we are certain that at the most we will affect 1% of traffic and everything is fine, we will then proceed with merging the two branches. I think it would be ideal if the continuous integration (CI) and continuous deployment (CD) process were to set up that 1%, then we could verify the results. This is the same as I have seen for verifying front-end changes in continuous integration."&lt;/p&gt;
&lt;/blockquote&gt;
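&lt;p&gt;The routing that developer describes can be sketched in a few lines. This is an illustrative model, not their actual setup: user IDs are hashed into 100 buckets, so the same user consistently lands on either the canary branch or the main branch, and roughly 1% of traffic goes to the canary.&lt;/p&gt;

```python
import hashlib

def canary_bucket(user_id: str, canary_percent: int = 1) -> str:
    """Deterministically route a slice of traffic to the canary.

    Hashing the user ID keeps routing stable: the same user always
    sees the same branch, so the two branches can be compared fairly.
    Names and percentages here are illustrative.
    """
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return "canary" if bucket < canary_percent else "main"

# Simulate 10,000 users: close to 1% should land on the canary.
routed = [canary_bucket(f"user-{i}") for i in range(10_000)]
share = routed.count("canary") / len(routed)
print(f"canary share: {share:.2%}")
```

&lt;p&gt;In a real deployment the same bucketing would live in the load balancer or CI/CD pipeline, which then compares monitoring between the two branches before merging.&lt;/p&gt;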

&lt;h2&gt;
  
  
  Is a Staging Environment Necessary?
&lt;/h2&gt;

&lt;p&gt;Deploying to staging is safe because it will not affect users, but it is not necessarily effective, because you might not test all the features or combinations that end users will exercise. The general solution to this problem is to deploy to production as quickly as you can but only enable or test subsets of new features with flags or canary testing. This way you risk problems for only a small subset of users, and you can watch the application perform with live traffic in the production environment.&lt;/p&gt;

&lt;p&gt;Developers say they like to see how real traffic moves through the codebase, and they compare this technique to feature flagging. It may eliminate the need for a beta environment, with the result that "staging" is no longer a distinct environment.&lt;/p&gt;

&lt;p&gt;However, developers agree that it's useful to have a separate beta domain to make significant changes. According to &lt;a href="https://www.atlassian.com/continuous-delivery/principles/feature-flags"&gt;Atlassian CI/CD&lt;/a&gt;, feature flagging allows developers to turn functionality on and off during runtime. That way, you don't need to deploy code at every update.&lt;/p&gt;
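&lt;p&gt;As a minimal sketch of what runtime flagging looks like (the flag name here is hypothetical), the same build ships everywhere and behavior is toggled by configuration rather than by a new deploy:&lt;/p&gt;

```python
import os

def is_enabled(flag: str, default: bool = False) -> bool:
    """Read a feature flag from the environment at runtime.

    Real systems usually back this with a flag service or config
    store rather than plain environment variables.
    """
    return os.environ.get(flag, str(default)).lower() in ("1", "true", "yes")

# Toggle functionality on and off without deploying code at every update.
if is_enabled("NEW_CHECKOUT_FLOW"):
    print("serving new checkout flow")
else:
    print("serving legacy checkout flow")
```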

&lt;h2&gt;
  
  
  What Is a Production Environment?
&lt;/h2&gt;

&lt;p&gt;The production environment is the live site complete with performance optimizations. The codebase must be secure, performant, stable, and able to sustain heavy traffic as the client, customers, and public use it.  &lt;/p&gt;

&lt;p&gt;There is a common misconception that production is more important than development or staging. Actually, the reverse could be true: development environments could be so critical to the business that they cannot tolerate any downtime at all but production can tolerate some downtime.&lt;/p&gt;

&lt;p&gt;As an example, at Truecar and most other companies I have worked at, the website could be broken for some amount of time as long as it came back up relatively quickly. However, if development was down for more than an hour, you could be looking at losing an entire day of feature development for the whole company!&lt;/p&gt;

&lt;p&gt;Regardless of your setup, you should treat production with care and restrict who can update the production code. Ideally, you won't build new versions of the codebase just for the production environment; it's better to promote the same builds you already deployed to the staging environment.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--qOcaAuGZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zvsgk4sve0zt4k71e3hn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--qOcaAuGZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zvsgk4sve0zt4k71e3hn.png" alt="The production environment is the live site complete with performance optimizations" width="800" height="160"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;At this point in the software development lifecycle, the code shouldn't have any bugs or require fixes. To avoid a poor user experience, you should consider it the final product.&lt;/p&gt;

&lt;p&gt;However, you can make urgent fixes in the production environment if needed. In doing so, you can consistently improve upon quality control for product releases, making it easier to keep tabs on new product updates.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Although the development, staging, and production environments overlap, each has its own significance in the larger software development lifecycle. The significance of each environment depends on the organization running the system.&lt;/p&gt;

&lt;p&gt;The way a company treats and leverages these environments today differs wildly depending on the organization and its DevOps practices and policies. Sometimes teams within the same organization use these environments in different ways and have different philosophies of what they mean, and how critical they are to the company’s mission.&lt;/p&gt;

&lt;p&gt;From my conversations with individuals who play different roles in the tech industry, I can say the overall development culture is shifting progressively toward promoting new code to all these environments as soon as possible. One developer expressed,&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"The idea is that even the smallest code change gets released to production in a matter of minutes, not months."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;With that in mind, the common goal is that the folks responsible for the software development lifecycle want more efficient environments for producing the highest quality codebases, and they continuously strive to find new methods to make that process easier.&lt;/p&gt;

&lt;p&gt;For a deeper understanding of these environments and ideas for optimizing them, read more about &lt;a href="https://release.com/staging-environments"&gt;staging environments&lt;/a&gt;, &lt;a href="https://release.com/ephemeral-environments"&gt;ephemeral environments&lt;/a&gt;, and &lt;a href="https://release.com/user-acceptance-testing-with-ephemeral-environments"&gt;UAT&lt;/a&gt; with Release ephemeral environments.&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>eaas</category>
      <category>devops</category>
      <category>sre</category>
    </item>
    <item>
      <title>Using Docker Build and BuildX to Host Production Websites in Kubernetes</title>
      <dc:creator>Regis Wilson</dc:creator>
      <pubDate>Fri, 04 Aug 2023 22:16:26 +0000</pubDate>
      <link>https://dev.to/rwilsonrelease/using-docker-build-and-buildx-to-host-production-websites-in-kubernetes-5g65</link>
      <guid>https://dev.to/rwilsonrelease/using-docker-build-and-buildx-to-host-production-websites-in-kubernetes-5g65</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Docker is all the rage these days, and hosting containers in production is becoming the norm. Complex application stacks have evolved beyond the &lt;code&gt;t2.small&lt;/code&gt; instances running a LAMP stack in AWS. But what are you going to do if your successful business is built on a wonderful PHP shopping site and you are expected to learn all the ins and outs of deploying and orchestrating images on today's advanced platforms? As each day passes, more and more of our t2 instances running RHEL5 and RHEL6 are being torn down by AWS with no suitable alternative to replace them.&lt;/p&gt;

&lt;p&gt;This guide will help you avoid orchestration and deployment woes by hosting your production PHP application using &lt;code&gt;docker build&lt;/code&gt; and a new easy-mode entry into Kubernetes with &lt;code&gt;docker buildx bake&lt;/code&gt;. The examples presented show a modern PHP7 stack running on RHEL8 using the latest innovations available to production workloads today. You do not need to learn complex ideas like &lt;code&gt;docker push&lt;/code&gt;, &lt;code&gt;docker run&lt;/code&gt;, &lt;code&gt;kubectl&lt;/code&gt;, and more. With this example you can be up and running in the modern era in as little as 30 minutes.&lt;/p&gt;

&lt;h2&gt;
  
  
  Starting Simple with Docker Build
&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;docker build&lt;/code&gt; is a beginner-friendly command that allows you to execute complex workflows and shell scripts inside a container without complex infrastructure or expertise. If you are new to &lt;code&gt;docker build&lt;/code&gt;, you can find some good resources online. Basically, you create a &lt;code&gt;Dockerfile&lt;/code&gt; that specifies which OS to start from and which yum packages to install, copies your code into the right directories, and then runs your website commands from there.&lt;/p&gt;

&lt;p&gt;This example is based on the &lt;a href="https://developers.redhat.com/blog/2020/03/24/red-hat-universal-base-images-for-docker-users#introducing_red_hat_universal_base_images"&gt;excellent blog post&lt;/a&gt; by RedHat.&lt;/p&gt;

&lt;p&gt;Save the following example in your CVS folder where your PHP code lives and name it &lt;code&gt;Dockerfile.rhel8&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM registry.access.redhat.com/ubi8/ubi:8.1 AS runtime

ARG NGROK_TOKEN

#RUN yum --disableplugin=subscription-manager -y module enable php:7.3 \
#  &amp;amp;&amp;amp; yum --disableplugin=subscription-manager -y install httpd php wget \
#  &amp;amp;&amp;amp; yum --disableplugin=subscription-manager clean all
RUN yum --disableplugin=subscription-manager -y install httpd php wget \
  &amp;amp;&amp;amp; yum --disableplugin=subscription-manager clean all

RUN wget --no-verbose 'https://bin.equinox.io/c/bNyj1mQVY4c/ngrok-v3-stable-linux-amd64.tgz' \
  &amp;amp;&amp;amp; tar xvzf ngrok-v3-stable-linux-amd64.tgz -C /usr/local/bin \
  &amp;amp;&amp;amp; rm -f ngrok-v3-stable-linux-amd64.tgz

COPY *.php /var/www/html

RUN sed -i 's/Listen 80/Listen 8080/' /etc/httpd/conf/httpd.conf \
  &amp;amp;&amp;amp; sed -i 's/listen.acl_users = apache,nginx/listen.acl_users =/' /etc/php-fpm.d/www.conf \
  &amp;amp;&amp;amp; mkdir /run/php-fpm \
  &amp;amp;&amp;amp; chgrp -R 0 /var/log/httpd /var/run/httpd /run/php-fpm \
  &amp;amp;&amp;amp; chmod -R g=u /var/log/httpd /var/run/httpd /run/php-fpm

FROM runtime

COPY *.php /var/www/html

RUN php-fpm &amp;amp; httpd -D FOREGROUND &amp;amp; NGROK_TOKEN=$NGROK_TOKEN ngrok http 8080 --log=stdout
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This file will create a docker container and run all of the setup commands you need to install the full LAMP stack you are used to without any infrastructure or devops help. In each step, we execute setup commands and download &lt;code&gt;yum&lt;/code&gt; packages to add software to your running container.&lt;/p&gt;

&lt;p&gt;You will notice that although &lt;code&gt;docker build&lt;/code&gt; is very powerful and can run any command you tell it to, it will not allow you to connect to any network resources, for safety and security. That is, normally you could just run &lt;code&gt;httpd&lt;/code&gt; and then visit &lt;code&gt;http://localhost:8080&lt;/code&gt; to see your website. Here, the server is running on &lt;code&gt;localhost&lt;/code&gt;, but you will not be able to access it, because it is a different &lt;code&gt;localhost&lt;/code&gt; inside the container. You could call it &lt;code&gt;remotehost&lt;/code&gt; in this scenario.&lt;/p&gt;

&lt;p&gt;All of these networking details are securely hidden by the safety precautions of &lt;code&gt;docker build&lt;/code&gt;. You will rely on these same safety features when you deploy to production, because networking features are simply not available to &lt;code&gt;docker build&lt;/code&gt; commands. So we use ngrok to tunnel around these details.&lt;/p&gt;

&lt;p&gt;The ngrok tunnel command can be customised to use a custom domain on the endpoint you select, provided you supply the correct ngrok token. To run your website locally, simply type this on your laptop (this is not production yet, so you can use an auto-generated hostname at this point):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;read -p "Input token? " NGROK_TOKEN
export NGROK_TOKEN
docker build -f Dockerfile.rhel8 -t mywebsiteapp:1 --build-arg NGROK_TOKEN=$NGROK_TOKEN .
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will ask for your production ngrok token for hosting and then your website will be available:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Input token? *********
#1 [internal] load .dockerignore
#1 transferring context: 2B done
#1 DONE 0.0s

#2 [internal] load build definition from Dockerfile.rhel8
#2 transferring dockerfile: 940B done
#2 DONE 0.0s

#3 [internal] load metadata for registry.access.redhat.com/ubi8/ubi:8.1
#3 DONE 0.5s

#4 [1/6] FROM registry.access.redhat.com/ubi8/ubi:8.1@sha256:1f0e6e1f451ff020b3b44c1c4c34d85db5ffa0fc1bb0490d6a32957a7a06b67f
#4 DONE 0.0s

#5 [2/6] RUN yum --disableplugin=subscription-manager -y module enable php:7.3   &amp;amp;&amp;amp; yum --disableplugin=subscription-manager -y install httpd php wget   &amp;amp;&amp;amp; yum --disableplugin=subscription-manager clean all
#5 CACHED

#6 [internal] load build context
#6 transferring context: 30B done
#6 DONE 0.0s

#7 [3/6] RUN wget --no-verbose 'https://bin.equinox.io/c/bNyj1mQVY4c/ngrok-v3-stable-linux-amd64.tgz'   &amp;amp;&amp;amp; tar xvzf ngrok-v3-stable-linux-amd64.tgz -C /usr/local/bin &amp;amp;&amp;amp; rm -f ngrok-v3-stable-linux-amd64.tgz
#7 3.511 2023-08-04 21:41:43 URL:https://bin.equinox.io/c/bNyj1mQVY4c/ngrok-v3-stable-linux-amd64.tgz [8826207/8826207] -&amp;gt; "ngrok-v3-stable-linux-amd64.tgz" [1]
#7 3.519 ngrok
#7 DONE 3.8s

#8 [4/6] ADD index.php /var/www/html
#8 DONE 0.1s

#9 [5/6] RUN sed -i 's/Listen 80/Listen 8080/' /etc/httpd/conf/httpd.conf   &amp;amp;&amp;amp; sed -i 's/listen.acl_users = apache,nginx/listen.acl_users =/' /etc/php-fpm.d/www.conf   &amp;amp;&amp;amp; mkdir /run/php-fpm   &amp;amp;&amp;amp; chgrp -R 0 /var/log/httpd /var/run/httpd /run/php-fpm   &amp;amp;&amp;amp; chmod -R g=u /var/log/httpd /var/run/httpd /run/php-fpm
#9 DONE 0.5s

#10 [6/6] RUN php-fpm &amp;amp; httpd -D FOREGROUND &amp;amp; NGROK_TOKEN=xyzzy ngrok http 8080 --log=stdout
#10 0.586 t=2023-08-04T21:41:44+0000 lvl=info msg="no configuration paths supplied"
#10 0.586 t=2023-08-04T21:41:44+0000 lvl=info msg="ignoring default config path, could not stat it" path=/root/.config/ngrok/ngrok.yml
#10 0.587 t=2023-08-04T21:41:44+0000 lvl=info msg="starting web service" obj=web addr=127.0.0.1:4040 allow_hosts=[]
#10 0.601 AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 127.0.0.1. Set the 'ServerName' directive globally to suppress this message
#10 0.941 t=2023-08-04T21:41:44+0000 lvl=info msg="client session established" obj=tunnels.session obj=csess id=c8fe54e8acb1
#10 0.941 t=2023-08-04T21:41:44+0000 lvl=info msg="tunnel session started" obj=tunnels.session
#10 1.050 t=2023-08-04T21:41:44+0000 lvl=info msg="started tunnel" obj=tunnels name=command_line addr=http://localhost:8080 url=https://999-99-999-9-99.ngrok.io
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;At this point your website is running and you will be able to get your browser to load your new website in your browser:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--igllBWmQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hviqqv2u6f1tza3j9pch.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--igllBWmQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hviqqv2u6f1tza3j9pch.jpg" alt="Image description" width="800" height="529"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Notice that you do not need to use any complicated &lt;code&gt;push&lt;/code&gt;, &lt;code&gt;pull&lt;/code&gt;, or &lt;code&gt;run&lt;/code&gt; commands. The container simply runs and waits for you to test your code.&lt;/p&gt;

&lt;h2&gt;
  
  
  I Feel the Ache, the Cache for Speed
&lt;/h2&gt;

&lt;p&gt;You may notice that Docker will not cache the final result of your build, because it does not complete a full layer. This is by design, so that your code is secured and not stored anywhere unsafe. However, by utilising the &lt;a href="https://docs.docker.com/build/building/multi-stage/#use-multi-stage-builds"&gt;multi-stage dockerfile&lt;/a&gt; available to the most elite Docker users, you will see that subsequent builds are very fast, slowing down only to copy your code base after it is updated or when you restart the development environment.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ docker build -f Dockerfile.rhel -t rhel8:4 . --build-arg NGROK_TOKEN=$NGROK_TOKEN --progress plain
#1 [internal] load build definition from Dockerfile.rhel
#1 transferring dockerfile: 966B done
#1 DONE 0.0s

#2 [internal] load .dockerignore
#2 transferring context: 2B done
#2 DONE 0.0s

#3 [internal] load metadata for registry.access.redhat.com/ubi8/ubi:8.1
#3 DONE 0.3s

#4 [runtime 1/4] FROM registry.access.redhat.com/ubi8/ubi:8.1@sha256:1f0e6e1f451ff020b3b44c1c4c34d85db5ffa0fc1bb0490d6a32957a7a06b67f
#4 DONE 0.0s

#5 [runtime 2/4] RUN yum --disableplugin=subscription-manager -y module enable php:7.3   &amp;amp;&amp;amp; yum --disableplugin=subscription-manager -y install httpd php wget   &amp;amp;&amp;amp; yum --disableplugin=subscription-manager clean all
#5 CACHED

#6 [runtime 3/4] RUN wget --no-verbose 'https://bin.equinox.io/c/bNyj1mQVY4c/ngrok-v3-stable-linux-amd64.tgz'   &amp;amp;&amp;amp; tar xvzf ngrok-v3-stable-linux-amd64.tgz -C /usr/local/bin &amp;amp;&amp;amp; rm -f ngrok-v3-stable-linux-amd64.tgz
#6 CACHED

#7 [internal] load build context
#7 transferring context: 30B done
#7 DONE 0.0s

#8 [runtime 4/4] RUN sed -i 's/Listen 80/Listen 8080/' /etc/httpd/conf/httpd.conf   &amp;amp;&amp;amp; sed -i 's/listen.acl_users = apache,nginx/listen.acl_users =/' /etc/php-fpm.d/www.conf   &amp;amp;&amp;amp; mkdir /run/php-fpm   &amp;amp;&amp;amp; chgrp -R 0 /var/log/httpd /var/run/httpd /run/php-fpm   &amp;amp;&amp;amp; chmod -R g=u /var/log/httpd /var/run/httpd /run/php-fpm
#8 DONE 0.4s

#9 [stage-1 1/2] COPY index.php /var/www/html
#9 DONE 0.1s

#10 [stage-1 2/2] RUN php-fpm &amp;amp; httpd -D FOREGROUND &amp;amp; NGROK_TOKEN= ngrok http 8080 --log=stdout
#10 0.477 t=2023-08-05T22:33:54+0000 lvl=info msg="no configuration paths supplied"
#10 0.477 t=2023-08-05T22:33:54+0000 lvl=info msg="ignoring default config path, could not stat it" path=/root/.config/ngrok/ngrok.yml
#10 0.478 t=2023-08-05T22:33:54+0000 lvl=info msg="starting web service" obj=web addr=127.0.0.1:4040 allow_hosts=[]
#10 0.487 AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 127.0.0.1. Set the 'ServerName' directive globally to suppress this message
#10 0.816 t=2023-08-05T22:33:54+0000 lvl=info msg="client session established" obj=tunnels.session obj=csess id=88117dc8c891
#10 0.816 t=2023-08-05T22:33:54+0000 lvl=info msg="tunnel session started" obj=tunnels.session
#10 0.906 t=2023-08-05T22:33:54+0000 lvl=info msg="started tunnel" obj=tunnels name=command_line addr=http://localhost:8080 url=https://abcde-99-999-9-99.ngrok.io
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can see how blazingly fast the steps are now that the previous layers have been cached and your code is safe and secure in memory. The previous build took over 6 seconds to complete, while the cached version took less than 0.7 seconds, a savings of just under 90%!!&lt;/p&gt;

&lt;h2&gt;
  
  
  Updating your code locally before running in production
&lt;/h2&gt;

&lt;p&gt;Obviously you will want to make some code changes and adjust and test your site before you are ready to publish to production. No problem: use the following example in another terminal on your laptop:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd production-web-site
cvs update
pico sales.php
&amp;lt;make your changes and save&amp;gt;
cvs add sales.php
cvs commit -m "Add thinner border and show postal code" .
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Switch back to your original docker build terminal and hit CTRL-C to terminate the original site. Then run version two of your site:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;read -p "Input token? " NGROK_TOKEN
export NGROK_TOKEN
docker build -f Dockerfile.rhel8 -t mywebsiteapp:2 --build-arg NGROK_TOKEN=$NGROK_TOKEN .
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Running in Kubernetes
&lt;/h2&gt;

&lt;p&gt;Everyone wants to deploy to Kubernetes, but it is difficult, if not impossible. However, it is relatively simple to add a &lt;code&gt;buildx&lt;/code&gt; builder to your cluster and deploy production-level quality for your containers. Again, no push, pull, and run complexities are required.&lt;/p&gt;

&lt;p&gt;The new feature of &lt;code&gt;docker build&lt;/code&gt; that we rely on is the ability to deploy running containers to Kubernetes using the &lt;code&gt;docker buildx bake&lt;/code&gt; command. The &lt;code&gt;bake&lt;/code&gt; command is an extension of &lt;code&gt;build&lt;/code&gt;, much like you would build an oven and then bake a cake. It is the same with the code.&lt;/p&gt;

&lt;p&gt;To be able to run your production website on Kubernetes with &lt;code&gt;bake&lt;/code&gt; you will first create the &lt;code&gt;buildx&lt;/code&gt; instance pod that will run your production website:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ docker buildx create mywebsite --driver kubernetes --driver-opt nodeselector=kubernetes.io/arch=amd64
watoosi-canal
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can list your builders with the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker buildx ls
NAME/NODE       DRIVER/ENDPOINT             STATUS  BUILDKIT PLATFORMS
watoosie-canal* kubernetes
  watoosie-canal0 ......... running v0.19.4  linux/amd64
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, you will need to create a &lt;code&gt;bake&lt;/code&gt; file, which is essentially a recipe for your code to go into the Kubernetes oven.&lt;/p&gt;

&lt;p&gt;A quick note on the ngrok configuration: you can supply your credentials in the &lt;code&gt;ngrok.yml&lt;/code&gt; configuration file, along with hostnames and so forth. This is the recommended way to set the custom domain name, web and security settings, and so on. You can visit the &lt;a href="https://ngrok.com/docs/secure-tunnels/ngrok-agent/reference/config/"&gt;ngrok documentation&lt;/a&gt; for more details.&lt;/p&gt;

&lt;p&gt;Simply create a file like this called &lt;code&gt;docker-bake.prod.hcl&lt;/code&gt; with the following contents:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;target "webapp-release" {
  dockerfile = "Dockerfile.rhel8"
  tags = ["mywebsiteapp"]
  platforms = ["linux/amd64"]
  args = {
    NGROK_TOKEN = "&amp;lt;insert credentials here&amp;gt;"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then run the following command to connect to Kubernetes, deploy the build container and run your website!&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker buildx bake -f docker-bake.prod.hcl webapp-release
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;(If your website crashes or you want to restart it, you might need to find the build instance and select it as follows:)&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ docker buildx ls
....
$ docker buildx use &amp;lt;name of instance&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Visit your production website in the browser and you will see something like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--UsTSIcDq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qiwnaxlze25vd3u2hmfv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--UsTSIcDq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qiwnaxlze25vd3u2hmfv.png" alt="Image description" width="800" height="487"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;If you use &lt;code&gt;docker build&lt;/code&gt; in production, I'd like to hear about it! We can share the best practices and write more blog posts to share knowledge.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Photo by &lt;a href="https://unsplash.com/@jramos10?utm_source=unsplash&amp;amp;utm_medium=referral&amp;amp;utm_content=creditCopyText"&gt;Josue Isai Ramos Figueroa&lt;/a&gt; on &lt;a href="https://unsplash.com/photos/qvBYnMuNJ9A?utm_source=unsplash&amp;amp;utm_medium=referral&amp;amp;utm_content=creditCopyText"&gt;Unsplash&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

</description>
      <category>docker</category>
      <category>kubernetes</category>
      <category>webdev</category>
      <category>php</category>
    </item>
    <item>
      <title>Syncing Databases: How to Do It and Best Practices</title>
      <dc:creator>Regis Wilson</dc:creator>
      <pubDate>Fri, 16 Jun 2023 00:17:16 +0000</pubDate>
      <link>https://dev.to/rwilsonrelease/syncing-databases-how-to-do-it-and-best-practices-3b4j</link>
      <guid>https://dev.to/rwilsonrelease/syncing-databases-how-to-do-it-and-best-practices-3b4j</guid>
      <description>&lt;p&gt;As an engineering leader or developer, you may have encountered the need to sync multiple copies of a database to ensure data consistency across different systems or locations. Whether you're working on a distributed system, a mobile application, or a cloud-based platform, syncing the databases is a crucial task that requires careful planning and execution.&lt;/p&gt;

&lt;p&gt;In this article, we'll look at database synchronization and what to use it for. Next, we'll explore different types of synchronization processes and how to sync databases step by step. We'll also cover some helpful tooling that makes syncing your databases easier and faster. Let's get started!&lt;/p&gt;

&lt;h2&gt;
  
  
  What Is Database Synchronization?
&lt;/h2&gt;

&lt;p&gt;Database synchronization is the process of keeping multiple copies of a database in sync with one another.&lt;/p&gt;

&lt;p&gt;There are different ways of synchronizing databases, depending on the type of database, the network infrastructure, and the application's requirements. Some of the standard &lt;strong&gt;methods&lt;/strong&gt; of database synchronization include the following:&lt;/p&gt;

&lt;h3&gt;
  
  
  Two-way synchronization
&lt;/h3&gt;

&lt;p&gt;In two-way synchronization, both databases can make changes and synchronize with each other.&lt;/p&gt;

&lt;h3&gt;
  
  
  One-way synchronization
&lt;/h3&gt;

&lt;p&gt;In one-way synchronization, one database acts as the source, and the other databases are updated to match it.&lt;/p&gt;
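&lt;p&gt;A toy sketch of one-way synchronization, using plain dictionaries in place of real database tables: the source is authoritative, and the replica is simply overwritten to match it.&lt;/p&gt;

```python
def one_way_sync(source: dict, replica: dict) -> None:
    """Make the replica an exact copy of the authoritative source."""
    replica.clear()          # drop rows that no longer exist upstream
    replica.update(source)   # copy every current source row

source = {1: "ada", 2: "alan"}
replica = {1: "stale", 3: "orphan"}
one_way_sync(source, replica)
print(replica)  # {1: 'ada', 2: 'alan'}
```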

&lt;h3&gt;
  
  
  Incremental synchronization
&lt;/h3&gt;

&lt;p&gt;With incremental synchronization, only the changes made since the last synchronization are transferred.&lt;/p&gt;
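&lt;p&gt;A minimal illustration of the idea, assuming each row carries an &lt;code&gt;updated_at&lt;/code&gt; marker (the table shape is made up for the example): only rows touched since the previous sync are copied, and the marker advances for the next run.&lt;/p&gt;

```python
def incremental_sync(source_rows, target, last_sync):
    """Copy only rows changed since the last sync marker."""
    newest = last_sync
    for row in source_rows:
        if row["updated_at"] > last_sync:
            target[row["id"]] = row
            newest = max(newest, row["updated_at"])
    return newest  # becomes last_sync for the next run

source = [
    {"id": 1, "name": "ada",  "updated_at": 100},
    {"id": 2, "name": "alan", "updated_at": 250},
]
target = {}
marker = incremental_sync(source, target, last_sync=200)
print(len(target), marker)  # only the row updated after 200 is copied
```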

&lt;h3&gt;
  
  
  File-based synchronization
&lt;/h3&gt;

&lt;p&gt;With file-based synchronization, data is exported to a file and imported into the other databases.&lt;/p&gt;
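&lt;p&gt;In miniature, file-based synchronization is just an export step and an import step. JSON stands in here for whatever dump format the real database uses:&lt;/p&gt;

```python
import json, os, tempfile

source = {"1": "ada", "2": "alan"}  # stand-in for the source database

# Export the source data to a file...
path = os.path.join(tempfile.mkdtemp(), "dump.json")
with open(path, "w") as f:
    json.dump(source, f)

# ...then import the file on the other side.
with open(path) as f:
    replica = json.load(f)

print(replica == source)  # True
```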

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--IcrVmHkI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2ed877hm3t09kljvbge3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--IcrVmHkI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2ed877hm3t09kljvbge3.png" alt="Image description" width="800" height="160"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Use Cases for Synchronization
&lt;/h2&gt;

&lt;p&gt;Database synchronization is used in various situations where multiple copies of a database are in use and when it's necessary to ensure that the data in each copy is consistent. Some common use cases include the following:&lt;/p&gt;

&lt;h3&gt;
  
  
  Backup and recovery
&lt;/h3&gt;

&lt;p&gt;You can use database synchronization to keep a secondary copy of a database in sync with the primary copy, providing a way to recover from data loss or corruption.&lt;/p&gt;

&lt;h3&gt;
  
  
  Mobile and offline applications
&lt;/h3&gt;

&lt;p&gt;Applications that work offline or on mobile devices may need to synchronize data with a central database when a connection becomes available. Database synchronization ensures that the data on the mobile device or offline application is consistent with the data on the central server.&lt;/p&gt;

&lt;h3&gt;
  
  
  Collaborative platforms
&lt;/h3&gt;

&lt;p&gt;Multiple users may work on the same data in a collaborative platform. Here, database synchronization ensures that changes made by one user propagate to all other users, maintaining data consistency.&lt;/p&gt;

&lt;h3&gt;
  
  
  Distributed systems
&lt;/h3&gt;

&lt;p&gt;In a distributed system, multiple copies of a database may run on different servers or in different locations. Database synchronization ensures that changes made to one copy of the database propagate to all other copies, maintaining data consistency across the system.&lt;/p&gt;

&lt;h3&gt;
  
  
  Cloud-based systems
&lt;/h3&gt;

&lt;p&gt;Cloud-based systems often have multiple copies of a database running in different regions to provide high availability and reduce latency. Database synchronization ensures that data is consistent across all copies of the database.&lt;/p&gt;

&lt;h2&gt;
  
  
  Types of Database Synchronization
&lt;/h2&gt;

&lt;p&gt;There are several different types of database synchronization, each with its own advantages and disadvantages. Some common types include the following:&lt;/p&gt;

&lt;h3&gt;
  
  
  Source/Replica replication
&lt;/h3&gt;

&lt;p&gt;In this type of replication, one database acts as the source, and the other databases are updated to match it. The source database receives all updates and changes, which then propagate to the replica databases. It's commonly used for read-heavy workloads.&lt;/p&gt;

&lt;h3&gt;
  
  
  Multi-master replication
&lt;/h3&gt;

&lt;p&gt;In this type of replication, all databases can act as both sources and replicas. Changes made to one database are propagated to all other databases, ensuring that all copies of the data are consistent. This type of replication is helpful for write-heavy workloads.&lt;/p&gt;

&lt;h3&gt;
  
  
  File-based synchronization
&lt;/h3&gt;

&lt;p&gt;In this type of synchronization, data is exported to a file and then imported into the other databases. It's a simple method that's easy to implement, but it can be slow and may not be suitable for large amounts of data.&lt;/p&gt;
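&lt;p&gt;A minimal sketch of file-based synchronization, using SQLite in-memory databases and a JSON string standing in for the export file (the &lt;code&gt;items&lt;/code&gt; table and helper names are illustrative, not from any particular tool):&lt;/p&gt;

```python
import json
import sqlite3

def export_table(conn, table):
    """Dump every row of a table to a JSON string (the export 'file')."""
    rows = conn.execute(f"SELECT id, value FROM {table}").fetchall()
    return json.dumps(rows)

def import_table(conn, table, payload):
    """Replace the target table's contents with the exported rows."""
    conn.execute(f"DELETE FROM {table}")
    conn.executemany(f"INSERT INTO {table} (id, value) VALUES (?, ?)",
                     json.loads(payload))
    conn.commit()

# Two in-memory databases stand in for the source and target.
src = sqlite3.connect(":memory:")
dst = sqlite3.connect(":memory:")
for db in (src, dst):
    db.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, value TEXT)")

src.executemany("INSERT INTO items VALUES (?, ?)", [(1, "a"), (2, "b")])
import_table(dst, "items", export_table(src, "items"))
print(dst.execute("SELECT * FROM items ORDER BY id").fetchall())
# [(1, 'a'), (2, 'b')]
```

&lt;p&gt;Note that the whole table is rewritten on every import, which is exactly why this approach becomes slow for large datasets.&lt;/p&gt;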

&lt;h3&gt;
  
  
  Log-based replication
&lt;/h3&gt;

&lt;p&gt;In this type of replication, changes made to a database are recorded in a log and then propagated to the other databases. This allows for fast and efficient replication but can be more complex to set up and maintain.&lt;/p&gt;
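&lt;p&gt;A toy model of log-based replication, where a plain Python list plays the role of the change log and the captured changes are replayed in order against a replica (all names here are illustrative):&lt;/p&gt;

```python
import sqlite3

# The "log" is an ordered list of change records captured on the source.
change_log = []

def record(op, row):
    change_log.append((op, row))

def apply_log(conn, log):
    """Replay the captured changes against a replica, in order."""
    for op, row in log:
        if op == "insert":
            conn.execute("INSERT INTO items VALUES (?, ?)", row)
        elif op == "delete":
            conn.execute("DELETE FROM items WHERE id = ?", (row[0],))
    conn.commit()

replica = sqlite3.connect(":memory:")
replica.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, value TEXT)")

record("insert", (1, "a"))
record("insert", (2, "b"))
record("delete", (2, "b"))
apply_log(replica, change_log)
print(replica.execute("SELECT * FROM items").fetchall())  # [(1, 'a')]
```

&lt;p&gt;Real systems (such as MySQL's binary log or PostgreSQL's WAL) work on the same principle, with the log persisted and streamed rather than held in memory.&lt;/p&gt;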

&lt;h3&gt;
  
  
  Trigger-based replication
&lt;/h3&gt;

&lt;p&gt;In this type of replication, triggers are set up on the source database to capture changes, which can then be propagated to the target database. It allows for fine-grained control over which changes are propagated, but it can be resource-intensive and may be unsuitable for high-traffic systems.&lt;/p&gt;
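&lt;p&gt;SQLite makes the mechanics easy to see: triggers on the source table capture each write into a &lt;code&gt;changes&lt;/code&gt; table, which a separate process could later ship to the target. This is a hypothetical minimal setup, not a production-grade one:&lt;/p&gt;

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE items (id INTEGER PRIMARY KEY, value TEXT);
CREATE TABLE changes (op TEXT, id INTEGER, value TEXT);

-- Triggers capture each write into a changes table for later propagation.
CREATE TRIGGER items_ins AFTER INSERT ON items
BEGIN
    INSERT INTO changes VALUES ('insert', NEW.id, NEW.value);
END;
CREATE TRIGGER items_del AFTER DELETE ON items
BEGIN
    INSERT INTO changes VALUES ('delete', OLD.id, OLD.value);
END;
""")

conn.execute("INSERT INTO items VALUES (1, 'a')")
conn.execute("DELETE FROM items WHERE id = 1")
print(conn.execute("SELECT * FROM changes").fetchall())
# [('insert', 1, 'a'), ('delete', 1, 'a')]
```

&lt;p&gt;The resource cost mentioned above comes from the fact that every single write now performs an extra insert inside the same transaction.&lt;/p&gt;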

&lt;h3&gt;
  
  
  Cloud-based database synchronization
&lt;/h3&gt;

&lt;p&gt;We can also use cloud services like &lt;a href="https://aws.amazon.com/dms/"&gt;AWS Database Migration Service (DMS)&lt;/a&gt; and &lt;a href="https://azure.microsoft.com/en-us/products/database-migration"&gt;Azure Database Migration Service&lt;/a&gt; to sync databases. This is a good option if you have a cloud-based infrastructure and want to leverage the scalability and reliability offered by these services.&lt;/p&gt;

&lt;p&gt;The best approach for your use case will depend on the type of data, the number of databases, the network infrastructure, and the requirements of the application.&lt;/p&gt;

&lt;h2&gt;
  
  
  How to Sync Databases
&lt;/h2&gt;

&lt;p&gt;The process of syncing databases can vary, depending on the type of databases, the method of synchronization, and the specific requirements of the application. Below is the step-by-step guide on how to sync databases:&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 1: Understand your use case
&lt;/h3&gt;

&lt;p&gt;Understand the specific requirements of your use case and choose the synchronization method that best fits those needs.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 2: Identify the databases to be synced
&lt;/h3&gt;

&lt;p&gt;Determine which databases need to be synced and the type of data they contain.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 3: Choose a synchronization method
&lt;/h3&gt;

&lt;p&gt;Decide on the method of synchronization that's most appropriate for your use case. This may be replication, a data syncing tool, a custom script, or a cloud-based service.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 4: Configure the databases
&lt;/h3&gt;

&lt;p&gt;Set up the databases for synchronization. This may include configuring replication settings, installing data syncing tools, or writing custom scripts.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 5: Test the synchronization
&lt;/h3&gt;

&lt;p&gt;Test the synchronization by making changes to one database and verifying that the changes are propagated to the other databases. This will help you identify any issues or bugs before deploying it in production.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 6: Schedule synchronization
&lt;/h3&gt;

&lt;p&gt;Set a schedule for the synchronization to occur regularly. You can do this using a built-in scheduling feature or by writing a custom script.&lt;/p&gt;
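&lt;p&gt;As a quick illustration, Python's standard-library &lt;code&gt;sched&lt;/code&gt; module can drive a recurring sync pass. The interval is compressed to 10 ms so the demo finishes instantly; a real job would run hourly or nightly, or be driven by cron:&lt;/p&gt;

```python
import sched
import time

synced = []

def sync_job(scheduler, interval):
    """Run one sync pass, then reschedule the next one."""
    synced.append(time.time())   # stand-in for the real sync work
    if len(synced) >= 3:         # stop after a few passes for the demo
        return
    scheduler.enter(interval, 1, sync_job, (scheduler, interval))

s = sched.scheduler(time.time, time.sleep)
s.enter(0, 1, sync_job, (s, 0.01))
s.run()
print(f"ran {len(synced)} sync passes")  # ran 3 sync passes
```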

&lt;h3&gt;
  
  
  Step 7: Monitor and troubleshoot
&lt;/h3&gt;

&lt;p&gt;Monitor the synchronization process and troubleshoot any issues that arise. This may include monitoring replication lag, checking for errors, and resolving any conflicts that occur.&lt;/p&gt;
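&lt;p&gt;The core of lag monitoring is simple arithmetic: compare the source's last write timestamp (or log position) with the last one the replica has applied, and alert past a threshold. A hypothetical sketch:&lt;/p&gt;

```python
from datetime import datetime, timedelta, timezone

def replication_lag(source_last_write, replica_last_applied):
    """Lag is how far the replica's applied position trails the source."""
    return source_last_write - replica_last_applied

now = datetime.now(timezone.utc)
lag = replication_lag(now, now - timedelta(seconds=42))
print(lag.total_seconds())  # 42.0

# Alert if the replica falls more than a minute behind.
if lag > timedelta(minutes=1):
    print("ALERT: replica lagging")
else:
    print("replication healthy")
```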

&lt;h3&gt;
  
  
  Step 8: Maintain and update
&lt;/h3&gt;

&lt;p&gt;Regularly maintain and update the synchronization process to ensure that it continues to function correctly.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Note: Some steps may vary, depending on the type of database and the method of synchronization. For example, with a cloud-based service like AWS DMS, the process can be simpler: you only need to create a replication task and configure the source and target databases.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Tooling for Database Synchronization
&lt;/h2&gt;

&lt;p&gt;There are various tools available for database synchronization, depending on the type of database and the method of synchronization you're using. You can achieve database synchronization through various methods, such as the following:&lt;/p&gt;

&lt;h3&gt;
  
  
  Custom scripts
&lt;/h3&gt;

&lt;p&gt;You can write custom scripts using programming languages such as Python or Java to sync databases. This involves writing code to compare data in two databases and making changes as needed.&lt;/p&gt;
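&lt;p&gt;A bare-bones version of such a script, sketched against two SQLite databases; the &lt;code&gt;users&lt;/code&gt; table and the source-wins conflict strategy are illustrative assumptions:&lt;/p&gt;

```python
import sqlite3

def diff_and_sync(src, dst, table):
    """Compare rows by primary key and copy missing or changed rows to dst."""
    src_rows = dict(src.execute(f"SELECT id, value FROM {table}"))
    dst_rows = dict(dst.execute(f"SELECT id, value FROM {table}"))
    for key, value in src_rows.items():
        if dst_rows.get(key) != value:
            dst.execute(
                f"INSERT OR REPLACE INTO {table} (id, value) VALUES (?, ?)",
                (key, value),
            )
    dst.commit()

src = sqlite3.connect(":memory:")
dst = sqlite3.connect(":memory:")
for db in (src, dst):
    db.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, value TEXT)")
src.executemany("INSERT INTO users VALUES (?, ?)", [(1, "ann"), (2, "bob")])
dst.execute("INSERT INTO users VALUES (1, 'old')")

diff_and_sync(src, dst, "users")
print(dst.execute("SELECT * FROM users ORDER BY id").fetchall())
# [(1, 'ann'), (2, 'bob')]
```

&lt;p&gt;Loading entire tables into memory like this only works for small datasets; real scripts page through rows or compare checksums per range.&lt;/p&gt;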

&lt;h3&gt;
  
  
  Replication
&lt;/h3&gt;

&lt;p&gt;This involves copying data from one database to another so that changes made to one are reflected in the other.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://dev.mysql.com/doc/refman/8.0/en/replication.html"&gt;MySQL&lt;/a&gt; provides built-in replication capabilities, allowing users to replicate data between two or more MySQL servers.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://learn.microsoft.com/en-us/sql/relational-databases/replication/sql-server-replication?view=sql-server-ver16"&gt;Microsoft SQL Server&lt;/a&gt; also provides built-in replication capabilities, including transactional replication and merge replication.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.postgresql.org/docs/current/runtime-config-replication.html"&gt;PostgreSQL&lt;/a&gt; offers several replication solutions, including streaming replication and &lt;a href="https://www.postgresql.org/docs/15/logical-replication.html"&gt;logical replication&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.mongodb.com/docs/manual/replication/"&gt;MongoDB&lt;/a&gt; provides built-in replication features, including replica sets and sharded clusters for horizontal scalability.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Cloud-based services
&lt;/h3&gt;

&lt;p&gt;Below are some of the common cloud-based services that you can use to sync databases.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://aws.amazon.com/dms/"&gt;AWS Database Migration Service (DMS)&lt;/a&gt; can migrate, replicate, and sync databases between different platforms and environments, including on-premises environments and cloud-based environments like Amazon Web Services (AWS).&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://azure.microsoft.com/en-us/products/database-migration"&gt;Azure Database Migration Service&lt;/a&gt; is a fully managed service designed to enable seamless migrations from multiple database sources to Azure.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Data syncing tools
&lt;/h3&gt;

&lt;p&gt;Various data syncing tools can automate the process of keeping databases in sync.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://www.symmetricds.org/"&gt;SymmetricDS&lt;/a&gt; is open-source data synchronization software that supports multiple relational databases, including MySQL, PostgreSQL, Oracle, and more.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.talend.com/"&gt;Talend&lt;/a&gt;, &lt;a href="https://www.informatica.com/"&gt;Informatica&lt;/a&gt;, and &lt;a href="https://boomi.com/"&gt;Boomi&lt;/a&gt; can automate the process of keeping databases in sync.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.oracle.com/in/integration/goldengate/"&gt;Oracle GoldenGate&lt;/a&gt; is a real-time data integration and replication software for heterogeneous environments, including Oracle, SQL Server, DB2, and more.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These are some of the popular tooling used for database synchronization. However, the best tool depends on the type of database, the method of synchronization, and the specific requirements of the application.&lt;/p&gt;

&lt;h2&gt;
  
  
  Best Practices
&lt;/h2&gt;

&lt;p&gt;When working with database synchronization, there are several best practices that can help you ensure that your synchronization process is efficient and reliable.&lt;/p&gt;

&lt;h3&gt;
  
  
  Keep your databases in sync
&lt;/h3&gt;

&lt;p&gt;Regularly check and compare data between the different copies of the database and make updates as needed.&lt;/p&gt;

&lt;h3&gt;
  
  
  Use a replication tool
&lt;/h3&gt;

&lt;p&gt;Use a replication tool that best fits your use case and the type of data you're working with.&lt;/p&gt;

&lt;h3&gt;
  
  
  Backups
&lt;/h3&gt;

&lt;p&gt;Regularly back up your databases to ensure that you can recover from data loss or corruption.&lt;/p&gt;

&lt;h3&gt;
  
  
  Use cloud-based services
&lt;/h3&gt;

&lt;p&gt;You can also use cloud-based services like AWS DMS and Azure Database Migration Service to sync databases. This is a good option if you have a cloud-based infrastructure and want to leverage the scalability and reliability offered by these services.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--myyJ5QJl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tfjacjusy3ft6adhhav6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--myyJ5QJl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tfjacjusy3ft6adhhav6.png" alt="Image description" width="800" height="160"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Security
&lt;/h3&gt;

&lt;p&gt;Ensure that your synchronization process is secure by encrypting data in transit and at rest and by implementing access controls.&lt;/p&gt;

&lt;p&gt;By following these best practices, you can ensure that your databases are kept in sync and that your synchronization process is efficient and reliable. Additionally, it's always important to be aware of the particularities of the database you're working with, the replication tool you're using, and the type of data you're syncing in order to apply the best practices in a way that fits your specific needs.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Syncing databases is a crucial task that requires careful planning and execution. By understanding your use case, choosing the right synchronization method, and following best practices such as testing, monitoring, and maintaining, you can ensure that your databases sync effectively and efficiently. Additionally, by using cloud-based services, you can reduce the complexity of the process and ensure that your databases are always in sync.&lt;/p&gt;

</description>
      <category>database</category>
      <category>replication</category>
      <category>dba</category>
      <category>data</category>
    </item>
    <item>
      <title>Full Fidelity Data for Ephemeral Environments</title>
      <dc:creator>Regis Wilson</dc:creator>
      <pubDate>Thu, 09 Mar 2023 17:57:15 +0000</pubDate>
      <link>https://dev.to/rwilsonrelease/full-fidelity-data-for-ephemeral-environments-1obl</link>
      <guid>https://dev.to/rwilsonrelease/full-fidelity-data-for-ephemeral-environments-1obl</guid>
      <description>&lt;p&gt;This blog post is a follow-up to the wildly successful webinar on the same topic. Refer to our webinar “&lt;a href="https://www.youtube.com/watch?v=TAhz_UxPWG4"&gt;Full Fidelity Data for Ephemeral Environments&lt;/a&gt;” for more information. In the webinar and this post, we will explore why you should use production-like data in ephemeral environments, how you can (and should) do so, and how to reduce the cost, complexity, and avoid difficulties associated with using production-like data at scale.&lt;/p&gt;

&lt;h2&gt;
  
  
  Overview
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://release.com/ephemeral-environments"&gt;Ephemeral environments&lt;/a&gt; are an indispensable tool in almost any software development &lt;a href="https://docs.release.com/reference-documentation/workflows-in-release"&gt;workflow&lt;/a&gt;. By creating environments for testing and pre-production phases of the workflow, advanced software development teams can shift testing and verification “left” (that is, earlier in the life cycle rather than later when features or bugs reach production).&lt;/p&gt;

&lt;p&gt;The reason that environments should be ephemeral is that they can be set up on-demand with specific features and/or branches deployed in them and then torn down when the task is completed. Each feature or branch of the development code base can be tested in an isolated environment as opposed to the legacy shared and fixed stages of QA, &lt;a href="https://release.com/staging-environments"&gt;Staging&lt;/a&gt;, &lt;a href="https://release.com/user-acceptance-testing-with-ephemeral-environments"&gt;User Acceptance Testing (UAT)&lt;/a&gt;, and so forth.&lt;/p&gt;

&lt;p&gt;Ephemeral environments are always used for a specific purpose so it is clear which feature and code base is deployed. The data is ideally based on a very close approximation to production so that the feature under development or test can be deployed into production with as much confidence as possible.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Use Production-Like Data in Ephemeral Environments?
&lt;/h2&gt;

&lt;p&gt;The best way to test new features or fix bugs that exist in production is to use the same code and data from the production environment. Version Control Systems (VCS) like Git solved this problem decades ago, at least for code. But the data portion has long lagged behind due to complexity in access and cost, which we will address toward the end of this post.&lt;/p&gt;

&lt;p&gt;Testing code against stale or seed data is fine for automated testing, but when developing new features or diagnosing problems in production, the data should mimic production as closely as possible. Otherwise, you risk chasing the same bugs over and over again, or launching a faulty feature.&lt;/p&gt;

&lt;p&gt;It is rare that data in production is static and unchanging; if it were, you could make the database read-only! While some amount of read-only or write-rarely data exists, it almost always needs to be updated and changed at some point, the only difference is a matter of frequency of updates.&lt;/p&gt;

&lt;p&gt;Because the data is a living, breathing thing that changes and evolves constantly, often driven by live customer inputs and actions, the legacy strategy of fixed QA, staging, and test databases gets out of date extremely quickly. In my experience, a fixed environment will stray from production data in as little as a few days or weeks. Many times the QA or staging databases are several years out of date from production, unless you specifically “backport” data from production.&lt;/p&gt;

&lt;p&gt;Lastly, production databases and datasets are often quite large (and grow larger every day) compared to fixed QA and staging databases. Thus, testing on limited data or fake seed data when developing new features or changing the code base can introduce unexpected regressions in performance or bugs when large results are pulled out of a table.&lt;/p&gt;

&lt;h2&gt;
  
  
  Should I Really Use Production-like Data in Ephemeral Environments?
&lt;/h2&gt;

&lt;p&gt;Short answer, yes. However, you need to be mindful of the possible moral, ethical, and legal implications of using actual production data subject to HIPAA or other regulatory controls. This is why it is crucial to generate a so-called Golden Copy of the data that is scrubbed of any private or confidential data, while still maintaining the same qualities as production in terms of size, statistical parameters, and so forth. This Golden Copy is the source of truth that is updated frequently from your actual production data. We recommend daily updates, but depending on your particular use case it can be more or less often.&lt;/p&gt;
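&lt;p&gt;The scrubbing step can be sketched as a deterministic transform: direct identifiers are replaced with stable fakes (so keys and uniqueness survive across refreshes) while non-identifying values keep their production shape. A hypothetical row-level example, with invented field names:&lt;/p&gt;

```python
import hashlib

def scrub_row(row):
    """Replace direct identifiers while preserving shape and uniqueness."""
    digest = hashlib.sha256(row["email"].encode()).hexdigest()[:12]
    return {
        "id": row["id"],                        # keys kept for referential integrity
        "email": f"user-{digest}@example.com",  # deterministic fake address
        "balance": row["balance"],              # non-identifying values kept as-is
    }

row = {"id": 7, "email": "alice@real.com", "balance": 12.5}
scrubbed = scrub_row(row)
print(scrubbed["id"], scrubbed["balance"])  # 7 12.5
assert scrubbed["email"] != row["email"]
```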

&lt;p&gt;With a Golden Copy that is sufficiently production-like (each case varies in how much fidelity to production is required), it is possible to accurately portray feature behavior as it would occur in production. Transactions, events, (faked) customer accounts, and so forth are all available and up-to-date with real(-ish) data. For example, if there is a bug that manifests in production, an engineer could easily reproduce the bug with a high degree of confidence, and either validate or develop fixes for the problem.&lt;/p&gt;

&lt;p&gt;Testing features with fake or fixed data is suitable for automated testing but for many use cases, especially when testing with non-technical users, real production-like data is valuable to ensure the feature not only works properly but also looks correct when it reaches production.&lt;/p&gt;

&lt;h2&gt;
  
  
  Isn’t that Terribly Difficult, Expensive, and Prohibitive? Or, “My Database is too Unique.”
&lt;/h2&gt;

&lt;p&gt;The most common objections to using a production-like dataset are around the difficulties in creating, managing, and accessing the dataset, and around the overall cost. I can address these in two points: the first (creation, maintenance, and access) can be pretty difficult depending on your use case, but it can be solved; the second (cost) can readily be handled in this modern era of cloud computing and on-demand managed services offered by cloud providers.&lt;/p&gt;

&lt;p&gt;The first problem is that access to production data is usually kept in the hands of a small group of people, and those people are usually not the software engineers who are developing code and testing new features. Separation of concerns and responsibilities is vital for keeping production data clean and secure, but this causes problems when engineers need to test new features, verify bugs in production, or experiment with changes that need to be made.&lt;/p&gt;

&lt;p&gt;The best practice is to generate a Golden Copy of the data that you need, as mentioned above. The Golden Copy should be a cleaned, stable, up-to-date version of the data that exists in production, but without any data that could compromise confidentiality or proprietary information if it were to be exposed accidentally (or even internally).&lt;/p&gt;

&lt;p&gt;What I tell most people who are involved with production datasets is to create a culture of welcoming access to the Golden Copy and distributing the data internally on a self-service model so that anyone who would like the latest snapshot can access it relatively easily and without a lot of hoops to jump through. Making the data available will ensure that your cleaning and scrubbing process is actually working properly and that your test data is going to have a high degree of similarity to the data in production. This will only make your production data and operations cleaner and more efficient, I promise you.&lt;/p&gt;

&lt;p&gt;The second problem is that allowing software engineers, QA people, testers, and even product people access to these datasets comes at a cost. Every database will typically cost a certain amount of money to run and store the data, but there are definitely some optimizations you can implement to keep costs down while still enjoying access to high-quality datasets.&lt;/p&gt;

&lt;p&gt;The best way to keep costs down is to make sure that the requested access to the production-like dataset is limited in time. For example, a software engineer might only need to run the tests for a day or a week. At the end of that time, the data should automatically be expired and the instance removed because it is no longer needed. If the data is still needed after the initial period of time has expired, you can implement a way to extend the deadline as appropriate.&lt;/p&gt;
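&lt;p&gt;The expiry mechanics amount to a lease: each checkout of the dataset carries a deadline, can be extended on request, and is torn down once it passes. A sketch, with class and field names invented for illustration:&lt;/p&gt;

```python
from datetime import datetime, timedelta, timezone

class DataLease:
    """A clone of the dataset that expires unless explicitly extended."""
    def __init__(self, owner, days=7):
        self.owner = owner
        self.expires_at = datetime.now(timezone.utc) + timedelta(days=days)

    def expired(self, now=None):
        return (now or datetime.now(timezone.utc)) >= self.expires_at

    def extend(self, days):
        self.expires_at += timedelta(days=days)

lease = DataLease("engineer@example.com", days=7)
print(lease.expired())  # False: still inside the initial week
lease.extend(days=7)    # the engineer asks for more time
later = lease.expires_at - timedelta(hours=1)
print(lease.expired(now=later))  # False: the extension covers it
```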

&lt;p&gt;Another way to reduce costs is to use Copy on Write (COW) technologies if they are available for your database engine and cloud provider. The way this works is that the Golden Copy holds most of the data in storage, while the clones that are handed out to engineers share most of that data with the original. Only when a change or update is made to a table or row is the data “copied” over for the clone to use. This means that the only additional storage costs for the clone are the incremental changes or writes that are executed during testing.&lt;/p&gt;
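&lt;p&gt;A toy model of the copy-on-write idea: the clone reads through to the shared Golden Copy until it writes, and only the written “pages” consume extra storage. This is the same principle COW storage engines apply, in miniature:&lt;/p&gt;

```python
class CowClone:
    """A clone that shares the Golden Copy's pages until it writes to one."""
    def __init__(self, golden):
        self.golden = golden   # shared, read-only base data
        self.local = {}        # only pages this clone has modified

    def read(self, key):
        return self.local.get(key, self.golden.get(key))

    def write(self, key, value):
        self.local[key] = value  # the copy happens only on write

    def extra_storage(self):
        return len(self.local)   # cost is just the changed pages

golden = {"page1": "alpha", "page2": "beta"}
clone = CowClone(golden)
print(clone.read("page1"))    # alpha (shared with the Golden Copy)
clone.write("page1", "edited")
print(clone.read("page1"))    # edited (clone's private copy)
print(golden["page1"])        # alpha (Golden Copy untouched)
print(clone.extra_storage())  # 1
```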

&lt;p&gt;Another good way to reduce costs is to pause or stop a database when it is not in use. Depending on your cloud provider, you may be able to pause or stop the database instance so that you save money during evenings or weekends when the database is unlikely to be in use. This can cut your costs by 30-60% versus running the database 24/7.&lt;/p&gt;
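&lt;p&gt;The arithmetic behind that estimate is easy to check. A hypothetical policy that pauses the instance overnight and all weekend idles about two-thirds of the hours in a week; realized savings are typically lower, since storage is usually still billed while paused and weekday usage varies:&lt;/p&gt;

```python
def should_pause(hour, weekday):
    """Pause overnight (20:00-07:59) and all weekend (Sat=5, Sun=6)."""
    return weekday >= 5 or hour >= 20 or hour in range(8)

# Count paused hours across a full week of hourly checks.
paused = sum(should_pause(h, d) for d in range(7) for h in range(24))
print(f"{paused}/{7 * 24} hours paused ({paused / (7 * 24):.0%})")
# 108/168 hours paused (64%)
```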

&lt;p&gt;The good news is that &lt;a href="https://docs.release.com/"&gt;Release&lt;/a&gt; offers all of these features, including cloning snapshots from the Golden Copy, pausing, restarting, and expiring databases on a schedule, and using COW to save time, money, and storage. We support all of the above (using &lt;a href="https://docs.release.com/frequently-asked-questions/aws-support-faqs"&gt;AWS&lt;/a&gt;, GCP, and soon Azure), and we can easily build a dataset pipeline that is checked out to each &lt;a href="https://www.merriam-webster.com/dictionary/ephemeral"&gt;ephemeral&lt;/a&gt; environment where your code is deployed.&lt;/p&gt;

&lt;p&gt;You can refresh the dataset to get new changes from the Golden Copy (which is a cleaned version of production, or directly from production as you wish), and you can also update the data in your ephemeral environment to start over from scratch with a new database. You can also set an expiration for each ephemeral environment that will last as long as the branch or feature pull request is open, and engineers can extend the environment duration as needed. Lastly, you can set a schedule for pausing the databases in the dataset so that you save additional costs when the environments are unlikely to be used.&lt;/p&gt;

&lt;p&gt;Ready to learn more? Watch the on-demand webinar “&lt;a href="https://www.youtube.com/watch?v=TAhz_UxPWG4"&gt;Full Fidelity Data for Ephemeral Environments&lt;/a&gt;.”&lt;/p&gt;

&lt;p&gt;About Release&lt;br&gt;
&lt;a href="https://release.com"&gt;Release&lt;/a&gt; is the simplest way to spin up even the most complicated environments. We specialize in taking your complicated application and data and making reproducible environments on-demand.&lt;/p&gt;

</description>
      <category>database</category>
      <category>testing</category>
      <category>devops</category>
      <category>automation</category>
    </item>
    <item>
      <title>How to Self-Host PostHog Using Release Part One</title>
      <dc:creator>Regis Wilson</dc:creator>
      <pubDate>Wed, 15 Feb 2023 19:58:38 +0000</pubDate>
      <link>https://dev.to/rwilsonrelease/how-to-self-host-posthog-using-release-part-one-c9m</link>
      <guid>https://dev.to/rwilsonrelease/how-to-self-host-posthog-using-release-part-one-c9m</guid>
      <description>&lt;p&gt;This first part of a series explains how to run a hobby version of &lt;a href="https://github.com/PostHog/posthog" rel="noopener noreferrer"&gt;PostHog&lt;/a&gt; on your own cloud infrastructure using Release. Check back later to read how to perform the self-hosted enterprise version. You can read the &lt;a href="https://posthog.com/faq" rel="noopener noreferrer"&gt;PostHog FAQ&lt;/a&gt; for more details on the software and self-hosting options for hobby and enterprise options.&lt;/p&gt;

&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;PostHog is an open source tool for collecting and analyzing behavior metrics and events from your application without sending data to a third party hosting provider. As the GitHub repository says, &lt;em&gt;“…third-party analytics tools do not work in a world of cookie deprecation, GDPR, HIPAA, CCPA, and many other four-letter acronyms. PostHog is the alternative to sending all of your customers' personal information and usage data to third-parties.”&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;This goes together with the Release philosophy of deploying high-quality, full-stack environments in your own cloud accounts. Deploying PostHog next to your development environments for complete testing and development, or as a shared staging location for integration testing is a valuable component for developing and testing user interactions and metrics collection that would be much harder to support without Release.&lt;/p&gt;

&lt;p&gt;This article will walk you through the simple steps for configuring a PostHog application template that you can deploy to your own cloud infrastructure using Release in about 30 minutes. The hobby version is a fully functional installation of PostHog, but it will not be configured with the redundant services, long-term storage, and backup solutions that the enterprise version would have. The most common use-case for installing the hobby version is to support developer environments for testing in isolation, or QA environments for complete end-to-end testing of product analytics.&lt;/p&gt;

&lt;p&gt;We will cover the enterprise version for permanent installations (like staging or production) in a followup post. The most common use-case to install the enterprise version is to self-host and scale your own analytics engine and data from your product customers &lt;em&gt;without sending the data outside your organization or to a third-party Software-as-a-Service (SaaS)&lt;/em&gt;. Release allows you to self-host applications in your own cloud environments, keeping your and your customers’ data safe and secure.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;To get started, open a &lt;a href="https://app.release.com/" rel="noopener noreferrer"&gt;Release account&lt;/a&gt; which allows you to have unlimited applications and pay only for the environments you are running. You can test out Release by creating a trial account in a shared environment; or if you already have a paid plan, you can use your Release account to host these environments in your own cloud infrastructure next to your existing applications. See our &lt;a href="https://docs.release.com/getting-started/quickstart" rel="noopener noreferrer"&gt;quickstart documentation&lt;/a&gt; for more details.&lt;/p&gt;

&lt;p&gt;Next, fork the &lt;a href="https://github.com/PostHog/posthog" rel="noopener noreferrer"&gt;PostHog GitHub repo&lt;/a&gt; using either &lt;a href="https://github.com/releasehub-samples/posthog" rel="noopener noreferrer"&gt;our fork with the configuration options already built in&lt;/a&gt;, or the original upstream repository. You can also use one of our integrations with GitLab or BitBucket but you will need to clone and push the repository separately before you get started.&lt;/p&gt;

&lt;h2&gt;
  
  
  Configure the PostHog Application
&lt;/h2&gt;

&lt;p&gt;The first step is to import the application into Release by analyzing the repository and reading in the docker-compose and package.json files to get a running version of the repository in your account. If you are using our fork, you will have a head start because our configuration YAML is already integrated into the repository. These instructions also work for the plain upstream version, and they show how the process works.&lt;/p&gt;

&lt;p&gt;First, click the “Create New App” button and select your forked repository (note: if the repository does not show up, go to your profile page and make sure you have configured the correct permissions and scope for the version control system integration you are using). Give the application a name and go to the next step.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxh06fwi96o84totprk0x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxh06fwi96o84totprk0x.png" alt="Create your application in Release" width="800" height="440"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, analyze the repository using a branch and the common hint files we search for to configure this application. Select at least one docker-compose file and then select the services to your preference as shown in the following image. The hobby version will not include any helm charts or cloud native service integrations, but this is a good example and low-cost way to test out the functionality of the application.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu62ohhm33i9tlm5iidmm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu62ohhm33i9tlm5iidmm.png" alt="Analyze the repository" width="800" height="519"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When you’ve selected the services to import and analyze, you will fine tune the application template so that it has the specific customizations and workflows you would like to use for testing purposes. You can start by editing the hostnames for each service (you will only need a service hostname for &lt;code&gt;web&lt;/code&gt;, &lt;code&gt;maildev&lt;/code&gt;, and &lt;code&gt;clickhouse&lt;/code&gt; for example). You can also customize the domain you will deploy the test application URLs to.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frlbmdz2vm6sji6cp0znw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frlbmdz2vm6sji6cp0znw.png" alt="Generate a template" width="800" height="622"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You should pay close attention to the workflow steps, as the order of the services is important. Reading up on the architecture diagram and understanding the &lt;code&gt;depends_on&lt;/code&gt; fields in the docker-compose file will help you understand the correct workflow as well. This is all organized for you in the &lt;a href="https://github.com/releasehub-samples/posthog" rel="noopener noreferrer"&gt;forked version we host and maintain&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F26t3ze7mpix38ld7fwte.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F26t3ze7mpix38ld7fwte.png" alt="Workflow section" width="474" height="614"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once you are happy with the application template and service definitions, you can proceed with fine-tuning the environment variables for the application. Here, the &lt;a href="https://posthog.com/docs/self-host/configure/environment-variables" rel="noopener noreferrer"&gt;PostHog documentation&lt;/a&gt; is excellent in helping you compare the defaults and necessary customization you will need side-by-side.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffght8ux6t8as7wv5hpc3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffght8ux6t8as7wv5hpc3.png" alt=" " width="800" height="524"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can see that I’ve set some environment variable mappings which allow interpolation of Release-injected variables so that the application URLs and internal configuration items work correctly for each environment that will be created. Since we are using a hobby (or ephemeral) version of this application, we are not going to hardcode values or secrets as we might do in a permanent or production environment.&lt;/p&gt;

&lt;p&gt;The next steps involve creating build arguments. We always recommend using the &lt;code&gt;production&lt;/code&gt; value for &lt;code&gt;NODE_ENV&lt;/code&gt; since Release ephemeral environments run in production mode, rather than development mode (except when using our &lt;a href="https://docs.release.com/cli/remote-dev" rel="noopener noreferrer"&gt;Remote Development feature&lt;/a&gt;).&lt;/p&gt;
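&lt;p&gt;In a typical Dockerfile, a build argument like this is wired up roughly as follows (a generic sketch, not PostHog's actual Dockerfile):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Accept NODE_ENV at build time, defaulting to production
ARG NODE_ENV=production
# Persist it so later build steps and the running container both see it
ENV NODE_ENV=$NODE_ENV
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;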

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzj4uinilxh7fjp5xcp7k.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzj4uinilxh7fjp5xcp7k.png" alt="Add build arguments" width="800" height="253"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When those steps are done, you can go back and review all your work to confirm it looks good, or you can immediately deploy your environment and see the results of your hard work! The application template and environment variables will be deployed into a completely isolated environment in your cloud account that you can test and play with. When you are done testing, you can let the environment expire, remove it, or redeploy it with any changes and tweaks you need until you are satisfied with the results.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy1sre9p20r84zlq101xe.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy1sre9p20r84zlq101xe.png" alt="Save and deploy" width="800" height="435"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Deploy the Application as an Environment
&lt;/h2&gt;

&lt;p&gt;You should see your deploy start up, and an ephemeral environment will be spun up immediately for you to play with!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhwih3u30jzq2diharbpm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhwih3u30jzq2diharbpm.png" alt="Build progress" width="800" height="527"&gt;&lt;/a&gt;‍&lt;br&gt;
‍&lt;br&gt;
Once  complete, you can visit the environment status page which shows you a list of clickable URL links for your application (we recommend editing those down to services that you actually need, for example, &lt;code&gt;redis&lt;/code&gt; and &lt;code&gt;postgres&lt;/code&gt; are not going to be reachable on the internet and you should remove those from your configuration).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpx8xgktlzb3kqbfpubsh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpx8xgktlzb3kqbfpubsh.png" alt="Environment overview" width="800" height="191"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Debug and Test the Environment
&lt;/h2&gt;

&lt;p&gt;You can inspect the services and statuses of each underlying instance running in your Kubernetes namespace. This gives you an up-to-date picture of what has started, what is running properly, and what needs to be investigated through errors, logs, and so forth. Alternatively, you can jump into a terminal session on a service for immediate debugging and testing.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcfx3iopvn9u4mgibp8e0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcfx3iopvn9u4mgibp8e0.png" alt="Environment details" width="800" height="752"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Clicking on each service link, you can see that &lt;code&gt;Clickhouse&lt;/code&gt; works:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv9b17uk5upuemw7cs2pz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv9b17uk5upuemw7cs2pz.png" alt="Clickhouse OK response" width="105" height="79"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And the same with &lt;code&gt;maildev&lt;/code&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd258q0buizq5cbzycn08.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd258q0buizq5cbzycn08.png" alt="maildev response" width="800" height="137"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Login and Configure the Application
&lt;/h2&gt;

&lt;p&gt;But the most important thing to look at is the PostHog application itself. We will click on “Just experimenting” for now.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd7wqmhal30vas6ugcud7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd7wqmhal30vas6ugcud7.png" alt="PostHog welcome page" width="636" height="614"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We can verify that all the services and prerequisites are running:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9knf8qx5u9d8tglah99w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9knf8qx5u9d8tglah99w.png" alt="PostHog service validation" width="535" height="1061"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then get started by creating an account:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzlthz4jykx6a6es84fap.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzlthz4jykx6a6es84fap.png" alt="Create your PostHog application account" width="800" height="429"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And finally you can start configuring your application and settings to use in your new isolated hobby experience:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmni8q4zxoj2wn8zafp9m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmni8q4zxoj2wn8zafp9m.png" alt="Welcome to PostHog" width="800" height="877"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Using Release to deploy a hobby version of PostHog allows you to run an isolated version of analytics and metrics collection during the development and testing of your own application code. PostHog can be deployed alongside every application deployment you use for each developer or for each branch/feature/pull request that needs to integrate with PostHog, depending on your workflow and requirements.&lt;/p&gt;

&lt;p&gt;With Release, it is easy to spin up environments for almost any conceivable use case. This empowers development teams to deploy applications without relying on a DevOps or infrastructure team, while minimizing the costs associated with provisioning and maintaining the application. You can deploy dependencies or third-party tools that need to be tested or verified when making application changes or rolling out new features. Verifying this functionality with third-party tools and services, even during the earliest phases of development, is a valuable way to ensure that features reaching staging or production will work.&lt;/p&gt;

&lt;p&gt;Ready to get started? Open a &lt;a href="https://app.release.com/" rel="noopener noreferrer"&gt;Release account&lt;/a&gt; and start your free trial today or check out our &lt;a href="https://docs.release.com/getting-started/quickstart" rel="noopener noreferrer"&gt;quickstart documentation&lt;/a&gt; for additional information.&lt;/p&gt;

</description>
      <category>codenewbie</category>
      <category>sideprojects</category>
      <category>community</category>
      <category>discuss</category>
    </item>
    <item>
      <title>Kubernetes Pod: A Beginner's Guide to an Essential Resource</title>
      <dc:creator>Regis Wilson</dc:creator>
      <pubDate>Tue, 13 Sep 2022 20:11:27 +0000</pubDate>
      <link>https://dev.to/rwilsonrelease/kubernetes-pod-a-beginners-guide-to-an-essential-resource-1p0b</link>
      <guid>https://dev.to/rwilsonrelease/kubernetes-pod-a-beginners-guide-to-an-essential-resource-1p0b</guid>
      <description>&lt;p&gt;Kubernetes is a complex tool, but taking your first steps is relatively easy. This is especially true today when all major cloud providers offer easy one-click creation of Kubernetes clusters; you can have a fully working Kubernetes cluster in a matter of minutes. So, what do you do then? You'll probably deploy some pods. Pods are arguably the most important Kubernetes resources. You may have heard about them already, since deploying pods is usually one of the first things in any Kubernetes tutorial. You may have even heard "they're kind of like containers." In this post, you'll learn everything you need to know about pods. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--4J80YjCL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/anpj6t705qlb73p9of58.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--4J80YjCL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/anpj6t705qlb73p9of58.jpeg" alt="A picture containing vegetable, green, pea, edible-pod pea" width="880" height="881"&gt;&lt;/a&gt;‍&lt;/p&gt;

&lt;h3&gt;
  
  
  Kubernetes Pods 101
&lt;/h3&gt;

&lt;p&gt;Before Kubernetes, everyone was talking about containers. When you wanted to deploy only one small microservice, you'd say that you needed to deploy "one container." On Kubernetes, everyone talks about pods instead. So, when you only want to deploy one microservice, you'll say that you need to deploy one pod. &lt;/p&gt;

&lt;p&gt;Are pods the same as containers, then? Well, not really. A pod is the smallest deployable unit in a Kubernetes world. This means that you can't directly deploy a single container in Kubernetes. If you want one container running, you need to package it into a pod and deploy one pod. A pod can also contain more than one container. It's basically like a box for containers. &lt;/p&gt;

&lt;p&gt;Long story short: if you mainly deploy single containers, there isn't much difference between a pod and a container. Technically, a pod encapsulates your container, but in general you can treat it similarly to a container. But pods' ability to contain more than one container is what opens doors of possibilities. We'll dive into that later in this post. But before that, let's talk about pod lifecycles. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--LiTV-2Wk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/24purb333c9dh927ldv3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--LiTV-2Wk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/24purb333c9dh927ldv3.png" alt="Emphasised quote: &amp;quot;pods' ability to contain more than one container is what opens doors of possibilities&amp;quot;" width="880" height="176"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Pod Lifecycles
&lt;/h3&gt;

&lt;p&gt;Just like many other resources, Kubernetes pods can be in a pending, running, or succeeded/failed state. You can check the status of your pod by executing &lt;code&gt;kubectl describe pod [your_pod_name]&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl describe pod nginx-deployment-6595874d85-hnjzw
Name:           nginx-deployment-6595874d85-hnjzw
Namespace:      default
Priority:       0
Node:           k3s-worker3/10.133.106.222
Start Time:     Sun, 21 Aug 2022 12:24:58 +0200
Labels:         app=nginx
                pod-template-hash=6595874d85
Annotations:
Status:         Pending
(...)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As you can see from the snippet above, my pod is in a &lt;code&gt;Pending&lt;/code&gt; state. So, what do these states mean? &lt;/p&gt;

&lt;h4&gt;
  
  
  Pending
&lt;/h4&gt;

&lt;p&gt;Pending, as the name suggests, means that the pod is waiting for something. Usually, it means that Kubernetes is trying to determine where to deploy that pod. So, in normal circumstances, you'll see your pod in the pending state for the first few seconds after creation. But it may also stay in a pending state longer if, for example, all your nodes are full and Kubernetes can't find a suitable node for your pending pod. In such a case, your pod will stay in a pending state until some other pods finish and free up resources or until you add another node to your cluster. &lt;/p&gt;

&lt;h4&gt;
  
  
  Running
&lt;/h4&gt;

&lt;p&gt;Running is pretty straightforward: It's when everything is working correctly and your pod is active. There is a small caveat to this, though. If your pod consists of multiple containers, then your pod will be in the status "running" if at least one of its primary containers starts successfully. This means there's a chance that your pod will be in a running state even though not all containers are actually running. So, in the case of multiple containers, it's always best to double-check individual container states to be sure. &lt;/p&gt;
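&lt;p&gt;One way to double-check the individual container states is with &lt;code&gt;kubectl&lt;/code&gt;'s JSONPath output, which prints each container's readiness rather than just the pod phase (substitute your own pod name):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl get pod [your_pod_name] \
    -o jsonpath='{.status.containerStatuses[*].ready}'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;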

&lt;h4&gt;
  
  
  Succeeded/Failed
&lt;/h4&gt;

&lt;p&gt;Succeeded or failed is what comes after running. As you can imagine, you'll see "succeeded" when your pod did its job and finished as expected, and you'll see "failed" when your pod terminated due to some error. And again, in the case of multiple containers in one pod, you need to be aware that your pod will end up in a failed state if at least one of the containers ends up having issues. &lt;/p&gt;

&lt;h4&gt;
  
  
  Unknown
&lt;/h4&gt;

&lt;p&gt;The other phase a pod can be in is called "unknown," and you probably won't see it often. A pod will be in an unknown state when Kubernetes literally doesn't know what's happening with the pod. This is usually due to networking issues between the Kubernetes control plane and the node on which the pod is supposed to run.&lt;/p&gt;

&lt;h3&gt;
  
  
  What Are Pods Used for?
&lt;/h3&gt;

&lt;p&gt;Now, the big question: What are pods actually used for? The simple answer would be "to run your application." At the end of the day, the point of running Kubernetes is to run containerized applications on it. And pods are the actual resources that make it possible. They encapsulate your containerized application and allow you to run it on your Kubernetes cluster. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--wnPRFzcd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/84fktc4tb4hvv546r7bx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--wnPRFzcd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/84fktc4tb4hvv546r7bx.png" alt="Emphasised quote: &amp;quot;'What are pods actually used for?' The simple answer would be 'to run your application.'&amp;quot;" width="880" height="176"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;However, it's worth mentioning that usually you won't actually be deploying pods themselves. You'll be using other, higher-level Kubernetes resources like &lt;a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/"&gt;Deployments&lt;/a&gt; or &lt;a href="https://releasehub.com/blog/kubernetes-daemonset-tutorial"&gt;DaemonSets&lt;/a&gt; that will create pods for you. &lt;/p&gt;
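&lt;p&gt;For a taste of what that looks like, here is a minimal Deployment sketch that creates and manages nginx pods for you (the replica count and labels are illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 2                # the Deployment keeps two pods running
  selector:
    matchLabels:
      app: nginx
  template:                  # the pod template the Deployment stamps out
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;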

&lt;h3&gt;
  
  
  Pods vs. Other Resources
&lt;/h3&gt;

&lt;p&gt;Pods are only one of many Kubernetes resource types. Most other types are directly or indirectly related to pods, because as we already said, pods are resources that will actually be running your application on the cluster. Therefore, pretty much anything that your application may need—be it a secret or storage or a load balancer—will all need to somehow relate or connect to a pod. &lt;/p&gt;


&lt;p&gt;Kubernetes secrets can be consumed by pods. Kubernetes service resources used to expose a containerized application on your cluster to the network or internet need to reference a pod. Volumes in Kubernetes are mounted to pods. Kubernetes ConfigMaps used to store configuration files are loaded to pods. These are just a few examples, but in general, pods are usually at the center of everything that's happening on Kubernetes. &lt;/p&gt;
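&lt;p&gt;As a brief sketch of what "consuming" looks like in practice, a pod spec can load a Secret into environment variables and mount a ConfigMap as files (the resource names here are hypothetical):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Pod
metadata:
  name: app-pod
spec:
  containers:
    - name: app
      image: nginx
      envFrom:
        - secretRef:
            name: app-secrets   # every key in the Secret becomes an env var
      volumeMounts:
        - name: config
          mountPath: /etc/app   # ConfigMap keys appear here as files
  volumes:
    - name: config
      configMap:
        name: app-config
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;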

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--fzSWp3bJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kqzcaiwakydlgw02wtqy.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--fzSWp3bJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kqzcaiwakydlgw02wtqy.jpeg" alt="A picture containing pea, vegetable, edible-pod pea" width="880" height="880"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  How to Create A Pod
&lt;/h3&gt;

&lt;p&gt;I'll show you how to create a pod, but be aware that normally you wouldn't create pods directly. You should use higher-level resources like Deployments that will take care of creating pods for you. But if you ever need it for testing or learning purposes, you can create a pod with the following YAML definition:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod-example
spec:
  containers:
    - name: nginx
      image: nginx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can apply it just like any other Kubernetes YAML definition, using &lt;code&gt;kubectl apply -f&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl apply -f pod.yaml
pod/nginx-pod-example created

$ kubectl get pod nginx-pod-example
NAME                READY   STATUS    RESTARTS   AGE
nginx-pod-example   1/1     Running   0          6s

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Pods With Multiple Containers
&lt;/h3&gt;

&lt;p&gt;We mentioned pods with multiple containers already, so let's dive into that a bit more. The first thing for you to know is that pods' ability to run multiple containers is not something you should overuse. For example, it's not meant to be used to combine front-end and back-end microservices into one pod. Quite the opposite; you actually shouldn't combine multiple functional microservices into one pod. &lt;/p&gt;

&lt;p&gt;Why does Kubernetes give you that option, then? Well, it's for a different purpose. Putting more than one container into a single pod is useful for adding containers that act as assistants or helpers to your main container. A common example is log-gathering containers: their only job is to read logs from your main container and forward them (usually to some centralized log management solution). Another example is secret management containers, whose job is to securely load secrets from some secret vault and pass them to your main container. &lt;/p&gt;

&lt;p&gt;As you can see, multiple containers in a pod are typically used in a main container + secondary containers configuration. We call these secondary containers "sidecar containers." &lt;/p&gt;

&lt;p&gt;Of course, even though it's not usually recommended, there's nothing stopping you from combining two functional containers into one pod. If you have a very specific use case and you think it would make sense, you can add more containers to your pod. You just need to be aware of the consequences of such an approach. The main one is that if the pod fails, both containers die. &lt;/p&gt;
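&lt;p&gt;A two-container pod with a log-forwarding sidecar might look like the following sketch (the images, paths, and command are illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar
spec:
  containers:
    - name: app                 # the main container
      image: nginx
      volumeMounts:
        - name: logs
          mountPath: /var/log/nginx
    - name: log-forwarder       # the sidecar: tails the shared log volume
      image: busybox
      command: ["sh", "-c", "tail -F /logs/access.log"]
      volumeMounts:
        - name: logs
          mountPath: /logs
  volumes:
    - name: logs                # emptyDir shared by both containers
      emptyDir: {}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;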

&lt;h3&gt;
  
  
  Summary
&lt;/h3&gt;

&lt;p&gt;As you can see, pods are pretty straightforward resources. In most cases, you can treat them the same as containers, but they do offer extra sidecar functionality when necessary. &lt;/p&gt;


&lt;p&gt;Learned all you need about pod basics? Read on to our advanced pod concepts article &lt;a href="https://releasehub.com/blog/kubernetes-pods-advanced-concepts-explained"&gt;here&lt;/a&gt;!&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>containerapps</category>
      <category>sre</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>How to Edit a File in a Docker Container</title>
      <dc:creator>Regis Wilson</dc:creator>
      <pubDate>Thu, 08 Sep 2022 17:01:46 +0000</pubDate>
      <link>https://dev.to/rwilsonrelease/how-to-edit-a-file-in-a-docker-container-5fk6</link>
      <guid>https://dev.to/rwilsonrelease/how-to-edit-a-file-in-a-docker-container-5fk6</guid>
      <description>&lt;p&gt;You want to edit a file in your Docker container, but you’ve run into an error that leaves you with none of the tools you need to make your changes. Now what?&lt;/p&gt;

&lt;p&gt;Docker intentionally keeps containers as lean as possible with no unnecessary packages installed to maximize performance and stability. Unfortunately, this also means Docker containers don’t have a file editor like Vim or Nano preinstalled.&lt;/p&gt;

&lt;p&gt;In this guide, we’ll show you how to install an editor, make the changes you need to, and return the container to its original state, both from the command line and using the Docker extension inside VS Code.&lt;/p&gt;

&lt;p&gt;First, though, some housekeeping. It’s considered bad practice to edit files in containers currently running in a production environment, and, once you’ve made your change, you should remove any packages you installed to do so (the editor, for example).&lt;/p&gt;

&lt;p&gt;Here’s our step-by-step guide to editing a file in Docker.&lt;/p&gt;

&lt;h3&gt;
  
  
  Option 1: Edit from the command line
&lt;/h3&gt;

&lt;h4&gt;
  
  
  #1 Log in to your container
&lt;/h4&gt;

&lt;p&gt;If your container is not already running, run the container with the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker run --name &amp;lt;yourcontainername&amp;gt; -d -t
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To check all your running containers, you can use the command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker ps
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should be met with something like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1ruuf0a71tffkzj9pq6f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1ruuf0a71tffkzj9pq6f.png" alt="The console showing the output of docker ps listing the container ID and other information."&gt;&lt;/a&gt;&lt;br&gt;
‍&lt;br&gt;
This list indicates your target container is up and running. Note that every container has a discrete ID, which we’ll need to gain root access to the container.&lt;/p&gt;

&lt;p&gt;To gain root access to the container, run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker exec -it &amp;lt;container-id&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should see something like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdlk4owi83nip80cq8p08.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdlk4owi83nip80cq8p08.png" alt="A prompt showing that root login has succeeded."&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As you can see, &lt;code&gt;root@&amp;lt;container-id&amp;gt;:/#&lt;/code&gt; indicates we now have root access to the container.&lt;/p&gt;
&lt;h4&gt;
  
  
  #2 Install the editor
&lt;/h4&gt;

&lt;p&gt;It’s a good idea to update your package manager before you install the editor. This ensures that you install the latest stable release of the editor. On Ubuntu, that command is:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apt-get update
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To install your preferred editor, such as Vim, Nano or GNU Emacs:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apt-get install &amp;lt;your package manager&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For example, to install Vim:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apt-get install vim
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  #3 Edit the File
&lt;/h4&gt;

&lt;p&gt;To edit the file, ensure you are in the appropriate directory and use the command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;vim yourfilename.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once you’ve made the edit to the file, you can remove the editor (in our case, Vim) like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apt-get remove vim
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Or like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apt-get purge vim

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The “remove” command uninstalls Vim but leaves behind its configuration files (and any dependencies installed alongside it). The “purge” command removes the configuration files associated with Vim as well. In the interest of leaving no trace, the purge command is probably appropriate in this case.&lt;/p&gt;

&lt;p&gt;Your package manager may change depending on your OS. These commands are associated with Ubuntu and Vim.&lt;/p&gt;

&lt;h3&gt;
  
  
  Persisting an editor for regular changes
&lt;/h3&gt;

&lt;p&gt;The above steps are useful for one-off changes, but if you need to make changes often – in a development environment, for example – it’s best to add your editor to your &lt;code&gt;Dockerfile&lt;/code&gt;. This will ensure your chosen editor is always available whenever you spin up another instance of your container.&lt;/p&gt;

&lt;p&gt;Add your editor to the &lt;code&gt;Dockerfile&lt;/code&gt; like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;RUN["apt-get", "update"]
RUN["apt-get", "install", "vim"]

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Every image created with that &lt;code&gt;Dockerfile&lt;/code&gt; will have Vim pre-installed and ready to go.&lt;/p&gt;

&lt;p&gt;You can replace “Vim” with your editor of choice, such as Nano or GNU Emacs. Keep in mind that the commands in the square brackets are specific to Debian- and Ubuntu-based images, so you may need to adapt them to the operating system your Docker container is running.&lt;/p&gt;
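&lt;p&gt;For instance, on an Alpine-based image, a sketch of the equivalent instruction (assuming Alpine's &lt;code&gt;apk&lt;/code&gt; package manager) would be:&lt;/p&gt;

```dockerfile
# Alpine uses apk instead of apt-get; --no-cache skips storing the package index.
RUN apk add --no-cache vim
```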

&lt;h3&gt;
  
  
  Option 2: Edit from VS Code
&lt;/h3&gt;

&lt;p&gt;If you prefer to use a GUI editor (for example, if you’d like to use your mouse to navigate through large files, or cut and paste text), you can use VS Code.&lt;/p&gt;

&lt;p&gt;This option requires both the Visual Studio Code IDE and the Docker extension from Microsoft. To install the extension, navigate to the extensions tab in VS Code and type in “Docker”.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj5lilp1mvl436plxc1h0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj5lilp1mvl436plxc1h0.png" alt="Search results in VS Code extensions for docker."&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Be sure to select the Docker extension from Microsoft. This extension allows you to easily manage any containers on your system directly from its UI.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5wf9840y8jhrajqpu3ae.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5wf9840y8jhrajqpu3ae.png" alt="VS Code displaying a file inside the docker container."&gt;&lt;/a&gt;‍&lt;/p&gt;

&lt;p&gt;From here, treating a container like any file directory, you can navigate to and open files in that container, and make your changes right in VS Code.&lt;/p&gt;

&lt;h3&gt;
  
  
  Closing remarks
&lt;/h3&gt;

&lt;p&gt;Now that you know how to edit files in a Docker container, it’s important to take note of the best practices for doing so.&lt;/p&gt;

&lt;p&gt;Editing files in a running Docker container is recommended only in development environments, during conceptualization or while building proofs of concept.&lt;/p&gt;

&lt;p&gt;Once you’ve made changes to your project in Docker containers, save a new image with those changes in place. This leaves flexibility for testing two containers comparatively while ensuring stability and consistency across containers.&lt;/p&gt;
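&lt;p&gt;One way to capture those changes as a new image is &lt;code&gt;docker commit&lt;/code&gt; (a sketch; the container and image names are placeholders):&lt;/p&gt;

```shell
# Create a new image from the current state of a running container.
docker commit my-container myapp:patched

# Start a fresh container from the patched image to compare against the original.
docker run --rm -it myapp:patched bash
```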

</description>
      <category>docker</category>
      <category>remotedev</category>
      <category>containerapps</category>
      <category>ide</category>
    </item>
    <item>
      <title>Terraform Kubernetes Deployment: A Detailed Walkthrough</title>
      <dc:creator>Regis Wilson</dc:creator>
      <pubDate>Tue, 30 Aug 2022 17:18:55 +0000</pubDate>
      <link>https://dev.to/rwilsonrelease/terraform-kubernetes-deployment-a-detailed-walkthrough-42ee</link>
      <guid>https://dev.to/rwilsonrelease/terraform-kubernetes-deployment-a-detailed-walkthrough-42ee</guid>
      <description>&lt;p&gt;Terraform and Kubernetes are two of the most popular tools in their categories. Terraform is widely adopted as the tool of choice for infrastructure as code, and Kubernetes is number one when it comes to orchestrating containers. Is it possible to combine both? Sure! You can use Terraform to deploy your Kubernetes clusters. It's actually quite common, and it lets you deploy Kubernetes just like the rest of your infrastructure. In this post, you'll learn how to do it.&lt;/p&gt;

&lt;h3&gt;
  
  
  Terraform + Kubernetes: How and Why?
&lt;/h3&gt;

&lt;p&gt;We have two main questions to answer here. How can you deploy Kubernetes with Terraform, and why would you do that? Let's start with the latter.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--sEX8v_ka--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zrso0fz5m4uzgg3ulpn4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--sEX8v_ka--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zrso0fz5m4uzgg3ulpn4.png" alt="Image description" width="880" height="176"&gt;&lt;/a&gt;&lt;br&gt;
The answer doesn't differ from "Why would you deploy anything with Terraform?" From that perspective, there's nothing special about Kubernetes, and you get the same benefit by using Terraform to deploy it as with any other infrastructure. You get automation, infrastructure versioning, reliability, and even the ability to perform infrastructure security scanning.&lt;/p&gt;

&lt;p&gt;As for how, the answer is actually similar. You can deploy Kubernetes with Terraform just like any other infrastructure. Meaning, you first need to find a Kubernetes resource definition for Terraform (we'll show you that shortly), adjust some parameters for your needs, add it to your Terraform code, and you're done. And just like with any other resource, Terraform will be able to track changes to your cluster and update its configuration after you make changes to the code.&lt;/p&gt;
&lt;h3&gt;
  
  
  Deploying Kubernetes: First Steps
&lt;/h3&gt;

&lt;p&gt;Enough theory. Let's see how it works in practice. First, you need to find a Terraform provider for your cloud. If you want to deploy Kubernetes on &lt;a href="https://www.digitalocean.com/"&gt;DigitalOcean&lt;/a&gt;, you'd need to follow &lt;a href="https://registry.terraform.io/providers/digitalocean/digitalocean/latest/docs/resources/kubernetes_cluster"&gt;this documentation&lt;/a&gt;. For &lt;a href="https://azure.microsoft.com/en-us/"&gt;Microsoft Azure&lt;/a&gt;, you'd need to head &lt;a href="https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/kubernetes_cluster"&gt;here for details&lt;/a&gt;. And for &lt;a href="https://cloud.google.com/"&gt;Google Cloud&lt;/a&gt;, you need to &lt;a href="https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/container_cluster"&gt;check here&lt;/a&gt;. These are just a few examples. But no matter which cloud provider you're using, the general approach will be the same. For today's example, we'll use DigitalOcean.&lt;/p&gt;

&lt;p&gt;To start from nothing, in the simplest scenario, you need to create two files named &lt;code&gt;provider.tf&lt;/code&gt; and &lt;code&gt;main.tf&lt;/code&gt;. You could do it all in one file, but it's a good practice to separate providers and main resource definitions. In the code below, you can define your DigitalOcean provider for Terraform and pass your DigitalOcean token:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;terraform&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;required_providers&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;digitalocean&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nx"&gt;source&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"digitalocean/digitalocean"&lt;/span&gt;
      &lt;span class="nx"&gt;version&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"~&amp;gt; 2.0"&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="nx"&gt;variable&lt;/span&gt; &lt;span class="s2"&gt;"do_token"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;default&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"[replace_with_your_token]"&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="c1"&gt;# Configure the DigitalOcean Provider&lt;/span&gt;
&lt;span class="nx"&gt;provider&lt;/span&gt; &lt;span class="s2"&gt;"digitalocean"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;token&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;var&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;do_token&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In &lt;code&gt;main.tf&lt;/code&gt; you can now define your Kubernetes.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"digitalocean_kubernetes_cluster"&lt;/span&gt; &lt;span class="s2"&gt;"test"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;name&lt;/span&gt;   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"test_cluster"&lt;/span&gt;
  &lt;span class="nx"&gt;region&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"nyc1"&lt;/span&gt;
  &lt;span class="nx"&gt;version&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"1.22.11-do.0"&lt;/span&gt;
  &lt;span class="nx"&gt;node_pool&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;name&lt;/span&gt;       &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"worker-pool"&lt;/span&gt;
    &lt;span class="nx"&gt;size&lt;/span&gt;       &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"s-2vcpu-2gb"&lt;/span&gt;
    &lt;span class="nx"&gt;node_count&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;3&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now that you have your Terraform files prepared, three steps remain: init, plan, and apply. First, initialize the DigitalOcean provider by running &lt;code&gt;terraform init&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# terraform init
Initializing the backend...
Initializing provider plugins...
- Finding digitalocean/digitalocean versions matching "~&amp;gt; 2.0"...
- Installing digitalocean/digitalocean v2.21.0...
- Installed digitalocean/digitalocean v2.21.0 (signed by a HashiCorp partner, key ID F82037E524B9C0E8)
Partner and community providers are signed by their developers.
If you'd like to know more about provider signing, you can read about it here:
https://www.terraform.io/docs/cli/plugins/signing.html
Terraform has created a lock file .terraform.lock.hcl to record the provider
selections it made above. Include this file in your version control repository
so that Terraform can guarantee to make the same selections by default when
you run "terraform init" in the future.
Terraform has been successfully initialized!
You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.
If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then you can run your &lt;code&gt;terraform plan&lt;/code&gt;, which will show you planned changes to the infrastructure (which in this case should be creating a new Kubernetes cluster).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# terraform plan
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following
symbols:
  + create
Terraform will perform the following actions:
  # digitalocean_kubernetes_cluster.test will be created
  + resource "digitalocean_kubernetes_cluster" "test" {
      + cluster_subnet = (known after apply)
      + created_at     = (known after apply)
      + endpoint       = (known after apply)
      + ha             = false
      + id             = (known after apply)
      + ipv4_address   = (known after apply)
      + kube_config    = (sensitive value)
      + name           = "test-cluster"
      + region         = "nyc1"
      + service_subnet = (known after apply)
      + status         = (known after apply)
      + surge_upgrade  = true
      + updated_at     = (known after apply)
      + urn            = (known after apply)
      + version        = "1.22.11-do.0"
      + vpc_uuid       = (known after apply)
      + maintenance_policy {
          + day        = (known after apply)
          + duration   = (known after apply)
          + start_time = (known after apply)
        }
      + node_pool {
          + actual_node_count = (known after apply)
          + auto_scale        = false
          + id                = (known after apply)
          + name              = "worker-pool"
          + node_count        = 3
          + nodes             = (known after apply)
          + size              = "s-2vcpu-2gb"
        }
    }
Plan: 1 to add, 0 to change, 0 to destroy.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The plan looks good. One resource will be added, and that's your Kubernetes cluster, so you can go ahead and apply the changes with &lt;code&gt;terraform apply&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# terraform apply
(...)
Plan: 1 to add, 0 to change, 0 to destroy.
Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.
  Enter a value: yes
digitalocean_kubernetes_cluster.test: Creating...
digitalocean_kubernetes_cluster.test: Still creating... [10s elapsed]
(...)
digitalocean_kubernetes_cluster.test: Still creating... [7m10s elapsed]
digitalocean_kubernetes_cluster.test: Creation complete after 7m16s [id=49fd0517-a4a5-41e8-997d-1412c081e000]
Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you now head to your DigitalOcean portal to validate, you can indeed see it there.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--4GDoncEa--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2w4o7uagdy7n1yfelgeh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--4GDoncEa--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2w4o7uagdy7n1yfelgeh.png" alt="Image description" width="880" height="288"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And that's it! That's how you deploy Kubernetes with Terraform.&lt;/p&gt;
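&lt;p&gt;From here, you'll usually want credentials so &lt;code&gt;kubectl&lt;/code&gt; can talk to the new cluster. One way to retrieve them (a sketch based on the attributes the DigitalOcean provider exports) is a Terraform output in &lt;code&gt;main.tf&lt;/code&gt;:&lt;/p&gt;

```hcl
# Expose the cluster's kubeconfig; marking it sensitive keeps it out of plan output.
output "kubeconfig" {
  value     = digitalocean_kubernetes_cluster.test.kube_config[0].raw_config
  sensitive = true
}
```

&lt;p&gt;You could then write it to a file with &lt;code&gt;terraform output -raw kubeconfig &amp;gt; kubeconfig.yaml&lt;/code&gt;.&lt;/p&gt;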

&lt;h3&gt;
  
  
  Deploying Kubernetes: Next Steps
&lt;/h3&gt;

&lt;p&gt;Now that you know how it works in general, there are a few things to learn next. So far, you've deployed a basic, minimal Kubernetes cluster. In more realistic scenarios, you'll probably want to parametrize more options, and which ones will depend on what you actually need. Once you know, head to the Terraform documentation and check the &lt;a href="https://registry.terraform.io/providers/digitalocean/digitalocean/latest/docs/resources/kubernetes_cluster#argument-reference"&gt;argument reference&lt;/a&gt; for your Kubernetes resource. Find what you need and add it to your code.&lt;/p&gt;

&lt;p&gt;For example, if you'd like your Kubernetes cluster to automatically upgrade, you can find the following in the documentation:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--LaUsY8eq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/sbirydxpkh79z8znc33k.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--LaUsY8eq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/sbirydxpkh79z8znc33k.png" alt="Image description" width="880" height="99"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To make your freshly deployed cluster automatically upgrade, you just need to add the following to your Kubernetes resource definition in &lt;code&gt;main.tf&lt;/code&gt; as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"digitalocean_kubernetes_cluster"&lt;/span&gt; &lt;span class="s2"&gt;"test"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;name&lt;/span&gt;   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"test-cluster"&lt;/span&gt;
  &lt;span class="nx"&gt;region&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"nyc1"&lt;/span&gt;
  &lt;span class="nx"&gt;version&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"1.22.11-do.0"&lt;/span&gt;
  &lt;span class="nx"&gt;auto_upgrade&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="nx"&gt;node_pool&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;name&lt;/span&gt;       &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"worker-pool"&lt;/span&gt;
    &lt;span class="nx"&gt;size&lt;/span&gt;       &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"s-2vcpu-2gb"&lt;/span&gt;
    &lt;span class="nx"&gt;node_count&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;3&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;But you're not there yet. You can quickly see in the DigitalOcean portal that the cluster currently does not automatically upgrade.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--4R18LOhB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/364yibuffu9z9m2yfgzi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--4R18LOhB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/364yibuffu9z9m2yfgzi.png" alt="Image description" width="880" height="109"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Automatic upgrades are disabled now, so you can run &lt;code&gt;terraform plan&lt;/code&gt; again to check what Terraform will try to do.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# terraform plan
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  ~ update in-place
Terraform will perform the following actions:
  # digitalocean_kubernetes_cluster.test will be updated in-place
  ~ resource "digitalocean_kubernetes_cluster" "test" {
      ~ auto_upgrade   = false -&amp;gt; true
        id             = "49fd0517-a4a5-41e8-997d-1412c081e000"
        name           = "test-cluster"
        tags           = []
        # (13 unchanged attributes hidden)
        # (2 unchanged blocks hidden)
    }
Plan: 0 to add, 1 to change, 0 to destroy.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As expected, Terraform will now try to update your cluster in place and add an auto-upgrade option to it. Let's go ahead and apply that change.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# terraform apply
(...)
Plan: 0 to add, 1 to change, 0 to destroy.
Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.
  Enter a value: yes
digitalocean_kubernetes_cluster.test: Modifying... [id=49fd0517-a4a5-41e8-997d-1412c081e000]
digitalocean_kubernetes_cluster.test: Modifications complete after 2s [id=49fd0517-a4a5-41e8-997d-1412c081e000]
Apply complete! Resources: 0 added, 1 changed, 0 destroyed.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The change was quickly applied to your cluster, and if you double check in the portal again, you can see that, indeed, the auto-upgrade option is now enabled.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--o1w99dl5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/23ehjiysme72tu6dvh87.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--o1w99dl5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/23ehjiysme72tu6dvh87.png" alt="Image description" width="880" height="97"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Destroying Kubernetes
&lt;/h3&gt;

&lt;p&gt;If you no longer want your Kubernetes cluster, you can destroy it just as easily as you deployed it. All you need to do is execute &lt;code&gt;terraform destroy&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# terraform destroy
digitalocean_kubernetes_cluster.test: Refreshing state... [id=49fd0517-a4a5-41e8-997d-1412c081e000]
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  - destroy
Terraform will perform the following actions:
  # digitalocean_kubernetes_cluster.test will be destroyed
  - resource "digitalocean_kubernetes_cluster" "test" {
      - auto_upgrade   = true -&amp;gt; null
      - cluster_subnet = "10.244.0.0/16" -&amp;gt; null
      - created_at     = "2022-07-24 06:15:34 +0000 UTC" -&amp;gt; null
      - endpoint       = "https://49fd0517-a4a5-41e8-997d-1412c081e000.k8s.ondigitalocean.com" -&amp;gt; null
      - ha             = false -&amp;gt; null
      - id             = "49fd0517-a4a5-41e8-997d-1412c081e000" -&amp;gt; null
      - kube_config    = (sensitive value)
      - name           = "test-cluster" -&amp;gt; null
      - region         = "nyc1" -&amp;gt; null
      - service_subnet = "10.245.0.0/16" -&amp;gt; null
      - status         = "running" -&amp;gt; null
      - surge_upgrade  = true -&amp;gt; null
      - tags           = [] -&amp;gt; null
      - updated_at     = "2022-07-24 06:37:27 +0000 UTC" -&amp;gt; null
      - urn            = "do:kubernetes:49fd0517-a4a5-41e8-997d-1412c081e000" -&amp;gt; null
      - version        = "1.22.11-do.0" -&amp;gt; null
      - vpc_uuid       = "877cc187-97ad-426c-9301-079e3683d351" -&amp;gt; null
      - maintenance_policy {
          - day        = "any" -&amp;gt; null
          - duration   = "4h0m0s" -&amp;gt; null
          - start_time = "10:00" -&amp;gt; null
        }
      - node_pool {
          - actual_node_count = 3 -&amp;gt; null
          - auto_scale        = false -&amp;gt; null
          - id                = "8df9b48c-329d-41f5-899e-b7b896e28e15" -&amp;gt; null
          - labels            = {} -&amp;gt; null
          - max_nodes         = 0 -&amp;gt; null
          - min_nodes         = 0 -&amp;gt; null
          - name              = "worker-pool" -&amp;gt; null
          - node_count        = 3 -&amp;gt; null
          - nodes             = [
              - {
                  - created_at = "2022-07-24 06:15:34 +0000 UTC"
                  - droplet_id = "309670716"
                  - id         = "b82aeb19-78d8-4571-91e6-a0c2cffdb1db"
                  - name       = "worker-pool-c1766"
                  - status     = "running"
                  - updated_at = "2022-07-24 06:19:09 +0000 UTC"
                },
              - {
                  - created_at = "2022-07-24 06:15:34 +0000 UTC"
                  - droplet_id = "309670715"
                  - id         = "6b0d1ecf-4e48-427b-99a9-0e153056238d"
                  - name       = "worker-pool-c176t"
                  - status     = "running"
                  - updated_at = "2022-07-24 06:18:27 +0000 UTC"
                },
              - {
                  - created_at = "2022-07-24 06:15:34 +0000 UTC"
                  - droplet_id = "309670717"
                  - id         = "5ea0e536-96aa-4171-8602-dc0ab19e9888"
                  - name       = "worker-pool-c176l"
                  - status     = "running"
                  - updated_at = "2022-07-24 06:18:27 +0000 UTC"
                },
            ] -&amp;gt; null
          - size              = "s-2vcpu-2gb" -&amp;gt; null
          - tags              = [] -&amp;gt; null
        }
    }
Plan: 0 to add, 0 to change, 1 to destroy.
Do you really want to destroy all resources?
  Terraform will destroy all your managed infrastructure, as shown above.
  There is no undo. Only 'yes' will be accepted to confirm.
  Enter a value: yes
digitalocean_kubernetes_cluster.test: Destroying... [id=49fd0517-a4a5-41e8-997d-1412c081e000]
digitalocean_kubernetes_cluster.test: Destruction complete after 1s
Destroy complete! Resources: 1 destroyed.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Just like that, the cluster is gone.&lt;/p&gt;

&lt;h3&gt;
  
  
  Summary
&lt;/h3&gt;

&lt;p&gt;And there you have it. That's how you can manage Kubernetes clusters with Terraform. You used DigitalOcean Kubernetes for this purpose, but as mentioned before, the process is essentially the same for other providers. You'll just need to initialize a different provider in &lt;code&gt;provider.tf&lt;/code&gt; and then adjust the Kubernetes resource definition in &lt;code&gt;main.tf&lt;/code&gt;. It's best to follow the Terraform documentation for that; you'll find examples and argument references for all major cloud providers.&lt;/p&gt;


&lt;p&gt;Managing infrastructure with Terraform definitely helps you save time, but did you know that you can also easily spin up an environment on &lt;a href="https://releasehub.com"&gt;ReleaseHub&lt;/a&gt; directly from your &lt;code&gt;docker-compose&lt;/code&gt; file? &lt;a href="https://releasehub.com/"&gt;Give it a shot here&lt;/a&gt;, and if you want to expand your Terraform knowledge further, take a look at our &lt;a href="https://releasehub.com/blog/terraforms-for-each-examples"&gt;post about for_each&lt;/a&gt;.&lt;/p&gt;


</description>
      <category>kubernetes</category>
      <category>terraform</category>
      <category>tutorial</category>
      <category>devops</category>
    </item>
    <item>
      <title>6 Docker Compose Best Practices for Dev and Prod</title>
      <dc:creator>Regis Wilson</dc:creator>
      <pubDate>Fri, 19 Aug 2022 21:08:06 +0000</pubDate>
      <link>https://dev.to/rwilsonrelease/6-docker-compose-best-practices-for-dev-and-prod-3cm3</link>
      <guid>https://dev.to/rwilsonrelease/6-docker-compose-best-practices-for-dev-and-prod-3cm3</guid>
      <description>&lt;h1&gt;
  
  
  Introduction
&lt;/h1&gt;

&lt;p&gt;Docker solves the "but it runs on my machine" problem by introducing containerization. However, with a multi-service code base, you need to run several containers at once, such as the back end and the front end. This is where a tool like Docker Compose comes in.&lt;/p&gt;

&lt;p&gt;Docker Compose is an excellent tool for optimizing the process of creating development, testing, staging, and production environments. With Docker Compose, you'll use a single file to build your environment instead of several files with complex scripts with a lot of branching logic. You can also share this single file with other developers on your team, making it easy to work from the same baseline environment.&lt;/p&gt;

&lt;p&gt;This post is about the best practices of Docker Compose for development and production.&lt;/p&gt;

&lt;h1&gt;
  
  
  What Is Docker Compose Good for?
&lt;/h1&gt;

&lt;p&gt;Docker Compose is a tool for defining and running multi-container Docker applications. It allows you to bring up and link multiple containers as one logical unit. Compose puts your services on a shared network, where each container can reach the others by its service name. This lets the services communicate and share resources such as configuration or databases.&lt;/p&gt;

&lt;p&gt;Docker Compose allows you to deploy your application's services as containers and lets you manage these containers as an organized and working whole in a single place—without having to worry about configuring your application's dependencies. For instance, if your app depends on three other services—like a database, an email server, and a messaging server—using Compose means you won't have to manage them individually.&lt;/p&gt;

&lt;p&gt;Instead, Docker handles that part for you so that all four services are available within one cohesive environment. This &lt;a href="https://releasehub.com/blog/cutting-build-time-in-half-docker-buildx-kubernetes" rel="noopener noreferrer"&gt;significantly reduces the time&lt;/a&gt; needed to get a service up and running. You can make changes simultaneously across all services. Therefore, Docker Compose is an excellent tool for building complex applications that utilize several services.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5ilt0at9duzoliysja3q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5ilt0at9duzoliysja3q.png" alt="Image description"&gt;&lt;/a&gt;&lt;br&gt;
‍&lt;/p&gt;
&lt;h1&gt;
  
  
  Docker Compose Best Practices for Development
&lt;/h1&gt;

&lt;p&gt;During development, you may have the advantage of leveraging local storage, which is not the case in production. In production, resources like storage are costly; thus, you must structure the &lt;code&gt;docker-compose&lt;/code&gt; file carefully. The configurations for development and production differ slightly, and so do the best practices. Below are the best practices you should employ when using Docker Compose during development.&lt;/p&gt;
&lt;h1&gt;
  
  
  Mount Your Code as Volume to Avoid Unnecessary Rebuilds
&lt;/h1&gt;

&lt;p&gt;By mounting the project directory (current directory) on the host into the container using the &lt;code&gt;volumes&lt;/code&gt; key, you can make changes to the code as you go without having to rebuild the image. This also means you don't have to rebuild and push your image to switch between development and production environments; you only need to stop the container and start it again.&lt;/p&gt;

&lt;p&gt;Note: You can also copy the code into the image instead of using a bind mount, but the downside to this approach is that you'll have to rebuild and re-push your Docker image each time you make a change. With a bind mount, all you need to do is restart the container.&lt;/p&gt;
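&lt;p&gt;A minimal sketch of such a bind mount (the service name and paths are illustrative):&lt;/p&gt;

```yaml
services:
  app:
    build: .
    volumes:
      # host path (relative to the compose file) : path inside the container
      - ./src:/app/src
```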
&lt;h1&gt;
  
  
  Use an Override File
&lt;/h1&gt;

&lt;p&gt;Some services are only necessary during development, not in production. For example, developing a JavaScript application with most of its frameworks requires &lt;a href="https://webpack.js.org/" rel="noopener noreferrer"&gt;webpack&lt;/a&gt;. An override file mimics the compose file but adds webpack as a service. When you spin up the stack in development, Docker Compose merges &lt;code&gt;docker-compose.override.yml&lt;/code&gt; on top of the base compose file, so when you make changes in the code base, you'll see them in real time. This gives you separate settings for the production and development environments while avoiding redundancy. Your &lt;code&gt;docker-compose.override.yml&lt;/code&gt; file will have the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;services&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;webpack&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;build&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;context&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;."&lt;/span&gt;
      &lt;span class="na"&gt;target&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;webpack"&lt;/span&gt;
    &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;yarn&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;run&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;watch"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
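&lt;p&gt;For context, Docker Compose automatically merges &lt;code&gt;docker-compose.override.yml&lt;/code&gt; on top of &lt;code&gt;docker-compose.yml&lt;/code&gt; when you run &lt;code&gt;docker-compose up&lt;/code&gt;. A minimal base file for the override above might look like this (the &lt;code&gt;app&lt;/code&gt; service is a hypothetical example):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;services:
  app:
    build:
      context: "."
      target: "app"
    ports:
      - "8000:8000"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;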



&lt;h1&gt;
  
  
  Use YAML Anchors
&lt;/h1&gt;

&lt;p&gt;YAML anchors let you reuse parts of your YAML file, much like functions. You can use them to share default settings between services. For example, let's say you want to create two services, &lt;code&gt;api&lt;/code&gt; and &lt;code&gt;web&lt;/code&gt;. Both of them will use Postgres as a database and Redis as a cache, but the &lt;code&gt;web&lt;/code&gt; service will need some settings of its own. Spelling all of this out on each service in &lt;code&gt;docker-compose.yml&lt;/code&gt; would be cumbersome, because the same lines would be duplicated in both the &lt;code&gt;api&lt;/code&gt; and &lt;code&gt;web&lt;/code&gt; definitions.&lt;/p&gt;

&lt;p&gt;Using a YAML anchor to share those settings, you can define the defaults once and merge them into each service like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;x-app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="nl"&gt;&amp;amp;default-app&lt;/span&gt;
  &lt;span class="na"&gt;build&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;context&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;."&lt;/span&gt;
    &lt;span class="na"&gt;target&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;app"&lt;/span&gt;
  &lt;span class="na"&gt;depends_on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;postgres"&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;redis"&lt;/span&gt;
  &lt;span class="na"&gt;env_file&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;.env"&lt;/span&gt;
  &lt;span class="na"&gt;restart&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;${DOCKER_RESTART_POLICY:-unless-stopped}"&lt;/span&gt;
  &lt;span class="na"&gt;api&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="nv"&gt;*default-app&lt;/span&gt;
    &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;8000:8000"&lt;/span&gt;
  &lt;span class="na"&gt;web&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="nv"&gt;*default-app&lt;/span&gt;
    &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;8000:5000"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Additionally, you may override an aliased property in a particular service: a value set directly on the service takes precedence over the one merged in from the alias. In the example above, only the &lt;code&gt;web&lt;/code&gt; service maps container port 5000. This method is beneficial when two services share a Dockerfile and a code base but have some slight variations.&lt;/p&gt;
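&lt;p&gt;For instance, as a hypothetical tweak to the example above, the &lt;code&gt;web&lt;/code&gt; service could override the shared restart policy while inheriting everything else from the alias:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;  web:
    &amp;lt;&amp;lt;: *default-app
    # an explicit key beats the value merged in from *default-app
    restart: "no"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;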

&lt;h1&gt;
  
  
  Docker Compose Best Practices for Production
&lt;/h1&gt;

&lt;p&gt;As mentioned, dev and production may have slight configuration differences. So now, let's look at some best practices to help your app be production ready.&lt;/p&gt;

&lt;h1&gt;
  
  
  Leverage the Docker Restart Policy
&lt;/h1&gt;

&lt;p&gt;Occasionally, you'll face a scenario where a service crashes or its host restarts. To make sure your containers come back up on their own, set a restart policy on each service, such as &lt;code&gt;restart: always&lt;/code&gt; or &lt;code&gt;restart: unless-stopped&lt;/code&gt;; without one, a container that exits stays down until someone intervenes. However, if your app relies on other services (MySQL, Redis, etc.) outside of Docker Compose, then you should take extra precautions and make sure they are configured correctly, because the restart policy only covers the containers Compose manages.&lt;/p&gt;
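&lt;p&gt;A minimal sketch (the service name and image are illustrative):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;services:
  web:
    image: "my-app:latest"
    # restart automatically unless the container was explicitly stopped
    restart: "unless-stopped"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;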

&lt;h1&gt;
  
  
  Correct Cleanup Order of Docker Images
&lt;/h1&gt;

&lt;p&gt;You also need to clean up containers correctly in production. Do not remove containers by hand with &lt;code&gt;docker rm -f&lt;/code&gt;, as that bypasses Docker Compose and can leave stale state behind. Instead, run &lt;code&gt;docker-compose down --remove-orphans&lt;/code&gt;. In the dev stage this is less of an issue, because you typically tear down and rebuild the whole stack between sessions. In production, however, containers are stopped and restarted repeatedly, and services get renamed or removed from the compose file over time.&lt;/p&gt;

&lt;p&gt;Consequently, orphaned containers, that is, containers belonging to services that are no longer defined in the compose file, can linger even after &lt;code&gt;docker-compose down&lt;/code&gt; is called. Because Docker Compose reuses port bindings, an old service can still be reachable even though it should have been retired. Deleting individual containers with &lt;code&gt;docker rm -f&lt;/code&gt; does not fix this and is a mistake.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flyz6dwhlrthjvdiwc9we.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flyz6dwhlrthjvdiwc9we.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Since you cannot easily tell which containers are still legitimately in use, delete the stale ones explicitly with the &lt;code&gt;--remove-orphans&lt;/code&gt; flag so that a restarted stack starts from a known-clean state.&lt;/p&gt;


&lt;p&gt;Notice we've added the &lt;code&gt;--remove-orphans&lt;/code&gt; flag: it tells Docker Compose to also remove containers for services that no longer exist in the compose file, rather than leaving them running. This is crucial if you have services restarting.&lt;/p&gt;

&lt;h1&gt;
  
  
  Setting Your Containers' CPU and Memory Limits
&lt;/h1&gt;

&lt;p&gt;You can have Docker limit the CPU and memory of your containers by declaring limits in the &lt;code&gt;docker-compose.yml&lt;/code&gt; file before starting your container. For example, the following configuration limits the web service to one CPU:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;web&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;deploy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;resources&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;limits&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;cpus&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;1"&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;cpus&lt;/code&gt; value is an upper bound: the service can use up to that many CPUs when they are available. If you fail to set a limit, the service will use as many resources as it requires.&lt;/p&gt;

&lt;p&gt;Tip: if you run multiple containers on the same machine, give each one a memory limit that matches its workload. A limit caps how much memory that container can claim, so one hungry service cannot starve the others.&lt;/p&gt;

&lt;p&gt;Note: you can use this technique for as many services as you'd like, and you can parameterize the limits with variable substitution (for example, &lt;code&gt;cpus: "${WEB_CPU_LIMIT:-1}"&lt;/code&gt;), which Docker Compose resolves from your &lt;code&gt;.env&lt;/code&gt; file when each container starts up.&lt;/p&gt;

&lt;p&gt;Consequently, you need to understand the resource requirements of your service. This will prevent you from wasting resources and minimize production costs.&lt;/p&gt;
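&lt;p&gt;Putting this together, here is a sketch of CPU and memory limits for a hypothetical web service. (With the classic &lt;code&gt;docker-compose&lt;/code&gt; binary, the &lt;code&gt;deploy&lt;/code&gt; section may only take effect in Swarm mode or with the &lt;code&gt;--compatibility&lt;/code&gt; flag.)&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;services:
  web:
    image: "my-app:latest"
    deploy:
      resources:
        limits:
          cpus: "1"       # hard ceiling
          memory: "512M"
        reservations:
          memory: "256M"  # guaranteed minimum
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;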

&lt;h1&gt;
  
  
  Conclusion
&lt;/h1&gt;

&lt;p&gt;Hopefully, these tips will help you use Docker Compose more effectively in development and production. After trying out the above configuration and optimization, you should be able to build your containers efficiently. And if you feel that this approach still leaves your Docker Compose setup complex, don't worry: there are much easier ways to organize your containerized services for development and production at &lt;a href="https://releasehub.com/" rel="noopener noreferrer"&gt;ReleaseHub&lt;/a&gt;.&lt;/p&gt;


</description>
      <category>docker</category>
      <category>dockercompose</category>
      <category>devops</category>
      <category>webdev</category>
    </item>
    <item>
      <title>How to Solve AWS EFS “Operation Not Permitted” Errors in EKS</title>
      <dc:creator>Regis Wilson</dc:creator>
      <pubDate>Thu, 21 Jul 2022 15:14:00 +0000</pubDate>
      <link>https://dev.to/rwilsonrelease/how-to-solve-aws-efs-operation-not-permitted-errors-in-eks-2m8</link>
      <guid>https://dev.to/rwilsonrelease/how-to-solve-aws-efs-operation-not-permitted-errors-in-eks-2m8</guid>
      <description>&lt;p&gt;At &lt;a href="https://releasehub.com"&gt;ReleaseHub&lt;/a&gt;, we operate dozens of Amazon Elastic Kubernetes Service (EKS) clusters on behalf of our customers. The various workloads and application stacks we have to support are practically as diverse as the number of engineers who use our product. One very common use case is a permanent storage space for the workloads that are deployed in each environment.&lt;/p&gt;

&lt;p&gt;The most common general solution for storage of compute workloads in AWS is the Elastic Block Store (EBS), which has the advantage of being relatively performant and easy to set up. However, EBS volumes are tied to a specific Availability Zone (AZ). For Kubernetes workloads running across multiple AZs, ensuring that pods are scheduled into the same AZ as their volumes turns out to be difficult to do properly, and this has caused numerous issues for our customers who use EBS storage in their clusters. We also discovered that EBS storage costs can add up quickly, and over-provisioning volume sizes (a necessary evil) adds to the problem.&lt;/p&gt;

&lt;p&gt;Without going too far into the pros and cons of each storage system, we found that most customers were well satisfied with Elastic File System (EFS) mount points providing the persistent storage volumes backing the application workloads deployed to their clusters. EFS offers a good balance of performance, reliability, price (pay for what you store), and AZ diversification. As such, we made an early decision to move almost all customer workloads off EBS to EFS, and only allowed the EBS option for customer workloads that specifically opt in to it. This solution worked well for us from EKS version 1.14 all the way up until recently, when we started moving customers to 1.21 and beyond.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Problem
&lt;/h2&gt;

&lt;p&gt;In our original implementation of EFS workloads in EKS, we started out using the (now retired) EFS provisioner. This solution allowed our customers to specify a volume for persistent storage and the provisioner would create a filesystem mount from an existing EFS infrastructure point (which we create automatically upon cluster creation). The customer pods would then mount this filesystem and have unlimited storage that would persist until the workload expired or was deleted, at which point the volume space would be removed. We literally experienced zero issues with this configuration from the first time we tested it.&lt;/p&gt;

&lt;p&gt;In recent months, we have been tirelessly upgrading to the latest version(s) of EKS to keep customers up to date with the latest features and deprecations in the never ending Kubernetes versions. Upon reviewing the various addons and plugins, we realised that the EFS provisioner was replaced by the modern EFS CSI driver. You can read more about the two projects in this stack overflow article.&lt;/p&gt;

&lt;p&gt;The upgrade process was not terribly difficult for us, since we could easily run both provisioners side by side and then switch workloads over using Kubernetes StorageClass objects. As one example, Customer A would be using the legacy &lt;code&gt;provisioner: releasehub.com/aws-efs&lt;/code&gt; storage class; we could then upgrade any subsequent workloads to &lt;code&gt;provisioner: efs.csi.aws.com&lt;/code&gt; and test until we were satisfied with the results. Rolling back was easy: simply revert the workloads to the original storage class.&lt;/p&gt;

&lt;p&gt;Eventually, after demonstrating that the process worked seamlessly and nearly flawlessly with the new driver and the same infrastructure in a variety of scenarios, we were able to confidently roll out the changes to more and more customers in a planned migration.&lt;/p&gt;

&lt;p&gt;That was when we ran into two major stumbling blocks with customer workloads that use persistent volumes: postgres and rabbitmq containers. Here are the horrible details we discovered for each:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;initdb: could not change permissions of directory "/var/lib/postgresql/data/pgdata": Operation not permitted

chown: /var/lib/rabbitmq: Operation not permitted
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;It is important to note that this could happen to any workload that uses the &lt;code&gt;chown&lt;/code&gt; command, but these were the most common complaints we got from customers.&lt;/p&gt;

&lt;h2&gt;
  
  
  Diagnosis
&lt;/h2&gt;

&lt;p&gt;At first, we did what every engineer does: we searched Google and confirmed the problems were widespread, finding Stack Overflow and Server Fault questions here and here respectively. Unfortunately, and most frustratingly, there were no good solutions to the problem(s); even worse, many of the solutions people posited were highly complex, tightly tied to a particular implementation, or technically brittle. There seemed to be no particularly elegant, easy solution, especially for our wide diversity of customer use cases.&lt;/p&gt;

&lt;p&gt;We tried using the latest versions of the drivers to no avail. We tried even older versions of the CSI driver to see if this might have been a regression (also to no avail). Digging even deeper into EKS and EFS specifically, we discovered that dynamic provisioning (which is what we rely on to provide a seamless, fast, efficient service for workloads) was only recently added to the new CSI driver. This GitHub issue (unsolved to this day) indicates that the problem has actually existed since the beginning of the driver’s use.&lt;/p&gt;

&lt;p&gt;Reading through the various use cases affected was like reading a long-lost diary of all our horrible secrets and failures laid bare: including some horrific harbingers of doom we had nearly inflicted on the rest of our customers who were yet to be migrated. We quickly reviewed our test cases and made the stunning discovery that we had been testing all kinds of workloads that read and write to NFS volumes, but hadn’t tested the ones that use chown. That was the only use case we hadn’t considered, and it was the one use case that failed.&lt;/p&gt;

&lt;p&gt;The root cause of the issue is that an EFS mount point dynamically created for a pod workload is given a set of mapped numerical user IDs (UIDs), but the UID stored inside the pod workload typically will not match the UID assigned to the EFS mount point. In most use cases the operating system does not care what UID is in use on the mounted filesystem; it will blindly read and/or write and assume that if the operation succeeds, the permissions are correct. There are a number of good reasons not to be that trusting, however. In a database scenario, for example, the permissions for reading and writing important data are not left to chance, and the application will attempt to ensure the UID (and maybe even the group ID [GID]) matches.&lt;/p&gt;

&lt;p&gt;This did not answer the question of why the legacy, deprecated provisioner seems to work flawlessly, but we will dig into that in another blog post.&lt;/p&gt;

&lt;p&gt;To date, there does not seem to be any way to match the UIDs so that the operating system inside the container can set or even pretend to set the UID of a directory the application needs for reading and writing so that it matches the physical infrastructure underlying Kubernetes. This is not just an academic legacy issue, it is a real concern for security and privacy reasons that affect modern applications running in modern Cloud Native environments.&lt;/p&gt;

&lt;h2&gt;
  
  
  A Few Solutions
&lt;/h2&gt;

&lt;p&gt;Finally, we present a few solutions, in the chronological order in which we tried them. We gradually settled on the last option; you will see the rationale behind this decision unfold.&lt;/p&gt;

&lt;p&gt;Option 1: Find every occurrence of Waldo and fix it for each customer and application workload. This option sounds as bad as you imagine it would be. Worse, it could make an easy and simple solution (pull a standard container and run it) unusable under normal circumstances. Even worse, our work would never be done: any new customers we onboard would have a new set of changes or fixes or workarounds to find and implement.&lt;/p&gt;

&lt;p&gt;For example, we could easily identify the lines affecting us in the postgresql image entrypoint and create our own version. That means creating a separate Dockerfile and modifying it to your tastes…for each customer, each version of postgres, and each operating system in use, times the number of applications each customer uses. Or, we could try to force the UID and GID numbers to match the CSI provisioner’s (again, with a splinter version of the Dockerfile). Now that we have quote-unquote, allegedly, supposedly, air quotes “solved” the problem, do the exact same thing for the next application (like rabbitmq, or Jenkins, or whatever) and all of its application and operating system versions. Not just now, but also moving forward into the future forever.&lt;/p&gt;

&lt;p&gt;Option 2: Try to boil the ocean to find every single species of fish and identify them. Taking a step back, it is clear that we cannot hope to ever solve every use case of chown that is out there in the wild today, not to mention new ones that are being born every year. We were able to identify that most docker images use a specific UID and GID combination and the numbers of these are fairly limited. Examining two use cases in question, we found that postgresql images tended to use 999:999 and several others used 99 or 100, perhaps 1000 and 1001. This seemed like a promising lead to a solution because you can specify the UID in the CSI provisioner.&lt;/p&gt;
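&lt;p&gt;As a sketch of what that looks like with the EFS CSI driver's dynamic provisioning (access point) parameters, where the filesystem ID is a placeholder:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: efs-uid-999
provisioner: efs.csi.aws.com
parameters:
  provisioningMode: efs-ap
  fileSystemId: fs-12345678   # placeholder for your EFS filesystem ID
  directoryPerms: "700"
  uid: "999"                  # POSIX user ID applied to the access point
  gid: "999"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;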

&lt;p&gt;This elegant solution would result in creating several StorageClasses in Kubernetes, like say, “postgresql-999”, “rabbitmq-1001”, and so forth. Or maybe just “efs-uid-999” to be more generic. Then we would teach each customer who enjoyed a failed build or deploy stack trace to change their settings to use the appropriate StorageClass. Even better, there are only about 2^16 possible unique UIDs in Linux, so we could programmatically create all of them in advance and apply them to our cluster to be stored in etcd, ready for retrieval whenever a customer wanted a UID-specific storage class. Or to limit choices in an opinionated but friendly way, we could require all containers to use a fixed UID, like 42, in order to use the storage volumes on our platform. If a customer wanted to use a different UID, like 43, we could charge $1 for every UID above and beyond the original one.&lt;/p&gt;

&lt;p&gt;If you did not detect any sarcasm in the preceding paragraph, you may want to call a crisis hotline to discuss obtaining a sense of humour. Amazon does not sell any upon last check; although you might find a used version on Etsy or eBay. I once ordered a sense of humour and it was stolen by a porch pirate before I could bring it in. Once I had obtained a suitable one, I would occasionally rent mine out on the joke version of Uber or Lyft, and sometimes you can even spend the night in my sense of humour on AirBNB, but due to abuse and lack of adequate tipping I have had to scale my activities down lately.&lt;/p&gt;

&lt;p&gt;Option 3: When in doubt, roll back to what worked. We ultimately had to decide that we would be unable to support the new CSI driver until an adequate solution for dynamic provisioning of EFS volumes on EKS was found. In the world of open source, there is always someone who comes up with a clever solution to a common problem, and that becomes the de facto implementation recommendation. For now, we were satisfied with the original functionality of the deprecated provisioner.&lt;/p&gt;

&lt;p&gt;But this raises another issue: how do we square using a deprecated and potentially unsupported solution on a platform our customers depend and rely upon? The answer is that we can make small adjustments and updates to the YAML and source code, since the original solution’s code is still available and can be updated by ReleaseHub to support our customers.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Sometimes we must accept that we live in an imperfect world and accept the fact that we are as imperfect as the imperfect world we live in which means that we should accept the imperfection as the correct way that things should be and thus, the imperfection we see in the world merely reflects the imperfections in ourselves, which makes us perfect in every way.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>efs</category>
      <category>eks</category>
      <category>kubernetes</category>
    </item>
    <item>
      <title>How To Write Route53 Stubbed Responses For Rspec Tests</title>
      <dc:creator>Regis Wilson</dc:creator>
      <pubDate>Tue, 14 Sep 2021 16:05:32 +0000</pubDate>
      <link>https://dev.to/rwilsonrelease/how-to-write-route53-stubbed-responses-for-rspec-tests-1mm1</link>
      <guid>https://dev.to/rwilsonrelease/how-to-write-route53-stubbed-responses-for-rspec-tests-1mm1</guid>
      <description>&lt;p&gt;In this blog post, I will go over a recent exercise to fix some bugs, refactor, and write tests for some of our code related to Route53. Route53 is an AWS service that creates, updates, and provides Domain Name Service (DNS) for the internet. The reason that code unit tests are so important is because it helps reveal bugs, creates supportable and high quality code, and allows restructuring and refactoring with confidence. The downside to writing unit tests is that it can be time consuming, difficult at times, and bloating to the normal code base. It is not uncommon for unit tests’ "lines of code" (LOC) count to far exceed the LOC for the actual codebase. You would not be crazy to have nearly an order of magnitude difference in LOC for actual codebase versus LOC for unit test cases.&lt;/p&gt;

&lt;p&gt;In this case, interacting with the AWS Route53 API seemed daunting to test, and stubbing responses seemed incredibly difficult, until I found some examples written by another one of our engineers that showed how rspec and the API SDKs could be made to work together in a fairly straightforward and (dare I say) downright fun way for unit testing Ruby code.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Code Under Examination
&lt;/h3&gt;

&lt;p&gt;This straightforward code snippet was my first target for unit testing. It is very simple and only does one thing. It is ripe for refactoring for readability and reusability for other sections of the code. This should be the best way to begin the project and get familiar with the rspec templates I’d be using later. Before I start refactoring and fixing bugs, I wanted to write tests. Other than the fairly “inliney” and hard to follow syntax and “magical” code, can you spot any bugs?&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def route53_hosted_zone_id(subdomain)
  route53.list_hosted_zones_by_name.map do |response|
    response.hosted_zones.detect{|zone| zone.name == "#{subdomain}." }&amp;amp;.id&amp;amp;.gsub(/.*\//, '')
  end.flatten.compact.first
end
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Write Helpers Before the Refactor
&lt;/h3&gt;

&lt;p&gt;I am already itching to extract the magical subdomain rewriting and &lt;code&gt;gsub&lt;/code&gt; deleting into separate methods that can be reused and are easier to read:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def cannonicalise(hostname)
  hostname = domain_parts(hostname).join('.')

  "#{hostname}."
end

def parse_hosted_zone_id(hosted_zone_id)
  return nil if hosted_zone_id.blank?

  hosted_zone_id.gsub(%r{.*/+}, '')
end
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Stub and Test the New Methods
&lt;/h3&gt;

&lt;p&gt;First things first, we need to do a little bit of boilerplate to get the API calls mocked and stubbed, then add a few very simple tests to get started.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# frozen_string_literal: true

require 'rails_helper'

RSpec.describe Cloud::Aws::Route53 do
  let(:route53) { Aws::Route53::Client.new(stub_responses: true) }

  subject { FactoryBot.create(:v2_cloud_integration) }

  before do
    allow(subject).to receive(:route53).and_return(route53)
  end

  describe '#parse_hosted_zone_id' do
    context 'with a valid hostedzone identifier' do
      it 'returns just the zoneid' do
        expect(subject.parse_hosted_zone_id('/hostedzone/Z1234ABC')).to eq('Z1234ABC')
      end
    end
  end
  describe '#cannonicalise' do
    context 'without a dot' do
      it 'returns the zone with a dot' do
        expect(subject.cannonicalise('some.host')).to eq('some.host.')
      end
    end
    context 'with a dot' do
      it 'returns the zone with a dot' do
        expect(subject.cannonicalise('some.host.')).to eq('some.host.')
      end
    end
  end
end
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Write A Fixture
&lt;/h3&gt;

&lt;p&gt;Perfect, now we can test our new &lt;code&gt;cannonicalise&lt;/code&gt; and &lt;code&gt;parse_hosted_zone_id&lt;/code&gt; methods and we have a stubbed response coming from the Route53 API calls. Let’s write a simple new test to uncover some bugs by testing the api responses we get. The first step is to write some fixtures we can test with. Here we generate two faked stubbed responses for a very common domain.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;context 'an AWS cloud integration' do
    before do
      route53.stub_responses(:list_hosted_zones_by_name, {
                               is_truncated: false,
                               max_items: 100,
                               hosted_zones: [
                                 {
                                   id: '/hostedzone/Z321EXAMPLE',
                                   name: 'example.com.',
                                   config: {
                                     comment: 'Some comment 1',
                                     private_zone: true
                                   },
                                   caller_reference: SecureRandom.hex
                                 },
                                 {
                                   id: '/hostedzone/Z123EXAMPLE',
                                   name: 'example.com.',
                                   config: {
                                     comment: 'Some comment 2',
                                     private_zone: false
                                   },
                                   caller_reference: SecureRandom.hex
                                 }
                               ]
                             })
    end
end
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you’re wondering how to make these fixtures, you can easily read the AWS Ruby SDK V3 documentation for sample inputs and outputs, or you can make API calls via the AWS CLI and inspect the responses, or you can even just put in some values and see what happens when you run rspec. For example, if I remove, say, the &lt;code&gt;caller_reference&lt;/code&gt; parameter, I’ll get an error that helpfully identifies the problem.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--aCfGHGF2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/owf6wkx5d269hjvn98ew.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--aCfGHGF2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/owf6wkx5d269hjvn98ew.png" alt="Removing required parameters gives a helpful error message to correct the problem."&gt;&lt;/a&gt;&lt;br&gt;
You really can’t go wrong with the SDK validation and stubbed responses taken from the examples or from live requests you make with the CLI! This is already a tremendous benefit and we’re not even testing our own code yet.&lt;/p&gt;
&lt;h3&gt;
  
  
  Write a Test Case with the Stubbed Responses
&lt;/h3&gt;

&lt;p&gt;Now we can write some unit test cases that loop through several hostnames for which we expect to find the hosted zone. Voilà, we’ve uncovered some bugs just by being a little creative with our inputs! Do you see why?&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;describe '#route53_hosted_zone_id' do
  %w[
    example.com
    example.com.
    www.example.com
    www.example.com.
    test.www.example.com
    test.www.example.com.
    deep.test.www.example.com
  ].each do |hostname|
    context 'for hosts that exist in the parent zone' do
      it "returns the hosted_zone_id for #{hostname}" do
        expect(route53).to receive(:list_hosted_zones_by_name).with(no_args).and_call_original
        hosted_zone_id = subject.route53_hosted_zone_id(hostname)
        expect(hosted_zone_id).to eq('Z123EXAMPLE')
      end
    end
  end
end
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--x1qKmG8n--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nyk4clqkcrd79jly67vo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--x1qKmG8n--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nyk4clqkcrd79jly67vo.png" alt="With some creativity in test inputs and stubbed responses from the API, we can uncover some edge cases and bugs to fix!"&gt;&lt;/a&gt;&lt;br&gt;
What these failed test cases tell us is that the code worked under perfect conditions, but scenarios that are not uncommon in practice (for example, having an internal private zone and a public zone with the same name, or selecting a two-level-deep name in a zone) could cause unpredictable behaviours.&lt;/p&gt;
&lt;h3&gt;
  
  
  The Solution is an Exercise for the Reader
&lt;/h3&gt;

&lt;p&gt;Now we merely need to write or refactor the code from our original snippet to pass all of our new test cases. One issue our test cases revealed was that two-level-deep names (say, &lt;code&gt;test.www.example.com&lt;/code&gt; in the zone &lt;code&gt;example.com&lt;/code&gt;) would be missed. We also needed a way to ensure that zones are not private, perhaps with an optional parameter to specify private zones. Here is an example that passes all the existing tests; I welcome feedback on any other bugs or optimisations you find.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def route53_hosted_zone_ids_by_name(is_private_zone: false)
  # TODO: danger, does not handle duplicate zone names!!!
  hosted_zone_ids_by_name = {}
  route53.list_hosted_zones_by_name.each do |response|
    response.hosted_zones.each do |zone|
      if !!zone.config.private_zone == is_private_zone
        hosted_zone_ids_by_name[zone.name] = parse_hosted_zone_id(zone.id)
      end
    end
  end
  hosted_zone_ids_by_name
end

def route53_hosted_zone_id(hostname)
  # Recursively look for the zone id of the nearest parent (host, subdomain, or apex)
  hosted_zone_ids_by_name = route53_hosted_zone_ids_by_name

  loop do
    hostname = cannonicalise(hostname)
    break if hosted_zone_ids_by_name[hostname].present?

    # Strip off one level and try again
    hostname = domain_parts(hostname).drop(1).join('.')
    break if hostname.blank?
  end
  hosted_zone_ids_by_name[hostname]
end
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
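&lt;p&gt;The solution above leans on two helpers, &lt;code&gt;cannonicalise&lt;/code&gt; and &lt;code&gt;domain_parts&lt;/code&gt;, whose definitions are not shown here, and it also uses ActiveSupport’s &lt;code&gt;present?&lt;/code&gt; and &lt;code&gt;blank?&lt;/code&gt;. One plausible sketch of the two helpers (mine, not necessarily the original implementation) might look like this:&lt;/p&gt;

```ruby
# Hypothetical definitions for the two helpers assumed by
# route53_hosted_zone_id; the original article does not show them.

# Route 53 stores zone names with a trailing dot ("example.com."), so
# normalise every hostname to that canonical form before any lookup.
def cannonicalise(hostname)
  hostname.end_with?('.') ? hostname : "#{hostname}."
end

# Split a hostname into its dot-separated labels, ignoring any trailing
# dot, so that domain_parts(name).drop(1).join('.') strips exactly one
# level per iteration of the loop.
def domain_parts(hostname)
  hostname.chomp('.').split('.')
end
```

&lt;p&gt;With these in place, the loop walks from &lt;code&gt;deep.test.www.example.com&lt;/code&gt; up through each parent name, one label at a time, until it finds a matching zone name (always stored with its trailing dot) or runs out of labels.&lt;/p&gt;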



&lt;h3&gt;
  
  
  Congratulations
&lt;/h3&gt;

&lt;p&gt;All test cases now pass! Keep writing tests until you get nearly 100% coverage!&lt;/p&gt;
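&lt;p&gt;To see how close you are to that goal, a coverage tool such as the SimpleCov gem can report line coverage for every spec run. A minimal setup (assuming &lt;code&gt;simplecov&lt;/code&gt; is in your Gemfile) goes at the very top of &lt;code&gt;spec/spec_helper.rb&lt;/code&gt;:&lt;/p&gt;

```ruby
# spec/spec_helper.rb -- minimal SimpleCov setup (assumes the simplecov gem).
# Coverage tracking must start before any application code is required,
# otherwise files loaded earlier are missing from the report.
require 'simplecov'

SimpleCov.start do
  add_filter '/spec/' # do not count the specs themselves toward coverage
end
```

&lt;p&gt;After a &lt;code&gt;bundle exec rspec&lt;/code&gt; run, SimpleCov writes an HTML report under &lt;code&gt;coverage/&lt;/code&gt; showing which lines your tests never exercised.&lt;/p&gt;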

</description>
      <category>aws</category>
      <category>rspec</category>
      <category>route53</category>
      <category>testdev</category>
    </item>
  </channel>
</rss>
