<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Roost.io</title>
    <description>The latest articles on DEV Community by Roost.io (@roost).</description>
    <link>https://dev.to/roost</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Forganization%2Fprofile_image%2F2425%2F96c263f9-1e21-4f02-b48d-7e804dca848b.png</url>
      <title>DEV Community: Roost.io</title>
      <link>https://dev.to/roost</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/roost"/>
    <language>en</language>
    <item>
      <title>A multi-node and production-like local Kubernetes development environment</title>
      <dc:creator>Sudhir Jangir</dc:creator>
      <pubDate>Fri, 30 Apr 2021 09:06:28 +0000</pubDate>
      <link>https://dev.to/roost/a-multi-node-and-production-like-local-kubernetes-development-environment-4l93</link>
      <guid>https://dev.to/roost/a-multi-node-and-production-like-local-kubernetes-development-environment-4l93</guid>
      <description>&lt;p&gt;For Developers to run and test their containerized applications on a local system, a few options are available like Docker Desktop, Minikube, Kind or K3s, and many more. There is one more to the list — Roost, which is a complete end-to-end development platform with easy integration to various tools like Jenkins, Argo, Falco, Linkerd, or Istio.&lt;/p&gt;

&lt;p&gt;In this article, we will compare Docker Desktop with Roost. Both use the native virtualization technology available by default on each platform.&lt;/p&gt;

&lt;p&gt;While both are one-click installs, Roost comes with many enterprise-level configuration options. Docker Desktop gives you a single-node cluster, whereas Roost gives your development team a multi-node Kubernetes environment on their local systems (Mac, Windows, and Ubuntu). Roost users get a rich UI to configure the cluster to their needs.&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--flaLcA6H--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7zoq0jcxc4kk0xxmxyvq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--flaLcA6H--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7zoq0jcxc4kk0xxmxyvq.png" alt="image"&gt;&lt;/a&gt;&lt;br&gt;
Docker Desktop is controlled and managed by individual developers, but a team needs consistency in its development environment. Roost also has a SaaS Control Plane component with unique Left-Shifted Enterprise Policies, allowing you to create teams and define cluster policies and configuration, giving your team a consistent, production-like development experience. At the same time, Roost allows you to run these development clusters on-prem or in the cloud.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--BFCvAe-1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xkleah80pb9pb1hz5r25.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--BFCvAe-1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xkleah80pb9pb1hz5r25.png" alt="image"&gt;&lt;/a&gt;&lt;br&gt;
Even with multiple nodes, Roost is lightweight and easy to configure. Through the Roost control plane, team admins can easily manage configuration and policy changes on development machines. Policies can also be changed dynamically and can differ across teams. A developer may be part of multiple teams; switching teams is just a click away, and the cluster configuration changes accordingly. With Docker Desktop, you get an isolated environment that differs significantly from production.&lt;/p&gt;

&lt;p&gt;Roost has a built-in Team Dashboard that lets you audit logs for compliance needs, whether they relate to Dockerfile inspection, image scanning, or any other cluster event. &lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--1dzYfbjR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3e8yhbgue99g1vho39qs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--1dzYfbjR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3e8yhbgue99g1vho39qs.png" alt="image"&gt;&lt;/a&gt;&lt;br&gt;
On Roost, teams can also create a shared team cluster, and, when needed, a user can acquire an exclusive lock on it. &lt;/p&gt;
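For a point of reference on the multi-node setups discussed above, this is roughly how Kind, one of the alternatives mentioned at the start, describes a multi-node local cluster. The config below is an illustration of the general idea only, not of Roost's own mechanism:

```yaml
# Illustrative Kind config for a local cluster with one control-plane
# node and two workers. Save as kind-multinode.yaml and create the
# cluster with: kind create cluster --config kind-multinode.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
  - role: worker
  - role: worker
```

Per the article above, Roost exposes this kind of multi-node configuration through its UI and team-level policies rather than a per-developer config file.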

</description>
      <category>kubernetes</category>
      <category>dockerdesktop</category>
      <category>roost</category>
    </item>
    <item>
      <title>The swing of the "works on my machine" pendulum!</title>
      <dc:creator>rishiyadav</dc:creator>
      <pubDate>Thu, 29 Apr 2021 04:37:47 +0000</pubDate>
      <link>https://dev.to/roost/the-swing-of-the-works-on-my-machine-pendulum-3eed</link>
      <guid>https://dev.to/roost/the-swing-of-the-works-on-my-machine-pendulum-3eed</guid>
      <description>&lt;h1&gt;
  
  
  Why is dev-prod disparity such a big issue?
&lt;/h1&gt;

&lt;p&gt;To understand this, we need to look at the history of the gap between development and production systems. Once upon a time, during the era of punch cards, there was no gap between development and production systems. If there were no errors, time to deploy was close to zero. If there was even a single error, the developer had to start almost all over again (there was some reusability) and wait in line for a chance to schedule the next run. There was no "works on my machine" (WOMM) issue at that time, as the giant mainframe was everyone's machine.&lt;/p&gt;

&lt;p&gt;The next era was that of thick clients. Think of Visual Basic running on an MS Access database. This was the brief golden age of WOMM being true: if it worked on the developer's machine, it had to work on the production machine (unless the machine was corrupted). &lt;/p&gt;

&lt;p&gt;Next came the client-server era. To scale applications, a server component was added, and that brought in server administrators. Servers were beefier machines, mostly running Unix (Solaris, HP-UX, and the like). Initially, developers worked directly on these machines, but that model did not scale. The problem was solved by the rise of Java. The slogan "write once, run anywhere" actually meant "write on cheap machines and run on expensive machines." This was the sunset era of WOMM: if it worked on the developer's machine, it mostly worked on the production systems. &lt;/p&gt;

&lt;p&gt;Next was the era of Linux. Though Linux was invented in the early 90s, its real value became visible only in the early 2000s, when several factors combined into a perfect storm. One was the dot-com bust, which sent companies looking for cheaper alternatives. Another was Google making server farms a cool and popular idea. As more and more applications were deployed in a scaled-out fashion on server farms, the dark ages of WOMM began. There was too much traffic on the network, and applications had to care not only about their own reliability but also about network reliability.  &lt;/p&gt;

&lt;p&gt;Then came the cloud era (no pun intended). Migration to the cloud presented an opportunity to rethink application design, helped along by the popularity of service-oriented architecture. The initial popularity of services led to microservices and now nano-services. This created the peak of the dark ages for WOMM: not only were servers scaled out, but even functions were spread across the network. It also opened the door to a lot of policing between development and production, which has only made production problems worse; the developer-to-production gap is now at its widest. &lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>docker</category>
      <category>devops</category>
      <category>containers</category>
    </item>
    <item>
      <title>Why Are We Still Having ‘Works on My Machine’ Problems?</title>
      <dc:creator>rishiyadav</dc:creator>
      <pubDate>Thu, 29 Apr 2021 00:50:22 +0000</pubDate>
      <link>https://dev.to/roost/why-are-we-still-having-works-on-my-machine-problems-31m3</link>
      <guid>https://dev.to/roost/why-are-we-still-having-works-on-my-machine-problems-31m3</guid>
      <description>&lt;p&gt;As much as computer programming has advanced over the past two decades, developers and operators are still dealing with “works on my machine” problems — an application that works great on the laptop but is completely non-functional in production or on a colleague’s laptop. Why are we still having this problem?&lt;/p&gt;

&lt;p&gt;I think of “works on my machine” as a function of how much control developers have over the production environments and how identical the development and production environments are. Over the short history of computer science the pendulum has swung a couple of times, leading to more or less “works on my machine” problems.&lt;/p&gt;

&lt;p&gt;Let’s think back to the early days of computer programming, when programming a computer involved punch cards. Any mistake on a punch card meant you had to punch those cards again. Developers were coding in production, and the cost of each mistake was high. But mistakes were immediately apparent, and developers were working as close to production as possible. Everyone was working on the same machine, so there was no “works on my machine” issue.&lt;/p&gt;

&lt;p&gt;As developers started using client-server systems and then programming on their own machines, the distance between the production environment and the development environment started increasing. This is when “works on my machine” became a serious issue for software engineering teams.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The shorthand “Works on my machine” is a function of how much control developers have over the production environments and how identical the development and production environments are.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Then came the cloud. At first, cloud was really Shadow IT, used and configured by developers to run non-critical applications. At that stage, developers had control over the cloud and “works on my machine” problems decreased.&lt;/p&gt;

&lt;p&gt;Now, though, as cloud has moved from Shadow IT to mainstream and more layers of control have been put on how cloud environments are set up, the distance between what developers are doing in their IDEs and what the production environment looks like is increasing.&lt;/p&gt;

&lt;h1&gt;
  
  
  Seeing What Sticks
&lt;/h1&gt;

&lt;p&gt;You can’t really work directly in the cloud, and there are good reasons we don’t have developers working in the production environment as in the mainframe era. Now we have isolated systems for developers so that they can safely make mistakes while developing. At the same time, developers are being woken up at two in the morning because their code doesn’t work in production, and they don’t have the tools to easily debug the problem when everything worked perfectly on the laptop. Bugs coming home to Roost, someone might say.&lt;/p&gt;

&lt;p&gt;At the moment, most companies are addressing the “works on my machine” problem with a mixture of the following techniques:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Reducing velocity. More robust testing is one strategy for catching potential problems before they reach production. We would like to think that all testing is 100% automated and instantaneous, but that is not true. A more robust testing procedure will slow down development velocity and still not ensure that all “works on my machine” problems are caught before production.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Trial and error. Organizations talk about getting through issues in production by deploying more frequently or by using advanced deployment techniques like canary deployments. This is a euphemistic way of saying that they are using trial and error to solve “works on my machine” problems.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Establishing more stringent deployment procedures. Organizations also try to address “works on my machine” problems by establishing increasingly rigid deployment procedures and putting in both guardrails and roadblocks on the deployment pipeline, hoping that problems will be caught before production.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The problem with these approaches is that none of them actually solves the problem or gives the developer a better way to proactively ensure that the service will work correctly in production before it even enters the integration process.&lt;/p&gt;

&lt;h1&gt;
  
  
  Empowering Developers
&lt;/h1&gt;

&lt;p&gt;After all these years and all these late nights of frustration, you’d think that software engineering as an industry would have figured out a better way to prevent “works on my machine” problems. The real solution, though, has to involve decreasing the distance between the development environment and the production environment so that developers are automatically able to develop in an environment that’s identical to production, including having access to the latest versions of upstream and downstream dependencies and running with the same configurations. As an industry, we talk a lot about shortening the feedback loop. Developers should be alerted that there might be a service compatibility issue or that an update won’t work in production before it leaves their machine, not after a failed canary deployment. That’s the only way we’ll end up eliminating the “works on my machine” problem for good.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>docker</category>
      <category>devops</category>
      <category>cloudnative</category>
    </item>
    <item>
      <title>Communication is not collaboration!</title>
      <dc:creator>rishiyadav</dc:creator>
      <pubDate>Wed, 28 Apr 2021 22:36:56 +0000</pubDate>
      <link>https://dev.to/roost/communication-is-not-collaboration-536o</link>
      <guid>https://dev.to/roost/communication-is-not-collaboration-536o</guid>
      <description>&lt;p&gt;Given how the individual unit of deployment has gotten increasingly smaller, such that the unit of deployment is usually a microservice, it would be tempting to think that individual developers are more able to control how well the code they write (the service they deploy) will work in production. In fact, though, almost the opposite is true. Instead, as the number of services in an environment increases, whether or not one service will work correctly in a production environment has more to do with how well it works with other services rather than the quality of the code.&lt;/p&gt;

&lt;p&gt;This makes collaboration between developers even more critical than ever before, but the packaging of code in microservices actually makes collaboration more challenging. Developers don’t collaborate by working together on a shared codebase — instead they work completely independently on services that could easily be written in different languages, providing the illusion of complete independence.&lt;/p&gt;

&lt;p&gt;In most organizations, ensuring that services work together is a largely informal process. It’s managed by walking over to a colleague’s workstation and asking what updates they’re working on or which library versions they’re using. In a remote work environment, teams rely on Slack messages or Zoom calls to solve issues relating to service interaction. This creates a lot of friction for something that is a critical part of the software development workflow. Failures in service integration can and do cause problems in production.&lt;/p&gt;

&lt;h1&gt;
  
  
  Pay Attention to Service Communications
&lt;/h1&gt;

&lt;p&gt;A perfectly crafted microservice is not going to run in isolation in the production environment. Compatibility issues between each service and upstream and downstream dependencies are just as likely to cause bugs, downtime and poor performance as problems with the code deployed in a container.&lt;/p&gt;

&lt;p&gt;The only way to catch compatibility issues before production is for developers to work together and communicate about the updates they’re working on and how those might impact interdependent services. In the current system, however, this level of collaboration introduces a lot of friction in the development workflow, reducing the individual’s and the team’s velocity. Since developers are generally evaluated based on velocity and code quality — but not necessarily on how well their services work with other services — if the friction remains, developers will be tempted to deploy without fully understanding how dependencies will be impacted.&lt;/p&gt;

&lt;p&gt;Since CI/CD pipelines are generally not set up to test how well services communicate with each other, problems with service fit that aren’t addressed at the development stage generally aren’t found until either a canary deployment goes wrong or there are problems in production.&lt;/p&gt;

&lt;h1&gt;
  
  
  Replacing Tribal Knowledge
&lt;/h1&gt;

&lt;p&gt;One of the other reasons companies need to focus on developer collaboration is that remote work has made it even harder to pass on tribal knowledge. In my experience, no development team has been able to completely wean itself off tribal knowledge. Even teams with extensive technical documentation often have experienced members who maintain a unique understanding of the application’s business logic and can share that knowledge when needed.&lt;/p&gt;

&lt;p&gt;Tribal knowledge also relates to service integration and communication: insights about how services work together often aren’t well-documented, even if each individual service is. Often, the easiest way to see how two services will work together is not to have the two developers responsible for them work together, but to have a senior developer take a look. Especially as development teams work remotely, that type of review isn’t feasible.&lt;/p&gt;

&lt;h1&gt;
  
  
  How to Help Developers Collaborate
&lt;/h1&gt;

&lt;p&gt;Most organizations have ways for teams to communicate: a Slack setup, internal forums, and video-conferencing solutions. But they often don’t realize that these communication platforms aren’t really collaboration platforms; they don’t provide a way for developers to actually work together on interdependent services the way they might on a shared codebase. Communication isn’t enough for true collaboration to happen: developers need a way to work together and see how colleagues’ changes impact their own services.&lt;/p&gt;

&lt;p&gt;Facilitating true collaboration, the kind you would get from two people looking at the same screen and coming up with solutions together, is more challenging than making it easier for developers to talk to each other. Without true collaboration, teams will continue to have trouble with communication between services — and there’s no Slack for that.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>docker</category>
      <category>devops</category>
      <category>containers</category>
    </item>
  </channel>
</rss>
