<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Margot Mueckstein</title>
    <description>The latest articles on DEV Community by Margot Mueckstein (@makky).</description>
    <link>https://dev.to/makky</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1153319%2F77b69706-4e88-451e-85c2-50314f98d4cc.png</url>
      <title>DEV Community: Margot Mueckstein</title>
      <link>https://dev.to/makky</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/makky"/>
    <language>en</language>
    <item>
      <title>The root cause of “works on my machine”</title>
      <dc:creator>Margot Mueckstein</dc:creator>
      <pubDate>Thu, 27 Jun 2024 12:20:46 +0000</pubDate>
      <link>https://dev.to/makky/the-root-cause-of-works-on-my-machine-2ec0</link>
      <guid>https://dev.to/makky/the-root-cause-of-works-on-my-machine-2ec0</guid>
      <description>&lt;p&gt;I recently read an article about rare illnesses. Right in the opening sentence, it said something that confused me: Rare illnesses are the most common illnesses. How does this make sense?&lt;/p&gt;

&lt;p&gt;It makes sense because the “rare” refers to the number of people affected by one individual type of rare illness, while the “most common” refers to the total number of people who suffer from any arbitrary type of rare illness.&lt;/p&gt;

&lt;p&gt;It is exactly the same with the problem of “works on my machine”. The number of different things that can go wrong when deploying software is so big that each individual problem might only arise under very specific circumstances and is therefore rare in itself.&lt;/p&gt;

&lt;p&gt;This is the root cause of “works on my machine”: &lt;strong&gt;the most common types of problems are rare problems. Most problems are unique.&lt;/strong&gt; When you plot the frequency of the different problems that arise when deploying software in different environments, you get a long tail of rare and unique problems.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fshyayc4umypjslywikqg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fshyayc4umypjslywikqg.png" alt="The long tail of problems in software deployment" width="800" height="278"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;This long tail of problems makes it extremely difficult to ensure reliable deployment processes that work in many different environments.&lt;/strong&gt; When deploying software in a production setting, this is typically solved by standardising the deployment environment. But this is pretty much impossible when it comes to local deployment on individual developers’ laptops. Even if the basic setup is standardised, the sheer variety of hardware and software that can – and will – be present on developers’ laptops very often leads to problems in local deployment of software. And the majority of these problems are unique.&lt;/p&gt;
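&lt;p&gt;To make this concrete, here is a minimal sketch (the tool list and the fingerprint format are illustrative, not any particular product’s API) of the kind of facts that make one developer’s laptop differ from the next:&lt;/p&gt;

```python
import platform
import shutil
import subprocess

def tool_version(tool):
    """Return the first line of a CLI tool's --version output, or None if absent."""
    path = shutil.which(tool)
    if path is None:
        return None
    out = subprocess.run([tool, "--version"], capture_output=True, text=True)
    return out.stdout.strip().splitlines()[0] if out.stdout.strip() else None

def environment_fingerprint(tools=("docker", "node", "python3")):
    """Collect the environment facts that typically differ between laptops."""
    return {
        "os": platform.system(),
        "os_version": platform.release(),
        "arch": platform.machine(),
        **{tool: tool_version(tool) for tool in tools},
    }
```

&lt;p&gt;Comparing such fingerprints across a team usually reveals dozens of small differences – each one a candidate cause for a unique “works on my machine” problem.&lt;/p&gt;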

&lt;p&gt;&lt;strong&gt;Problems that affect a majority or even just several developers are typically automated away quickly.&lt;/strong&gt; The problem is that this still leaves &lt;strong&gt;a lot of problems that affect only one or a few developers.&lt;/strong&gt; Addressing these rare problems in deployment automation often doesn’t warrant the time and effort that would go into it, because so few people are affected. So &lt;strong&gt;individual developers are left to deal with these problems on their own.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;As a consequence, many developers spend a lot of time troubleshooting their local deployments.&lt;/strong&gt; Depending on the complexity of local deployment, this can amount to a little or a lot of a developer’s time. On average, developers spend around 10% of their time troubleshooting their development environments. For some developers, this number is a lot higher.&lt;/p&gt;

&lt;h2&gt;
  
  
  How to squash the long tail
&lt;/h2&gt;

&lt;p&gt;There is only one way to get rid of this time sink: by addressing the root causes of many individual issues at once, and providing solutions that prevent them from arising in the first place.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Containerisation&lt;/strong&gt; is one such solution that prevents a lot of issues from arising, by providing isolated and standardised environments for software components to run in. Package managers that allow installing and using several versions of a package next to each other (&lt;strong&gt;isolated packages&lt;/strong&gt;, e.g. Nix) are another neat approach that chops off a large section of the long tail of unique problems. &lt;strong&gt;Standardisation&lt;/strong&gt; of developers’ laptops is an often-tried approach with mixed outcomes (Read: &lt;a href="https://cloudomation.com/en/cloudomation-blog/problems-with-local-development-environment-containerisation-as-the-solution/"&gt;Problems with the local development environment: Is containerisation the solution?&lt;/a&gt;), but at least conceptually it is sensible, because it tries to address an underlying cause instead of thousands of individual problems.&lt;/p&gt;

&lt;p&gt;Recently, another approach has gained popularity: &lt;strong&gt;remote development environments (RDEs)&lt;/strong&gt; or &lt;strong&gt;&lt;a href="https://cloudomation.com/en/cloud-development-environments/"&gt;cloud development environments (CDEs)&lt;/a&gt;&lt;/strong&gt;. These are work environments that are provided remotely where software developers can deploy and run the software they work on, together with any development tools they need.&lt;/p&gt;

&lt;p&gt;As complete environments, RDEs go several steps further than containers by providing isolated environments from the operating system up. RDEs are also a lot simpler to standardise because they exist in addition to developers’ laptops: developers are free to use whatever hardware they like and install whatever software they want on their laptops, without interfering with their standardised RDEs. As such, RDEs get rid of the long tail of unique problems almost entirely.&lt;/p&gt;

&lt;p&gt;If you want to learn more about RDEs/CDEs, here are some additional resources:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://cloudomation.com/en/cloudomation-blog/what-are-remote-development-environments/"&gt;What are RDEs?&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://cloudomation.com/en/cloudomation-blog/remote-development-environments-tools/"&gt;7 RDE tools at a glance&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://cloudomation.com/en/cloudomation-blog/how-cdes-work/"&gt;How CDEs work – no bs blog post&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://cloudomation.com/en/remote-development-environments-vs-local-development-environments-comparison/"&gt;Whitepaper: Local Development Environments vs. Remote Development Environments&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>softwaredevelopment</category>
      <category>cde</category>
      <category>devex</category>
    </item>
    <item>
      <title>Does every developer need to know how to deploy software?</title>
      <dc:creator>Margot Mueckstein</dc:creator>
      <pubDate>Thu, 06 Jun 2024 11:00:33 +0000</pubDate>
      <link>https://dev.to/makky/does-every-developer-need-to-know-how-to-deploy-software-3gl7</link>
      <guid>https://dev.to/makky/does-every-developer-need-to-know-how-to-deploy-software-3gl7</guid>
      <description>&lt;p&gt;Talking to directors of engineering and CTOs, I have heard many of them say that they absolutely expect every developer to know how to deploy their software. When I ask why, the answer is that knowing how the software is deployed makes developers better developers.&lt;/p&gt;

&lt;p&gt;In this blog post, I want to take a look at this belief. Does knowledge about deployment make developers better at their job? If yes, how so? And what is the best way to teach developers about deployment?&lt;/p&gt;

&lt;p&gt;I also recorded a video on this topic: &lt;a href="https://www.youtube.com/watch?v=ZaEFX5DS25g"&gt;https://www.youtube.com/watch?v=ZaEFX5DS25g&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Why do most developers know about deployment?
&lt;/h2&gt;

&lt;p&gt;A majority of developers run the software that they work on locally, as part of their development environment. This is why many developers know a lot about the deployment of their software: Because they do it regularly in order to validate their code changes in local deployments.&lt;/p&gt;

&lt;p&gt;However &lt;strong&gt;the sad truth is that deployments are often very complicated, and developers spend a lot of time and nerves getting local deployments to work.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Fortunately, there is an alternative. &lt;a href="https://cloudomation.com/en/cloud-development-environments/"&gt;Cloud Development Environments&lt;/a&gt; (CDEs) make it possible for each developer to have a private playground where they can validate their code changes before they commit them to a shared repository. CDEs are functionally equivalent to a local deployment, with the difference that they are fully automated and provided to developers remotely. With CDEs, developers can build, test and deploy their code in their own private environment with fast feedback loops and no danger of interfering with other developers’ work – and without having to know anything about deployment.&lt;/p&gt;

&lt;p&gt;This is why it makes sense now to ask if developers need to know about the deployment of the software they work on. Previously, there was no other option. Now that there is an alternative, I argue that we need to rethink the scope of what developers have to do and know about.&lt;/p&gt;

&lt;h2&gt;
  
  
  What are the upsides of knowing about deployment?
&lt;/h2&gt;

&lt;p&gt;Knowing how to deploy their software can enable developers to make better decisions when writing code, particularly around:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Configuration of their software&lt;/li&gt;
&lt;li&gt;Core architecture of their software&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  1. Configuration of their software
&lt;/h3&gt;

&lt;p&gt;When a developer has to deploy the software they work on themselves, they will intimately know how this software has to be configured. Since developers are also the ones who decide how configuration can be specified for their software, they are much more likely to consider the user experience of configuration when developing configurable features. This is probably the main benefit of forcing developers to deploy their software. Choices about how software can be configured are something a (backend) developer has to make reasonably frequently.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Core architecture of their software
&lt;/h3&gt;

&lt;p&gt;The core architecture of a software product hugely influences how simple or complex its deployment is. Deployment therefore has to be considered when deciding on the core architecture. However, since architecture decisions are typically made once, early in development, it is the architects or CTOs who take these core decisions who have to know how they plan to deploy the software. For the majority of software developers, this is irrelevant.&lt;/p&gt;

&lt;h3&gt;
  
  
  Summary of upsides
&lt;/h3&gt;

&lt;p&gt;So we are left with configuration. It is undoubtedly true that a developer who has had to deploy software that is hard to configure will be more motivated to make their own software easily configurable. But relying on this as the mechanism to ensure well-designed configuration is a bad idea. Like any aspect of the software that has a large impact on user experience (in this case the experience of the deployment and operations team), it should be designed by a knowledgeable specialist, who provides guidance on how configuration should be done that other developers then follow. This is how feature design works, after all. Otherwise, each developer still decides on their own what they consider “good and simple configuration”, which will differ from developer to developer, and most likely again leads to a poor configuration experience overall.&lt;/p&gt;

&lt;h2&gt;
  
  
  What are the downsides of knowing about deployment?
&lt;/h2&gt;

&lt;p&gt;There are two main downsides of requiring developers to know how to deploy their software:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Time sink: It eats up time and headspace.&lt;/li&gt;
&lt;li&gt;The myth of the full-stack developer: Few people have the skill and inclination to be good at both coding and deployment.&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  1. Time sink
&lt;/h3&gt;

&lt;p&gt;Knowing about deployment, and having to deploy their software on their own laptops regularly as part of their daily work, are two different things. Unfortunately, the latter is the sad reality for many developers, and it is justified with the “need to know about deployment”. I have already described that knowledge about the deployment of a software product has only marginal benefits for developers. Beyond this, the second misconception is that daily local deployments are a good way to teach developers the things they should know about deployment. They are not.&lt;/p&gt;

&lt;p&gt;If you want your developers to know about the pains of deployment, it may be a good idea to ask them to manually deploy the software they work on as part of their onboarding, or as a regular exercise every once in a while. If you really think that knowing about deployment is valuable for your developers, then this is a good way to teach them: If the deployment is painful, they will remember it very well.&lt;/p&gt;

&lt;p&gt;If developers do local deployments daily, they will get used to many of its pains and lose awareness. That removes even the marginal benefit of developers’ knowledge about deployment: they might not even try to make it better anymore.&lt;/p&gt;

&lt;p&gt;But the worst part is that it eats up developers’ time and headspace on a daily basis. It is a cost factor that many companies are barely aware of, because time spent troubleshooting local deployments is typically not tracked separately. Instead, it is padded on top of each task that a developer works on. But &lt;strong&gt;the time spent on local deployment can reach as much as 25% of a developer’s time (in extreme cases), and is typically somewhere between 5 and 10% of a developer’s time&lt;/strong&gt; when it works fairly well.&lt;/p&gt;

&lt;p&gt;That is a LOT of time!&lt;/p&gt;

&lt;h3&gt;
  
  
  2. The myth of the full-stack developer
&lt;/h3&gt;

&lt;p&gt;The scope of what a developer is supposed to know is seemingly endless. Even though many specific job titles exist that describe people whose primary focus is deployment and operation of software (operations, devops, site reliability engineer (SRE), …), developers are often assumed to be able to fulfill those functions in addition to their primary function. Often, testing, user experience, architecture, backend and frontend development are also mingled in, leading to the all-encompassing job description of full-stack developer.&lt;/p&gt;

&lt;p&gt;There are people who know a lot about many aspects of software development, who can reasonably claim to be full-stack developers and do a decent job in any of the mentioned areas. But for most developers, working as full-stack developers will result in products like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9e7uexo9a3t9syoouie8.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9e7uexo9a3t9syoouie8.jpg" alt="Frontend and backend development" width="800" height="583"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The truth is that most people, including developers, hugely benefit from specialisation. Having one area to focus on where one can build up knowledge allows one to reach higher levels of productivity and expertise much faster than when developers are asked to learn about everything at once.&lt;/p&gt;

&lt;p&gt;This is especially true for complex software. State-of-the-art business software nowadays often has many components and highly complex deployment logic. As long as deployment can be expressed with a simple “npm run build”, any developer will be able to handle it. But that is hardly ever the case anymore. Many developers spend 10% or more of their time just managing local deployments, and the majority of that time goes into troubleshooting. But in order to troubleshoot local deployments, developers do not only have to spend time – they also have to know a lot about tools like Docker or minikube, or other tools specific to the deployment of their software.&lt;/p&gt;

&lt;p&gt;Bottom line: &lt;strong&gt;Expecting a very broad skillset from developers will exclude the majority of developers from fulfilling such a role successfully.&lt;/strong&gt; Even among the few full-stack developers who do exist, each one will have specialties and areas where they are less skilled. Finding people who are good in one area is much simpler and will lead to much better outcomes for everyone.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Deployment is an important aspect of any software product. It is probably a good idea for most developers to know at least a little bit about the deployment of their software, much in the same way as it is a good idea for each developer to know how to use the software they work on, so that they can make better decisions about user experience.&lt;/p&gt;

&lt;p&gt;However, in much the same way, developers are not generally required (or trusted) to make decisions about user experience on their own, even if they have intimate knowledge of the software. It is simply not their speciality. User experience designers exist for a reason: it is a complex area of expertise that requires knowledge and inclination that doesn’t necessarily overlap with that of a developer whose job it is to write code.&lt;/p&gt;

&lt;p&gt;It is exactly the same with deployment. Deployment experts exist for a reason – because it is a complex area of expertise that not every developer should be expected to master, on top of their development expertise. Developers should be required to follow best practices or company-internal guidelines when making decisions that influence deployment. But they should not have to spend hours and hours each day struggling with local deployment.&lt;/p&gt;

&lt;p&gt;My conclusion: CTOs and directors of engineering expect their developers to handle deployment because:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;It has always been this way and they may not yet realise that it is not necessary anymore.&lt;/li&gt;
&lt;li&gt;Knowing about deployment serves as a proxy for the general skill and knowledge of a developer. (I could also put it more bluntly: It propagates the unhelpful stereotype of the all-knowing full-stack developer as the ideal developer.)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Neither reason stands up to scrutiny.&lt;/p&gt;

&lt;h2&gt;
  
  
  Bottom line: It’s expensive and has few benefits
&lt;/h2&gt;

&lt;p&gt;To summarise: Few developers have the inclination, experience and skillset to fulfil the stereotype of the full-stack developer who knows how to code and deploy their software. Knowing about the deployment of a software product has only marginal benefits at best, but requires a lot of time and energy to learn and manage.&lt;/p&gt;

&lt;p&gt;Consequently, forcing developers to learn about Docker, minikube, network configurations and a whole lot of other things and tools that they need only for local deployment is a huge waste. Developers generally don’t even like doing this. It is a drag on both productivity and happiness.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;This is good news!&lt;/strong&gt; It means that there is a huge amount of time and headspace that developers could stop investing in local deployment. There is a big opportunity to make developers a lot happier and more productive. &lt;/p&gt;

&lt;p&gt;And fortunately, it is easily possible to spare developers the pains of local deployments. CDEs are tools specifically designed to do this. They allow developers to focus on writing great code, without having to worry about deployment.&lt;/p&gt;

&lt;p&gt;More about CDEs:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Article: &lt;a href="https://cloudomation.com/en/cloudomation-blog/where-cdes-bring-value-and-where-they-dont/"&gt;Where CDEs bring value (and where they don’t)&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Article: &lt;a href="https://cloudomation.com/en/cloudomation-blog/remote-development-environments-tools/"&gt;Cloud / Remote Development Environments: 7 tools at a glance&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Whitepaper: &lt;a href="https://cloudomation.com/en/whitepaper-en/cde-vendors-feature-comparison/"&gt;Full list of CDE vendors (+feature comparison table)&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>development</category>
      <category>devops</category>
      <category>devex</category>
    </item>
    <item>
      <title>How to work with shared dev clusters (and why) - Part III: What works, and what doesn't work</title>
      <dc:creator>Margot Mueckstein</dc:creator>
      <pubDate>Tue, 07 May 2024 06:26:02 +0000</pubDate>
      <link>https://dev.to/makky/how-to-work-with-shared-dev-clusters-and-why-part-iii-what-works-and-what-doesnt-work-gf2</link>
      <guid>https://dev.to/makky/how-to-work-with-shared-dev-clusters-and-why-part-iii-what-works-and-what-doesnt-work-gf2</guid>
      <description>&lt;p&gt;This is the last article of this 3 part series. In the &lt;a href="https://dev.to/makky/how-to-work-with-shared-dev-clusters-and-why-part-i-stop-laptops-from-burning-5c9j"&gt;first article&lt;/a&gt;, you learned about the challenges when devs who work on Kubernetes-based applications try to run all services locally. In the &lt;a href="https://dev.to/makky/how-to-work-with-shared-dev-clusters-and-why-part-ii-why-backwards-compatibility-is-overrated-1cd9"&gt;second one&lt;/a&gt; I showed you how quickly factorials grow when services aren’t shareable, why you most certainly already have a complex setup (it’s just less visible when you only take a look at the individual level of developers) and what the real costs of running everything locally are. &lt;/p&gt;

&lt;p&gt;Now we take a look at the solutions.&lt;/p&gt;

&lt;h2&gt;
  
  
  What works?
&lt;/h2&gt;

&lt;p&gt;Of the many developers I have talked to, the only happy ones were those who:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Were able to build locally and validate their changes before pushing to staging,&lt;/li&gt;
&lt;li&gt;Had full access to the services they were working on, either by running them locally (majority) or having full access to a remote deployment (minority),&lt;/li&gt;
&lt;li&gt;Had highly automated deployment available to every developer, removing (or reducing) the need to deal with the factorial complexity of their deployments.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  How to get there
&lt;/h2&gt;

&lt;p&gt;Getting to this point means:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Having automation that is capable of dealing with factorial complexity,&lt;/li&gt;
&lt;li&gt;Having remote computing resources available to enable deployments for each developer,&lt;/li&gt;
&lt;li&gt;Making services shareable so that dev deployments are affordable.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Fortunately, 1 &amp;amp; 2 are things you can buy. Unfortunately, 3 is something that you have to take care of internally: Your software needs to be able to support sharing services. Otherwise, the cost of providing dev deployments can be prohibitively high. However, as explained in &lt;a href="https://dev.to/makky/how-to-work-with-shared-dev-clusters-and-why-part-ii-why-backwards-compatibility-is-overrated-1cd9"&gt;part two&lt;/a&gt;, this is something that can be worked on iteratively, one service at a time. This will require a change of thinking: Let backwards compatibility go out the window and focus on building something that will work in the future as well. &lt;/p&gt;

&lt;h2&gt;
  
  
  (1) Dynamic configuration management + modular automation
&lt;/h2&gt;

&lt;p&gt;Automation that is capable of dealing with factorial complexity means two things: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The ability to express multi-faceted constellations of dependencies and constraints in a maintainable way, which is the basis for creating configurations for specific deployments dynamically&lt;/li&gt;
&lt;li&gt;Automation that is modular, allowing the re-use of automation steps to create many different outcomes (i.e. deployments)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Dynamic configuration management is an approach to managing configuration that makes it possible to deal with factorial complexity without having hundreds of different config files flying around that are all slightly different and hard to maintain. Instead, you have a system - a configuration database or a similar system - that allows you to define a data model that describes your dependencies and constraints. &lt;/p&gt;

&lt;p&gt;Modular automation that is associated with a dynamic configuration management system allows you to automatically deploy a large number of different configurations, making it simple to manage factorial complexity. It ensures that all dependencies and constraints are taken into account, and provides fully automatic deployment to anyone. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Cloudomation Engine is an automation platform with a built-in dynamic configuration management system.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cloudomation.com/en/cloudomation-engine/"&gt;Cloudomation Engine&lt;/a&gt; is an automation platform that is built for precisely this: it has a built-in configuration database that allows you to define custom configuration data models that describe your deployments. You can start with templates that describe common deployment scenarios and extend it to your own needs. You can also ask us to create a deployment data model for you based on your existing configuration files and deployment scripts. &lt;/p&gt;

&lt;p&gt;For example, in this data model you could define which services of which versions are compatible with each other. You could also define which services are shareable and which aren’t. You can also define any additional deployment options, for example that your software can be deployed to an EKS or a GKE cluster - or anything else that is relevant for the deployment of your software. &lt;/p&gt;
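&lt;p&gt;As a sketch of what such a data model could look like (the classes and field names below are made up for illustration and are not the actual Cloudomation Engine API):&lt;/p&gt;

```python
from dataclasses import dataclass, field

@dataclass
class Service:
    """One service in a deployment, with its compatibility constraints."""
    name: str
    version: str
    shareable: bool = False
    # maps a dependency's name to the versions of it this service accepts
    compatible_with: dict = field(default_factory=dict)

def validate_deployment(services):
    """Return a list of violated compatibility constraints (empty means valid)."""
    by_name = {s.name: s for s in services}
    problems = []
    for s in services:
        for dep, allowed in s.compatible_with.items():
            if dep in by_name and by_name[dep].version not in allowed:
                problems.append(f"{s.name} {s.version} needs {dep} in one of {allowed}")
    return problems
```

&lt;p&gt;For example, a deployment of Service("api", "2.3", compatible_with={"db": ["14", "15"]}) together with Service("db", "13", shareable=True) would report one violated constraint, while the same deployment with db version 14 would pass.&lt;/p&gt;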

&lt;p&gt;The next step is to connect this data model that describes your deployments with modular automation to deploy the software with any of the possible configuration options. &lt;/p&gt;

&lt;p&gt;As a first step, your data model can be small and express only the most common configurations, for which you probably already have deployment scripts. These existing scripts can be referenced, allowing you to get started quickly and reuse existing scripts and configs.&lt;/p&gt;

&lt;p&gt;Over time, you can separate your scripts into more modular automation steps that allow you to dynamically create more and more different deployment options. The data model can be extended in lockstep, allowing you to extend your automatic deployment capabilities iteratively. &lt;/p&gt;
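&lt;p&gt;The idea of modular automation steps that are combined dynamically can be sketched like this (a toy model in plain Python, not the actual Engine API):&lt;/p&gt;

```python
# Each step is an atomic function that takes and returns the deployment
# context; a concrete deployment is just a dynamically chosen list of steps.
def create_namespace(ctx):
    ctx["namespace"] = "dev-" + ctx["developer"]
    return ctx

def deploy_service(name):
    """Factory producing a reusable step that deploys one named service."""
    def step(ctx):
        ctx.setdefault("deployed", []).append(name)
        return ctx
    return step

def run_pipeline(steps, ctx):
    for step in steps:
        ctx = step(ctx)
    return ctx

# Assemble one developer's deployment from reusable steps:
pipeline = [create_namespace, deploy_service("api"), deploy_service("db")]
```

&lt;p&gt;Because each step is atomic, the same steps can be recombined to produce many different deployments – which is what makes the factorial number of configurations manageable.&lt;/p&gt;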

&lt;p&gt;In addition, &lt;a href="https://cloudomation.com/en/cloudomation-devstack/"&gt;Cloudomation DevStack&lt;/a&gt; is a platform for &lt;a href="https://cloudomation.com/en/cloud-development-environments/"&gt;cloud development environments&lt;/a&gt; that allows you to combine the automatic deployment of your software with automatic deployment of development tools. This is exposed to developers via a self-service portal that allows each developer to deploy full development environments that include the software they work on. If developers want to deploy some services locally and connect them to a remote cluster, DevStack supports them by providing relevant config files and scripts that are tailored to the specific deployment they need. Where required, DevStack automatically deploys the remote cluster or remote services to an existing cluster, or just provides the relevant configurations if all required remote resources already exist. &lt;/p&gt;

&lt;p&gt;Cloudomation Engine and Cloudomation DevStack make it possible for developers to work on complex Kubernetes-based software without having to deal with the complexity of running all services locally, or of running some services locally and connecting them to a remote cluster.&lt;/p&gt;

&lt;h2&gt;
  
  
  (2) Sufficient compute resources
&lt;/h2&gt;

&lt;p&gt;Compute resources to run the application developers work on can be provided locally – by buying developers really beefy laptops – or remotely. Remote computation becomes the only option once the application becomes too heavy-duty for laptops. &lt;/p&gt;

&lt;p&gt;Fortunately, buying remote computation is easy. The problem here is not buying computation from a cloud provider, but the fact that this can quickly become very expensive. At the point where developers are not able to run the application locally anymore, the required compute resources for the application are large enough to represent significant cost in a cloud environment. &lt;/p&gt;

&lt;p&gt;There are ways to manage cost even in such cases:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Downscaling, hibernating and removing unused resources automatically and swiftly,&lt;/li&gt;
&lt;li&gt;Leaving as much locally as possible, for example by deploying non-shareable services locally and providing only a smaller subset of shareable services remotely,&lt;/li&gt;
&lt;li&gt;Buying hardware instead of renting cloud computation: Running a local dev cluster in your office can be a lot cheaper than renting the same computation in the cloud.&lt;/li&gt;
&lt;/ul&gt;
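&lt;p&gt;The first bullet – removing unused resources swiftly – is straightforward to automate. Here is a sketch, where the environment objects and their hibernate() method stand in for whatever API your CDE platform offers:&lt;/p&gt;

```python
IDLE_LIMIT_MINUTES = 30

def reap_idle(environments, idle_limit=IDLE_LIMIT_MINUTES):
    """Hibernate every running environment idle longer than the limit.

    Returns the environments that were hibernated, e.g. for logging.
    """
    reaped = []
    for env in environments:
        if env.running and env.idle_minutes >= idle_limit:
            env.hibernate()
            reaped.append(env)
    return reaped
```

&lt;p&gt;Run on a schedule (e.g. every few minutes), a reaper like this keeps the cloud bill proportional to actual use rather than to the number of provisioned environments.&lt;/p&gt;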

&lt;p&gt;All of this requires deployment automation to already be in place so that using and managing the remote compute resources efficiently is doable for developers.&lt;/p&gt;

&lt;h2&gt;
  
  
  (3) Making services shareable so that remote compute becomes affordable
&lt;/h2&gt;

&lt;p&gt;As described in &lt;a href="https://dev.to/makky/how-to-work-with-shared-dev-clusters-and-why-part-ii-why-backwards-compatibility-is-overrated-1cd9"&gt;part two&lt;/a&gt;, the best way for long-term cost efficiency while still providing developers with the ability to validate their work is to work on the shareability of your services. &lt;/p&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;There are two constraints for developers working on complex microservices architectures: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Factorial complexity of running the software&lt;/li&gt;
&lt;li&gt;Limited local computing resources&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Both problems increase with the age and complexity of the software that is being developed. For small and medium-sized software companies with software of medium size and complexity, this leads to a lot of developers’ time (10-25%) being spent on managing local deployments. For larger software companies with complex and large software products, this often means that developers are simply not able to validate their changes to the code before they push them to a shared repository. The result is costly quality issues in the software.&lt;/p&gt;

&lt;p&gt;To solve this, both constraints have to be addressed. &lt;/p&gt;

&lt;p&gt;Factorial complexity can be solved with dynamic configuration management and modular automation. &lt;/p&gt;

&lt;p&gt;Dynamic configuration management makes it possible to automatically create and maintain configurations for complex, multi-faceted constellations of dependencies and constraints. Dependencies and constraints are formulated once, and configurations are then created automatically based on the defined rules.&lt;/p&gt;
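&lt;p&gt;To make this concrete, here is a minimal sketch (my own illustrative Python, not a product feature): dependencies and constraints are declared once as data and rules, and valid configurations are generated from them automatically.&lt;/p&gt;

```python
from itertools import product

# Hypothetical example: each service exists in several versions, and a
# rule declares which combinations are compatible. Instead of maintaining
# every valid configuration by hand, we declare the rules once and
# generate the configurations automatically.
versions = {
    "api": ["1.0", "2.0"],
    "worker": ["1.0", "2.0"],
    "db": ["5.7"],
}

def is_compatible(config):
    # Single declared constraint: api and worker must share a major version.
    return config["api"][0] == config["worker"][0]

def generate_configs(versions, rule):
    names = sorted(versions)
    for combo in product(*(versions[n] for n in names)):
        config = dict(zip(names, combo))
        if rule(config):
            yield config

configs = list(generate_configs(versions, is_compatible))
# Only the combinations satisfying the rule survive:
# api 1.0 + worker 1.0, and api 2.0 + worker 2.0 (each with db 5.7).
```

&lt;p&gt;Adding a new version or a new constraint means changing one line of data or one rule, not rewriting every configuration.&lt;/p&gt;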

&lt;p&gt;Modular automation means automating each individual step that creates one part of a deployment atomically, so that the steps can be combined dynamically to create different deployments. &lt;/p&gt;
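&lt;p&gt;The idea of modular automation can be sketched like this (again my own illustrative Python; the step names are hypothetical): each step atomically creates one part of a deployment, and different deployments are just different compositions of the same steps.&lt;/p&gt;

```python
# Hypothetical sketch: each atomic step produces one part of a deployment;
# deployments are different compositions of the same reusable steps.
def create_namespace(env):
    return f"namespace:{env}"

def deploy_service(env, name, version):
    return f"{name}@{version} in {env}"

def deploy(env, services):
    # 'services' maps service name to version; the steps are combined
    # dynamically instead of living in one monolithic script.
    steps = [create_namespace(env)]
    steps += [deploy_service(env, n, v) for n, v in services.items()]
    return steps

dev = deploy("dev", {"api": "2.0", "db": "5.7"})
staging = deploy("staging", {"api": "1.0", "worker": "1.0", "db": "5.7"})
```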

&lt;p&gt;Our products, Cloudomation Engine &amp;amp; DevStack, are built to do precisely this: they take existing scripts and configuration files and extract from them a model of your deployments that clearly shows dependencies and constraints, which can then be extended and adapted as needed. Automatic creation of these deployments can initially reuse as much of your existing automation as possible, while iteratively moving towards modular automation that is maintainable and usable long-term.&lt;/p&gt;

&lt;p&gt;The second problem, limited local computing resources, can be addressed in the time-tested way of “throwing resources at the problem”: by simply buying the required computation from cloud providers. Depending on the resource requirements of the software, this can be prohibitively expensive. To solve this, microservices need to become shareable, i.e. multi-tenant capable. To get your software to this point, you will likely have to let go of backwards compatibility and make some fundamental changes to your microservices architecture. Fortunately, this also pays off by vastly increasing the scalability of your software, reducing cost in production, and reducing complexity in development, making your developers faster and allowing you to bring new features to market more quickly. Mid- to long-term, it will most likely also increase the quality of your software. As shown in &lt;a href="https://dev.to/makky/how-to-work-with-shared-dev-clusters-and-why-part-ii-why-backwards-compatibility-is-overrated-1cd9"&gt;an example in part II&lt;/a&gt;, backwards compatibility offers much smaller cost savings than service shareability anyway, making a clear case for investing in shareability rather than backwards compatibility. &lt;/p&gt;

&lt;p&gt;If you like our content, please consider following us on &lt;a href="https://www.linkedin.com/company/starflows"&gt;LinkedIn&lt;/a&gt;, &lt;a href="https://twitter.com/cloudomationcom"&gt;X&lt;/a&gt; or subscribe to our &lt;a href="https://cloudomation.com/en/newsletter/"&gt;newsletter&lt;/a&gt; :) &lt;br&gt;
Thank you!&lt;/p&gt;

</description>
      <category>devex</category>
      <category>cde</category>
    </item>
    <item>
      <title>How to work with shared dev clusters (and why) - Part II: Why backwards compatibility is overrated</title>
      <dc:creator>Margot Mueckstein</dc:creator>
      <pubDate>Thu, 18 Apr 2024 08:18:51 +0000</pubDate>
      <link>https://dev.to/makky/how-to-work-with-shared-dev-clusters-and-why-part-ii-why-backwards-compatibility-is-overrated-1cd9</link>
      <guid>https://dev.to/makky/how-to-work-with-shared-dev-clusters-and-why-part-ii-why-backwards-compatibility-is-overrated-1cd9</guid>
      <description>&lt;p&gt;&lt;a href="https://dev.to/makky/how-to-work-with-shared-dev-clusters-and-why-part-i-stop-laptops-from-burning-5c9j"&gt;In part I of this series&lt;/a&gt; I wrote about the challenges when developers who work on Kubernetes-based applications try to run all services locally: Connecting local to remote services, cluster sharing, multi-tenant capabilities of remote services, versioning and managing compatibility of services. You also read about factorial complexities and why the management of version compatibility is tedious when several services introduce breaking changes or you need to be able to support different versions of your software. With each dimension of variability, the possible number of constellations in which your software can exist grows very quickly. Each factor that means one service is incompatible with another service would mean that that specific service has to exist in two (or several) different configurations. Each factor that means one service cannot be accessed by several other services means that it would have to exist once for each possible constellation of other services.&lt;/p&gt;

&lt;p&gt;Now in part II, to understand what this means, let’s first take a look at an example. Then I explain why you most certainly already have this kind of complexity, and what the real costs of running everything locally are.&lt;/p&gt;

&lt;h2&gt;
  
  
  An example
&lt;/h2&gt;

&lt;p&gt;Let’s consider an example to show how quickly factorials can grow. For the sake of simplicity, I will consider only whether or not services are shareable, and how many versions need to be supported. I assume that none of the services are backwards compatible across the supported versions. Even if one or two were, it wouldn’t make much difference: it would only subtract one or two from the final number. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5suzcp9j7rmpsova985f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5suzcp9j7rmpsova985f.png" alt="Table showing increasing number of services required" width="630" height="613"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;* Depending on the performance of each service, several instances of each service may need to exist in order to service 100 developers, but they can scale down to this minimum number when there is low load. &lt;/p&gt;

&lt;h2&gt;
  
  
  Don’t be afraid of breaking changes
&lt;/h2&gt;

&lt;p&gt;What this example neatly shows is that version compatibility is really the lesser of the two problems. Each additional version you need to support only adds one multiple of the number of services (6 in our example). In this example, supporting five different versions would require a still manageable minimum of 30 services. Even if half of the services were backwards compatible and could be shared between different versions, this would reduce the final number only to 18 (3 version-specific services times 5 versions, plus 3 shared ones), showing that the value of backwards compatibility really isn’t that great.&lt;/p&gt;

&lt;p&gt;The same principle applies to other factors besides versions that affect compatibility of services. &lt;/p&gt;

&lt;p&gt;On the other hand, each unshareable service adds one multiple of the number of developers (or customers), which is typically much larger than the number of services. Having just one service that is not shareable means 100 instances of that service instead of one in our example! &lt;/p&gt;
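&lt;p&gt;The arithmetic behind these numbers can be written down in a few lines of Python (illustrative only; the function and its parameters are mine): shareable services need one instance per supported version, while each unshareable service needs one instance per developer and version.&lt;/p&gt;

```python
# Minimum number of service instances for a shared dev cluster,
# using the numbers from the example above (illustrative only).
def min_instances(shareable, unshareable, developers, versions):
    # Per version: shareable services need 1 instance each,
    # unshareable services need 1 instance per developer.
    return versions * (shareable + unshareable * developers)

print(min_instances(6, 0, developers=100, versions=1))  # 6
print(min_instances(5, 1, developers=100, versions=1))  # 105
print(min_instances(6, 0, developers=100, versions=5))  # 30
```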

&lt;p&gt;Considering the huge cost that backwards compatibility introduces into development, it would be a lot more sensible to let go of backwards compatibility and focus on introducing multi-tenant capability to a larger number of services.&lt;/p&gt;

&lt;p&gt;Side note: This only looks at the complexities in development. Introducing breaking changes can have other consequences in production, such as customers needing to change their processes and integrations if a breaking change is introduced to the API. However, here I talk about breaking changes to the inter-services communication and not to external APIs (can be the same, can be different).&lt;/p&gt;

&lt;h2&gt;
  
  
  You are not adding complexity, you are just moving it
&lt;/h2&gt;

&lt;p&gt;You’d be forgiven for thinking that you don’t want this kind of complexity in your setup. The problem is: you most likely already have it. It is just that right now, it is managed by individual developers. Each of them needs to figure out how to run their application locally while considering version compatibility and dealing with limited resources on their laptops. When every developer is expected to run everything locally, this by default results in a full setup per developer. Nothing can be shared. In our example, this would be the maximum number of 600 services - each developer would have to run the full 6 on their laptop.&lt;/p&gt;

&lt;p&gt;At the level of the individual developer, this kind of pain is often less noticeable because it is not a separate cost center. It is, however, a real cost that many companies pay daily - in time lost and frustration gained for their developers, and often in decreased software quality and production outages that are the consequences of not equipping developers with the tools they need to do their job well.&lt;/p&gt;

&lt;h2&gt;
  
  
  The real cost of running everything locally
&lt;/h2&gt;

&lt;p&gt;Talking to a lot of developers, I heard about three scenarios of what this looks like in reality:&lt;/p&gt;

&lt;h3&gt;
  
  
  Scenario 1: everything works well
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--KsrXEoSw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_800/https://media1.giphy.com/media/v1.Y2lkPTc5MGI3NjExaDNuM29nMDZ3eDNqNXFlcXNnc2J1MXZuODg2djF2a3RjZTBjN3VyYiZlcD12MV9pbnRlcm5hbF9naWZfYnlfaWQmY3Q9Zw/lS0uOmv9Mg63EWYDHF/giphy.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--KsrXEoSw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_800/https://media1.giphy.com/media/v1.Y2lkPTc5MGI3NjExaDNuM29nMDZ3eDNqNXFlcXNnc2J1MXZuODg2djF2a3RjZTBjN3VyYiZlcD12MV9pbnRlcm5hbF9naWZfYnlfaWQmY3Q9Zw/lS0uOmv9Mg63EWYDHF/giphy.gif" width="480" height="480"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Each developer knows how to run all services locally. This is the case for a minuscule minority of developer teams. I heard it only from teams that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Consist of no more than 5 developers, all of them highly skilled and at senior level&lt;/li&gt;
&lt;li&gt;Work on software with no more than 3 services&lt;/li&gt;
&lt;li&gt;Have invested a lot in documenting and automating the local setup and build process&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Scenario 2: it works, but is painful
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--WJAK2Nmd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_800/https://media1.giphy.com/media/v1.Y2lkPTc5MGI3NjExaXZ0bjF6eDh1dmswbzB5cHdvdHZxdDh5bW82ZWM2ZDRqOW54bmVxMiZlcD12MV9pbnRlcm5hbF9naWZfYnlfaWQmY3Q9Zw/WxDZ77xhPXf3i/giphy.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--WJAK2Nmd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_800/https://media1.giphy.com/media/v1.Y2lkPTc5MGI3NjExaXZ0bjF6eDh1dmswbzB5cHdvdHZxdDh5bW82ZWM2ZDRqOW54bmVxMiZlcD12MV9pbnRlcm5hbF9naWZfYnlfaWQmY3Q9Zw/WxDZ77xhPXf3i/giphy.gif" width="432" height="232"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This is the most common scenario I found with small software companies. They typically:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Have fewer than 100 developers (usually 10-30)&lt;/li&gt;
&lt;li&gt;Have a handful of highly skilled senior developers who spend a lot of their time building tools (i.e. writing scripts) for local deployment and local build and supporting more junior developers with their local deployment and local build&lt;/li&gt;
&lt;li&gt;Have few services (3-6) &lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Scenario 3: it doesn’t work
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s---u8JXf2c--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_800/https://media4.giphy.com/media/v1.Y2lkPTc5MGI3NjExa2IzOXl5emV3NXR6ZzhsOHZrcmU2NWQzeXlwbnJ0NGZwaTFmbnEwMSZlcD12MV9pbnRlcm5hbF9naWZfYnlfaWQmY3Q9Zw/LpB6Bqzato3VDg4dF7/giphy.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s---u8JXf2c--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_800/https://media4.giphy.com/media/v1.Y2lkPTc5MGI3NjExa2IzOXl5emV3NXR6ZzhsOHZrcmU2NWQzeXlwbnJ0NGZwaTFmbnEwMSZlcD12MV9pbnRlcm5hbF9naWZfYnlfaWQmY3Q9Zw/LpB6Bqzato3VDg4dF7/giphy.gif" width="480" height="269"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This is the case with the vast majority of medium to large software companies that I talked to. They typically:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Have more than 100 developers&lt;/li&gt;
&lt;li&gt;Have more than 5 services (often several dozen or hundreds)&lt;/li&gt;
&lt;li&gt;Are not able to run the full application locally at all because their laptops simply don’t have enough computing power&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This means developers work blindly. They write code and push it directly to staging. They are not able to validate and test locally what they are doing (beyond unit tests).&lt;/p&gt;

&lt;p&gt;This often results in &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Many bugs in the software, which leads to&lt;/li&gt;
&lt;li&gt;High investment in testing&lt;/li&gt;
&lt;li&gt;High investment in “troubleshooting capabilities” such as automated rollbacks&lt;/li&gt;
&lt;li&gt;Production outages, unhappy customers, penalties …&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  What does this mean?
&lt;/h2&gt;

&lt;p&gt;I can only extrapolate from my sample of developers that I talked to. Doing this, I estimate that no more than 1% of companies are really successful with a setup where every developer is expected to run all services locally, i.e. scenario 1.&lt;/p&gt;

&lt;p&gt;It seems that, as companies grow and age, their software becomes more complex. This leads to a rapid increase in the factors that can vary in local deployment - factorial complexity. &lt;br&gt;
This leads to scenario 2, where local deployment eats up developers’ time. In all teams that I talked to, there were several people who simply were not able to handle local deployment at all. &lt;a href="https://cloudomation.com/en/cloudomation-blog/who-benefits-the-most-from-using-cdes/"&gt;Across all seniority levels, the amount of time spent either managing one’s own local deployment or supporting colleagues with theirs varied between 10-25% of developers’ time&lt;/a&gt;. The largest amount of time was spent by the most junior and the most senior developers - juniors needed a lot of help, and seniors provided a lot of help. Becoming a senior meant becoming an expert in deployment, such as writing Helm charts or troubleshooting minikube clusters. &lt;/p&gt;

&lt;p&gt;But the real bottleneck is the computing resources available locally. In scenario 3, this was the most commonly cited issue. Even after investing a lot of time, developers were simply not able to run the full application locally. In the (worryingly common) case where the majority of developers cannot run the application they work on locally at all, &lt;a href="https://cloudomation.com/en/cloudomation-blog/the-problem-with-developing-blindly/"&gt;developers are stuck with working blindly&lt;/a&gt;. As mentioned earlier, this leads to quality issues down the road that are incredibly difficult and expensive to fix. &lt;/p&gt;

&lt;p&gt;What is the solution? This is the topic for part 3 of this series. I’ll explain what is working for many developers and how to get there.&lt;/p&gt;

&lt;p&gt;If you like our content, please consider following us on &lt;a href="https://www.linkedin.com/company/starflows"&gt;LinkedIn&lt;/a&gt;, &lt;a href="https://twitter.com/cloudomationcom"&gt;X&lt;/a&gt; or subscribe to our &lt;a href="https://cloudomation.com/en/newsletter/"&gt;newsletter&lt;/a&gt; :) &lt;br&gt;
Thank you!&lt;/p&gt;

</description>
    </item>
    <item>
      <title>How to work with shared dev clusters (and why) - Part I: Stop laptops from burning</title>
      <dc:creator>Margot Mueckstein</dc:creator>
      <pubDate>Wed, 10 Apr 2024 08:48:15 +0000</pubDate>
      <link>https://dev.to/makky/how-to-work-with-shared-dev-clusters-and-why-part-i-stop-laptops-from-burning-5c9j</link>
      <guid>https://dev.to/makky/how-to-work-with-shared-dev-clusters-and-why-part-i-stop-laptops-from-burning-5c9j</guid>
      <description>&lt;p&gt;This is the first part of a 3 part article series. In this post, you will learn about the challenges when running a few services on the laptop of developers and others remotely. &lt;/p&gt;

&lt;p&gt;One common pain point faced by developers who work on complex Kubernetes-based applications is that their laptops burn up when they try to run all services locally. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdneomisxbuwt688wkwpc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdneomisxbuwt688wkwpc.png" alt="Burning laptop" width="800" height="347"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To deal with this, a setup has become common where developers only run one or two services locally, and connect to the other services on a remote cluster. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffj97b93being2ki9lvpx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffj97b93being2ki9lvpx.png" alt="One service locally, others on a remote server" width="800" height="341"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This has the advantage of&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Allowing developers to build, run, and inspect the components they work on locally,&lt;/li&gt;
&lt;li&gt;Without overloading their laptops by running many services.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;However, while it is a great idea, there are challenges to this setup.&lt;/p&gt;

&lt;h2&gt;
  
  
  Challenge 1: Connecting local to remote services
&lt;/h2&gt;

&lt;p&gt;It is possible to configure services in Kubernetes to connect to each other across network boundaries, e.g. via port-forwarding (&lt;a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#port-forward"&gt;docs&lt;/a&gt;, &lt;a href="https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/"&gt;example&lt;/a&gt;) - however, doing this manually can be a pain. &lt;/p&gt;

&lt;p&gt;Fortunately, it has been made a lot easier by tools that were specifically developed to support this use case. &lt;a href="https://www.telepresence.io/"&gt;Telepresence&lt;/a&gt; is a tool built specifically to connect Kubernetes pods across networks. It &lt;a href="https://www.getambassador.io/docs/telepresence-oss/latest/howtos/cluster-in-vm"&gt;creates a virtual network interface that maps the cluster's subnets to the host machine when it connects&lt;/a&gt;. &lt;a href="https://mirrord.dev/"&gt;Mirrord&lt;/a&gt; uses &lt;a href="https://mirrord.dev/docs/overview/introduction/"&gt;a different approach&lt;/a&gt;, but with the same outcome: it makes it manageable to connect individual services running on different machines. &lt;/p&gt;

&lt;h2&gt;
  
  
  Challenge 2: Cluster sharing
&lt;/h2&gt;

&lt;p&gt;Getting the services to talk to each other is just one (complicated) part of the (even more complicated) entire puzzle. It makes it possible for one developer to connect their local service to one remote cluster that runs other services. That’s nice, but the general idea is to have one remote dev cluster that all devs can use and connect to. Otherwise, you’d have to run a dedicated dev cluster for each developer, which is expensive (and a pain to maintain). &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiwsunweu3mkhxlh0i6g2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiwsunweu3mkhxlh0i6g2.png" alt="One cluster for each dev" width="800" height="478"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Challenge 2a: Multi-tenant capabilities of remote services
&lt;/h3&gt;

&lt;p&gt;If you want to have one dev cluster that many developers can share, your software needs to be able to support that. Specifically, each individual service needs to be multi-tenant capable, so that several other services can use it.&lt;/p&gt;

&lt;p&gt;Multi-tenant capable services have awareness of which other services they are talking to, and which tenant (i.e. customer) these other services belong to. This must be implemented in a way that ensures that services do not leak data between tenants. For example, a statistics component must know which data belongs to which tenant in order to produce the right response about the right data for the right other service.&lt;/p&gt;
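&lt;p&gt;As a minimal sketch of what tenant awareness means (hypothetical code, not from any real service): every request carries a tenant id, and all reads and writes are scoped to it, so data cannot leak between tenants.&lt;/p&gt;

```python
from collections import defaultdict

# Hypothetical tenant-aware statistics service: all state is keyed by
# tenant id, so one shared instance can safely serve many tenants.
class StatsService:
    def __init__(self):
        self._data = defaultdict(list)  # tenant_id -> recorded values

    def record(self, tenant_id, value):
        self._data[tenant_id].append(value)

    def average(self, tenant_id):
        # Reads are scoped to the requesting tenant; other tenants'
        # data is never considered.
        values = self._data[tenant_id]
        return sum(values) / len(values) if values else None

svc = StatsService()
svc.record("tenant-a", 10)
svc.record("tenant-a", 20)
svc.record("tenant-b", 100)
print(svc.average("tenant-a"))  # 15.0, tenant-b's data is never mixed in
```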

&lt;p&gt;Even though it is best practice to implement microservices in exactly this way, it is often not the case. Most commonly, the complexities of adding tenant separation outweigh the perceived immediate benefits, leading to some or all services not being multi-tenant capable. &lt;/p&gt;

&lt;p&gt;Services that are not multi-tenant capable cannot be shared. This means that each developer would need a dedicated instance of each such unshareable service. If only some services can’t be shared, it is conceivably possible to share the rest and deploy the unshareable services dedicated to each developer.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6gdkt59vaq7i2e840v4i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6gdkt59vaq7i2e840v4i.png" alt="Unshareable services" width="800" height="524"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This, however, is a pain to configure and manage, and provides only part of the cost savings of sharing services, since many services would still need to exist in several instances. &lt;br&gt;
Getting to a point where a setup like this is sufficiently automated to be usable is worth the effort, though, because it enables iterative improvements: multi-tenant support can be added to individual services step by step, incrementally increasing the cost savings of sharing services. &lt;/p&gt;

&lt;p&gt;Side note: Multi-tenancy is beneficial not only for sharing services in development, but also for efficient scaling in production. The case is exactly the same in production as it is in development: any service that cannot be shared will exist as a dedicated service for each customer. They can be scaled up (i.e. several instances of that service can exist for one customer) but they cannot be scaled down (i.e. at least one instance of that service has to exist for each customer). This reduces the benefits of a microservices architecture significantly while also significantly increasing the complexity of managing it. If only some services can be shared and others can’t be shared, scaling must consider this, making it a much more complex undertaking. &lt;/p&gt;

&lt;h3&gt;
  
  
  Challenge 2b: Versioning &amp;amp; managing compatibility of services
&lt;/h3&gt;

&lt;p&gt;The next whopper is compatibility of services. If you have different teams working on different components, introducing a breaking change in one of the components would require either &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;that both versions of that component are available on the dev cluster for other services to connect to - which requires version awareness, i.e. each service would need to be aware of its own version and which version(s) of other services it is compatible with. This, again, is best practice, but in reality a lot of software isn’t built for it.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffzfprmbiwqsvbndcsnzx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffzfprmbiwqsvbndcsnzx.png" alt="Shareable services" width="800" height="520"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;or you need separate dev clusters for different versions and each developer needs to know which cluster to develop against - which is easier to manage, but you’d need an additional cluster for each version that developers work on. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpeoqva6cc85ipfv64ka5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpeoqva6cc85ipfv64ka5.png" alt="Several clusters" width="800" height="709"&gt;&lt;/a&gt;&lt;/p&gt;
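&lt;p&gt;The first of these options, version awareness, can be sketched as a compatibility matrix (hypothetical; service names and versions are made up): each service declares which versions of its dependencies it supports, and a compatible instance is picked from what is deployed on the shared cluster.&lt;/p&gt;

```python
# Hypothetical compatibility matrix: each (service, version) pair
# declares the dependency versions it can talk to.
COMPATIBILITY = {
    ("api", "2.0"): {"worker": ["1.2", "2.0"], "db": ["5.7"]},
    ("api", "1.0"): {"worker": ["1.0", "1.1"], "db": ["5.7"]},
}

def pick_dependency(service, version, dep, available):
    # 'available' lists the dependency versions deployed on the cluster;
    # return the first one the calling service is compatible with.
    allowed = COMPATIBILITY[(service, version)].get(dep, [])
    for candidate in available:
        if candidate in allowed:
            return candidate
    return None  # no compatible instance on this cluster

# api 2.0 connects to whichever deployed worker version it supports:
print(pick_dependency("api", "2.0", "worker", available=["1.1", "2.0"]))  # 2.0
```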

&lt;p&gt;One way many companies choose to deal with compatibility issues is to avoid breaking changes at all cost, prioritizing backwards compatibility over most other aspects (such as technical excellence or user experience). &lt;/p&gt;

&lt;p&gt;Unfortunately, this often moves problems into other areas rather than solving them. Constraining the ability to introduce breaking changes leads to an accumulation of technical debt, which makes every single additional change to the software more complex and costly. &lt;br&gt;
This can become a vicious circle: backwards compatibility with older versions that lack multi-tenant capabilities makes it impossible to introduce multi-tenant capabilities, since those require fundamental changes to the structure and content of API requests.&lt;/p&gt;

&lt;h2&gt;
  
  
  Factorial complexities
&lt;/h2&gt;

&lt;p&gt;Managing version compatibility becomes real fun when several services introduce breaking changes, or you need to be able to support (i.e. bug fix and develop in) several different versions of your software. &lt;/p&gt;

&lt;p&gt;Considering each service's version compatibility and shareability, you already end up with a large number of possible constellations that you’d need to support, even when considering only those two factors. &lt;/p&gt;

&lt;p&gt;In reality, there are many more factors that need to be considered, for example&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;dependencies like libraries in specific versions,&lt;/li&gt;
&lt;li&gt;operating system compatibility,&lt;/li&gt;
&lt;li&gt;different possible constellations of services (e.g. different DB backends or other constellations that your software might support),&lt;/li&gt;
&lt;li&gt;other service interdependencies specific to your software.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;With each dimension of variability, the possible number of constellations in which your software can exist grows very quickly. Fortunately, all factors that affect the compatibility of services with each other are much less critical than factors that affect whether one service can be used by several other services at all, i.e. shareability. Each factor that makes one service incompatible with another means that that specific service has to exist in two (or several) different configurations. Each factor that means one service cannot be accessed by several other services means that it would have to exist once for each possible constellation of other services. &lt;/p&gt;
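&lt;p&gt;The growth is multiplicative: the number of possible constellations is the product of the options in each dimension. A tiny illustration (the dimensions and numbers are made up):&lt;/p&gt;

```python
from math import prod

# Illustrative only: each dimension of variability multiplies the
# number of constellations your software can exist in.
dimensions = {
    "supported versions": 3,
    "db backends": 2,
    "operating systems": 2,
    "library sets": 2,
}

constellations = prod(dimensions.values())
print(constellations)  # 24 constellations from just four small dimensions
```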

&lt;p&gt;That’s the end of the first part of this series. In the second part we dive deeper into this topic. I’ll show you an example of how quickly factorials can grow when services are not shareable, and what the real costs of running everything locally are.&lt;/p&gt;

&lt;p&gt;I’m Margot Mueckstein, CEO and co-founder of &lt;a href="//cloudomation.com"&gt;Cloudomation&lt;/a&gt;, a software startup with the vision to improve developer experience through good tooling and dev-focused automation. We’ve been building a &lt;a href="https://cloudomation.com/en/cloud-development-environments/"&gt;Cloud Development Environments&lt;/a&gt; product since 2023. Check it out!&lt;/p&gt;

</description>
    </item>
    <item>
      <title>How CDEs work - no bs blog post</title>
      <dc:creator>Margot Mueckstein</dc:creator>
      <pubDate>Tue, 09 Jan 2024 11:53:43 +0000</pubDate>
      <link>https://dev.to/makky/how-cdes-work-no-bs-blog-post-34ek</link>
      <guid>https://dev.to/makky/how-cdes-work-no-bs-blog-post-34ek</guid>
      <description>&lt;p&gt;CDEs are a new product category. A lot of new CDE products were announced in 2022 and 2023, and CDEs were included in &lt;a href="https://www.gartner.com/en/articles/what-s-new-in-the-2023-gartner-hype-cycle-for-emerging-technologies"&gt;Gartner’s Hype Cycle of emerging technologies&lt;/a&gt; for the first time in 2023. &lt;/p&gt;

&lt;p&gt;As a result, a lot of content is being published about CDEs - however, most of it focuses on the benefits of CDEs and either doesn’t really explain how CDEs work, or only touches on the subject superficially. In this blog post, I want to provide a no-bullshit overview of the architecture of CDEs and common patterns of how CDE products work. &lt;/p&gt;

&lt;h1&gt;
  
  
  What are Cloud Development Environments (CDEs)?
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://cloudomation.com/en/cloud-development-environments/?pk_campaign=ext&amp;amp;pk_source=devto&amp;amp;pk_medium=guest&amp;amp;pk_content=how_cdes_work"&gt;CDEs&lt;/a&gt; are work environments for software developers that contain all tools that software developers need to do their work. They are provided remotely, often in the cloud, though they can also be provided on-premise, within the internal network of a company.&lt;/p&gt;

&lt;h1&gt;
  
  
  How do CDEs work?
&lt;/h1&gt;

&lt;p&gt;Despite being a new product category, the first standards are emerging for how CDEs commonly look. The diagram below shows the very basic layout of a standard CDE.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--L7XQRDKB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/b9opxny4sfeoo0ziuy1p.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--L7XQRDKB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/b9opxny4sfeoo0ziuy1p.jpg" alt="standard CDE setup" width="800" height="541"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Typically:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;An IDE thin client of an ssh-capable IDE&lt;/strong&gt; such as VS Code or the JetBrains IDEs (e.g. IntelliJ) remains on the developer’s laptop. This means that most CDEs do not fully remove the need for local tooling or make a developer fully independent of their device: usually at least the IDE client needs to run locally and be configured to connect to the IDE backend in the CDE via ssh.&lt;/li&gt;
&lt;li&gt;The CDE contains a &lt;strong&gt;copy of the source code repository&lt;/strong&gt;. Because the source code is accessed remotely via the IDE backend, source code security is ostensibly improved; however, remote IDEs might still cache source code locally.&lt;/li&gt;
&lt;li&gt;Any &lt;strong&gt;other tools&lt;/strong&gt; that are required for development also run in the CDE: language runtimes, SDKs, linters, etc. Users can configure which tools they want on their CDE. 

&lt;ul&gt;
&lt;li&gt;Two standards for CDE configuration exist: &lt;a href="https://devfile.io/"&gt;devfile.yml&lt;/a&gt; and &lt;a href="https://containers.dev/"&gt;devcontainer.json&lt;/a&gt;. Both assume that the CDE is a single container and allow specification of which tools should be deployed to this container, as well as a reference to scripts that should run after the container has been created.&lt;/li&gt;
&lt;li&gt;Not all CDE products use these standards, many have custom configuration schemata and/or allow configuration using other tools and standards such as Dockerfiles or Terraform configuration. &lt;/li&gt;
&lt;li&gt;Other CDE products use VMs rather than containers as CDEs. These tools mostly use a proprietary configuration format, often in combination with tools like Docker Compose.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;
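
&lt;p&gt;To make this concrete, here is a minimal sketch of what a devcontainer.json might look like, following the containers.dev specification (which allows comments in the file). The base image, feature, port and command shown here are illustrative assumptions for a hypothetical Node.js project, not prescribed by the standard itself:&lt;/p&gt;

```json
{
  // Hypothetical example: a Node.js development container
  "name": "my-web-app",
  "image": "mcr.microsoft.com/devcontainers/javascript-node:20",

  // Additional tooling layered onto the base image
  "features": {
    "ghcr.io/devcontainers/features/docker-in-docker:2": {}
  },

  // Ports forwarded from the container to the IDE client
  "forwardPorts": [3000],

  // Script that runs once after the container has been created
  "postCreateCommand": "npm install"
}
```

&lt;p&gt;A CDE product that supports this standard reads the file from the repository and builds the development container from it, so every developer gets the same tools without configuring anything locally.&lt;/p&gt;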

&lt;h1&gt;
  
  
  Which software can run on a CDE?
&lt;/h1&gt;

&lt;p&gt;In addition to developer tools, developers need to run the software they work on within their development environments. CDEs differ in what type of software they can run:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Single-container CDEs&lt;/strong&gt; are the most common. These allow you to run any software that can sensibly run within a single container. For example, a Yarn project with a web server can easily run in a container - for development of such a project, a single-container CDE is suitable. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Multi-container CDEs&lt;/strong&gt; deploy several containers to Kubernetes, OpenShift, or plain Docker. They assume that the application that developers work on consists of several containers. The CDE container is deployed alongside the application containers. Such a CDE is suitable for developers working on Kubernetes-based applications with several containerised components.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;VM CDEs&lt;/strong&gt; do not confine the developer to a container but give them full access to a VM where the developer can deploy whatever they want. This also makes it possible to deploy multi-container applications, with the difference that the developer tools are directly on the VM, removing one layer of separation between the developer and the containerised application. VM CDEs also allow the use of existing deployment or local build scripts that assume a VM-based environment. &lt;/li&gt;
&lt;/ul&gt;
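
&lt;p&gt;As an illustration of the multi-container case, the following Docker Compose sketch deploys a hypothetical CDE container alongside the application containers it is developed against. All service names and images are made up for this example:&lt;/p&gt;

```yaml
# Hypothetical multi-container development setup:
# a 'dev' container holding the developer tools and source code,
# deployed alongside the application's own containers.
services:
  dev:
    image: example/cde-workspace:latest   # made-up image with IDE backend, SDKs, linters
    volumes:
      - ./src:/workspace                  # source code checked out into the CDE
    ports:
      - "2222:22"                         # ssh port the local IDE client connects to
    depends_on:
      - api
      - db

  api:
    image: example/my-app-api:latest      # the application component under development
    ports:
      - "8080:8080"

  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: devpassword      # for local development only
```

&lt;p&gt;A multi-container CDE product generates or consumes a description like this so that the developer’s workspace starts together with the application it is meant to develop against.&lt;/p&gt;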

&lt;p&gt;The choice of CDE should be based on the production deployment of the software that is being developed. If the software runs in a single container in production, then a single-container CDE is the best choice. If the software runs in Kubernetes (or similar) in production, then a multi-container CDE should be used. If the software runs on a VM in production, then a VM CDE is the best option.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Remote desktop CDEs&lt;/strong&gt; are a special case. Microsoft Dev Box is currently the only product that provides remote desktop environments specifically as software development environments. For working on desktop software that requires a Windows environment, or on fat clients without a web frontend, remote desktop CDEs might be the only option.&lt;/p&gt;

&lt;h1&gt;
  
  
  What is the difference between a CDE and just any container or VM with developer tooling?
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://devfile.io/"&gt;Devfile.yml&lt;/a&gt; and &lt;a href="https://containers.dev/"&gt;devcontainer.json&lt;/a&gt; are open standards for defining (single container) development environments. So what is the added benefit of paying for a CDE product?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;CDE products usually provide:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Automation to create CDEs based on configuration files. This usually includes automatic creation of the infrastructure as well as deployment of the CDEs themselves. &lt;/li&gt;
&lt;li&gt;A management layer where users can create, start, stop, remove and monitor CDEs, often with additional administrative functionality like CDE access management etc. This management layer is usually available as a web portal and/or a command line interface (CLI). &lt;/li&gt;
&lt;li&gt;Infrastructure where the CDEs run - at least for SaaS offerings. On-premise CDE products come with a clear concept of where and how to run CDEs, and with automation for this infrastructure. &lt;/li&gt;
&lt;li&gt;Templates for CDEs with common toolstacks, as well as examples and documentation.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Many CDE products offer additional special features, such as ultra-fast CDE creation, automatic prebuilds of CDEs on each commit, or special security and insight features, or they are bundled with other products that make software development easier - but those special features are specific to each individual CDE product. Generally, what a CDE product does is make it as easy as possible to create, use and manage CDEs.&lt;/p&gt;

&lt;h1&gt;
  
  
  What CDEs are not
&lt;/h1&gt;

&lt;p&gt;CDEs do not replace a CI/CD pipeline which deploys your application into production and runs a full set of tests. Some of the same tools might be used, such as Docker Compose or Terraform, however the point of CDEs and the automation behind them is to deploy your application next to or within development environments. &lt;strong&gt;Most importantly, CDEs:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Contain the full source code in plain text, i.e. it is neither compiled, minified, nor packaged&lt;/li&gt;
&lt;li&gt;Contain developer tools which are not needed in production, such as compilers, SDKs, debuggers, OS utilities etc.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Also, CDEs do not replace developer laptops. Developers who use CDEs will need less beefy laptops; however, most CDEs still assume that at least an IDE client runs locally. Some CDEs make it possible to work only with a browser, with the IDE also being served in-browser. However, not all CDEs support this, and working fully in-browser represents a significant change to how developers are used to working.&lt;/p&gt;

&lt;h1&gt;
  
  
  Summary
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;CDEs are work environments for software developers that run on remote servers. &lt;/li&gt;
&lt;li&gt;While all CDEs are slightly different, the most common CDE setup involves an IDE client being installed on a developer’s laptop, which connects via ssh to an IDE backend on a CDE that is either a container or a VM. Besides the IDE backend, the CDE contains the source code as well as developer tools such as language runtimes and SDKs, build tools, linters etc. &lt;/li&gt;
&lt;li&gt;Major differences between CDE products are related to their core architecture: Single-container, multi-container, VM and remote desktop CDEs are compatible with different software development projects. &lt;/li&gt;
&lt;li&gt;CDEs don’t replace CI/CD pipelines, and also don’t make developer laptops obsolete. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For a deep dive into different CDE products, their features and differences, read my whitepaper “&lt;a href="https://cloudomation.com/en/whitepaper-en/cde-vendors-feature-comparison/?pk_campaign=ext&amp;amp;pk_source=devto&amp;amp;pk_medium=guest&amp;amp;pk_content=how_cdes_work"&gt;Full list of CDE vendors 2024 (+feature comparison table)&lt;/a&gt;”.&lt;/p&gt;

</description>
      <category>softwaredevelopment</category>
      <category>cde</category>
      <category>clouddevelopmentenvironments</category>
      <category>devex</category>
    </item>
    <item>
      <title>Developer tooling still sucks.</title>
      <dc:creator>Margot Mueckstein</dc:creator>
      <pubDate>Fri, 20 Oct 2023 06:53:01 +0000</pubDate>
      <link>https://dev.to/makky/developer-tooling-still-sucks-3m6k</link>
      <guid>https://dev.to/makky/developer-tooling-still-sucks-3m6k</guid>
      <description>&lt;p&gt;Developer tooling has come a long, long way in the past couple of decades. Syntax highlighting, linting, beautiful and easy-to-use IDEs with debugging features build-in, hot reloads, automated tests and lots and lots of other ideas and tools have made the life of a software developer a lot easier, and the field of software development more accessible.&lt;/p&gt;

&lt;p&gt;But still, large differences in productivity among developers exist, and a large part of that is related to tooling. Developers spend a lot of time taking care of their tools, troubleshooting their setups, and going down rabbit holes of configuration and dependency management. And there are lifetimes spent waiting: waiting for builds or tests to finish. &lt;/p&gt;

&lt;p&gt;This raises the question: what can we do about it?&lt;/p&gt;

&lt;h2&gt;
  
  
  Why do developers still spend so much time not writing code?
&lt;/h2&gt;

&lt;p&gt;It all goes back to the saying “complexity kills productivity”. Software development is a complex undertaking, and developers are expected to deal with a lot of the complexity themselves. One example: many developers are expected to run the applications they develop on their own workstations. Depending on the complexity of the application, the tech stack used (which relates directly to the maturity of available tools), as well as the quality of scripts (which are usually developed in-house) to build and deploy the application locally, this can be very complex. Which, you guessed it, kills productivity.&lt;/p&gt;

&lt;p&gt;Developers are expected to master a wide range of technologies which are not directly related to software development. In simple cases, it could “just” be Docker: if the developer is lucky enough to be working on a fully containerised application, knowledge of Docker is likely to be a prerequisite for being able to run the application locally.&lt;/p&gt;

&lt;p&gt;But Docker is a complex tool. &lt;/p&gt;

&lt;p&gt;Many developers would have a lot more time for writing code if they did not have to learn, use, and troubleshoot Docker. &lt;/p&gt;

&lt;p&gt;This is not a rant about Docker: it's a great technology and I’m a big fan. I just use it as an example of the many different tools developers are expected to know in order to do their job.&lt;/p&gt;

&lt;h2&gt;
  
  
  What can we do about it?
&lt;/h2&gt;

&lt;p&gt;Honestly, I think a big part would be a shift in culture, which would lower expectations towards developers. A mindset in which developers, much like all other business functions, are equipped with tools which enable them to do their job without having to deal with the underlying complexity of the enabling technology. Nobody expects an HR person to know a lot about IT, after all: they are equipped with software that enables them to do their job without knowing about the underlying cloud infrastructure that hosts the HR software, or the containers in which it runs.&lt;/p&gt;

&lt;p&gt;The problem with this is that developers build products that other people need to be able to run and use. DevOps was a whole movement that preached the exact opposite: bringing operations and developers closer together, lowering boundaries, and building expertise in both development and operations. The perfect developer would also have deep expertise in operations, and the perfect operator would have deep expertise in development; they would work in cross-functional teams, and everybody would do all the jobs, basically. &lt;/p&gt;

&lt;p&gt;This sounds again like a rant :) &lt;/p&gt;

&lt;p&gt;DevOps is great. It just doesn’t work. What we have seen is that in most organizations, DevOps teams were formed that operated in much the same silos that operations teams used to work in before, and nothing much really changed.&lt;/p&gt;

&lt;p&gt;For some, DevOps works, because they embraced not just the job description but the mindset behind it. This can lead to highly productive engineering teams.&lt;/p&gt;

&lt;p&gt;However, I think that this is only accessible to very experienced, very smart, top-performing experts who can absorb and apply the huge breadth of knowledge required to be an expert in both operations and development. &lt;/p&gt;

&lt;p&gt;For the rest of the world, I think a different kind of mindset is required. I also think that it is already on the horizon. &lt;/p&gt;

&lt;h2&gt;
  
  
  Platform engineering as a way to remove complexity from software development
&lt;/h2&gt;

&lt;p&gt;Platform engineering is often called the evolution of DevOps. I think it isn’t really an evolution but a completely different approach. It focuses on providing self-service tools to developers that enable them to do their job - i.e. write code - without having to bother with all the complexity of developer tooling. &lt;/p&gt;

&lt;p&gt;The biggest change in mindset in this approach is that developers are no longer expected to do everything themselves. Platform engineers are seen as a support function of the software development teams and they are themselves specialists in platform engineering, not software engineering. &lt;/p&gt;

&lt;p&gt;They have to understand the work of developers to the same degree as software developers should understand the users of the software they develop. But there is no more talk of cross-functionality and teams consisting exclusively of all-knowing unicorns. &lt;/p&gt;

&lt;p&gt;It doesn’t even need a full-blown platform engineering team to start moving in this direction. Recently, there has been an explosion in interest and products for &lt;a href="https://cloudomation.com/en/cloud-development-environments/"&gt;Cloud Development Environments&lt;/a&gt; (CDEs), which provide ready-to-go work environments for developers. Just not having to troubleshoot local development environments is already a big step in the right direction by removing complexity and hassle from developers.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Division of labor and specialization are forever in conflict with collaboration and cross-functional expertise, which tend to nurture both innovation and quality. Neither extreme is a good idea: neither having people develop deep expertise in tiny areas nor having people know a little bit about everything will produce good results. &lt;/p&gt;

&lt;p&gt;The problem we face as engineering leaders is to know where to set the boundaries, and how much to expect of our people. I believe that collaboration between teams with clearly described functions and clear boundaries leads to the best outcomes. &lt;/p&gt;

&lt;p&gt;Platform engineering teams supporting software developers with self-service tools seems to me like a setup that could work: it empowers developers while reducing the mental load on them. It also creates a rewarding and interesting job function: platform engineering might be one of the most complex and yet rewarding job profiles out there. It requires working with cutting edge technology and building products that the highly discerning user group of developers likes to use. &lt;/p&gt;

</description>
      <category>softwaredevelopment</category>
      <category>tooling</category>
      <category>platformengineering</category>
    </item>
    <item>
      <title>Why generative AI could mean worse software 🤯</title>
      <dc:creator>Margot Mueckstein</dc:creator>
      <pubDate>Mon, 04 Sep 2023 15:07:21 +0000</pubDate>
      <link>https://dev.to/makky/why-generative-ai-could-mean-worse-software-4h4g</link>
      <guid>https://dev.to/makky/why-generative-ai-could-mean-worse-software-4h4g</guid>
      <description>&lt;p&gt;There has been popular worry about generative AI replacing jobs. Software engineers were also said to be on the list: producing code is similar to producing language, and while generative AI is not yet very good at producing working larger bodies of code, this would only be a matter of time. Or so the thinking goes. &lt;/p&gt;

&lt;p&gt;(So also goes my thinking: I do think that the next iteration of LLMs will be able to produce significantly better code than previous iterations, and I’m looking forward to it.)&lt;/p&gt;

&lt;p&gt;I don’t know if it really is just a matter of time until generative AI can write working code that is more than a few lines long. Whether or not it comes to be, it is a scenario worth thinking about.  &lt;/p&gt;

&lt;h2&gt;
  
  
  The problem: AI and code inflation
&lt;/h2&gt;

&lt;p&gt;Generative AI writing code doesn’t mean that software engineers will be out of a job, because only part of a software engineer’s job is to write code. A much larger part of their job is to decide which code to write, and how.&lt;/p&gt;

&lt;p&gt;An engineer’s job consists of making a myriad of decisions, some smaller, some larger. Many of them concern the architecture of software: how generic a function or component should be, or how specific and custom it needs to be. How to handle data across the application. How to ensure performance and security on top of functionality. How to expose functionality to the user. And so on.&lt;/p&gt;

&lt;p&gt;Having an AI assistant that helps an engineer produce more code more quickly will probably also mean that less time is spent on thinking about these decisions.&lt;/p&gt;

&lt;p&gt;It also means that more code will be produced, which is not necessarily a good thing. It will create demand for all the surrounding functions that already now are often understaffed. Quality assurance and security are two that come to mind. User experience design is another.&lt;/p&gt;

&lt;p&gt;If more productive software engineers meant that fewer of them are needed, and the freed-up budget actually went towards these surrounding functions, great.&lt;/p&gt;

&lt;p&gt;However, I think that is highly unlikely to happen. &lt;/p&gt;

&lt;h2&gt;
  
  
  The rise of throwaway-code
&lt;/h2&gt;

&lt;p&gt;Already now there is an unhealthy focus on producing more code, more features, new releases with new things, instead of improving quality and user experience. This balance will worsen further if code production becomes cheaper.&lt;/p&gt;

&lt;p&gt;It is similar to the horror scenario envisioned by some regarding marketing: with the cost of producing marketing content plummeting through the use of generative AI, there will be a lot more of it. A lot of it will also be garbage. (A lot of it already is garbage.) This creates noise, which sucks up attention and energy, leads to equally rapidly plunging returns for content marketing, and will require a rethink of go-to-market strategies if content marketing gets “used up” and stops working.&lt;/p&gt;

&lt;p&gt;Something similar might happen with code: when producing code becomes cheap, managing it well will become both more difficult and more valuable. &lt;/p&gt;

&lt;p&gt;Maybe this is the advent of throwaway-code: scripts written for a specific purpose that are not intended to last. They are used for as long as they work and are useful, and then they are simply replaced with a different script that fits new requirements, which might be produced from scratch every time. &lt;/p&gt;

&lt;p&gt;This might make code management or even quality assurance obsolete - at least to some degree. A plastic fork that you use once doesn’t need to be particularly sturdy or pleasant to look at: you use it once, then you throw it away. &lt;/p&gt;

&lt;p&gt;Throwaway software might be the same: unwieldy, inefficient, horrible from a technical standpoint, but if it gets you there cheaply, it might still come to dominate over software produced by humans which might be secure and pleasant to use, but much less flexible and a lot more expensive. &lt;/p&gt;

&lt;p&gt;Even security gaps might become less urgent if throwaway software is used for only a day or two: not enough time to find and exploit security vulnerabilities. If the next iteration is written from scratch, it might have different vulnerabilities. A new form of security through obscurity. &lt;/p&gt;

&lt;p&gt;However, generative AI writing software will coexist with generative AI hacking software. Also, since generative AI regurgitates what it reads “on the internet”, it is likely to reproduce common vulnerabilities that can be quickly identified and exploited.&lt;/p&gt;

&lt;h2&gt;
  
  
  One AI Is not enough
&lt;/h2&gt;

&lt;p&gt;The obvious answer is that generative AI will only be used widely for producing code once generative (or other types of) AI is also able to fulfill the functions of quality and security assurance and UX design. You might get a “team” of specialized AI models, similar to software development teams composed of humans. &lt;/p&gt;

&lt;p&gt;That seems to me to be several steps further away than generative AI models that produce working code. Writing code seems pretty close to what they are already good at. Designing good user interfaces, translating these designs into something another AI model can understand and implement, and then making sure that the resulting software is safe and bug-free is a whole other game with a much higher level of complexity than “just” figuring out how to write lines of code that do what a user asked for. &lt;/p&gt;

&lt;h2&gt;
  
  
  The hype is dead. Long live the hype!
&lt;/h2&gt;

&lt;p&gt;Looking back, AI and machine learning have gone through many hype cycles. This is the latest one. I’m excited and curious to see what impact it will have. &lt;/p&gt;

&lt;p&gt;Previous hype cycles fizzled out without much impact. The last one at least provided real and large benefits in some areas: content recommendation and image processing, for example. Due to the broad nature of generative AI’s capabilities, I’m convinced that the effects of this hype cycle will be significantly larger than those of the last one. However, I’m also sure that it will not be fundamentally life-changing. &lt;/p&gt;

</description>
      <category>chatgpt</category>
      <category>programming</category>
      <category>softwareengineering</category>
      <category>softwaredevelopment</category>
    </item>
  </channel>
</rss>
