<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Humanitec - Your Internal Developer Platform</title>
    <description>The latest articles on DEV Community by Humanitec - Your Internal Developer Platform (@humanitec_com).</description>
    <link>https://dev.to/humanitec_com</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F415900%2F51e80a6a-a9a1-4228-b4fa-059b571a882c.jpg</url>
      <title>DEV Community: Humanitec - Your Internal Developer Platform</title>
      <link>https://dev.to/humanitec_com</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/humanitec_com"/>
    <language>en</language>
    <item>
      <title>Infrastructure as Code: The Good, the Bad and the Future</title>
      <dc:creator>Humanitec - Your Internal Developer Platform</dc:creator>
      <pubDate>Wed, 14 Jul 2021 12:25:59 +0000</pubDate>
      <link>https://dev.to/humanitec_com/infrastructure-as-code-the-good-the-bad-and-the-future-6hd</link>
      <guid>https://dev.to/humanitec_com/infrastructure-as-code-the-good-the-bad-and-the-future-6hd</guid>
      <description>&lt;p&gt;Infrastructure as Code, or IaC for short, is a fundamental shift in software engineering and in the way Ops think about the provisioning and maintenance of infrastructure. Despite the fact that IaC has established itself as a de facto industry standard for the past few years, many still seem to disagree on its definition, best practices, and limitations.&lt;/p&gt;

&lt;p&gt;This article will walk through the evolution of this approach to infrastructure workflows and the related technologies that were born out of it. We will explain where IaC came from and where it is likely going, looking at both its benefits and key limitations.&lt;/p&gt;

&lt;h3&gt;From Iron to Clouds&lt;/h3&gt;

&lt;p&gt;Remember the Iron age of IT, when you actually bought your own servers and machines? Me neither. It seems quite crazy now that infrastructure growth was limited by the hardware purchasing cycle. And since it would take weeks for a new server to arrive, there was little pressure to rapidly install and configure an operating system on it. People would simply slot a disc into the server and follow a checklist. A few days later it was available for developers to use. Again, crazy.&lt;/p&gt;

&lt;p&gt;With the launch of AWS EC2 and the widespread adoption of Ruby on Rails 1.0 in 2006, many enterprise teams found themselves dealing with scaling problems previously experienced only at massive multinational organizations. Cloud computing and the ability to effortlessly spin up new VM instances brought a great deal of benefits for engineers and businesses, but it also meant they now had to babysit an ever-growing portfolio of servers.&lt;/p&gt;

&lt;p&gt;The infrastructure footprint of the average engineering organization became much bigger, as a handful of large machines were replaced by many smaller instances. Suddenly, there were a lot more things Ops needed to provision and maintain, and this infrastructure tended to be cyclic: scale up to handle load during the daytime peak, then scale down at night to save on cost. Unlike owning depreciating hardware, we now pay for resources by the hour, so it makes sense to use only the infrastructure you need and fully benefit from a cloud setup.&lt;/p&gt;

&lt;p&gt;To leverage this flexibility, a new paradigm was required. Filing a thousand tickets every morning to spin up to peak capacity and another thousand at night to spin back down, while managing all of this manually, clearly becomes quite challenging. The question, then, is how do we operationalize this setup in a way that is reliable, robust, and not prone to human error?&lt;/p&gt;

&lt;h3&gt;Infrastructure as Code&lt;/h3&gt;

&lt;p&gt;Infrastructure as Code was born to answer these challenges in a codified way. IaC is the process of managing and provisioning data centers and servers through machine-readable definition files, rather than physical hardware configuration or interactive configuration tools. Now, instead of having to run a hundred different configuration files, IaC allows us to simply run a script that brings up a thousand machines every morning and automatically scales the infrastructure back down to the appropriate size in the evening.&lt;/p&gt;
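&lt;p&gt;The morning/evening cycle can be sketched in a few lines. Everything below is a hypothetical illustration (the function name and the capacity numbers are made up), standing in for whatever desired-state definition your IaC tool evaluates:&lt;/p&gt;

```python
# Hypothetical sketch: capacity as code instead of a thousand tickets.
PEAK_HOURS = range(8, 20)            # 08:00-19:59 is treated as peak time
PEAK_COUNT, OFF_PEAK_COUNT = 1000, 100

def desired_capacity(hour: int) -> int:
    """Return the instance count the infrastructure should converge to."""
    return PEAK_COUNT if hour in PEAK_HOURS else OFF_PEAK_COUNT
```

&lt;p&gt;An IaC tool would read this desired count on a schedule and scale the fleet accordingly, with no human filing tickets.&lt;/p&gt;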

&lt;p&gt;Ever since the launch of AWS CloudFormation in 2011, IaC has quickly become an essential DevOps practice, indispensable to a competitively paced software delivery lifecycle. It enables engineering teams to rapidly create and version infrastructure the same way they version source code, and to track these versions to avoid inconsistency among IT environments. Typically, teams implement it as follows:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Developers define and write the infrastructure specs in a domain-specific language.&lt;/li&gt;
&lt;li&gt;The resulting files are sent to a management API, master server, or code repository.&lt;/li&gt;
&lt;li&gt;An IaC tool such as Pulumi then takes all the necessary actions to create and configure the required computing resources.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And voilà, your infrastructure is suddenly working for you again instead of the other way around.&lt;/p&gt;
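&lt;p&gt;Conceptually, those steps reduce to "machine-readable spec in, resources out". A toy sketch of that loop (nothing here is a real tool's API; the spec shape is invented for illustration):&lt;/p&gt;

```python
# Toy version of the IaC flow: parse a definition, create what it names.
spec = {"resources": [
    {"type": "vm", "name": "web-1"},
    {"type": "vm", "name": "web-2"},
    {"type": "database", "name": "orders-db"},
]}

def provision(spec: dict) -> list:
    """Walk the spec and 'create' each resource (a real tool calls cloud APIs here)."""
    created = []
    for resource in spec["resources"]:
        created.append(f"{resource['type']}/{resource['name']}")
    return created
```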

&lt;p&gt;There are traditionally two approaches to IaC, declarative and imperative, and two possible methods, push and pull. The declarative approach describes the eventual target: it defines the desired state of your resources and answers the question of what needs to be created, e.g. “I need two virtual machines”. The imperative approach answers the question of how the infrastructure needs to be changed to achieve a specific goal, usually through a sequence of commands; Ansible playbooks are an excellent example. The difference between the push and pull methods is simply in how servers are told to configure themselves: in the pull method, a server pulls its configuration from the controlling server, while in the push method the controlling server pushes the configuration to the destination system.&lt;/p&gt;
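&lt;p&gt;The contrast between the two approaches can be made concrete in a few lines of Python; the helper and command names below are made up for illustration:&lt;/p&gt;

```python
# Declarative: state WHAT you want; a reconciler derives the commands.
def reconcile(current: dict, desired: dict) -> list:
    """Compute the commands that move the current state to the desired one."""
    diff = desired["vm_count"] - current["vm_count"]
    return ["create_vm"] * diff if diff >= 0 else ["delete_vm"] * (-diff)

# Imperative: state HOW, as an explicit command sequence (like a playbook).
imperative_playbook = ["create_vm", "create_vm"]
```

&lt;p&gt;Note that the declarative version is idempotent: if two VMs already exist, reconciling against the same desired state does nothing, while replaying the imperative sequence would create two more.&lt;/p&gt;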

&lt;p&gt;The IaC tooling landscape has been in constant evolution over the past ten years, and it would probably take a whole other article to give a comprehensive overview of all the different options for applying this approach to one's specific infrastructure. We have, however, compiled a quick timeline of the main tools, sorted by GA release date:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AWS CloudFormation (Feb 2011)&lt;/li&gt;
&lt;li&gt;Ansible (Feb 2012)&lt;/li&gt;
&lt;li&gt;Azure Resource Manager (Apr 2014)&lt;/li&gt;
&lt;li&gt;Terraform (Jun 2014)&lt;/li&gt;
&lt;li&gt;GCP Cloud Deployment Manager (Jul 2015)&lt;/li&gt;
&lt;li&gt;Serverless Framework (Oct 2015)&lt;/li&gt;
&lt;li&gt;AWS Amplify (Nov 2018)&lt;/li&gt;
&lt;li&gt;Pulumi (Sep 2019)&lt;/li&gt;
&lt;li&gt;AWS Copilot (Jul 2020)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is an extremely dynamic vertical of the DevOps industry, with new tools and competitors popping up every year and old incumbents constantly innovating; CloudFormation, for instance, got a nice new feature just last year, CloudFormation modules.&lt;/p&gt;

&lt;h3&gt;The good, the bad&lt;/h3&gt;

&lt;p&gt;Thanks to such a strong competitive push to improve, IaC tools have time and again innovated to generate more value for the end user. The largest benefits for teams using IaC can be clustered in a few key areas:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Speed and cost reduction: IaC allows faster execution when configuring infrastructure and aims at providing visibility to help other teams across the enterprise work quickly and more efficiently. It frees up expensive resources to work on other value-adding activities.&lt;/li&gt;
&lt;li&gt;Scalability and standardization: IaC delivers stable environments, rapidly and at scale. Teams avoid manual configuration of environments and enforce consistency by representing the desired state of their environments via code. Infrastructure deployments with IaC are repeatable and prevent runtime issues caused by configuration drift or missing dependencies. IaC standardizes the setup of infrastructure, greatly reducing the possibility of errors or deviations.&lt;/li&gt;
&lt;li&gt;Security and documentation: If all compute, storage, and networking services are provisioned with code, they also get deployed the same way every time. This means security standards can be easily and consistently enforced across companies. IaC also serves as documentation of the proper way to instantiate infrastructure, and as insurance in case employees leave your company with important knowledge. Because code can be version-controlled, IaC allows every change to your server configuration to be documented, logged, and tracked.&lt;/li&gt;
&lt;li&gt;Disaster recovery: As the term suggests, this one is pretty important. IaC is an extremely efficient way to track your infrastructure and redeploy the last healthy state after a disruption or disaster of any kind. As everyone who has woken up at 4 am because their site was down will tell you, the importance of recovering quickly after your infrastructure breaks cannot be overstated.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;There are more specific advantages to particular setups, but these are in general the areas where we see IaC having the biggest impact on engineering teams’ workflows. And it’s far from trivial: introducing IaC as an approach to manage your infrastructure can be a crucial competitive edge. What many miss when discussing IaC, however, are some of the important limitations it still brings with it. If you have already implemented IaC at your organization or are in the process of doing so, you’ll know it’s not all roses, as most blog posts about it would have you believe. For an illustrative (and hilarious) example of the hardships of implementing an IaC solution like Terraform, I highly recommend checking out The terrors and joys of terraform by Regis Wilson.&lt;/p&gt;

&lt;p&gt;In general, introducing IaC also implies four key limitations one should be aware of:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Logic and conventions: Your developers still need to understand IaC scripts, and whether those are written in HashiCorp Configuration Language (HCL) or plain Python or Ruby, the problem is not so much the language as the specific logic and conventions they need to be confident applying. If even a relatively small part of your engineering team is not familiar with the declarative approach (we see this often in large enterprises with legacy systems, e.g. .NET) or other core IaC concepts, you will likely end up in a situation where Ops, plus whoever does understand them, becomes a bottleneck. If your setup requires everyone to understand these scripts in order to deploy their code, onboarding and rapid scaling will create problems.&lt;/li&gt;
&lt;li&gt;Maintainability and traceability: While IaC provides a great way to track changes to infrastructure and monitor things such as infra drift, maintaining your IaC setup tends to become an issue itself past a certain scale (approx. over 100 developers, in our experience). When IaC is used extensively throughout an organization with multiple teams, traceability and versioning of the configurations are not as straightforward as they initially seem.&lt;/li&gt;
&lt;li&gt;RBAC: Building on that, access management quickly becomes challenging too. Setting roles and permissions across the different parts of your organization that suddenly have access to scripts to easily spin up clusters and environments can prove quite demanding.&lt;/li&gt;
&lt;li&gt;Feature lag: Vendor-agnostic IaC tooling (e.g. Terraform) often lags behind vendor feature releases, because tool vendors need to update providers to cover the new cloud features being released at an ever-growing rate. The impact is that sometimes you cannot leverage a new cloud feature unless you (1) extend the functionality yourself, (2) wait for the vendor to provide coverage, or (3) introduce new dependencies.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Once again, these are not the only drawbacks of rolling out IaC across your company, but they are some of the more acute pain points we witness when talking to engineering teams.&lt;/p&gt;

&lt;h3&gt;The future&lt;/h3&gt;

&lt;p&gt;As mentioned, the IaC market is in a state of constant evolution, and new solutions to these challenges are already being experimented with. As an example, Open Policy Agents (OPAs) currently provide a good answer to the lack of a defined RBAC model in Terraform and are the default in Pulumi.&lt;/p&gt;

&lt;p&gt;The biggest question, though, remains the need for everyone in the engineering organization to understand IaC (language, concepts, etc.) to fully operationalize the approach. In the words of our CTO Chris Stephenson: “If you don’t understand how it works, IaC is the biggest black box of them all”. This creates a mostly unsolved divide between Ops, who are trying to optimize their setup as much as possible, and developers, who are often afraid of touching IaC scripts for fear of breaking something. This leads to all sorts of frustrations and waiting times.&lt;/p&gt;

&lt;p&gt;There are two main routes that engineering teams currently take to address this gap:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Everyone executes IaC on a case-by-case basis. A developer needs a new DB and executes the relevant Terraform. This approach works if everybody is familiar with IaC in detail. Otherwise you execute and pray that nothing goes wrong. Which works, sometimes.&lt;/li&gt;
&lt;li&gt;Alternatively, the execution of the IaC setup is baked into a pipeline. As part of the CD flow, the infrastructure is fired up by the respective pipeline. The upside is that this conveniently happens in the background, without the need for manual intervention from deploy to deploy. The downside, however, is that these pipeline-based approaches are hard to maintain and govern; you can see some truly ugly Jenkins beasts evolve over time. It is also not particularly dynamic, as the resources are bound to the specifics of the pipeline. If you just need a plain DB, you’ll need a dedicated pipeline.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Neither of these approaches really closes the gap between Ops and devs; both are still shaky or inflexible. Looking ahead, Internal Developer Platforms (IDPs) can bridge this divide and provide an additional layer between developers and IaC scripts. By allowing Ops to set clear rules and golden paths for the rest of the engineering team, IDPs enable developers to conveniently self-serve infrastructure through a UI or CLI, which is provisioned under the hood by IaC scripts. Developers only need to worry about what resources (DB, DNS, storage) they need to deploy and run their applications, while the IDP takes care of calling IaC scripts through dedicated drivers to serve the desired infrastructure back to the engineers.&lt;/p&gt;
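&lt;p&gt;One way to picture that additional layer: developers request abstract resource types, and Ops-maintained drivers map each type to the IaC that fulfils it. Everything below is a hypothetical sketch, not Humanitec's actual driver API, and the Terraform module names are invented:&lt;/p&gt;

```python
# Hypothetical IDP layer: resource requests resolved through IaC drivers.
DRIVERS = {
    "postgres": lambda name: f"terraform apply -target=module.{name}_db",
    "dns":      lambda name: f"terraform apply -target=module.{name}_dns",
}

def self_serve(resource_type: str, name: str) -> str:
    """Return the IaC invocation an Ops-defined driver would run for this request."""
    if resource_type not in DRIVERS:
        raise ValueError(f"no golden path defined for {resource_type!r}")
    return DRIVERS[resource_type](name)
```

&lt;p&gt;The developer only names the resource type they need; which IaC module actually runs stays an Ops decision.&lt;/p&gt;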

&lt;p&gt;We believe IDPs are the next logical step in the evolution of Infrastructure as Code. Humanitec is a framework to build your own Internal Developer Platform. We will soon publish a library of open-source drivers that every team can use to automate their IaC setup; stay tuned to find out more at &lt;a href="https://github.com/Humanitec"&gt;https://github.com/Humanitec&lt;/a&gt;.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Spotify Backstage: Service Catalogs Explained</title>
      <dc:creator>Humanitec - Your Internal Developer Platform</dc:creator>
      <pubDate>Wed, 16 Jun 2021 13:53:25 +0000</pubDate>
      <link>https://dev.to/humanitec_com/spotify-backstage-service-catalogs-explained-52ni</link>
      <guid>https://dev.to/humanitec_com/spotify-backstage-service-catalogs-explained-52ni</guid>
      <description>&lt;p&gt;We talk to hundreds of engineering teams and organizations of all sizes every month. Lately, service catalogs have been coming up in conversations more and more, especially when we speak with mid or large size enterprise accounts. If you too work in a large dev organization (&amp;gt;300 developers), this probably comes as no surprise.&lt;/p&gt;

&lt;p&gt;With a growing number of tools requested by different development teams and an ever-expanding base of services, big enterprise setups are characterized by an increasing lack of transparency and visibility. It is becoming ever harder to keep a full picture of which service is running on which infrastructure, who operates it, and who owns it. It’s also extremely difficult to map out similar, if not identical, services to avoid duplication and prevent engineers from reinventing the wheel over and over across multiple teams.&lt;/p&gt;


&lt;p&gt;Service catalogs like Spotify’s Backstage are establishing themselves as the best answer to these issues. By making services and their metadata easy to understand and reuse throughout the entire organization, service catalogs bring back a level of transparency and observability that most enterprise teams have long dreamed of regaining. In this blog post, we’ll discuss what these service catalogs are and how they can help your team. We’ll also look at how top performing engineering organizations combine service catalog functionality with Internal Developer Platforms (IDPs) to provide their engineers with an end-to-end development and deployment experience of the highest quality.&lt;/p&gt;

&lt;h3&gt;What is a service catalog?&lt;/h3&gt;

&lt;p&gt;First of all, it’s worth clarifying exactly what we mean when we talk about a service catalog. In the DevOps and software infrastructure realm there are a few examples of similar yet different service catalogs:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;In the context of global hyperscalers like GCP and AWS, a service catalog represents the sum of all services that are available in the respective consoles, a.k.a. the overwhelming amount of options you are presented with every time you open your console.&lt;/li&gt;
&lt;li&gt;In the Kubernetes universe there is an extension API called Service Catalog, which can be used to integrate managed services from service brokers.&lt;/li&gt;
&lt;li&gt;The new kid on the block: Backstage.io, an open-source project by Spotify that allows organizations to establish their own service catalog.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For the purpose of this article, we’ll discuss service catalogs like Spotify Backstage, which enable enterprise teams to create an organized and curated collection of all business and information technology services and applications within an enterprise.&lt;/p&gt;

&lt;p&gt;We define a service catalog as a means of centralizing all services that are important to the stakeholders of an organization that implements and uses it. Given its digital implementation, the service catalog acts, at a minimum, as a digital registry and a means for highly distributed enterprises to see, find, invoke, and execute services regardless of where they exist in the company. Crucially, this means that people in one part of the world can find and utilize the same services that people in other teams use on the other side of the world/enterprise, eliminating the need to develop and support local services.&lt;/p&gt;

&lt;p&gt;Zooming in, every service catalog should have some version of these four core elements.&lt;/p&gt;

&lt;h4&gt;Ownership information and other metadata&lt;/h4&gt;

&lt;p&gt;A good service catalog contains a range of information about each service in the enterprise. This includes ownership (typically pointing to a specific individual or team), programming language, source code, current version, last update, and documentation. Depending on the company, additional information may be essential. This view is especially interesting for developers and product managers: it allows anyone in the enterprise to find out very quickly whether a certain required service is already available, and then coordinate directly with the responsible team.&lt;/p&gt;

&lt;h4&gt;Service templating&lt;/h4&gt;

&lt;p&gt;Ops teams also use service catalogs as a way to define templates and blueprints for the rest of the engineering organization to use. This allows developers to get coding right away, using a predefined service design and language framework like Golang, Node.js, etc.&lt;/p&gt;

&lt;h4&gt;Service usage&lt;/h4&gt;

&lt;p&gt;A service catalog answers the question of which service (or fork of it) is consumed by which applications. This view is especially interesting for the team owning said service, as it makes it easy to learn about any missing functionality or potential new features.&lt;/p&gt;

&lt;h4&gt;Service versioning&lt;/h4&gt;

&lt;p&gt;Finally, the service catalog allows Ops teams to know at a glance which versions of a particular service are used by which applications and in which environments. This is specifically useful in the event vulnerabilities are found in a given service version, as teams can be warned and only the affected environments or apps shut down or rolled back.&lt;/p&gt;
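&lt;p&gt;That versioning view boils down to a simple query over the catalog's metadata. A hedged sketch, with a made-up registry shape (real catalogs store far richer records):&lt;/p&gt;

```python
# Hypothetical catalog registry: which app/environment runs which service version.
registry = [
    {"service": "payments", "version": "1.2.0", "app": "shop", "env": "prod"},
    {"service": "payments", "version": "1.3.1", "app": "shop", "env": "staging"},
    {"service": "search",   "version": "2.0.0", "app": "shop", "env": "prod"},
]

def affected_by(service: str, bad_version: str) -> list:
    """List the (app, env) pairs running a vulnerable version of a service."""
    return [(e["app"], e["env"]) for e in registry
            if e["service"] == service and e["version"] == bad_version]
```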

&lt;h3&gt;Spotify Backstage&lt;/h3&gt;

&lt;p&gt;In March 2020 Spotify announced they were releasing an open-source version of their own internal service catalog, called Backstage, used by over 280 engineering teams to manage 2,000+ backend services, 300+ websites, 4,000+ data pipelines, and 200+ mobile features.&lt;/p&gt;

&lt;p&gt;Backstage gives teams a very straightforward method to unify all of their infrastructure tooling, services, and documentation under a single, easy-to-use interface. Built around the concept of metadata YAML files, Backstage makes it easy for a single team to manage tens of services and allows a company to easily manage thousands of them. Because the system is practically self-organizing, it requires considerably less oversight from a centralized platform team than a normal catalog would. Developers get a uniform overview of all their software and related resources (such as server utilization, data pipelines, pull request status), regardless of how and where they are running, as well as an easy way to onboard and manage those resources.&lt;/p&gt;

&lt;p&gt;Spotify has said that it reduced onboarding time by more than 50% after introducing Backstage internally. It is no wonder, then, that ever since the open-source announcement, Backstage has quickly become the go-to framework for most enterprises looking to build a service catalog.&lt;/p&gt;

&lt;p&gt;Use cases range from making documentation easier to create and consume by allowing for Markdown files alongside the actual code, all the way to better cloud cost control through enhanced visibility into each developer and team’s resource usage. Any engineer in the organization can now easily search all existing services through Backstage, consume what they need, or spin up a new service with a predefined architecture and design, using the dozens of available plugins to document it, track its resource consumption and overall health, or identify its dependencies.&lt;/p&gt;

&lt;h3&gt;Service catalogs and Internal Developer Platforms&lt;/h3&gt;

&lt;p&gt;Service catalogs, and Backstage in particular, provide enterprise teams with an incredibly useful pane of glass on top of their apps and services. At Humanitec, we often get asked how this functionality compares to that of Internal Developer Platforms (IDPs). Although some people seem to think they are mutually exclusive, IDPs and service catalogs (or Humanitec and Backstage) actually complement one another quite well.&lt;/p&gt;

&lt;p&gt;A service catalog like Backstage allows you to easily search all your services and immediately create a new one if what you are looking for is not available. The new service comes with a predefined design and set of metadata, depending on the specifics of your Ops or Platform team. You can get going with the coding right away, fantastic!&lt;/p&gt;


&lt;p&gt;What it does not allow you to do, however, is run your service. The service doesn’t come with its dependencies: the DBs, routing, storage, secrets, and everything else you need to actually deploy a set of services or applications to your infrastructure. That’s where IDPs come in.&lt;/p&gt;

&lt;p&gt;With an IDP, Ops teams can wire up their whole setup and orchestrate their infrastructure from one control plane. Humanitec lets them create baseline configurations and golden paths, so developers can interact independently and effortlessly with the underlying infrastructure. Developers can self-serve any tech they need, like DBs, ingress, file storage, and all other dependencies their apps require to run. They can also manage their own deployments, doing rollbacks and diffs, and versioning configurations the same way they do with code in Git.&lt;/p&gt;

&lt;p&gt;Combining Backstage for service discovery with Humanitec for infrastructure orchestration, deployment and dependency management, teams can achieve a new degree of Ops automation and developer self-service on all levels. Engineers can now not only one-click create a new service with all required metadata attached to it, but also one-click deploy it to a new environment, provisioned with the resources they need. And all in a context where a central Platform team can set predefined rules and golden paths for all other app development teams to operate within.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Recap DevOps Enterprise Summit 2020</title>
      <dc:creator>Humanitec - Your Internal Developer Platform</dc:creator>
      <pubDate>Fri, 03 Jul 2020 12:07:29 +0000</pubDate>
      <link>https://dev.to/humanitec_com/recap-devops-enterprise-summit-2020-2mdb</link>
      <guid>https://dev.to/humanitec_com/recap-devops-enterprise-summit-2020-2mdb</guid>
      <description>&lt;p&gt;We're in a time where our meetups, conferences, and hackathons have moved virtual, and while we may not get the same experience, we still have the opportunity for learning and networking. I recently attended the &lt;a href="https://events.itrevolution.com/virtual/"&gt;DevOps Enterprise Summit&lt;/a&gt;, a three-day mighty behemoth of presentations, interviews, and discussion. Presenters shared real-world problems and how DevOps provides a framework to problem solve, change company culture, and ultimately drive forward customer satisfaction and financial benefits for all concerned.&lt;/p&gt;

&lt;p&gt;Sure, there were plenty of deep dives critiquing the theoretical foundations of DevOps, and the linguistics and semantics of DevOps were heavily discussed as speakers spent time defining theories and frameworks. But if, like me, you're the kind of person who likes to focus on meaningful outcomes and actually see how things work (or don't work), an enterprise conference is an excellent opportunity to hear people talk about their own experiences in their own company, sharing case studies and more.&lt;/p&gt;

&lt;h2&gt;
  
  
  Transform a high-context environment into a low-context environment
&lt;/h2&gt;
&lt;p&gt;&lt;a href="https://virtualdevopsenterprisesumm.sched.com/speaker/tlimoncelli"&gt;Thomas Limoncelli&lt;/a&gt; is SRE Manager at Stack Overflow. He gave an interesting talk about the differences between high- and low-context cultures and the value of documentation. He shared a story from his first week at Stack Overflow:&lt;/p&gt;

&lt;p&gt;"I still remember my training as I asked how to create a virtual machine. We use a product called VMware, and my mentor walked me through the process. It involved five very complicated steps; it wasn't written down, just verbally passed on from one system into the other. I asked how anyone could memories this? The response was 'Well, we just kind of expected that anyone who would get through our interview process would just know how to do this kind of stuff.' I remembered thinking how could I be expected to know all of that?&lt;/p&gt;

&lt;p&gt;That same day brought a call from the boss: Thomas and his mentor had made a mistake in their work, demonstrating that even an experienced staff member is unable to memorize everything.&lt;/p&gt;

&lt;h3&gt;
  
  
  What is a high context culture?
&lt;/h3&gt;

&lt;p&gt;A high context culture is one where:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Communication is informal, less documented, and involves collective history.  &lt;/li&gt;
&lt;li&gt;  People have to read between the lines to understand what's going on.&lt;/li&gt;
&lt;li&gt;  More assumed knowledge.&lt;/li&gt;
&lt;li&gt;  It relies on long term traditions and practices such as family gatherings where people know how to do things and what to expect.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  What is a low context culture?
&lt;/h3&gt;

&lt;p&gt;In a low context culture:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Communication is explicit; you are told the rules, knowledge tends to be codified, public, external, and accessible.&lt;/li&gt;
&lt;li&gt;  There are more interpersonal connections of shorter duration.&lt;/li&gt;
&lt;li&gt;  Knowledge is often more transferrable.&lt;/li&gt;
&lt;li&gt;  Examples of a low context culture include airports and sports with established rules.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--AZeYI6zp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://assets.website-files.com/5c73bbfe3312822f153dd310/5efee91e23ddd82752169a95_EEWBT5W-MKDTOLFM5wf1uu1yJXe15w89eVzTdDtwWCxNjwNPRKBt1lLIkvULOf0hDZhVg8OzfybCrLA5Clf55RqJpNz8dmqW3GAj8k583GaaewAgqBZp9rDurAhvFx7TqC_-BhpI.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--AZeYI6zp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://assets.website-files.com/5c73bbfe3312822f153dd310/5efee91e23ddd82752169a95_EEWBT5W-MKDTOLFM5wf1uu1yJXe15w89eVzTdDtwWCxNjwNPRKBt1lLIkvULOf0hDZhVg8OzfybCrLA5Clf55RqJpNz8dmqW3GAj8k583GaaewAgqBZp9rDurAhvFx7TqC_-BhpI.jpeg" alt=""&gt;&lt;/a&gt;&lt;/p&gt;


&lt;h3&gt;
  
  
  The need for low-context DevOps
&lt;/h3&gt;

&lt;p&gt;According to Thomas, the DevOps environment should strive to be low context: "you should spend more time working and less time frustrated with roadblocks and information gaps."&lt;/p&gt;

&lt;h3&gt;
  
  
  Three ways to reduce the required context of your DevOps environment:
&lt;/h3&gt;

&lt;p&gt;Carefully constructed defaults - The defaults should match the way you expect most people to work. Typically, new employees don't have the software, access, and permissions they need to do their job, and they can't fix that themselves. If people change projects all the time, as they do in dynamic companies, this problem recurs. Employee-friendly defaults keep employees happy and workplaces functional.&lt;/p&gt;

&lt;p&gt;Make right easy - Thomas notes that most websites run on OpenSSL. "But settings become stale, and it effectively requires a Ph.D. to use." Comparatively, LibreSSL makes the default 'timelessly correct.'&lt;/p&gt;

&lt;p&gt;Stack Overflow embodies this sentiment through tools and infrastructure that help provide a low-context environment, e.g.:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Ticket system&lt;/li&gt;
&lt;li&gt;  Bug tracking system&lt;/li&gt;
&lt;li&gt;  Monitoring/observability&lt;/li&gt;
&lt;li&gt;  CI/CD pipeline system&lt;/li&gt;
&lt;li&gt;  Container/artifact repository&lt;/li&gt;
&lt;li&gt;  Documentation repository&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Ubiquitous documentation - Documenting as you work means you'll have documentation when you need it, like when you're fixing an error on 3 am pager duty. Documentation is easy to reach when a deep link/URL to it is included:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  In error messages&lt;/li&gt;
&lt;li&gt;  In CI/CD control panel restrictions&lt;/li&gt;
&lt;li&gt;  In alert messages&lt;/li&gt;
&lt;/ul&gt;
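&lt;p&gt;As a rough sketch of the idea (the helper name and docs URL below are hypothetical), a deploy script might surface a deep link right in its error output:&lt;/p&gt;

```shell
# Hypothetical helper: print an error that deep-links to the relevant
# internal runbook, so the 3 a.m. responder gets context immediately.
fail_with_docs() {
  echo "ERROR: $1"
  echo "See https://docs.example.com/runbooks/$2 for remediation."
  return 1
}

# Example: a failing deploy step pointing at its runbook page
fail_with_docs "deploy failed: resource quota exceeded" "quota" || true
```

&lt;p&gt;The same pattern works in alert payloads and CI/CD messages: the reader lands on the exact page, not the documentation home.&lt;/p&gt;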

&lt;p&gt;Thomas advises against the mindset of "my code is the documentation" and suggests instead that people need to "record tech debt or it won't be fixed."&lt;/p&gt;

&lt;p&gt;For those who historically hate documentation, he offers an incentive: "the better you document, the more relaxed you can be later. It also means someone else can do my work, and I can go onto more interesting projects."&lt;/p&gt;

&lt;p&gt;He also suggests templates that do much of the work, and that teams include documentation updates in work estimates - "don't think of documentation as something extra but part of the project itself." There's no need to reinvent the wheel, either: find the places where engineers already write - email, chatrooms, IM, Stack Overflow - and repurpose that material into your own documentation.&lt;/p&gt;

&lt;h4&gt;Resources:&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;a href="https://www.amazon.de/Unicorn-Project-Developers-Disruption-Thriving/dp/1942788762"&gt;The unicorn project&lt;/a&gt; by Gene Kim.&lt;/li&gt;
&lt;li&gt;  &lt;a href="https://stackoverflow.com/teams"&gt;Stackoverflow for teams&lt;/a&gt; -gives new employees the power to fix things and build a good reputation. It works across team silos. &lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;Demystifying DevOps and SRE&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://virtualdevopsenterprisesumm.sched.com/speaker/daniel_maher.214q1xwi"&gt;Daniel Maher&lt;/a&gt; works in Developer Relations at Datadog. He took a look at SRE including common terminology, practical examples and the relationship between site reliability engineers (SRE) and DevOps.&lt;/p&gt;

&lt;p&gt;He describes "DevOps as a professional and cultural movement that focuses on openness sharing and mutual respect. It seeks to improve the quality of life for its adherence practitioners, for their company, customers, and those participating." However, improving the quality of life involves availability and reliability, which is where SREs come in. "How can we ensure that the systems that we have in place will be there when people need them?" In other words, "DevOps is an idea, SRE is a practice."&lt;/p&gt;

&lt;p&gt;Daniel describes &lt;a href="https://landing.google.com/sre/books/"&gt;Site Reliability Engineering: How Google runs production systems&lt;/a&gt; as "The big lizard in the room. It's just one interpretation - albeit hugely influential - and how Google did something in 2016 is not necessarily how you should do something today." Instead, Daniel spoke about the importance of finding out what is needed and works best for your own organization.&lt;/p&gt;

&lt;h3&gt;Teams and organizational structure&lt;/h3&gt;

&lt;p&gt;Pertinent to SRE and DevOps, Daniel suggests we can organize people in product teams, squads, and guilds:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Product teams are one of the ways that DevOps scales to large enterprise companies. But they are only one part of an organization. &lt;/li&gt;
&lt;li&gt;  Assembling a squad to focus on a particular product or problem is a great way to use a product management structure across teams. Examples of squad work at Datadog include recruiting, building coding tests, and hackathons. Squads are typically short-term and well defined, with a beginning and an end.&lt;/li&gt;
&lt;li&gt;  A guild owns and shepherds an important part of an organization - such as organizational culture or standards around automation - and traditionally involves many different stakeholders.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Teams of SREs can undertake a range of tasks, including reviewing code, writing incident reports, and facilitating post-mortems. They may focus on a dedicated portfolio or product team, and individual SREs may or may not rotate in and out of projects/sprints.&lt;/p&gt;

&lt;h3&gt;Tips on finding and growing SRE talent&lt;/h3&gt;

&lt;p&gt;According to Daniel, SREs have strong personalities with specific attributes, which might include: a wide range of technical interests, patience for staring at code, an enjoyment of problem-solving, and an interest in mentoring/teaching. There is no formal SRE qualification, so a desire for self-directed learning is especially important.&lt;/p&gt;

&lt;p&gt;He stresses that "great SRE talent can come from anywhere", especially as it is not limited to particular qualifications.&lt;/p&gt;

&lt;h3&gt;Practical suggestions and pitfalls of SRE&lt;/h3&gt;

&lt;p&gt;The right configuration of SRE depends on the specifics of your organization and may require some testing and tinkering. "The number one thing to avoid is dogma - don't look at how another org has implemented things as the only way to do it." Instead, attend conferences and meetups, read blog posts, and talk to others to see how it could work for you.&lt;/p&gt;

&lt;p&gt;"No one can sell you DevOps, it's a journey and a process with no end, and you should embrace that."&lt;/p&gt;

&lt;h2&gt;Team Topologies in Action&lt;/h2&gt;

&lt;p&gt;Since the book Team Topologies was published in 2019, organizations around the world have started to adopt Team Topologies principles and practices like Stream-aligned teams, modern platforms, well-defined team interactions, and team cognitive load as a key driver for fast software delivery and operations. Authors Manuel Pais and Matthew Skelton took us through some recent case examples.&lt;/p&gt;

&lt;h3&gt;4 team types&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;  Stream-aligned team - AKA a product team, although it is aligned to a stream of work that is not necessarily a product. These are the core teams that deliver value to customers or users. &lt;/li&gt;
&lt;li&gt;  Enabling team - typically a team of experts in a specific area that collaborates with stream-aligned teams to help them gain the capabilities they are missing. &lt;/li&gt;
&lt;li&gt;  Complicated sub-system team - owns a sub-system that requires deep skills, high-level expertise, and an understanding of niche technology. &lt;/li&gt;
&lt;li&gt;  Platform team - platform teams provide services that make the lives of stream-aligned teams easier. They might provide infrastructure on top of which the stream-aligned teams run their products, or build a self-service system that the stream-aligned teams use to access build and test environments on demand.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;Team interaction models&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--fAr4sEmp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://assets.website-files.com/5c73bbfe3312822f153dd310/5efee91eed0feecff6f08125_w_Q-EKWOwE6CMU3b7S9Wf_Sj53Tk0S_mG0n_DQQqSTt_nR2qUymZlxiqCy3anW4ntSCrMd9T4X1_etJ1c-IO2hOfgVpL4gCud1CT-fWpy3CrQ240VzmTqDZWHBdmxQjvfks-tgA9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--fAr4sEmp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://assets.website-files.com/5c73bbfe3312822f153dd310/5efee91eed0feecff6f08125_w_Q-EKWOwE6CMU3b7S9Wf_Sj53Tk0S_mG0n_DQQqSTt_nR2qUymZlxiqCy3anW4ntSCrMd9T4X1_etJ1c-IO2hOfgVpL4gCud1CT-fWpy3CrQ240VzmTqDZWHBdmxQjvfks-tgA9.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Image source: &lt;a href="https://teamtopologies.com/key-concepts"&gt;teamtopologies.com&lt;/a&gt;&lt;br&gt;
According to Team Topologies, there are only three ways in which a team should interact:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--TtT5W8rd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://assets.website-files.com/5c73bbfe3312822f153dd310/5efee91ef95f1b0fbbd41038_lZcUghaM_DS-hc0B2SFPBaQK_5wxqqebbFZbpfl6dZk7kXwRkPj6llPLbPnGYAxErLaDW58Z5atP91HMM75Y_BjPrrd5NLVmHqdTFElaeh1FdJB9jSFlgv6l8mnAoVZ7xb8_b2D3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--TtT5W8rd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://assets.website-files.com/5c73bbfe3312822f153dd310/5efee91ef95f1b0fbbd41038_lZcUghaM_DS-hc0B2SFPBaQK_5wxqqebbFZbpfl6dZk7kXwRkPj6llPLbPnGYAxErLaDW58Z5atP91HMM75Y_BjPrrd5NLVmHqdTFElaeh1FdJB9jSFlgv6l8mnAoVZ7xb8_b2D3.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Image source: &lt;a href="https://teamtopologies.com/key-concepts"&gt;teamtopologies.com&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Collaboration: working together for a defined period to discover new things (APIs, practices, technologies, etc.)&lt;/li&gt;
&lt;li&gt;  X-as-a-Service: one team provides and one team consumes something "as a Service"&lt;/li&gt;
&lt;li&gt;  Facilitation: one team helps and mentors another team&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;Case study: Gjensidige&lt;/h3&gt;

&lt;p&gt;Gjensidige Insurance is a leading Nordic insurance company with 4000 employees and businesses in the Nordic and Baltic countries. It uses the four fundamental team types to clarify team responsibilities and interactions and is moving towards several "thinnest viable platforms" with Stream-aligned teams as internal customers.&lt;/p&gt;


&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--o3_vIqbu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://assets.website-files.com/5c73bbfe3312822f153dd310/5efee91f73a14e1f823f3a91_Dgam_gs3xBcPjISdi4NZ-bMACX69aVpv2xEfPqDgyN4kdOZqcRWzQmrjyg41PrKXuV45IAAwBsfsiAPwWFSGaqcs-9xIOXOa-0D18J3V2-O_Zz42kn3Pfz9gCxGo4u-59KBLz5mx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--o3_vIqbu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://assets.website-files.com/5c73bbfe3312822f153dd310/5efee91f73a14e1f823f3a91_Dgam_gs3xBcPjISdi4NZ-bMACX69aVpv2xEfPqDgyN4kdOZqcRWzQmrjyg41PrKXuV45IAAwBsfsiAPwWFSGaqcs-9xIOXOa-0D18J3V2-O_Zz42kn3Pfz9gCxGo4u-59KBLz5mx.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Source: &lt;a href="https://www.dropbox.com/sh/oamszbiycv9xiyj/AAAarIDkYdltZuHDIa3H3dd2a/Day%202%20-%20June%2024/Breakouts?dl=0&amp;amp;preview=Manuel+Pais+-+2020-06-17+21-40-25+-+2020-06-24+-+Matthew+Skelton+%26+Manuel+Pais+-+Team+Topologies+in+Action+-+early+results+from+industry+-+DOES+London+2020.pdf&amp;amp;subfolder_nav_tracking=1"&gt;Team Topologies in Action&lt;/a&gt;&lt;/p&gt;


&lt;p&gt;Positive outcomes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  40% annual growth in digital sales over the last 5 years &lt;/li&gt;
&lt;li&gt;  More than 100% growth in digital customer service, shifting transactions from call centers to online  &lt;/li&gt;
&lt;li&gt;  Claims handling is heavily digitized - more than 80% of claims are now filed online, of which up to 40% are handled automatically.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;Case study: PureGym&lt;/h3&gt;

&lt;p&gt;PureGym is Britain's largest gym chain - the first to gain over 1 million members. As PureGym expanded, so did the need for software to enable their members to book and manage gym sessions. Since 2019, PureGym has re-aligned its teams and team interactions based on Team Topologies patterns, helping to scale the engineering teams and improve flow.&lt;/p&gt;

&lt;p&gt;Matthew explains that the company was experiencing a range of problems as it rapidly expanded, which led to the realignment: pain points in inter-team communication, a software monolith, and a single code repository. They worked to reshape teams (over several different configurations), breaking up tasks and responsibilities.&lt;/p&gt;

&lt;p&gt;Their success was made possible through continuous collaboration and facilitation across teams and tasks. The results included:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  A more responsive business&lt;/li&gt;
&lt;li&gt;  Balanced ownership of services&lt;/li&gt;
&lt;li&gt;  Improved team morale&lt;/li&gt;
&lt;li&gt;  Better long term architecture&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;Resources&lt;/h4&gt;

&lt;p&gt;The authors of Team Topologies are currently working on a free workbook for remote teams, and also have plenty of resources available:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;a href="https://github.com/teamtopologies"&gt;GitHub.com/teamtopoloiges&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;  &lt;a href="https://teamtopologies.com/key-concepts-content/remote-first-team-interactions-with-team-topologies"&gt;Teamtopologies.com/remote-first&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;DevOps Enterprise Summit was a great opportunity to explore how businesses across multiple industries are actively utilizing &lt;a href="http://humanitec.com/devops"&gt;DevOps&lt;/a&gt; to facilitate internal transformation and improve customer service and other business outcomes. Hearing speakers not simply promoting a product, but talking openly about the challenges and failures of business transformation - and how they overcame them - offers valuable lessons and strategies for applying DevOps in your own workplace.&lt;/p&gt;

</description>
      <category>devops</category>
    </item>
    <item>
      <title>Continuous Integration (CI) vs. Continuous Delivery (CD) vs. Continuous Deployment (CD)</title>
      <dc:creator>Humanitec - Your Internal Developer Platform</dc:creator>
      <pubDate>Thu, 02 Jul 2020 08:11:09 +0000</pubDate>
      <link>https://dev.to/humanitec_com/continuous-integration-ci-vs-continuous-delivery-cd-vs-continuous-deployment-cd-1e61</link>
      <guid>https://dev.to/humanitec_com/continuous-integration-ci-vs-continuous-delivery-cd-vs-continuous-deployment-cd-1e61</guid>
      <description>&lt;p&gt;Author: &lt;/p&gt;
&lt;div class="ltag__user ltag__user__id__40784"&gt;
  
    .ltag__user__id__40784 .follow-action-button {
      background-color: #808185 !important;
      color: #ffffff !important;
      border-color: #808185 !important;
    }
  
    &lt;a href="/chrischinchilla" class="ltag__user__link profile-image-link"&gt;
      &lt;div class="ltag__user__pic"&gt;
        &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--8Yaoid2Z--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://res.cloudinary.com/practicaldev/image/fetch/s--8LhjQpvB--/c_fill%2Cf_auto%2Cfl_progressive%2Ch_150%2Cq_auto%2Cw_150/https://dev-to-uploads.s3.amazonaws.com/uploads/user/profile_image/40784/b95f098f-185d-4ae2-beb7-b626909276b8.jpeg" alt="chrischinchilla image"&gt;
      &lt;/div&gt;
    &lt;/a&gt;
  &lt;div class="ltag__user__content"&gt;
    &lt;h2&gt;
&lt;a class="ltag__user__link" href="/chrischinchilla"&gt;Chris Chinchilla&lt;/a&gt;Follow
&lt;/h2&gt;
    &lt;div class="ltag__user__summary"&gt;
      &lt;a class="ltag__user__link" href="/chrischinchilla"&gt;Freelance technical communicator to the stars. Podcaster, video maker, writer of interactive fiction &amp;amp; games. Sometimes I publish posts I have been paid for, but I never endorse products I do not like&lt;/a&gt;
    &lt;/div&gt;
  &lt;/div&gt;
&lt;/div&gt;


&lt;h2&gt;Prerequisites&lt;/h2&gt;

&lt;p&gt;To begin down the continuous path, aside from a willingness to adapt and try new practices, you need a version control system (VCS) so that other systems can access your codebase, and you need solid test coverage. We won't cover these in detail in this post, but you can find many tutorials online for setting them both up with your team.&lt;/p&gt;

&lt;h2&gt;Continuous Integration&lt;/h2&gt;

&lt;p&gt;Most teams doing "continuous anything" are using some form of Continuous Integration (CI). Exact implementation details vary from team to team, but typically involve building a version of your codebase, deploying it to a testing environment, and running specific tests against it. You may decide to run specific tests on all code pushes across all branches, or only specific tests on specific branches. What constitutes a "test," and a pass or fail, depends on your application and strategy. In short, instead of relying on team members to run tests manually, a CI tool runs them automatically.&lt;/p&gt;

&lt;p&gt;After a reasonable amount of initial setup, what are the benefits of CI to your team? Primarily, it encourages your team to test their code extensively and solidly. They should be doing so already, but introducing CI enforces it. More tests should equal more stable code and fewer bugs. Automated testing also reduces the amount of context switching developers make between tasks and the amount of time they spend waiting for code to build. Your testing and QA teams can spend less time running repetitive tests, and instead focus on more significant improvements to code and applications.&lt;/p&gt;

&lt;p&gt;Technical teams are not renowned for their communication skills, and automated processes reduce those pain points through a streamlined process flow, instead of waiting for responses to internal tickets or missing requests for review.&lt;/p&gt;

&lt;h3&gt;CI Implementation Steps&lt;/h3&gt;

&lt;p&gt;The first steps to getting CI running with your application are automating your build steps and writing tests. While tests are something every development team knows they should have, not all do, and maybe not to the extent that makes a CI process useful and reliable. The tests need to check every new feature, improvement, or bug fix you add, and the effect each has on the application as a whole.&lt;/p&gt;

&lt;p&gt;Your development team should push and merge changes as regularly as possible, and in as many small discrete chunks as possible. This is one of the harder steps to get right, as you can't split everything up this way, but over time teams start to think and plan in more appropriate ways.&lt;/p&gt;

&lt;p&gt;You need a CI service or server to monitor for changes to the branches you specify, and to build and run tests. There are many options, but below are some of the most popular. Deciding which suits you best can be difficult, as most platforms share similar features; the choice is more a question of pricing and approach:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;a href="https://jenkins.io/"&gt;Jenkins&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;  &lt;a href="https://travis-ci.org/"&gt;Travis CI&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;  &lt;a href="https://circleci.com/"&gt;Circle CI&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;  &lt;a href="https://www.jetbrains.com/teamcity/"&gt;Team City&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;  &lt;a href="https://www.atlassian.com/software/bamboo"&gt;Bamboo&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;  &lt;a href="https://docs.gitlab.com/ee/ci/"&gt;GitLab CI&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;  &lt;a href="https://semaphoreci.com/"&gt;Semaphore CI&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;
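&lt;p&gt;Whichever service you pick, the setup usually amounts to a small configuration file in your repository describing what to run on each push. A minimal sketch in GitLab CI syntax (stage names and commands are illustrative, not a recommendation of any one tool):&lt;/p&gt;

```yaml
# .gitlab-ci.yml (sketch): build the codebase and run the test
# suite automatically on every push.
stages:
  - build
  - test

build-app:
  stage: build
  script:
    - make build        # replace with your real build command

run-tests:
  stage: test
  script:
    - make test         # a failing test fails the pipeline
```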

&lt;h2&gt;Continuous Delivery&lt;/h2&gt;

&lt;p&gt;Continuous delivery (CD) extends CI. If your tests pass in CI, then during the CD step you take the built artifacts from the CI step and deploy them automatically to other development environments (staging, for example), and manually to production when you're ready. Depending on your application and the programming language(s) it uses, delivery might be a short and straightforward process or a long and complex one. A popular concept in CD is "reproducible builds": builds that result in identical output artifacts. Identical builds give increased confidence that code that works on a developer machine also works in production. However, we all know that developers rarely all use the same setup as each other, or as the machines that run production code. This is where containerization comes into play. Using containers such as &lt;a href="https://www.docker.com/"&gt;Docker&lt;/a&gt; or &lt;a href="https://coreos.com/rkt/"&gt;rkt&lt;/a&gt;, you can replicate the production environment on any machine and reduce the differences between environments that used to be common sources of problems.&lt;/p&gt;

&lt;p&gt;As with other continuous practices, the main advantages of automating delivery are speed and saving your team the time wasted on repetitive tasks, by passing them off to an automated system that happily repeats itself as consistently and as often as you ask.&lt;/p&gt;

&lt;h3&gt;Continuous Delivery Implementation Steps&lt;/h3&gt;

&lt;p&gt;Before you start down the CD path, you should already have a solid setup and experience with CI. The difference between continuous delivery and continuous deployment (covered next) can be confusing. With continuous delivery, the delivery of a build to production is fully automated and requires no manual interaction - but the decision to trigger a production deployment is still manual.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--5sOvlcMM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://assets.website-files.com/5c73bbfe3312822f153dd310/5eecbfdae75ba6808aba88fd_continuous-integration-vs-continuous-delivery-vs-continuous-deployment-humanitec.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--5sOvlcMM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://assets.website-files.com/5c73bbfe3312822f153dd310/5eecbfdae75ba6808aba88fd_continuous-integration-vs-continuous-delivery-vs-continuous-deployment-humanitec.png" alt="Continuous Integration (CI), Delivery (CD), Deployment (CD): What's the difference?"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;CI vs. CD vs. CD&lt;/p&gt;


&lt;p&gt;You need a service that can handle both CI and CD. There are many options, and which you use can depend on the programming language(s) you work with, though the most popular support a wide variety of build systems.&lt;/p&gt;

&lt;p&gt;Many teams that use CD find themselves adding "feature flags" to enable and disable features for particular user segments. This does introduce added complexity (and tests) into your code but also gives you the potential to toggle features without rebuilding and delivering code.&lt;/p&gt;
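&lt;p&gt;A feature flag can be as simple as a value read from the environment at runtime. A minimal sketch, assuming a hypothetical &lt;code&gt;FEATURE_NEW_CHECKOUT&lt;/code&gt; flag:&lt;/p&gt;

```shell
# Hypothetical flag: FEATURE_NEW_CHECKOUT toggles a feature at runtime.
# ${VAR:-default} falls back to "false" when the variable is unset.
if [ "${FEATURE_NEW_CHECKOUT:-false}" = "true" ]; then
  echo "new checkout flow enabled"
else
  echo "new checkout flow disabled"
fi
```

&lt;p&gt;Toggling the variable in the deployment environment then switches the behavior without rebuilding or redelivering code.&lt;/p&gt;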

&lt;h2&gt;Continuous Deployment&lt;/h2&gt;

&lt;p&gt;Continuous deployment (CD, yes, this gets confusing) is an extension of delivery. In this step, you take the artifacts automatically deployed from CI and continuous delivery, and now also automatically deploy them to production. This could be anything from deploying to a website to releasing a multitude of binaries to different package managers and build systems. If you use CI + CD + CD, then this means that one commit that passes tests is deployed to production automatically, potentially within seconds.&lt;/p&gt;

&lt;p&gt;This step is the hardest to implement, as it requires a large amount of trust in your processes up until this point, and confidence that you can roll back deployments if problems arise. To put you at ease: of the three practices mentioned in this article, it's the one the fewest teams adopt, at least until they are very sure about their processes.&lt;/p&gt;

&lt;p&gt;The most significant benefit of CD is that it encourages constant, smaller, simpler releases. While it may sound counter-intuitive, releasing more often is often more reliable: you reduce the risk of large, problematic releases that are hard to roll back. It also makes it easier to A/B test smaller features with particular users and get feedback on individual discrete features.&lt;/p&gt;

&lt;h3&gt;Continuous Deployment Implementation Steps&lt;/h3&gt;

&lt;p&gt;Continuous deployment systems again overlap with CI and continuous delivery systems; alternatively, your deployment process may be custom, or the default tools for your programming language may make more sense. Deployment tools are also not new, and in many cases you can drive those pre-existing tools in a continuous way instead.&lt;/p&gt;

&lt;p&gt;You also need to factor in everything else a production deployment affects that needs updating - documentation, marketing copy, or support resources, for example. You could fold some of these into your continuous processes, depending on how you create those resources.&lt;/p&gt;

&lt;h2&gt;Continually Improve&lt;/h2&gt;

&lt;p&gt;There are a lot of decisions and preparations to make before becoming a fully continuous delivery team. Still, thankfully it's a process you can iteratively add to as you gain time and experience. Our advice is to start with ensuring you have a comprehensive test suite, add that to a CI system, and add steps from there.&lt;/p&gt;

&lt;p&gt;Humanitec helps you on your continuous development path. All you need is the first stage: an existing CI pipeline. We take that pipeline and deploy your build artifacts to any of the development environments you spin up on demand. You can host these environments with us, on AWS, or on GCP. With a few clicks, your code is live and running on your Kubernetes cluster. &lt;a href="https://humanitec.com/webinars"&gt;Our DevOps experts are happy to support you during a free webinar.&lt;/a&gt;&lt;/p&gt;

</description>
      <category>devops</category>
    </item>
    <item>
      <title>How to manage environment variables?</title>
      <dc:creator>Humanitec - Your Internal Developer Platform</dc:creator>
      <pubDate>Wed, 24 Jun 2020 13:44:27 +0000</pubDate>
      <link>https://dev.to/humanitec_com/how-to-manage-environment-variables-282l</link>
      <guid>https://dev.to/humanitec_com/how-to-manage-environment-variables-282l</guid>
      <description>&lt;p&gt;The last two articles in our series dealt with the &lt;a href="https://humanitec.com/blog/environment-configs-kubernetes"&gt;potential of environment variables &lt;/a&gt;and some &lt;a href="https://humanitec.com/blog/handling-environment-variables-with-kubernetes"&gt;hands-on examples&lt;/a&gt;. In this article, we talk with DevOps Engineer &lt;a href="https://www.linkedin.com/in/antoinerougeot/"&gt;Antione Rougeot&lt;/a&gt; about the challenges of managing environment variables and he shares some best practices from his experience.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Humanitec: Before we talk about environment variables, perhaps you could introduce yourself briefly and tell us something about your background.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Antoine Rougeot: I help enterprises close the gap between developers and products. Fascinated by computers, I worked as a developer and software engineer for 6 years and evolved into a DevOps engineer. I started working as a freelancer last year, helping multiple clients dockerize their applications. Thanks to this, their applications become independent from the servers they are deployed to, and are free to run on any Docker-compliant infrastructure.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why do you typically deal with environment variables?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A key concept I followed while coding applications as a software engineer was to make classes loosely coupled. This means having an application composed of many independent pieces of code, which ensures that your code has a certain level of cleanness and maintainability. The idea is then to instantiate these classes by passing parameter values and ensure they work together in a highly cohesive way. The same concept applies to software infrastructure: we create many independent Docker containers that we connect together using environment variables. This is also known as a microservices architecture.&lt;/p&gt;

&lt;p&gt;To give a concrete example, here is the definition of two modules connected using an environment variable. This definition is made for docker-compose, a tool used on a development machine to start containers and test that they work well together.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# docker-compose.yml
version: '3'

services:
  frontend:
    build: ./frontend
    ports:
      - 80
    depends_on:
      - backend
    environment:
      BACKEND_ENDPOINT: localhost:2000

  backend:
    build: ./backend
    expose:
      - 2000
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As you can see, the code is pretty straightforward: the frontend and backend are built using the Dockerfiles present in their respective paths. Both modules are accessible on the network using the specified ports. The frontend should be started after the backend. The frontend communicates with the backend using the endpoint value &lt;code&gt;BACKEND_ENDPOINT&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;What is great with this setup, is that you don't need to rebuild the frontend module to start pointing to a new backend endpoint value.&lt;/p&gt;

&lt;p&gt;When this setup is deployed to production, the only change needed is updating the &lt;code&gt;BACKEND_ENDPOINT&lt;/code&gt; value from &lt;code&gt;localhost:2000&lt;/code&gt; to the backend's domain name, like &lt;code&gt;https://backend.endpoint.domain.org&lt;/code&gt;. Each module is now independent (loosely coupled) and also well connected using an environment variable (highly cohesive).&lt;/p&gt;
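&lt;p&gt;One way to apply that production value - a sketch using docker-compose's override-file mechanism, with an illustrative file name - is to keep a second compose file that changes only the environment:&lt;/p&gt;

```yaml
# docker-compose.prod.yml (sketch): override only what differs
# in production; everything else comes from docker-compose.yml.
version: '3'

services:
  frontend:
    environment:
      BACKEND_ENDPOINT: https://backend.endpoint.domain.org
```

&lt;p&gt;Started with &lt;code&gt;docker-compose -f docker-compose.yml -f docker-compose.prod.yml up&lt;/code&gt;, the same images run unchanged; only the environment variable differs.&lt;/p&gt;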

&lt;p&gt;&lt;strong&gt;What difficulties can you encounter when setting up environment variables?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;To quote Wikipedia: "DevOps is a set of practices that combines software development (Dev) and information-technology operations (Ops)". Environment variables are part of this "set of practices".&lt;/p&gt;

&lt;p&gt;What is challenging is taking an existing application and transforming it in a way that is compatible with these practices, including setting parameters with environment variables. When you are building an application from scratch, if you don't keep in mind that it will run in a Docker container, adapting it later becomes more complicated, since the change impacts the whole application.&lt;/p&gt;

&lt;p&gt;Once you have successfully made these changes to the application, you hit the next step: making the same setup work on developers' machines and in production.&lt;/p&gt;

&lt;p&gt;Imagine you have the following setup, which is very common.&lt;/p&gt;

&lt;p&gt;The application reads sensitive configuration data, like &lt;code&gt;api_key&lt;/code&gt; from a plain text file.&lt;/p&gt;

&lt;p&gt;This file is not included in source control for security reasons; it's passed around manually. When a new developer arrives, they ask colleagues for this file so they can start coding and testing the application.&lt;/p&gt;

&lt;p&gt;On production, the file is copy-pasted to the remote server and it stays there. The problem here is that by doing that, the production application is tightly coupled to the server it's running on. To improve things, you choose to move to a Docker-based setup. Good choice. :)&lt;/p&gt;

&lt;p&gt;After refactoring, your application doesn't read the value of "api_key" from the file anymore, but from the &lt;code&gt;API_KEY&lt;/code&gt; environment variable. At this point, you can deploy it to a Docker-compliant infrastructure. The value of &lt;code&gt;API_KEY&lt;/code&gt; is securely set in the platform you are using to spin up containers, and if you add a new stage it's present by default, which eliminates the need for copy-pasting something on a remote server and makes the deployment fully automated!&lt;/p&gt;

&lt;p&gt;The final step is working out how to set &lt;code&gt;API_KEY&lt;/code&gt; on developers' machines. There are multiple solutions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  you ask each developer to set the value in their environment before launching the application&lt;/li&gt;
&lt;li&gt;  you add some logic at the application's initialization to use the API key environment variable value if it exists, otherwise, fall back to the plain configuration file&lt;/li&gt;
&lt;/ul&gt;
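&lt;p&gt;The second option can be sketched in a few lines of shell (the file name and key format are illustrative):&lt;/p&gt;

```shell
# Demo setup: a plain config file holding the secret (illustrative)
printf 'api_key=file-secret\n' > config.txt

# Prefer the environment variable; fall back to the config file
# only when API_KEY is not set.
if [ -z "${API_KEY}" ]; then
  API_KEY=$(grep '^api_key=' config.txt | cut -d= -f2)
fi
echo "using API key: ${API_KEY}"
```

&lt;p&gt;Option 1 is simply running &lt;code&gt;export API_KEY=...&lt;/code&gt; before launching the application; the fallback logic above then never triggers.&lt;/p&gt;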

&lt;p&gt;Great! Everything is now working both in production and on developers' machines. All environments can run the same container, but with different parameters.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Can you give a real-world example of a difficulty you encountered?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;As you know, things don't always go as well as expected. The biggest challenge is realizing that an external dependency was not designed to work with environment variables. This happened to me recently while dockerizing a Ruby on Rails application that ran in production on an engine called Passenger. Passenger works well in the non-Docker world, where you define configuration in plain text files, but it turns out it cannot read environment variables by default.&lt;/p&gt;

&lt;p&gt;After investigating, I understood that the source of the problem was that Passenger runs as a sub-process of Nginx, &lt;a href="https://nginx.org/en/docs/ngx_core_module.html#env"&gt;and as stated in the documentation&lt;/a&gt;:&lt;/p&gt;

&lt;p&gt;"By default, Nginx removes all environment variables inherited from its parent process".&lt;/p&gt;

&lt;p&gt;Of course, I'm not the first person to try to dockerize a Rails application running Passenger, so after further investigation I found that it provides a directive, &lt;a href="https://www.phusionpassenger.com/library/config/nginx/reference/#passenger_app_env"&gt;passenger_app_env&lt;/a&gt;, that lets you hard-code environment variable values. The directive's value cannot be set dynamically, so I ended up with a hacky workaround: I turned the config files into templates and substituted the values with the &lt;code&gt;envsubst&lt;/code&gt; tool.&lt;/p&gt;
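&lt;p&gt;For illustration, the idea behind that &lt;code&gt;envsubst&lt;/code&gt; workaround can be sketched in Python with &lt;code&gt;string.Template&lt;/code&gt; (the directive line in the example is only a placeholder, not the actual config):&lt;/p&gt;

```python
import os
from string import Template

def render_config(template_text, values=None):
    # Replace ${VAR} placeholders with environment-variable values,
    # mimicking what the envsubst shell tool does: turn a config
    # template into a plain file the engine can read at startup.
    return Template(template_text).substitute(values or os.environ)
```

&lt;p&gt;Rendering a template line like &lt;code&gt;passenger_app_env ${RAILS_ENV};&lt;/code&gt; with &lt;code&gt;RAILS_ENV=production&lt;/code&gt; produces the hard-coded line the engine expects.&lt;/p&gt;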

&lt;p&gt;It was clearly time to reconsider the choice of using Passenger + Nginx to run the application.&lt;/p&gt;

&lt;p&gt;The possible solutions that I identified were the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  try the Apache + Passenger alternative,&lt;/li&gt;
&lt;li&gt;  try another engine like Unicorn, or&lt;/li&gt;
&lt;li&gt;  use the standard Puma server already used on developers' machines.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The third solution made sense because it let us drop a layer of complexity and move toward another DevOps practice: keeping development environments the same as production. What seems like a quick change can sometimes turn out to be a complex task that forces you to reconsider components of the application's architecture.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;At which point do teams most often struggle when it comes to environment variables?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In my early days as a developer, I always said: "I'm allergic to configuration".&lt;/p&gt;

&lt;p&gt;Software often has a large config folder, or even multiple places where configuration is defined. Configuration is, in general, a dark place. Developers don't need to change it often, so nobody knows exactly what is inside, apart from the architect who has already left the project.&lt;/p&gt;

&lt;p&gt;When you want to dockerize an application, you have to dig in, identify, and extract all the values that are environment-specific. These changes often come with a fear of "breaking everything". Indeed, changing configuration is not like coding on the backend or frontend: in some situations, it's difficult to validate that what you just changed is correct.&lt;/p&gt;

&lt;p&gt;Tasks such as refactoring configuration also tend to have low priority. The work is invisible on the product side, and teams prioritize new functionality that brings noticeable results. But not taking care of configuration can lead to a total loss of control over it! Furthermore, it requires a high-level view of the system, which can be difficult to achieve. To sum up, multiple factors create struggles with environment variables: fear of breaking things, loss of control, and the need for a high-level view.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What would be your top 3 tips on how to avoid these struggles?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;First, minimize the use of default values. If you identify a parameter that should be set as an environment variable, think twice before setting a default value.&lt;/p&gt;

&lt;p&gt;This can be dangerous and produce unexpected behavior, or worse: false positives.&lt;/p&gt;

&lt;p&gt;Example: you are using &lt;code&gt;BACKEND_ENDPOINT&lt;/code&gt; to tell your frontend how to communicate with the backend.&lt;/p&gt;

&lt;p&gt;You have two environments: development and production. For development, the value should be &lt;code&gt;https://dev.myapi.org&lt;/code&gt;, and in production &lt;code&gt;https://prod.myapi.org&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;In the initialization of your app you do something like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;if env_variable_is_set("BACKEND_ENDPOINT")
    backend_endpoint = BACKEND_ENDPOINT
else
    backend_endpoint = "https://dev.myapi.org"
end
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you forget to set the value of &lt;code&gt;BACKEND_ENDPOINT&lt;/code&gt; for the container you deployed in production, what could happen?&lt;/p&gt;

&lt;p&gt;You'll end up with the production frontend communicating with the development backend, and you may not notice it at all!&lt;/p&gt;

&lt;p&gt;It would be better for the app to fail fast with an error: &lt;code&gt;Error: BACKEND_ENDPOINT is not defined&lt;/code&gt;.&lt;/p&gt;
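&lt;p&gt;A fail-fast version of that initialization logic, sketched in Python (the function name is illustrative):&lt;/p&gt;

```python
import os

def get_backend_endpoint():
    # No silent default: a missing variable stops the app at
    # startup instead of quietly pointing the production frontend
    # at the development backend.
    endpoint = os.environ.get("BACKEND_ENDPOINT")
    if not endpoint:
        raise RuntimeError("Error: BACKEND_ENDPOINT is not defined")
    return endpoint
```

&lt;p&gt;A crash at startup is noisy and caught immediately by deployment checks; a wrong default can go unnoticed for weeks.&lt;/p&gt;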

&lt;p&gt;Second, keep your configuration clean. Delete dead lines of code, and be nice to developers by leaving comments if something is obscure. :)&lt;/p&gt;

&lt;p&gt;Third, maintain an architecture diagram. Use something like &lt;a href="http://asciiflow.com/"&gt;asciiflow.com&lt;/a&gt; to draw a simple diagram of your application's components, and add it to your source control.&lt;/p&gt;

&lt;p&gt;This will help people to understand the dependencies of your application.&lt;/p&gt;

&lt;p&gt;Since I discovered this tool, I have been a fan of it!&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Anyone can edit the diagram, since it requires neither a source file nor specific software&lt;/li&gt;
&lt;li&gt;  Diagrams are quick to draw&lt;/li&gt;
&lt;li&gt;  You can add it to source control and track changes&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;How can Humanitec help from your perspective? What are the main benefits?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;First, dynamic environment variables. I mentioned above that, for the example app, the only change needed in production is switching the &lt;code&gt;BACKEND_ENDPOINT&lt;/code&gt; value from &lt;code&gt;localhost:2000&lt;/code&gt; to its domain name, like &lt;code&gt;https://backend.endpoint.domain.org&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;In the past, I always had to worry about the value of a variable like &lt;code&gt;BACKEND_ENDPOINT&lt;/code&gt;: if the domain name changed, I had to propagate that change to the deployment configuration. Since the frontend and backend are always deployed together, it would be great to tell a system: "Deploy the backend; when it's ready to accept connections, put the current endpoint in &lt;code&gt;BACKEND_ENDPOINT&lt;/code&gt;, then start the frontend". Humanitec provides a convenient feature for exactly this.&lt;/p&gt;

&lt;p&gt;Second, managed services: You can create a database directly in Humanitec and connect your application to it dynamically with the feature described above.&lt;/p&gt;

&lt;p&gt;Third, a great UI that makes it easy to roll back to a previous deployment. I spoke above about the fear of breaking something. Developers are humans after all. :)&lt;/p&gt;

&lt;p&gt;Even with the cleanest microservices architecture, you can end up with weird bugs. When multiple modules are connected, a colleague may push a problematic change to a module you rely on without you being aware of it.&lt;/p&gt;

&lt;p&gt;With Humanitec, you can identify the deployments of all connected modules in a few clicks, and roll back the entire group of modules to a previous state. This makes for a developer-friendly place to work.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Thank you, Antoine!&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The interview was conducted in writing and we want to thank Antoine for taking the time to answer the questions.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Do you have more questions about environment variables? Humanitec's DevOps experts are happy to answer your questions during a free &lt;a href="https://humanitec.com/webinars"&gt;webinar&lt;/a&gt;!&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>devops</category>
      <category>kubernetes</category>
      <category>docker</category>
      <category>microservices</category>
    </item>
  </channel>
</rss>
