<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Ethan J. Jackson</title>
    <description>The latest articles on DEV Community by Ethan J. Jackson (@ethanjjackson).</description>
    <link>https://dev.to/ethanjjackson</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F314293%2F130d649e-d978-4977-b0c5-448c4c00dd6b.jpg</url>
      <title>DEV Community: Ethan J. Jackson</title>
      <link>https://dev.to/ethanjjackson</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/ethanjjackson"/>
    <language>en</language>
    <item>
      <title>Developers won’t test if it’s too hard</title>
      <dc:creator>Ethan J. Jackson</dc:creator>
      <pubDate>Thu, 10 Sep 2020 16:14:15 +0000</pubDate>
      <link>https://dev.to/ethanjjackson/developers-won-t-test-if-it-s-too-hard-1b7d</link>
      <guid>https://dev.to/ethanjjackson/developers-won-t-test-if-it-s-too-hard-1b7d</guid>
      <description>&lt;p&gt;Projects need a story for local development. With some projects, the development environment is cobbled together and evolves organically. With others, a &lt;a href="https://kelda.io/blog/sres-should-manage-development-environments/"&gt;dedicated team&lt;/a&gt; designs and manages the dev environments.&lt;/p&gt;

&lt;p&gt;I've noticed a common theme in the ones I like. Good development environments make me &lt;strong&gt;confident that my code will work in prod&lt;/strong&gt; while giving me &lt;strong&gt;fast feedback&lt;/strong&gt;. These sorts of environments are fun to code in, since I can easily get into the &lt;a href="http://www.paulgraham.com/makersschedule.html"&gt;flow&lt;/a&gt; — I don't have to stop coding to wait for my code to deploy, or wait until it's merged to do a "proper" test.&lt;/p&gt;

&lt;p&gt;Recently, I've been thinking of &lt;strong&gt;test usefulness&lt;/strong&gt; and &lt;strong&gt;test speed&lt;/strong&gt; as the fundamental tradeoff in development environments. It's easy to build a robust testing environment that's slow to use. Or an environment that gives fast feedback, but forces you to test in a higher environment. The challenge is finding the right &lt;strong&gt;balance&lt;/strong&gt; between the two.&lt;/p&gt;

&lt;p&gt;In this post I'll explore why it's hard to have both, and what some companies have done about it.&lt;/p&gt;

&lt;h2&gt;
  
  
  How deployment pipelines help
&lt;/h2&gt;

&lt;p&gt;Before focusing on development environments, let's take a look at deployment pipelines through the lens of &lt;strong&gt;test usefulness&lt;/strong&gt; and &lt;strong&gt;test speed&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Most companies test code in a series of environments before deploying to production. In a well-designed deployment pipeline, you have high confidence that changes will work in prod once they reach the end of the pipeline.&lt;/p&gt;

&lt;p&gt;In the ideal world, you'd be able to instantly tell, with 100% certainty, whether changes will work in production.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--GMrpdVea--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://kelda.io/img/blog/tradeoffs-ideal.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--GMrpdVea--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://kelda.io/img/blog/tradeoffs-ideal.jpg" alt="Graph of possible dev environments, with test speed on x axis, test usefulness on y axis, and the ideal point in the top right"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;However, such an environment is impossible to build. Instead, we create deployment pipelines where &lt;em&gt;development&lt;/em&gt; is quick to test in, but less similar to production. Once things are working in development, developers deploy them to &lt;em&gt;staging&lt;/em&gt;, which is as similar to production as possible, for final testing.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Note: Some companies have different names for these environments, or more environments, but the concept generally applies.&lt;/em&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Staging environments: high confidence, slow feedback
&lt;/h3&gt;

&lt;p&gt;The most common reason bugs slip through to production is that &lt;a href="https://12factor.net/dev-prod-parity"&gt;they're tested in environments that aren't similar to production&lt;/a&gt;. If you don't test in an environment that's similar to production, then you can't really know how your code will behave once it gets deployed.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Staging environments&lt;/em&gt; are the last place changes are tested before going live in production. They mimic production as closely as possible so that you can be confident that a change will work in production if it works in staging. The similarities between staging and prod should go deeper than just what code is running — VM configuration, load balancing, test data, etc. should be similar.&lt;/p&gt;

&lt;p&gt;Staging environments live in the upper left of our "test usefulness vs speed" spectrum. They give you high confidence that your code will work in prod, but they're too difficult to do active development in.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--AvqlWb9f--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://kelda.io/img/blog/tradeoffs-staging.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--AvqlWb9f--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://kelda.io/img/blog/tradeoffs-staging.jpg" alt="Same graph of possible dev environments as previous, with staging point added in the top left"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;However, they hold a nugget of wisdom: the key to getting useful test results in development environments is to make them similar to production. For the rest of this post, I'll focus on &lt;strong&gt;dev-prod parity&lt;/strong&gt; as a proxy for &lt;strong&gt;test usefulness&lt;/strong&gt; since the former is easier to evaluate.&lt;/p&gt;

&lt;p&gt;Let's dig a bit deeper into staging environments to see what we do (and don't) want to replicate in development environments.&lt;/p&gt;

&lt;h4&gt;
  
  
  How is staging similar to production?
&lt;/h4&gt;

&lt;p&gt;Here are some common ways that staging environments match production.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;They run the same &lt;strong&gt;deployment artifacts&lt;/strong&gt; (e.g. Docker images) for services.&lt;/li&gt;
&lt;li&gt;They run with the same constellation of &lt;strong&gt;service versions&lt;/strong&gt;. If you test with a dependency at v2.0 in staging, it better not be v1.0 in production.&lt;/li&gt;
&lt;li&gt;They run on the same type of &lt;strong&gt;infrastructure&lt;/strong&gt; (e.g. on a Kubernetes cluster running in AWS, where the worker VMs have the same sysctls).&lt;/li&gt;
&lt;li&gt;They have realistic &lt;strong&gt;data&lt;/strong&gt; in databases.&lt;/li&gt;
&lt;li&gt;They're tested with realistic &lt;strong&gt;load&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;They run services at &lt;strong&gt;scale&lt;/strong&gt; (e.g. with multiple replicas behind load balancers).&lt;/li&gt;
&lt;li&gt;If the application depends on &lt;strong&gt;third-party services&lt;/strong&gt; (like Amazon S3, Amazon Lambda, Stripe, or Twilio), they make calls to real instances of these dependencies rather than mocked versions.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The relative importance of these factors varies depending on the application and its architecture. But it's useful to keep in mind the factors that you deem important, because you may want your development environment to mimic production in the same way.&lt;/p&gt;

&lt;h4&gt;
  
  
  Why not just use staging for development?
&lt;/h4&gt;

&lt;p&gt;Developing directly in the staging environment defeats its purpose as a final checkpoint before production, since staging would be dirtied by in-progress code that isn't ready to be released.&lt;/p&gt;

&lt;p&gt;But putting that aside, developing via a staging environment is &lt;strong&gt;slow&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The environment is shared by all developers, so testing is blocked for &lt;strong&gt;all developers&lt;/strong&gt; if any broken code is deployed.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Deploying is slow&lt;/strong&gt; because it requires going through the full build process, even if you're just making a small change.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Debugging is difficult&lt;/strong&gt; since the code is running on infrastructure that developers aren't familiar with.&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;a href="https://kelda.io/whitepaper"&gt;Whitepaper: How Cloud Native kills developer productivity&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Development environments: a sea of tradeoffs
&lt;/h2&gt;

&lt;p&gt;Development environments don't need to be perfect replicas of production to be useful. Parity with production follows the Pareto principle: &lt;strong&gt;20% of differences account for 80% of the errors&lt;/strong&gt;. Plus, deployment pipelines provide a "safety net", since even if a bug slips through development, it'll get caught in staging.&lt;/p&gt;

&lt;p&gt;This lets us cut some of the features of staging that decrease productivity during development. &lt;strong&gt;But what should we cut?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--f-x8j5r6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://kelda.io/img/blog/tradeoffs-goal.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--f-x8j5r6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://kelda.io/img/blog/tradeoffs-goal.jpg" alt="Same graph as previous, with goal area shaded around the ideal point"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The sweet spot for development environments is the shaded area around "ideal".&lt;br&gt;
We want our development environments to be much faster to test in than staging,&lt;br&gt;
and we're willing to sacrifice a bit of "test usefulness" to get that.&lt;/p&gt;

&lt;p&gt;Here are some common compromises teams make, allowing them to operate in the ideal area.&lt;/p&gt;

&lt;h3&gt;
  
  
  Problem: Slow preview time
&lt;/h3&gt;

&lt;p&gt;Nothing breaks your flow like having to wait 10 minutes to see if your change&lt;br&gt;
worked. By the time you're able to poke around and see that your change didn't work,&lt;br&gt;
you've already forgotten what you were going to try next.&lt;/p&gt;

&lt;h4&gt;
  
  
  Solution: Hot reload code changes
&lt;/h4&gt;

&lt;p&gt;Docker containers are great since they let you deploy the &lt;em&gt;exact&lt;/em&gt; same image that you tested with into production. However, they're slow to build since they don't handle incremental changes very well. Doing a full image build to test code changes wastes a lot of time.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.docker.com/storage/bind-mounts/"&gt;Docker volumes&lt;/a&gt; let you sync files into containers without restarting them. This, combined with hot reloading code, can get preview times for code changes down to seconds.&lt;/p&gt;

&lt;p&gt;The downside is that this workflow doesn't let you test other changes to your service. For example, if you change your &lt;code&gt;package.json&lt;/code&gt;, your image won't get rebuilt to install the new dependencies.&lt;/p&gt;
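&lt;p&gt;A sketch of this setup for a hypothetical Node.js service (the service name, paths, and &lt;code&gt;nodemon&lt;/code&gt; command are illustrative assumptions, not a prescribed layout):&lt;/p&gt;

```yaml
# docker-compose.override.yml -- development-only settings (hypothetical names)
services:
  api:
    build: .
    volumes:
      - ./src:/app/src                  # bind mount: local edits appear instantly
    command: npx nodemon src/index.js   # hot-reload the process on file changes
```

&lt;p&gt;As noted, a change to &lt;code&gt;package.json&lt;/code&gt; would still require rebuilding the image.&lt;/p&gt;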

&lt;h4&gt;
  
  
  Solution: Have a separate development environment per developer
&lt;/h4&gt;

&lt;p&gt;It's tempting to share resources in development so that there's less to maintain, and less drift between developers. But the potential for developers to step on each other's toes and block each other outweighs the conveniences, in my opinion.&lt;/p&gt;

&lt;p&gt;The most common thing that drifts when using isolated environments is &lt;em&gt;service versions&lt;/em&gt;. If your development environment boots dependencies via a &lt;a href="https://medium.com/@mccode/the-misunderstood-docker-tag-latest-af3babfd6375"&gt;floating tag&lt;/a&gt;, images can get stale without developers realizing it. One solution is to use shared versions of services that don't change often (e.g. a login service).&lt;/p&gt;
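&lt;p&gt;For illustration, the drift comes from booting dependencies with a floating tag instead of a pinned one (image names here are hypothetical):&lt;/p&gt;

```yaml
services:
  login:
    image: mycorp/login:latest    # floating tag: can silently go stale between developers
  billing:
    image: mycorp/billing:v2.3.1  # pinned version: every developer boots the same image
```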

&lt;h3&gt;
  
  
  Problem: Cumbersome debugging
&lt;/h3&gt;

&lt;p&gt;Previewing code changes is only one part of the core development loop. If the&lt;br&gt;
changes don't work, you debug by getting logs, starting a debugger,&lt;br&gt;
and generally poking around. Too many layers of abstraction between the&lt;br&gt;
developer and their code make this difficult.&lt;/p&gt;

&lt;h4&gt;
  
  
  Solution: Use simpler tools to run services
&lt;/h4&gt;

&lt;p&gt;Even if you use Kubernetes in production, you don't have to use Kubernetes in development. &lt;a href="https://docs.docker.com/compose/"&gt;Docker Compose&lt;/a&gt; is a common alternative that's more developer-friendly since it just starts the containers on the local Docker daemon. Developers boot their dependencies with &lt;code&gt;docker-compose up&lt;/code&gt; and get debugging information through commands like &lt;code&gt;docker logs&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;However, this may not work for applications that make assumptions about the infrastructure setup. For example, applications that rely on &lt;a href="https://kubernetes.io/docs/concepts/extend-kubernetes/operator/"&gt;Kubernetes operators&lt;/a&gt; or a &lt;a href="https://www.redhat.com/en/topics/microservices/what-is-a-service-mesh"&gt;service mesh&lt;/a&gt; may require those services to run in development as well.&lt;/p&gt;

&lt;h4&gt;
  
  
  Solution: Run code directly in IDE
&lt;/h4&gt;

&lt;p&gt;In traditional monolithic development, many developers run their code directly&lt;br&gt;
from their integrated development environment (IDE). This is nice because IDEs have integrations with tools such as&lt;br&gt;
step-by-step debuggers and version control.&lt;/p&gt;

&lt;p&gt;Even if you're working with containers, you can run your &lt;em&gt;dependencies&lt;/em&gt;&lt;br&gt;
in containers, and run just the code you're working on via an IDE. You can&lt;br&gt;
then point your service at your dependencies by tweaking environment variables.&lt;br&gt;
With Docker Desktop, containers can even make requests back to the host&lt;br&gt;
via &lt;code&gt;host.docker.internal&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;The downside of this approach is that your service is running in a&lt;br&gt;
substantially different environment, the networking is complicated, and&lt;br&gt;
versions of dependencies like shared libraries tend to drift.&lt;/p&gt;
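&lt;p&gt;A minimal sketch of this wiring, with hypothetical names (&lt;code&gt;DB_HOST&lt;/code&gt;, &lt;code&gt;PAYMENTS_URL&lt;/code&gt;); the dependency containers would be started separately, e.g. with Docker Compose:&lt;/p&gt;

```shell
# Dependencies run in containers that publish ports to localhost; the service
# under development runs natively, under the IDE's debugger.
export DB_HOST=localhost:5432             # db container publishes 5432 to the host
export PAYMENTS_URL=http://localhost:8081
# A containerized dependency calling back into the natively-run service would
# use host.docker.internal in place of localhost (Docker Desktop).
```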

&lt;h3&gt;
  
  
  Implementation challenges
&lt;/h3&gt;

&lt;p&gt;Sometimes, you're forced to make compromises because it's just too hard to build the&lt;br&gt;
perfect development environment. Unfortunately, most companies need to invest&lt;br&gt;
in building custom tooling to solve the following problems.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Working with non-containerized dependencies, like serverless:&lt;/strong&gt; Some teams just
point at a shared version of serverless functions, which quickly gets complicated
if they write to a database.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Too many services to run them all during development:&lt;/strong&gt; Applications get so complex that the hardware on laptops isn't sufficient. Some companies run just a subset of services or &lt;a href="https://kelda.io/blog/eventbrite-interview-part-2/"&gt;move their development environment to the cloud&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Development data isn't realistic:&lt;/strong&gt; Because production data contains sensitive customer information, many development environments just use a small set of mock data for testing. Some teams set up automated jobs that back up and sanitize production data. Others point their development environments at databases in staging, which tend to be more similar to production.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Development environments are usually an afterthought compared to staging and&lt;br&gt;
production. They evolve haphazardly based on band-aid fixes. But developers&lt;br&gt;
spend the bulk of their time in development, so I think they should be&lt;br&gt;
designed consciously by making tradeoffs between &lt;strong&gt;test usefulness&lt;/strong&gt;&lt;br&gt;
and &lt;strong&gt;test speed&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Unnecessary differences between development and production cause bugs to slip through&lt;br&gt;
to staging and production. Therefore, differences between development and production should be &lt;strong&gt;intentional&lt;/strong&gt;, and designed to speed up development.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;What does your ideal development environment look like? What tradeoffs does it make?&lt;/strong&gt;&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Resources
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://kelda.io/get-started/"&gt;Try Blimp&lt;/a&gt; for booting cloud dev environments that hot reload your changes instantly.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://kelda.io/whitepaper/"&gt;Whitepaper: How Cloud Native kills developer productivity&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://kelda.io/blog/eventbrite-interview-part-2/"&gt;Why Eventbrite runs a 700 node Kube cluster just for development&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://kelda.io/blog/sres-should-manage-development-environments/"&gt;Why SREs should be responsible for development environments&lt;/a&gt;&lt;/p&gt;

</description>
      <category>testing</category>
      <category>webdev</category>
      <category>architecture</category>
    </item>
    <item>
      <title>Why Eventbrite runs a 700 node Kube cluster just for development</title>
      <dc:creator>Ethan J. Jackson</dc:creator>
      <pubDate>Thu, 27 Aug 2020 20:48:19 +0000</pubDate>
      <link>https://dev.to/ethanjjackson/why-eventbrite-runs-a-700-node-kube-cluster-just-for-development-2l9p</link>
      <guid>https://dev.to/ethanjjackson/why-eventbrite-runs-a-700-node-kube-cluster-just-for-development-2l9p</guid>
      <description>&lt;p&gt;In &lt;a href="https://kelda.io/blog/eventbrite-interview/"&gt;Part 1 of this interview&lt;/a&gt; with Remy DeWolf, a principal engineer on the DevTools team at Eventbrite, we discussed what information factored into Eventbrite's decision to move their development environment to the cloud.&lt;/p&gt;

&lt;p&gt;The DevTools team at Eventbrite set out to build &lt;code&gt;yak&lt;/code&gt; because they had too many services to run locally. They knew they wanted to &lt;strong&gt;move their development environment to the cloud&lt;/strong&gt;, but it turned out that &lt;code&gt;yak&lt;/code&gt; had additional benefits, ranging from &lt;strong&gt;sharing environments&lt;/strong&gt; to &lt;strong&gt;easing the transition to remote work during COVID&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;In this post, we dig into &lt;strong&gt;how &lt;code&gt;yak&lt;/code&gt; works&lt;/strong&gt;, &lt;strong&gt;what it's like for devs to use it&lt;/strong&gt;, and &lt;strong&gt;how it's been received&lt;/strong&gt;.&lt;/p&gt;

&lt;h1&gt;
  
  
  Q&amp;amp;A
&lt;/h1&gt;

&lt;h2&gt;
  
  
  How is the Eventbrite application architected?
&lt;/h2&gt;

&lt;p&gt;This is a common story that you will find in a lot of startups. The founding engineers built a monolith and the strategy was to build features fast and capture the market. It was a very successful approach.&lt;/p&gt;

&lt;p&gt;As the company grew over time, having a large team working on the monolith became challenging. When reaching a certain size, it was also harder to keep scaling vertically.&lt;/p&gt;

&lt;p&gt;Over time, some of the monolith was migrated over to microservices. Now, new services are generally containerized, and the monolith is containerized in dev but not in production.&lt;/p&gt;

&lt;h2&gt;
  
  
  What prompted you to rehaul your dev envs and what problems did you set out to solve?
&lt;/h2&gt;

&lt;p&gt;See &lt;a href="https://kelda.io/blog/eventbrite-interview/#how-did-you-decide-it-was-time-to-build-yak"&gt;How did you decide it was time to build yak?&lt;/a&gt; from Part 1 of the interview.&lt;/p&gt;

&lt;h2&gt;
  
  
  How did you convince your company?
&lt;/h2&gt;

&lt;p&gt;In the beginning, we partnered with developers to help us focus on the most important features and to keep them excited about the technology. Specifically, there was great interest in learning more about Kubernetes. We also added &lt;a href="https://kelda.io/blog/eventbrite-interview/#how-did-you-collect-feedback-at-eventbrite"&gt;instrumentation&lt;/a&gt; to our developer tools so we could measure how much time developers were wasting.&lt;/p&gt;

&lt;p&gt;Once we understood how much time the developers spent waiting or dealing with issues, &lt;strong&gt;we had to make a call between spending money on cloud computing or wasting engineer time&lt;/strong&gt;. We presented our plan to the CTO and we got the green light to move forward.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://kelda.io/whitepaper/"&gt;How Cloud Native kills developer productivity&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Making the switch to microservices but think it’s too good to be true? Or have you already made the switch, only to notice that local development is harder than it used to be? You’re not alone. &lt;a href="https://kelda.io/whitepaper/"&gt;Read more of this whitepaper&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  What’s the developer workflow like with yak?
&lt;/h2&gt;

&lt;p&gt;Every morning, a developer has two options:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Reconnect to their previous session: this takes a few seconds and they can resume their work from where they left it the previous day.&lt;/li&gt;
&lt;li&gt;Update their local branch and their remote Docker images: this takes 5-7 minutes to get the environment updated.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;From this point, all their containers are running remotely and they can work through the day. Here are some of the common operations:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Change code locally:&lt;/strong&gt; Changed files are automatically synced over to their remote containers. It usually takes a few seconds for the changes to be available. We use &lt;code&gt;rsync&lt;/code&gt; for this, which is very efficient. To keep it simple, we do a one-way sync (from laptop to remote container).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Note: this flow is much faster than the standard flow of building/pushing/deploying images, which in practice is hard to get under a minute for large applications.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Debug code:&lt;/strong&gt; Developers can add breakpoints in their code and attach to a running container to get a live debugging session. We provided a command that wrapped &lt;code&gt;kubectl attach&lt;/code&gt; under the hood.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Run tests:&lt;/strong&gt; Developers can run unit tests locally, but any tests that require dependencies (such as a DB or Redis) can be run remotely in a pod. For integration tests, they can run tests in a specific pod and connect to the other services directly.&lt;/li&gt;
&lt;/ul&gt;
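&lt;p&gt;A wrapper like the attach command described above might look roughly like this; the function name and label selector are hypothetical, since yak's actual command is internal to Eventbrite:&lt;/p&gt;

```shell
# Attach an interactive session to the first pod of a service in the current
# (per-developer) namespace, so breakpoints set in the synced code can be hit.
yak-debug() {
  local service="$1"
  local pod
  pod=$(kubectl get pods -l "app=${service}" -o jsonpath='{.items[0].metadata.name}')
  kubectl attach -it "${pod}"
}
```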

&lt;p&gt;Most of the frontend changes are done locally and don’t require the cloud. &lt;strong&gt;The cloud is very useful for backend development and running various tests.&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  How does it work?
&lt;/h2&gt;

&lt;p&gt;Every developer has their own namespace where they manage their remote development environment. Kubernetes does the heavy lifting, and &lt;code&gt;yak&lt;/code&gt; simplifies the management of their containers.&lt;/p&gt;

&lt;h2&gt;
  
  
  How does the DevTool team interact with the development environments?
&lt;/h2&gt;

&lt;p&gt;Usually, we don’t need to interact with individual dev environments; we focus on the big picture, such as managing the clusters and adding more features to the tools.&lt;/p&gt;

&lt;p&gt;We do support the developers, so sometimes we connect directly to a namespace to troubleshoot issues, using standard Kubernetes commands like &lt;code&gt;kubectl logs&lt;/code&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  What’s the ongoing maintenance burden?
&lt;/h2&gt;

&lt;p&gt;In the beginning, it was a lot of work: we had a few issues whose root cause we didn’t understand, and our user documentation was weak.&lt;/p&gt;

&lt;p&gt;Over time, we got this under control. The documentation was revamped and we built a good knowledge base about the most common errors.&lt;/p&gt;

&lt;h2&gt;
  
  
  How has the environment changed from when you first designed it?
&lt;/h2&gt;

&lt;p&gt;Our approach was always to deliver incremental value by first focusing on a minimum viable product (MVP) and adding features over time. For these reasons, we made many changes from the original design.&lt;/p&gt;

&lt;p&gt;Here are a few interesting changes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Our infrastructure was running on one EKS cluster originally. At one point, we had &lt;strong&gt;700 worker nodes, and 14,000 pods running&lt;/strong&gt;. We ran into performance and rate-limiting issues that made us reconsider this single-cluster approach. Over time, we switched to a &lt;strong&gt;multi-cluster architecture&lt;/strong&gt; where each cluster had no more than 200 nodes.&lt;/li&gt;
&lt;li&gt;Syncing the code directly into running containers could sometimes cause the container to crash if the changes made the application fail the probe check. After iterating a few times on how to solve this problem, we decided to set up a &lt;strong&gt;sidecar container&lt;/strong&gt; that is responsible for syncing the code.&lt;/li&gt;
&lt;li&gt;To persist data over time (for example, to save the MySQL database files of a developer) we use StatefulSets backed by EBS volumes. However, AWS has a limitation around EBS volumes -- an application running in a pod on EKS must be on a node in the same availability zone (AZ) as the EBS volume. To solve this problem, we partitioned our EKS nodes &lt;strong&gt;per availability zone&lt;/strong&gt; and used taints to make sure that each StatefulSet would stay in the same AZ.&lt;/li&gt;
&lt;/ul&gt;
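&lt;p&gt;A sketch of the AZ pinning (names and zone are hypothetical; the interview mentions taints, but a node selector on the standard zone label achieves the same placement):&lt;/p&gt;

```yaml
# Keep a developer's MySQL StatefulSet on nodes in a single availability zone
# so its pod can always reach its EBS-backed volume (zone name is hypothetical).
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql
spec:
  serviceName: mysql
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      nodeSelector:
        topology.kubernetes.io/zone: us-west-2a
      containers:
        - name: mysql
          image: mysql:8.0
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]   # EBS attaches to one node at a time
        resources:
          requests:
            storage: 10Gi
```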

&lt;h2&gt;
  
  
  Have there been any unexpected benefits?
&lt;/h2&gt;

&lt;p&gt;Yes,&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Sharing environments&lt;/strong&gt;:&lt;br&gt;
Have you ever heard a developer say &lt;em&gt;"but it worked for me locally when I ran the tests?"&lt;/em&gt; Running in the cloud improves consistency. The ability to share developer environments proved to be very helpful when trying to understand test failures or work on issues that were hard to reproduce.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Working globally&lt;/strong&gt;:&lt;br&gt;
We have a globally distributed team, but most of the test/QA infrastructure is in the US. Simple operations like resolving application dependencies or downloading a Docker image require many network round trips. If network latency is high, these operations are slow.&lt;/p&gt;

&lt;p&gt;By running on the cloud, the developer opens a connection to some container (with port forwarding or by getting a shell) and then they can run their commands from the same AWS region where the rest of the infrastructure is located. For our engineers based outside of the US, being able to develop on the cloud has been a huge improvement.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Transitioning during COVID&lt;/strong&gt;:&lt;br&gt;
When COVID happened, all developers switched to working remotely. For some, this meant sharing a home internet connection with other household members or moving back in with their families. It would have been extremely difficult or impossible for some of them to run the development environment locally: operations such as pulling Docker images or resolving application dependencies would require gigabytes of data daily. By developing in the cloud, the transition to remote work was fairly seamless, and developers were able to continue their work from home.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;


&lt;h1&gt;
  
  
  Kelda and Eventbrite
&lt;/h1&gt;


&lt;p&gt;Kelda has collaborated with Eventbrite for a long time. We first met when we were building the predecessor to &lt;a href="https://kelda.io/blimp"&gt;Blimp&lt;/a&gt;, which moves your Docker Compose development environment into the cloud. Eventbrite had already built &lt;code&gt;yak&lt;/code&gt; internally, and we were trying to make a general solution. We’ve been trading ideas ever since.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://kelda.io/blimp/docs/#/sign-up-try-blimp"&gt;Check out Blimp&lt;/a&gt; to get the benefits of &lt;code&gt;yak&lt;/code&gt; without having to build it yourself!&lt;/p&gt;

&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;p&gt;Part 1 of the interview: &lt;a href="https://kelda.io/blog/eventbrite-interview/"&gt;Why managing dev environments is a full time job at Eventbrite&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Read &lt;a href="https://kelda.io/blimp/docs/#/usage"&gt;Blimp commands and usage&lt;/a&gt; in the Docs&lt;/p&gt;

&lt;p&gt;See if you're making any of these &lt;a href="https://kelda.io/blog/common-docker-compose-mistakes/"&gt;5 common Docker Compose mistakes&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://medium.com/@remy.dewolf"&gt;Remy DeWolf's Medium&lt;/a&gt;&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>docker</category>
      <category>devops</category>
      <category>productivity</category>
    </item>
    <item>
      <title>Why managing dev environments is a full time job at Eventbrite</title>
      <dc:creator>Ethan J. Jackson</dc:creator>
      <pubDate>Thu, 13 Aug 2020 16:31:16 +0000</pubDate>
      <link>https://dev.to/ethanjjackson/q-a-how-eventbrite-prioritizes-developer-productivity-33m0</link>
      <guid>https://dev.to/ethanjjackson/q-a-how-eventbrite-prioritizes-developer-productivity-33m0</guid>
      <description>&lt;p&gt;Deciding when to invest in developer productivity improvements is hard. If you’re on the ops side of things, you’re usually concerned about production and releases. If you’re a developer, you’re concerned about getting new features out as quickly as possible.&lt;/p&gt;

&lt;p&gt;Usually, teams make development productivity improvements in two situations. Either the fix is so small that you can just do it in addition to your other work, or development is so painful that making changes has ground to a halt.&lt;/p&gt;

&lt;p&gt;However, there’s still a large murky middle ground: how do you decide that it’s worth investing in a &lt;strong&gt;large change&lt;/strong&gt; to your development workflow &lt;strong&gt;before development has ground to a halt?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://medium.com/@remy.dewolf"&gt;Remy DeWolf&lt;/a&gt; spent three years making these sorts of decisions as a &lt;strong&gt;principal engineer on the DevTools team at Eventbrite&lt;/strong&gt;. He was part of the decision to build &lt;code&gt;yak&lt;/code&gt;, which moved Eventbrite’s development environment into the cloud. This was a highly calculated decision since it cost a few EC2 instances per engineer and &lt;code&gt;yak&lt;/code&gt; was built from scratch.&lt;/p&gt;

&lt;p&gt;In this first post, we’ll dig into how Remy made this tough decision, and got buy-in from the rest of the company. In our next post, we’ll get into the nitty-gritty on how their remote development environment works, and what it’s been like for developers.&lt;/p&gt;

&lt;p&gt;Read &lt;a href="https://kelda.io/blog/eventbrite-interview-part-2/"&gt;part 2 of this interview&lt;/a&gt; about Eventbrite’s specific setup and 3 unexpected benefits of remote dev environments.&lt;/p&gt;

&lt;h1&gt;Q&amp;amp;A&lt;/h1&gt;

&lt;h2&gt;How is the Eventbrite application architected?&lt;/h2&gt;

&lt;p&gt;This is a common story that you will find in a lot of startups. The founding engineers built a monolith and the strategy was to build features fast and capture the market. It was a very successful approach.&lt;/p&gt;

&lt;p&gt;As the company grew over time, having a large team working on the monolith became challenging. And after a certain size, it was also harder to keep scaling the monolith vertically.&lt;/p&gt;

&lt;p&gt;Over time, some of the monolith was migrated over to microservices. New services are generally containerized, and the monolith is containerized in development but not in production.&lt;/p&gt;

&lt;h2&gt;What’s your development environment setup now?&lt;/h2&gt;

&lt;p&gt;Every engineer runs &lt;strong&gt;~50 containers&lt;/strong&gt;, which correspond to the monolith, the microservices, the data stores (MySQL, Redis, Kafka…) and various tools (logging, monitoring).&lt;/p&gt;

&lt;p&gt;Developers use &lt;code&gt;yak&lt;/code&gt; (which we built internally) to deploy and manage their remote containers.&lt;/p&gt;

&lt;p&gt;We use AWS EKS for the Kubernetes clusters, in which every developer has their own namespace. We have hundreds of developers and many EKS clusters.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;yak&lt;/code&gt; is very similar to &lt;a href="https://kelda.io/blimp"&gt;blimp&lt;/a&gt; since it enables the engineers to manage their remote containers without exposing them to the complexity of Kubernetes.&lt;/p&gt;
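&lt;p&gt;Eventbrite’s actual manifests aren’t shown here, but the per-developer-namespace model described above can be sketched with plain Kubernetes objects. The names and quota numbers below are invented for illustration:&lt;/p&gt;

```yaml
# Hypothetical per-developer namespace with a resource quota, so one
# developer's ~50 containers can't starve the shared EKS cluster.
apiVersion: v1
kind: Namespace
metadata:
  name: dev-remy
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: dev-env-quota
  namespace: dev-remy
spec:
  hard:
    pods: "60"            # headroom for ~50 app containers plus tooling
    requests.cpu: "8"
    requests.memory: 16Gi
```

&lt;p&gt;Applying a file like this once per engineer (e.g. with &lt;code&gt;kubectl apply -f&lt;/code&gt;) gives each developer an isolated, bounded slice of a shared cluster.&lt;/p&gt;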

&lt;h2&gt;How did you decide it was time to build yak?&lt;/h2&gt;

&lt;p&gt;Before &lt;code&gt;yak&lt;/code&gt;, each developer ran their development environment locally on their laptop. However, the development environment became so big that it slowed down developer laptops.&lt;/p&gt;

&lt;p&gt;The tricky part was that you might not realize there was a problem, because the slowdown crept in one service at a time.&lt;/p&gt;

&lt;p&gt;Once we added instrumentation to our tools, we started to understand the scale of the problem. Moving to the cloud is expensive, but when we put that cost side by side with the wasted engineering time, the decision was easy for us.&lt;/p&gt;

&lt;p&gt;Another goal of &lt;code&gt;yak&lt;/code&gt; was to make Kubernetes easy for developers. We kept it as minimal as possible and the configuration files are plain Kubernetes manifest files. The intent was to feed developer curiosity so they learn more about Kubernetes over time.&lt;/p&gt;

&lt;h2&gt;What areas do you recommend tracking regarding developer productivity?&lt;/h2&gt;

&lt;p&gt;Whenever possible, align the developer productivity goals with the business. Every DevTool team should understand how they contribute to the company goals and vice versa. If this is unclear, I would start with that.&lt;/p&gt;

&lt;p&gt;Next, make sure that developer productivity is part of the plan, not an afterthought. For example, some engineering teams move to microservices and only track the number of services and the uptime in production. These are great metrics, but they’re incomplete. They will generate inconsistency and the developer experience will suffer over time.&lt;/p&gt;

&lt;p&gt;In terms of which metrics to pick, there is no general recommendation. It’s important to understand how developers work, understand how frequently they perform critical tasks, and instrument the tools that they use. With this data, you will be able to identify the most important areas to invest and track the progress over time.&lt;/p&gt;

&lt;p&gt;I would also recommend having a metric about mean time to recovery (MTTR). If a developer is completely stuck, how would you bring them back to a clean state so they can resume their work? For this one, if you run the developer environment locally, you will have many different combinations of OS/tools/versions resulting in many different issues. If you are on the cloud and use a generic solution (e.g. Docker + Kubernetes), this problem will be much easier to solve.&lt;/p&gt;

&lt;h2&gt;How did you collect feedback at Eventbrite?&lt;/h2&gt;

&lt;p&gt;We had many channels:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Instrumentation in the tools. Every time a developer would build, run, or deploy Docker images, we would send metrics. Similarly, every CI job would do the same. Then we would generate dashboards from the metrics to track and measure progress over time. If you are using a tool like Sumo Logic or Datadog, it’s very easy to send custom metrics and build dashboards.&lt;/li&gt;
&lt;li&gt;Quarterly engagement surveys.&lt;/li&gt;
&lt;li&gt;Demos: invite other engineers to show them the progress and engage with them.&lt;/li&gt;
&lt;li&gt;New hires: these new employees bring a fresh perspective and they are not afraid to ask questions and challenge the status quo.&lt;/li&gt;
&lt;li&gt;Networking: build relationships with other developers (coffee breaks, office visits, lunches, etc.)&lt;/li&gt;
&lt;/ul&gt;
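&lt;p&gt;The interview doesn’t show Eventbrite’s instrumentation code, but the first channel above is easy to sketch. Assuming a statsd/Datadog agent listening locally (the metric name, tags, and port here are invented for illustration), a build or deploy step can be timed and reported like this:&lt;/p&gt;

```python
import socket
import time
from contextlib import contextmanager


def format_metric(name, value_ms, tags):
    """Render a timing metric in the statsd/DogStatsD text format,
    e.g. 'dev.image_build:5200|ms|#team:devtools'."""
    tag_str = ",".join(f"{k}:{v}" for k, v in sorted(tags.items()))
    return f"{name}:{value_ms}|ms|#{tag_str}"


@contextmanager
def timed(name, tags, host="127.0.0.1", port=8125):
    """Time a block of work and fire the result at a local
    statsd agent over UDP (fire-and-forget, no error handling)."""
    start = time.monotonic()
    try:
        yield
    finally:
        elapsed_ms = int((time.monotonic() - start) * 1000)
        payload = format_metric(name, elapsed_ms, tags)
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.sendto(payload.encode(), (host, port))
        sock.close()
```

&lt;p&gt;Wrapping, say, an image build in &lt;code&gt;with timed("dev.image_build", {"service": "monolith"}):&lt;/code&gt; is enough to start populating the kind of dashboards described above.&lt;/p&gt;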

&lt;h2&gt;Can you give some examples of developer productivity OKRs?&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Time to start the developer environment is under x min&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This time is usually wasted time, so it’s important to track it and improve it. If the dev stack is unreliable or slow, it would be captured in this OKR.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Engagement is over x%&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If you send an engagement survey every quarter, you can have an OKR to make sure the trend is upward. Seeing a drop would mean that the team might not be working on the most relevant projects.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Average time from commit to QA/Prod&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This one will capture the CI/CD pipeline effectiveness. If you experience some flaky tests or deployment errors in the pipeline, it would negatively impact the key results.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Over time, some OKRs will run their course, so refresh them periodically. For example, if your survey always has the same questions, developers will eventually stop responding. Also, if an OKR has been greatly improved, it’s a good time to shift priorities.&lt;/p&gt;

&lt;p&gt;In my experience, it’s better to focus on a few OKRs than to spread across too many; by trying to please everybody, you end up having little impact. Some projects might require the full team’s focus, which can temporarily impact other OKRs. That’s a calculated strategy, as these projects bring huge improvements when delivered.&lt;/p&gt;

&lt;h2&gt;Are there any warning signs people should look out for in order to know their developer productivity is suffering?&lt;/h2&gt;

&lt;p&gt;This is where it’s important to have good metrics and monitor them over time. You should be able to feel the pulse of your developers by looking at different data points. Ideally, you would tie these to your OKRs and review the progress every sprint and make adjustments.&lt;/p&gt;

&lt;p&gt;If you don’t have this data there are still warning signs that productivity is suffering:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Increase in support cases and/or requests for help. If developers need external help to do their work, this is a sign that a process is too hard to use or not well documented.&lt;/li&gt;
&lt;li&gt;On the other hand, I’d be worried if you find out that some processes aren’t working properly but nobody reported them to your team. You want developers always looking for improvements, not accepting a broken process.&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;Kelda and Eventbrite&lt;/h1&gt;

&lt;p&gt;Kelda has collaborated with Eventbrite for a long time. We first met when we were building the predecessor to &lt;a href="https://kelda.io/blimp"&gt;Blimp&lt;/a&gt;, which moves your Docker Compose development environment into the cloud. Eventbrite had already built &lt;code&gt;yak&lt;/code&gt; internally, and we were trying to make a general solution. We’ve been trading ideas ever since.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://kelda.io/blimp"&gt;Check out Blimp&lt;/a&gt; to get the benefits of &lt;code&gt;yak&lt;/code&gt; without having to build it yourself!&lt;/p&gt;

&lt;h2&gt;References&lt;/h2&gt;

&lt;p&gt;Read &lt;a href="https://kelda.io/blog/eventbrite-interview-part-2/"&gt;part 2 of this interview&lt;/a&gt; about Eventbrite’s specific setup and 3 unexpected benefits of remote dev environments.&lt;/p&gt;

&lt;p&gt;See &lt;a href="https://kelda.io/blimp/docs/#/usage"&gt;Blimp commands and usage&lt;/a&gt; in the docs.&lt;/p&gt;

&lt;p&gt;Read &lt;a href="https://kelda.io/blog/common-docker-compose-mistakes/"&gt;5 common Docker Compose mistakes&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://medium.com/@remy.dewolf"&gt;Remy DeWolf's Medium&lt;/a&gt;&lt;/p&gt;

</description>
      <category>devops</category>
      <category>microservices</category>
      <category>architecture</category>
      <category>productivity</category>
    </item>
    <item>
      <title>Why SREs Should be Responsible for Development Environments</title>
      <dc:creator>Ethan J. Jackson</dc:creator>
      <pubDate>Sat, 01 Aug 2020 16:35:28 +0000</pubDate>
      <link>https://dev.to/ethanjjackson/why-devops-should-be-responsible-for-development-environments-4ifp</link>
      <guid>https://dev.to/ethanjjackson/why-devops-should-be-responsible-for-development-environments-4ifp</guid>
      <description>&lt;p&gt;Let's discuss an extremely common anti-pattern I've noticed with teams that are relatively new to containers/cloud-native/kubernetes, etc. More so than when building traditional monoliths, cloud-native applications can be incredibly complex and, as a result, need a relatively sophisticated development environment. Unfortunately, this need often isn't evident at the beginning of the cloud-native journey. Development environments are an afterthought – a cumbersome, heavy, brittle drag on productivity.&lt;/p&gt;

&lt;p&gt;The best teams treat development environments as a priority and devote significant DevOps/SRE time to perfecting them. In doing so, they end up with development environments that "just work" for everyone, not just those who are experienced with containers and Kubernetes. For these teams, every developer has a fast, easy-to-use development environment that works every time.&lt;/p&gt;

&lt;h2&gt;What's a development environment?&lt;/h2&gt;

&lt;p&gt;Before we go further, let's get on the same page about what we mean by a development environment in this context. When working with cloud-native applications, each service depends on numerous containers, serverless functions, and cloud services to operate. For this post, a development environment is a sandbox in which developers can run their code and dependencies for testing. It's not the IDE, compiler, debugger, or any of those other tools.&lt;/p&gt;

&lt;h4&gt;&lt;a href="https://kelda.io/whitepaper/"&gt;Whitepaper: Why Cloud Native kills developer productivity&lt;/a&gt;&lt;/h4&gt;

&lt;h2&gt;Sound Familiar?&lt;/h2&gt;

&lt;p&gt;You're working on a new project or planning to modernize an old one. The team has read all about the whiz-bang nifty new cloud-native technologies, like containers, Kubernetes, etc. So, you decide to take the plunge and build a cloud-native app.&lt;/p&gt;

&lt;p&gt;The team realizes that a core group of DevOps/SREs will be necessary to get everything running in a scalable, reliable, and automated setup. Site reliability engineers are hired/trained and get to work. They set up Kubernetes, CI/CD, monitoring, logging, and all of the other tools we've learned are critical for a modern application.&lt;/p&gt;

&lt;p&gt;Everyone knows that it's the DevOps/SRE team's job to get all of this stuff up and running. However, development environments aren't top of mind. The site reliability team considers it their duty to focus on production and CI/CD – Development is the developer's job. At the same time, the developers think it's their job to deliver application features, not to maintain infrastructure. It's not really anyone's responsibility to focus on developer experience before CI/CD, so it's neglected.&lt;/p&gt;

&lt;p&gt;Unfortunately, an ad hoc approach to development environments tends to emerge. Whenever there's a new service, whatever developer happens to be working on it realizes they need some way to boot their dependencies and test their code. They Google around and figure that Docker Compose is a reasonable way to do this. They copy and paste some example, tweak through trial and error until it's working, and move on. The quality of this initial compose file ranges widely depending on the DevOps knowledge of the engineer who happened to write it. Sometimes it's pretty solid; sometimes, it's brittle and slow.&lt;/p&gt;

&lt;p&gt;Worse, this process repeats. Every time there's a new service, it gets a new git repository, and some new engineer finds themselves writing a compose file. Perhaps this new file is copied from an existing project. Perhaps it's developed from scratch. Either way, now we have two compose files that need to be maintained and updated as the app changes over time. This process repeats and repeats until all services have their own ever so slightly different configuration files that are a nightmare to maintain.&lt;/p&gt;

&lt;p&gt;As a result of this (all too common) process, we see several typical issues:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Development environments are unmanageable. They spread across dozens of repositories in dozens of subtly different copy-and-pasted docker-compose files.  Keeping these up to date in a fast-changing application is impossible.&lt;/li&gt;
&lt;li&gt;Development environments are incomplete.  They only deal with containers because they are the easiest for an individual developer to get up and running with docker-compose.  Everything else developers need to test (serverless functions, databases, specialty cloud services) requires manual effort.&lt;/li&gt;
&lt;li&gt;Developers waste time focusing on things that aren't their specialization. Just as most backend engineers can't CSS their way out of a paper bag, there's no reason for every frontend/AI/data engineer to be an expert on the current DevOps trends.  Developers shouldn't spend time configuring and debugging development environments — they should spend time building features.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;Managed Development Environments&lt;/h2&gt;

&lt;p&gt;So how do we avoid this all-too-common scenario? The good news is that it's not particularly challenging to do so if you're intentional and proactive. The best teams tend to follow a couple of principles to ensure a great experience.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Clear Responsibility&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;There's a team that is explicitly responsible for providing development environments for all developers. That team can be the DevOps/SRE team, or a dedicated developer productivity team. The key is that it's someone's job to focus on this issue. Furthermore, that team will likely have deep DevOps expertise, which produces better outcomes more efficiently.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Central Management&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;The development environment must be managed centrally by the site reliability team responsible for it. A single git repository contains all of the configuration and scripts necessary for a developer to get going. When the site reliability team changes something, they do so once in that central repository, and all developers benefit. Furthermore, typically, the development environments run in a centrally managed cluster in the cloud. As a result, it's easy for the site reliability team to ensure things work consistently for everyone, and debug problems when they do arise.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Full Automation&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;Their development environments are fully automated. A single command brings up everything a developer needs to test their code. Developers need to do hardly any manual setup beyond the code changes they're actively working on.&lt;/p&gt;

&lt;h2&gt;Conclusion&lt;/h2&gt;

&lt;p&gt;Achieving these goals isn't easy. It requires a significant and sustained investment from the site reliability team, and buy-in from developers and management to succeed. However, while the cost can be significant, it's small relative to the time and effort saved by giving every developer a fast environment that just works every time. At &lt;a href="https://kelda.io/"&gt;Kelda&lt;/a&gt;, we're working hard to make this dream attainable for every developer.&lt;/p&gt;

&lt;h2&gt;References&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://kelda.io/blimp/docs/sign-up-try-blimp/"&gt;Try Blimp&lt;/a&gt; to see how you can improve development speed&lt;/p&gt;

&lt;p&gt;&lt;a href="https://kelda.io/blog/how-docker-stores-registry-credentials/"&gt;Read more about Docker internals&lt;/a&gt; -- see how registry credentials are stored.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://kelda.io/blog/docker-volumes-for-development/"&gt;Tutorial: How to Use Docker Volumes to Code Faster&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;By: Ethan Jackson&lt;/p&gt;

&lt;p&gt;&lt;a href="https://twitter.com/home"&gt;Follow us on Twitter&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Originally: &lt;a href="https://kelda.io/blog/devops-should-manage-development-environments/"&gt;https://kelda.io/blog/devops-should-manage-development-environments/&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>devops</category>
      <category>architecture</category>
      <category>productivity</category>
      <category>sre</category>
    </item>
    <item>
      <title>How We Cut Our Docker Push Time by 90%</title>
      <dc:creator>Ethan J. Jackson</dc:creator>
      <pubDate>Thu, 23 Jul 2020 14:52:18 +0000</pubDate>
      <link>https://dev.to/ethanjjackson/how-we-cut-our-docker-push-time-by-90-4o69</link>
      <guid>https://dev.to/ethanjjackson/how-we-cut-our-docker-push-time-by-90-4o69</guid>
      <description>&lt;p&gt;At Kelda we're building Blimp, a version of Docker Compose that runs in the cloud. Our goal is to improve the development productivity by providing developers with an alternative to bogging down their local systems with loads of resource-hungry Docker containers.&lt;/p&gt;

&lt;p&gt;We've put a lot of engineering effort into supporting all of the Docker Compose fields commonly used during local development, such as &lt;code&gt;volumes&lt;/code&gt;, &lt;code&gt;ports&lt;/code&gt;, and &lt;code&gt;build&lt;/code&gt;. In this post I'll talk a bit about what we've gleaned from the experience as it relates to Docker Compose's &lt;code&gt;build&lt;/code&gt; functionality.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;services:
    service:
        build: .
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When a service has a &lt;code&gt;build&lt;/code&gt; field, Blimp builds your images locally, and pushes them to the cloud so that they can be pulled by the development environment. This push can be frustratingly slow, especially on home networks. Waiting 30 minutes for the image to upload before being able to start developing was just unacceptable to us.&lt;/p&gt;

&lt;p&gt;To be fair, Docker already has some image optimizations built in, but it didn't do exactly what we wanted out of the box. So, we set out to optimize the push process. To achieve this, we had to dive deep into Docker's image push API.&lt;/p&gt;

&lt;p&gt;In this post, I'll cover:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What exactly happens when you do a &lt;code&gt;docker push&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;How we used this to build our pre-push feature and decrease image push times by 90%.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;Images are Layers&lt;/h2&gt;

&lt;p&gt;Before diving into the image push API, you first need to understand what a Docker image actually is.&lt;/p&gt;

&lt;p&gt;It's common for developers to think of Docker images like operating system images or ISOs -- a static snapshot of a filesystem that represents the container. Really though, Docker images are quite a bit more sophisticated than that.&lt;/p&gt;

&lt;p&gt;A Docker image is made up of &lt;em&gt;layers&lt;/em&gt; of filesystems. Put simply, each line in a Dockerfile can be thought of as a layer, and the sum of all the layers the Dockerfile defines is the resulting image.&lt;/p&gt;

&lt;p&gt;For example, in the following, &lt;code&gt;FROM python&lt;/code&gt; is telling Docker to lay the foundation of our image with the existing Python layers. Likewise, &lt;code&gt;COPY . .&lt;/code&gt; creates a new layer which contains all the files in &lt;code&gt;.&lt;/code&gt; (i.e. the current working directory, which is referred to as the &lt;em&gt;build context&lt;/em&gt;), and overlays them on top of any existing layers.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM python
COPY . .
CMD python app.py
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The Python base image is &lt;strong&gt;934MB&lt;/strong&gt;. Assuming that the user is copying in 2MB of files, the base image would make up &lt;strong&gt;99%&lt;/strong&gt; of the resulting image!&lt;/p&gt;

&lt;p&gt;This provided us with a really interesting opportunity to optimize. Why should we waste a user's precious bandwidth pushing this entire image, when often the vast majority of it is already available from public sources?&lt;/p&gt;

&lt;p&gt;Our solution is to have users only push the bits of the image that are unique to their build, and then automatically fetch the rest directly from the base image's registry (e.g. DockerHub), which has plenty of bandwidth.&lt;/p&gt;

&lt;p&gt;Bringing it back to the Python example above, we want to make it so that the &lt;code&gt;python&lt;/code&gt; layers aren't uploaded over the user's network. Instead, our servers will "pre-push" the layers from our high-bandwidth servers. Then, the user's &lt;code&gt;docker push&lt;/code&gt; just needs to push the layer for &lt;code&gt;COPY . .&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;The good news for us was that out of the box, Docker only pushes the layers that don't already exist in the registry. Each layer has a &lt;code&gt;digest&lt;/code&gt;, which represents the contents of the layer. These digest IDs are used before pushing to figure out if the registry already has that layer -- if it does, then the client doesn't bother pushing the layer's contents.&lt;/p&gt;
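&lt;p&gt;There's nothing magic about these digests: a layer's digest is just a content hash of its blob, so identical layers always produce identical IDs no matter who built them. A minimal sketch of the computation:&lt;/p&gt;

```python
import hashlib


def layer_digest(blob: bytes) -> str:
    """Registry-style content digest: the sha256 of the layer blob,
    prefixed with the algorithm name."""
    return "sha256:" + hashlib.sha256(blob).hexdigest()
```

&lt;p&gt;Because the digest depends only on the bytes, the registry can safely answer "I already have this layer" for any client that presents a matching ID.&lt;/p&gt;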

&lt;p&gt;But we still had to design a way to prepopulate the base image layers in the registry so that the Docker Push API would reuse them.&lt;/p&gt;

&lt;h2&gt;The Docker Push API&lt;/h2&gt;

&lt;p&gt;Docker pushes images in two parts: first, it uploads the &lt;em&gt;layers&lt;/em&gt; described above. Then, once all the layers are uploaded, it uploads the &lt;a href="https://docs.docker.com/registry/spec/api/#manifest" rel="noopener noreferrer"&gt;signed manifest&lt;/a&gt;, which references the layers and has some additional metadata.&lt;/p&gt;

&lt;h3&gt;Simple Layer Caching&lt;/h3&gt;

&lt;p&gt;Each layer upload starts off with a &lt;code&gt;HEAD&lt;/code&gt; request that checks whether the layer already exists in the registry.&lt;/p&gt;

&lt;p&gt;If the layer already exists in the registry, then the registry responds with a &lt;code&gt;200 OK&lt;/code&gt; response, and the Docker client doesn't bother pushing it again. In these situations, &lt;code&gt;docker push&lt;/code&gt; shows the following output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;6b73f8ddd865: Layer already exists
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If the layer &lt;em&gt;doesn't&lt;/em&gt; exist, then the registry responds with &lt;code&gt;202 Accepted&lt;/code&gt;, along with the URL that should be used for uploading the layer. The client then uploads the image in chunks via &lt;code&gt;PATCH&lt;/code&gt; requests, or directly via a single &lt;code&gt;PUT&lt;/code&gt; request.&lt;/p&gt;

&lt;p&gt;This layer checking only works when the layers in question exist in the same repository as the image being pushed. So &lt;code&gt;blimp/backend:1&lt;/code&gt; and &lt;code&gt;blimp/backend:2&lt;/code&gt; can share layers, but &lt;code&gt;blimp/backend:1&lt;/code&gt; can't share layers with &lt;code&gt;blimp/another-image:1&lt;/code&gt; (without taking advantage of another API, that I'll describe now).&lt;/p&gt;
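&lt;p&gt;The existence check above maps onto a single endpoint in the Registry API v2: &lt;code&gt;HEAD /v2/&amp;lt;name&amp;gt;/blobs/&amp;lt;digest&amp;gt;&lt;/code&gt;. Here's a rough stdlib-only sketch of it in Python; note that it ignores the auth token exchange a real registry like Docker Hub requires:&lt;/p&gt;

```python
import urllib.error
import urllib.request


def blob_url(registry: str, repository: str, digest: str) -> str:
    """Registry API v2 endpoint for a layer blob."""
    return f"https://{registry}/v2/{repository}/blobs/{digest}"


def layer_exists(registry: str, repository: str, digest: str) -> bool:
    """HEAD the blob endpoint: 200 OK means the registry already has
    the layer, so the client can skip uploading its contents."""
    req = urllib.request.Request(
        blob_url(registry, repository, digest), method="HEAD")
    try:
        with urllib.request.urlopen(req) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        # e.g. 404: the layer isn't there; the client must upload it.
        return False
```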

&lt;h3&gt;Cross Repository Mounts&lt;/h3&gt;

&lt;p&gt;You may have seen the following output when running &lt;code&gt;docker push&lt;/code&gt; before. This output means that the push is making use of &lt;a href="https://github.com/docker/distribution/issues/634" rel="noopener noreferrer"&gt;cross repository mounts&lt;/a&gt;, which is a cool feature to cache layers across &lt;em&gt;multiple&lt;/em&gt; images.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;e1c75a5e0bfa: Mounted from library/ubuntu
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This feature was introduced in &lt;a href="https://github.com/docker/distribution/releases/tag/v2.3.0" rel="noopener noreferrer"&gt;Docker Registry v2.3.0&lt;/a&gt;. Cross repository mounts allow clients to inform the registry that they know about another image in the registry that may share the same layer, and that the registry should try using the layer from that image rather than going through the full upload process.&lt;/p&gt;

&lt;p&gt;When Docker receives this request, it first makes sure that the client has pull access to this other repository. If the client has access, and the layers match up, the registry sends back a &lt;code&gt;201 Created&lt;/code&gt; response. Otherwise, it sends a &lt;code&gt;202 Accepted&lt;/code&gt; response, and the client goes through the full upload process described above.&lt;/p&gt;
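&lt;p&gt;On the wire, a cross repository mount is just an extra pair of query parameters on the initial blob upload request. A sketch of how that URL is built (the repository names in the example are illustrative):&lt;/p&gt;

```python
from urllib.parse import urlencode


def mount_request_url(registry: str, target_repo: str,
                      digest: str, source_repo: str) -> str:
    """POST target for a cross-repository mount: ask the registry to
    link an existing layer from source_repo instead of re-uploading it.
    A 201 Created response means the mount worked; 202 Accepted means
    the client must fall back to the normal chunked upload."""
    query = urlencode({"mount": digest, "from": source_repo})
    return f"https://{registry}/v2/{target_repo}/blobs/uploads/?{query}"
```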

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fkelda.io%2Fimg%2Fblog%2FDocker%2520Push%2520Diagram.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fkelda.io%2Fimg%2Fblog%2FDocker%2520Push%2520Diagram.png" alt="Docker Push Diagram"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;Optimizing Blimp&lt;/h2&gt;

&lt;p&gt;If you use a custom Docker image for development, Blimp will automatically build and push the image when you start up your sandbox. The image for each service is pushed to &lt;code&gt;blimp-registry.kelda.io/&amp;lt;sandboxID&amp;gt;/&amp;lt;service&amp;gt;:&amp;lt;imageID&amp;gt;&lt;/code&gt;, where &lt;code&gt;sandboxID&lt;/code&gt; is a unique identifier for your sandbox, and &lt;code&gt;imageID&lt;/code&gt; is a hash to make sure we always run the latest version of your image.&lt;/p&gt;

&lt;p&gt;As a reminder, our goal for looking into all this is to make it so that when you push this image, you only have to push the "unique" layers that can't be pulled from more efficient sources.&lt;/p&gt;

&lt;h3&gt;Initial Design&lt;/h3&gt;

&lt;p&gt;At first, we wanted to make use of &lt;em&gt;cross repository mounts&lt;/em&gt;. This would let all our users share the same base images, so we would only have to push the base image for the &lt;em&gt;very first user&lt;/em&gt; that references it. Plus, it'd set us up to build private image caches for teams so that they could share layers from their Dockerfile other than the base image.&lt;/p&gt;

&lt;p&gt;We were hoping to do something like this:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Analyze the image's Dockerfile to find out what its base image is.&lt;/li&gt;
&lt;li&gt;Send a request to our server to push this base image to the registry with the name &lt;code&gt;blimp-registry.kelda.io/public/&amp;lt;image&amp;gt;:&amp;lt;tag&amp;gt;&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Tag the base image locally with &lt;code&gt;blimp-registry.kelda.io/public/&amp;lt;image&amp;gt;:&amp;lt;tag&amp;gt;&lt;/code&gt; so that Docker would provide it as a cross repository mount.&lt;/li&gt;
&lt;li&gt;Push the image with &lt;code&gt;docker push&lt;/code&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Unfortunately, step 3 didn't actually cause Docker to provide the pre-pushed base image as a cross repository mount. Docker only updates its list of images used for cross repository mounts the &lt;strong&gt;first time a layer is pushed or pulled&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;We considered giving users push access to the public repo, but we deemed that too insecure. We also considered ditching &lt;code&gt;docker push&lt;/code&gt; entirely in favor of &lt;a href="https://github.com/google/go-containerregistry" rel="noopener noreferrer"&gt;go-containerregistry&lt;/a&gt;, but that would have entailed making a significant change to &lt;code&gt;go-containerregistry&lt;/code&gt; in order to show image push updates.&lt;/p&gt;

&lt;p&gt;So, we went back to the drawing board.&lt;/p&gt;

&lt;h3&gt;Revised Design&lt;/h3&gt;

&lt;p&gt;After giving up on cross repository mounts, we asked: why bother with cross repository mounts when we could just push directly to the user's repository?&lt;/p&gt;

&lt;p&gt;Although our servers would have to push a copy of the base image for &lt;em&gt;each user&lt;/em&gt;, this is still much more efficient than having the user push it directly from their laptop since the bandwidth between our servers and the registry is so much higher.&lt;/p&gt;

&lt;p&gt;Ultimately, that's what we settled on. The repository for each service (&lt;code&gt;blimp-registry.kelda.io/&amp;lt;sandboxID&amp;gt;/&amp;lt;service&amp;gt;&lt;/code&gt;) always has a &lt;code&gt;base&lt;/code&gt; tag that our servers push the base image to. The registry then automatically references it during the normal push API outlined above whenever the user pushes their image -- no icky manipulation of Docker's state necessary.&lt;/p&gt;

&lt;p&gt;Putting it all together, this is what happens when Blimp pushes a locally built image:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The Blimp CLI parses the reference to the base image from the image's Dockerfile.&lt;/li&gt;
&lt;li&gt;The Blimp CLI tells the Blimp servers to push the base image to &lt;code&gt;blimp-registry.kelda.io/&amp;lt;sandboxID&amp;gt;/&amp;lt;service&amp;gt;:base&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;The Blimp CLI builds the image, using the same base image.&lt;/li&gt;
&lt;li&gt;The Blimp CLI pushes the full image to &lt;code&gt;blimp-registry.kelda.io/&amp;lt;sandboxID&amp;gt;/&amp;lt;service&amp;gt;:&amp;lt;imageID&amp;gt;&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Docker goes through the layers one by one, and pushes them. If the layer is from the base image, the registry notices and instructs the CLI to skip the push.&lt;/li&gt;
&lt;li&gt;For the layers not in the base image, Docker does the full upload process.&lt;/li&gt;
&lt;/ol&gt;
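&lt;p&gt;Step 1, finding the base image, boils down to reading the Dockerfile's &lt;code&gt;FROM&lt;/code&gt; line. A simplified sketch (this is not Blimp's actual parser; multi-stage builds and build args are ignored):&lt;/p&gt;

```python
import re


def parse_base_image(dockerfile_text: str):
    """Pull the base image reference out of the first FROM line.
    Handles the common single-stage case only."""
    for line in dockerfile_text.splitlines():
        m = re.match(r"\s*FROM\s+(\S+)", line, re.IGNORECASE)
        if m:
            image = m.group(1)
            # Default to the :latest tag when none is given.
            return image if ":" in image else image + ":latest"
    return None
```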

&lt;h2&gt;Conclusion&lt;/h2&gt;

&lt;p&gt;At Blimp, we want to make moving your development environment to the cloud as seamless as possible. One of our &lt;a href="https://kelda.io/blimp/docs/#design-principles" rel="noopener noreferrer"&gt;design principles&lt;/a&gt; is that the move should use the exact same config, and not require any changes to your workflow. Although we could have users work around the push slowness by prebuilding and pushing images to a shared public repository, that would violate our design goals. Building this feature was a fun deep dive into Docker internals, and a big step towards making the onboarding process to Blimp seamless.&lt;/p&gt;

&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;p&gt;See how fast it is yourself! &lt;a href="https://kelda.io/blimp/docs/sign-up-try-blimp/" rel="noopener noreferrer"&gt;Try an example&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://kelda.io/blog/how-docker-stores-registry-credentials/" rel="noopener noreferrer"&gt;Read more about Docker internals&lt;/a&gt; -- see how registry credentials are stored.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.docker.com/registry/spec/api/#pushing-an-image" rel="noopener noreferrer"&gt;Read the spec for the Docker Push API&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;By: Christopher Cooper&lt;/p&gt;

</description>
      <category>docker</category>
      <category>devops</category>
      <category>microservices</category>
    </item>
    <item>
      <title>Using Data Containers To Boot Your Development Environment In Seconds</title>
      <dc:creator>Ethan J. Jackson</dc:creator>
      <pubDate>Thu, 09 Jul 2020 14:21:06 +0000</pubDate>
      <link>https://dev.to/ethanjjackson/using-data-containers-to-boot-your-development-environment-in-seconds-4c9h</link>
      <guid>https://dev.to/ethanjjackson/using-data-containers-to-boot-your-development-environment-in-seconds-4c9h</guid>
<description>&lt;p&gt;One of the most time-consuming parts of booting a Docker development environment is initializing databases. The &lt;em&gt;data container pattern&lt;/em&gt; gets around this obstacle by taking advantage of some lesser-known features of volumes. With data containers, you can easily &lt;strong&gt;distribute, maintain, and load your database's seed data&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Data containers are a commonly overlooked tool for building the nirvana of development environments: &lt;strong&gt;booting the environment with a single command that works every time&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;You're probably already using volumes to save you some time working with databases during development; if one of your development containers crashes, volumes will prevent you from losing the database's state.  But interestingly, Docker volumes have some cool quirks that we can leverage for the data containers pattern.&lt;/p&gt;

&lt;p&gt;In this post, I'll:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Explain why data containers are the best way to initialize your databases.&lt;/li&gt;
&lt;li&gt;Explain how data containers work by taking advantage of some unusual behavior that isn't present in other volume implementations, such as Kubernetes volumes.&lt;/li&gt;
&lt;li&gt;Walk you through a brief example of how to do it.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Just want the code?&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://github.com/kelda/magda"&gt;Get it here&lt;/a&gt; and boot it with &lt;code&gt;docker-compose up&lt;/code&gt; (or &lt;code&gt;blimp up&lt;/code&gt;!)&lt;/p&gt;
&lt;h2&gt;
  
  
  Standard Techniques for Initializing Databases
&lt;/h2&gt;

&lt;p&gt;When developing with Docker, there are three approaches developers commonly use for setting up their databases. All of them have serious drawbacks.&lt;/p&gt;
&lt;h3&gt;
  
  
  1) Initialize Your Database By Hand
&lt;/h3&gt;

&lt;p&gt;Most people start by setting up their databases by hand. But this has several serious drawbacks:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;This approach can be very time-consuming. For example, it's easy to spend an entire day copying the data you need from staging and figuring out how to seed your database with it. And, if you lose your volume, you have to do it all over again.&lt;/li&gt;
&lt;li&gt;It's hard to sustain over time. I once worked with a team that dreaded destroying their database since they knew they'd have to re-initialize it later. As a result, they would avoid working on certain projects so they could spare themselves the pain of destroying and re-initializing their database.&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  2) Initialize Your Database Using a Script
&lt;/h3&gt;

&lt;p&gt;Using a script can save you a lot of manual work. But it comes with its own set of headaches:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The script may take a while to run, slowing down the environment boot time.&lt;/li&gt;
&lt;li&gt;In the rush of all the other work developers have to do, it's easy to put off maintaining the script. As the database's schema changes over time, the script breaks, and then you have to spend time debugging it.&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  3) Use a Remote Database
&lt;/h3&gt;

&lt;p&gt;Using a remote database -- typically your staging database --  is certainly faster than running scripts or initializing your database by hand. But there's a big downside: you're sharing the database with other developers. That means you don't have a stable development environment. All it takes is one developer mucking up the data to ruin your day.&lt;/p&gt;
&lt;h2&gt;
  
  
  A Better Way: Data Containers
&lt;/h2&gt;

&lt;p&gt;Data containers are containers that store your database's state, and are deployed like any other container in your Docker Compose file. They take advantage of some quirks of Docker volumes to copy the data from the container into the database, so that the database is fully initialized when it starts.&lt;/p&gt;

&lt;p&gt;To see how volumes can speed up your development work with databases, let's take an example from the &lt;a href="https://magda.io/"&gt;Magda&lt;/a&gt; data catalog system. Here's a snippet from the Magda &lt;a href="https://github.com/kelda/magda"&gt;Docker Compose&lt;/a&gt; file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;services&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;postgres&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;gcr.io/magda-221800/magda-postgres:0.0.50-2"&lt;/span&gt;
    &lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;db-data:/data'&lt;/span&gt;
    &lt;span class="na"&gt;environment&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;PGDATA=/data"&lt;/span&gt;

  &lt;span class="na"&gt;postgres-data&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;gcr.io/magda-221800/magda-postgres-data:0.0.50-2"&lt;/span&gt;
    &lt;span class="na"&gt;entrypoint&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;tail&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;-f&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;/dev/null"&lt;/span&gt;
    &lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;db-data:/data'&lt;/span&gt;

&lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;db-data&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;When you run &lt;code&gt;docker-compose up&lt;/code&gt; in the Magda repo, all the Magda services start, and the Postgres database is automatically initialized.&lt;/p&gt;

&lt;h2&gt;
  
  
  How It Works
&lt;/h2&gt;

&lt;p&gt;This setup takes advantage of two features of Docker volumes:&lt;/p&gt;

&lt;p&gt;1) &lt;strong&gt;Docker copies any files masked by volumes into the volume&lt;/strong&gt;. The Magda example has the following in its Docker Compose file.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight yaml"&gt;&lt;code&gt;  &lt;span class="na"&gt;postgres-data&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;gcr.io/magda-221800/magda-postgres-data:0.0.50-2"&lt;/span&gt;
    &lt;span class="na"&gt;entrypoint&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;tail&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;-f&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;/dev/null"&lt;/span&gt;
    &lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;db-data:/data'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;When &lt;code&gt;postgres-data&lt;/code&gt; starts, it mounts a volume to &lt;code&gt;/data&lt;/code&gt;. Because we built the &lt;code&gt;gcr.io/magda-221800/magda-postgres-data&lt;/code&gt; image to already have database files at &lt;code&gt;/data&lt;/code&gt;, Docker copies those files into the volume.&lt;/p&gt;

&lt;p&gt;2) &lt;strong&gt;Volumes can be shared between containers.&lt;/strong&gt; So any files written to &lt;code&gt;db-data&lt;/code&gt; by &lt;code&gt;postgres-data&lt;/code&gt; are visible in the &lt;code&gt;postgres&lt;/code&gt; container because the &lt;code&gt;postgres&lt;/code&gt; container also mounts the &lt;code&gt;db-data&lt;/code&gt; volume:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  postgres:
    image: "gcr.io/magda-221800/magda-postgres:0.0.50-2"
    environment:
      - "PGDATA=/data"
    volumes:
      - 'db-data:/data'
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Putting this all together, when you run &lt;code&gt;docker-compose up&lt;/code&gt;, the following happens:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Docker copies &lt;code&gt;/data&lt;/code&gt; from &lt;code&gt;postgres-data&lt;/code&gt; to the &lt;code&gt;db-data&lt;/code&gt; volume.&lt;/li&gt;
&lt;li&gt;Docker starts the &lt;code&gt;postgres&lt;/code&gt; container.&lt;/li&gt;
&lt;li&gt;The &lt;code&gt;postgres&lt;/code&gt; container starts, and boots with the data in its data directory.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In short, instead of repeatedly initializing your database by hand or writing and maintaining seed scripts, you get a fully automated setup that works every time -- with remarkably little work.&lt;/p&gt;
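&lt;p&gt;One piece not shown above is how the data image itself gets built. The idea is just to bake pre-seeded database files into the image at the path the volume will mount. A hypothetical Dockerfile sketch (the &lt;code&gt;seed-data/&lt;/code&gt; path is an assumption, not taken from the Magda repo):&lt;/p&gt;

```dockerfile
# Hypothetical sketch of building a data image like magda-postgres-data:
# bake pre-seeded Postgres data files into /data, where the db-data volume
# will be mounted. seed-data/ is an assumed directory -- in practice it's
# typically produced by running the database once and snapshotting PGDATA.
FROM postgres:12
COPY seed-data/ /data/
```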

&lt;h2&gt;
  
  
  Benefits
&lt;/h2&gt;

&lt;p&gt;This approach has three major benefits:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;It's a huge timesaver for developers -- booting is now super quick, and developers don't have to manually add data or create and maintain scripts.&lt;/li&gt;
&lt;li&gt;It ensures that everyone on your team is working with an identical set of data. To get the newest version of the data, all you need to do is &lt;code&gt;docker pull&lt;/code&gt; the data image, just like any other container. In fact, &lt;code&gt;docker-compose&lt;/code&gt; will do that for them, so they don’t even have to think about it when onboarding.&lt;/li&gt;
&lt;li&gt;It's easy to automate. There's a lot of existing tooling for automating Docker builds, which you can take advantage of with this approach.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Downsides
&lt;/h2&gt;

&lt;p&gt;The main downside of this approach is that it can be hard to maintain the data container. Maintaining it by hand has the same downsides as initializing the database manually or with scripts -- the data can get stale as the database schema changes.&lt;/p&gt;

&lt;p&gt;Teams that use this approach tend to generate their data containers using CI. The CI job snapshots and sanitizes the data from production or staging, and pushes it to the Docker registry. This way, the container generation is fully automated, and developers don't have to worry about it.&lt;/p&gt;
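&lt;p&gt;Such a CI job might look roughly like this (a generic sketch -- the job name, scripts, and registry are all assumptions, not taken from any particular team's setup):&lt;/p&gt;

```yaml
# Hypothetical CI job sketch: snapshot staging data, sanitize it, and publish
# a fresh data container on a schedule. Script and image names are assumptions.
refresh-data-container:
  schedule: nightly
  steps:
    - run: pg_dump "$STAGING_DB_URL" > snapshot.sql
    - run: ./sanitize.sh snapshot.sql        # strip PII before publishing
    - run: docker build -t registry.example.com/db-data:latest -f Dockerfile.data .
    - run: docker push registry.example.com/db-data:latest
```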

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Data containers are a cool example of how Docker Compose does so much more than just boot up containers. Used properly, they can substantially increase developer productivity.&lt;/p&gt;

&lt;p&gt;We're excited to share these developer productivity tips because we've noticed that the development workflow has become an afterthought during the move to containers. The complexity of modern applications requires new development workflows. We built &lt;a href="https://kelda.io/blimp"&gt;Blimp&lt;/a&gt; so that development teams can quickly build and test containerized software, without having to reinvent a development environment approach.&lt;/p&gt;

&lt;h2&gt;
  
  
  Resources
&lt;/h2&gt;

&lt;p&gt;Check out another trick for increasing developer productivity by &lt;a href="https://kelda.io/blog/docker-volumes-for-development/"&gt;using host volumes to get rid of container rebuilds&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://kelda.io/blimp/docs/getting-started/"&gt;Try an example&lt;/a&gt; with Blimp to see how easily development on Docker Compose can be scaled into the cloud.&lt;/p&gt;

&lt;p&gt;Read &lt;a href="https://kelda.io/blog/common-docker-compose-mistakes/"&gt;common Docker Compose mistakes&lt;/a&gt; for more tips on how to make development faster.&lt;/p&gt;

</description>
      <category>docker</category>
      <category>devops</category>
      <category>database</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>How to Develop Your Python Docker Applications Faster</title>
      <dc:creator>Ethan J. Jackson</dc:creator>
      <pubDate>Thu, 02 Jul 2020 00:49:16 +0000</pubDate>
      <link>https://dev.to/ethanjjackson/how-to-develop-your-python-docker-applications-faster-4d1c</link>
      <guid>https://dev.to/ethanjjackson/how-to-develop-your-python-docker-applications-faster-4d1c</guid>
      <description>&lt;p&gt;Docker has many benefits that make deploying applications easier. But the process of developing Python with Docker can be frustratingly slow. That's because testing your Python code in Docker is a real pain.&lt;/p&gt;

&lt;p&gt;Luckily, there's a technique you can use to reduce the time you spend testing. In this tutorial, we'll show you how to use Docker's host volumes and Django's runserver to make developing Python Docker applications easier and faster.&lt;/p&gt;

&lt;p&gt;(If you're a Node.js developer, see &lt;a href="https://kelda.io/blog/develop-nodejs-docker-applications-faster/"&gt;How to Develop Your Node.js Docker Applications Faster&lt;/a&gt;.)&lt;/p&gt;

&lt;h2&gt;
  
  
  How Host Volumes and Runserver Can Speed Up Your Python Development
&lt;/h2&gt;

&lt;p&gt;As every Python developer knows, the best way to develop your application is to iterate through short, quick cycles of coding and testing. But if you're developing using Docker, every time you change your code, you're stuck waiting for the container to rebuild before you can test.&lt;/p&gt;

&lt;p&gt;As a result, you end up with a development workflow that looks like this:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You make a change.&lt;/li&gt;
&lt;li&gt;You wait for the container to rebuild.&lt;/li&gt;
&lt;li&gt;You make another change.&lt;/li&gt;
&lt;li&gt;You wait some more.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And if your team uses CI/CD, you're constantly running your code through automated tests -- which means even more time spent waiting for the container to rebuild.&lt;/p&gt;

&lt;p&gt;Coding and waiting and coding and waiting is not a recipe for developer productivity -- or developer happiness.&lt;/p&gt;

&lt;p&gt;But there's a way to modify a container's code without having to rebuild it. The trick is to use a Docker host volume. &lt;/p&gt;

&lt;p&gt;Host volumes sync file changes between a local host folder and a container folder. If you use a host volume to mount the code you're working on into a container, any edits you make to your code on your laptop will automatically appear in the container. And as you will see in the next section, you can use Django's runserver command to automatically restart your application without having to rebuild the container -- a technique known as "live reloading."&lt;/p&gt;

&lt;p&gt;The result: instead of wasting lots of time waiting for your containers to rebuild, your code-test-debug loop is almost instantaneous.&lt;/p&gt;
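&lt;p&gt;Conceptually, the live-reload half of this is simple: the dev server records each source file's modification time and restarts the app when one changes. Here's a toy sketch of that detection step (Django's real autoreloader is more sophisticated, but the core idea is the same):&lt;/p&gt;

```python
# Toy illustration of the change detection behind live reloading: compare
# each file's modification time against a recorded snapshot. This is not
# Django's actual implementation, just the underlying idea.
import os

def snapshot(paths):
    """Record the current modification time of each file."""
    return {p: os.stat(p).st_mtime for p in paths}

def changed_files(paths, old_snapshot):
    """Return the files whose mtime differs from the snapshot."""
    return [p for p in paths if os.stat(p).st_mtime != old_snapshot.get(p)]
```

&lt;p&gt;A real autoreloader runs this check in a loop (or uses filesystem notifications) and restarts the server process whenever &lt;code&gt;changed_files&lt;/code&gt; is non-empty.&lt;/p&gt;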

&lt;h2&gt;
  
  
  Example: Using Host Volumes and Runserver in Python Docker Development
&lt;/h2&gt;

&lt;p&gt;The idea of using a host volume to speed up your Python coding might seem a little daunting, but it's pretty straightforward.&lt;/p&gt;

&lt;p&gt;To demonstrate this, let's use a Python example: &lt;a href="https://github.com/kelda/django-polls"&gt;django-polls&lt;/a&gt;, a basic poll app that’s part of Django’s &lt;a href="https://docs.djangoproject.com/en/3.0/intro/tutorial01/"&gt;introductory tutorial&lt;/a&gt;. To clone the repo:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$git&lt;/span&gt; clone https://github.com/kelda/django-polls
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The repo assumes you are using Docker Compose. You can also use&lt;br&gt;
&lt;a href="https://kelda.io/blimp"&gt;Blimp&lt;/a&gt;, our Compose alternative that scales to the cloud.&lt;/p&gt;

&lt;p&gt;Here's the docker-compose.yml file for django-polls:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;3'&lt;/span&gt;
&lt;span class="na"&gt;services&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;web&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;build&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;.&lt;/span&gt;
    &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;sh&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;-c&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;./wait-for-postgres.sh&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;amp;&amp;amp;&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;python&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;manage.py&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;migrate&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;;&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;python&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;manage.py&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;shell&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;init-db.py&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;;&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;python&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;manage.py&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;runserver&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;0.0.0.0:8000"&lt;/span&gt;
    &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;8000:8000"&lt;/span&gt;
    &lt;span class="na"&gt;depends_on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;db&lt;/span&gt;
    &lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;.:/code"&lt;/span&gt;
  &lt;span class="na"&gt;db&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;postgres:12"&lt;/span&gt;
    &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;5432:5432"&lt;/span&gt;
    &lt;span class="na"&gt;environment&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;POSTGRES_USER=polls&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;POSTGRES_PASSWORD=polls&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;POSTGRES_DB=polls&lt;/span&gt; 
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;This file tells Docker to boot two containers: the Django application, and a Postgres database where the application stores the polls. It also tells Docker to mount a host volume:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;.:/code"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;As a result, Docker will mount the &lt;code&gt;./&lt;/code&gt; directory on your laptop, which contains the code you're developing, into the container at &lt;code&gt;/code&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Next, you need to set up your Docker container so that whenever you edit your code, Docker automatically restarts your Python application. That way, your application will always use the latest version of your code. &lt;/p&gt;

&lt;p&gt;If you are creating a Django app, the easiest way to do that is to have your .yml file tell Docker to use runserver, Django's development web server:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight yaml"&gt;&lt;code&gt;  &lt;span class="na"&gt;web&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;build&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;.&lt;/span&gt;
    &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;sh&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;-c&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;./wait-for-postgres.sh&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;amp;&amp;amp;&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;python&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;manage.py&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;migrate&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;;&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;python&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;manage.py&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;shell&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;init-db.py&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;;&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;python&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;manage.py&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;runserver&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;0.0.0.0:8000"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;As a result, whenever you modify your code on your laptop, runserver restarts the process without rebuilding the container.&lt;/p&gt;

&lt;p&gt;In short, by using a host volume and runserver, you can set up your Python application's container so it automatically syncs code changes between the container and your laptop. If you didn't do this, you'd have to rebuild the container every single time you made a change to your code. &lt;/p&gt;

&lt;p&gt;Over time, this technique can substantially speed up your Python development. For example, we've heard from users that it's not uncommon for container rebuilds to take 5-30 minutes. With host volumes and runserver, your code sync is almost instantaneous. Imagine what your day would look like if you could save yourself 5-30 minutes every time you modify and test your code.&lt;/p&gt;

&lt;h2&gt;
  
  
  Syncing Your Own Code When Developing a Python Application
&lt;/h2&gt;

&lt;p&gt;Now that you've seen how to use this technique in a sample application, the rest of this tutorial will show you how to enable code syncing in one of your existing Python projects.&lt;/p&gt;

&lt;h3&gt;
  
  
  Prerequisites
&lt;/h3&gt;

&lt;p&gt;Just like the example above, your Python project should include the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A git repo that contains your code&lt;/li&gt;
&lt;li&gt;A Dockerfile that builds that code into a working container&lt;/li&gt;
&lt;li&gt;A docker-compose.yml file you use to run that container&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  How to Configure Your Container to Automatically Sync Your Python Code
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;1) Locate the folder in your Docker container that has your code&lt;/strong&gt;. The easiest way to figure out where your code is stored in your container is to look at your Dockerfile's &lt;code&gt;COPY&lt;/code&gt; commands. In the django-polls example, you can see from the Dockerfile that the container expects the code to be in &lt;code&gt;/code&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM python:3
RUN apt update &amp;amp;&amp;amp; apt install -y netcat &amp;amp;&amp;amp; rm -rf /var/lib/apt/lists/*
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
COPY requirements.txt /code/
RUN pip install --no-cache-dir -r requirements.txt 
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;2) Find the path to the folder on your laptop that has the same Python code&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3) Add a host volume to your docker-compose file.&lt;/strong&gt; Find the container in your docker-compose file that you want to sync code with, and add a &lt;code&gt;volumes&lt;/code&gt; entry underneath that container:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;/path-to-laptop-folder:/path-to-container-folder"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;4) Make sure your Docker Compose file configures your container for live reloading&lt;/strong&gt;.  In the django-poll example, you implemented it by using runserver as your web server:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight yaml"&gt;&lt;code&gt;  &lt;span class="na"&gt;web&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;build&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;.&lt;/span&gt;
    &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;sh&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;-c&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;./wait-for-postgres.sh&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;amp;&amp;amp;&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;python&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;manage.py&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;migrate&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;;&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;python&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;manage.py&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;shell&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;init-db.py&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;;&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;python&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;manage.py&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;runserver&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;0.0.0.0:8000"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;5) Run Docker Compose or Blimp&lt;/strong&gt;. Now all you need to do is either run docker-compose:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;docker-compose up
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Or if you're using &lt;a href="https://kelda.io/blimp"&gt;Blimp&lt;/a&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;blimp up
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;As a result, Docker will update the container's code with the code that's on your laptop.&lt;/p&gt;

&lt;p&gt;Now that your container is set up to use a host volume and runserver, whenever you modify the Python code on your laptop, your new code will automatically appear in the container.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;At first, the idea of using host volumes to sync the Python code on your laptop with your container might seem a little weird. But once you get used to this workflow, you'll see how much more efficient it is. With just a few tweaks to your Docker containers' setup, developing your Python Docker app is easier and faster.&lt;/p&gt;

&lt;h2&gt;
  
  
  Resources
&lt;/h2&gt;

&lt;p&gt;Try the &lt;a href="https://github.com/kelda/django-polls"&gt;Python example&lt;/a&gt; on Blimp&lt;/p&gt;

&lt;p&gt;Read &lt;a href="https://kelda.io/blog/develop-nodejs-docker-applications-faster/"&gt;How to Develop Your Node.Js Docker Applications Faster&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Read about &lt;a href="https://kelda.io/blog/common-docker-compose-mistakes/#mistake-2-slow-host-volumes"&gt;common mistakes with host volumes&lt;/a&gt; that can slow down your application&lt;/p&gt;

&lt;p&gt;&lt;a href="https://kelda.io/blimp/"&gt;Check out Blimp&lt;/a&gt;, our team's project to improve developer productivity for Docker Compose.&lt;/p&gt;

</description>
      <category>python</category>
      <category>docker</category>
      <category>microservices</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>5 Common Mistakes When Writing Docker Compose</title>
      <dc:creator>Ethan J. Jackson</dc:creator>
      <pubDate>Wed, 24 Jun 2020 18:50:12 +0000</pubDate>
      <link>https://dev.to/ethanjjackson/5-common-mistakes-when-writing-docker-compose-1f3</link>
      <guid>https://dev.to/ethanjjackson/5-common-mistakes-when-writing-docker-compose-1f3</guid>
      <description>&lt;p&gt;When building a containerized application, developers need a way to boot containers they're working on to test their code. While there are several ways to do this, Docker Compose is one of the most popular options. It makes it easy to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Specify what containers to boot during development&lt;/li&gt;
&lt;li&gt;Set up a fast code-test-debug development loop&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The vision is that someone writes a &lt;code&gt;docker-compose.yml&lt;/code&gt; that specifies everything that's needed in development and commits it to their repo.  Then, every developer simply runs &lt;code&gt;docker-compose up&lt;/code&gt;, which boots all the containers they need to test their code.&lt;/p&gt;
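
&lt;p&gt;As a rough sketch, such a committed &lt;code&gt;docker-compose.yml&lt;/code&gt; might look like this (the service names and images here are illustrative, not from a real project):&lt;/p&gt;

```yaml
version: "3"
services:
  web:
    build: .            # the service under development, built from the repo
    ports:
      - "8080:8080"
  postgres:
    image: postgres:12  # a dependency, booted alongside the app
    environment:
      POSTGRES_PASSWORD: dev-password
```
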

&lt;p&gt;However, it takes a lot of work to get your &lt;code&gt;docker-compose&lt;/code&gt; setup to peak performance. We've seen the best teams booting their development environments in &lt;strong&gt;less than a minute&lt;/strong&gt; and testing each change in &lt;strong&gt;seconds&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Given how much time every developer spends testing their code every day, small improvements can add up to a massive impact on developer productivity.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs0lkfya38rrykgj3qmos.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs0lkfya38rrykgj3qmos.png" alt="XKCD: Is It Worth the Time?"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;(Credit &lt;a href="https://xkcd.com/1205/" rel="noopener noreferrer"&gt;XKCD&lt;/a&gt;)&lt;/p&gt;

&lt;h2&gt;
  
  
  Mistake 1: Frequent Container Rebuilds
&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;docker build&lt;/code&gt; takes a long time. If you're rebuilding your container every time you want to test a code change, you have a huge opportunity to speed up your development loop.&lt;/p&gt;

&lt;p&gt;The traditional workflow for working on non-containerized applications looks like this:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Code&lt;/li&gt;
&lt;li&gt;Build&lt;/li&gt;
&lt;li&gt;Run&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This process has been highly optimized over the years, with tricks like incremental builds for compiled languages and hot reloading. It's gotten pretty fast.&lt;/p&gt;

&lt;p&gt;When people first adopt containers, they tend to take their existing workflow and just add a &lt;code&gt;docker build&lt;/code&gt; step. Their workflow ends up like this:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Code&lt;/li&gt;
&lt;li&gt;Build&lt;/li&gt;
&lt;li&gt;Docker Build&lt;/li&gt;
&lt;li&gt;Run&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If not done well, that &lt;code&gt;docker build&lt;/code&gt; step tosses all those optimizations out the window.  Plus, it adds a bunch of additional time-consuming work, like reinstalling dependencies with apt-get.  All of this adds up to a much slower test process than we had before Docker.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Solution: Run your code outside of Docker&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;One approach is to boot all your dependencies in Docker Compose, but run the code you're actively working on locally. This mimics the workflow for developing non-containerized applications.&lt;/p&gt;

&lt;p&gt;Just expose your dependencies over &lt;code&gt;localhost&lt;/code&gt; and point the service you're working on at the &lt;code&gt;localhost:&amp;lt;port&amp;gt;&lt;/code&gt; addresses.&lt;/p&gt;
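
&lt;p&gt;A minimal sketch of this setup, assuming a hypothetical app that needs Postgres and Redis: the Compose file boots only the dependencies with published ports, and the service you're working on runs natively against &lt;code&gt;localhost&lt;/code&gt;.&lt;/p&gt;

```yaml
# docker-compose.yml: dependencies only; the app itself runs outside Docker.
services:
  postgres:
    image: postgres:12
    ports:
      - "5432:5432"   # the locally-run app connects to localhost:5432
  redis:
    image: redis:5
    ports:
      - "6379:6379"   # ...and to localhost:6379
```
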

&lt;p&gt;However, this is not always practical, particularly if the code you're working on depends on things built into the container image that aren't easy to access from your laptop.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Solution: Maximize caching to optimize your Dockerfile&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If you must build Docker images, writing your Dockerfiles so that they maximize caching can turn a 10 minute Docker build into 1 minute.&lt;/p&gt;

&lt;p&gt;A typical pattern for production Dockerfiles is to reduce the number of layers by chaining single commands into one &lt;code&gt;RUN&lt;/code&gt; statement. However, image size doesn't matter in development. In development, you want as many layers as possible, so that more of the build is cached.&lt;/p&gt;

&lt;p&gt;Your production Dockerfile might look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;RUN &lt;span class="se"&gt;\&lt;/span&gt;
    go get &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="nt"&gt;-v&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; go &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-v&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; go build
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is terrible for development because every time that command is re-run, Docker will re-download all of your dependencies and reinstall them. An incremental build is more efficient.&lt;/p&gt;

&lt;p&gt;Instead, you should have a dedicated Dockerfile specifically for development. Break everything into tiny little steps, and plan your Dockerfile so that the steps based on code that changes frequently come last.&lt;/p&gt;

&lt;p&gt;The stuff that changes least frequently, like pulling dependencies, should go first. This way, you don't have to build the entire project when rebuilding your Dockerfile. You just have to build the tiny last piece you just changed.&lt;/p&gt;

&lt;p&gt;For an example of this, see below the Dockerfile we use for&lt;br&gt;
&lt;a href="https://kelda.io/blimp" rel="noopener noreferrer"&gt;Blimp&lt;/a&gt; development.  It follows the techniques described above to shrink a heavy build process down to a couple of seconds.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;FROM golang:1.13-alpine as builder

RUN apk add busybox-static

WORKDIR /go/src/github.com/kelda-inc/blimp

ADD ./go.mod ./go.mod
ADD ./go.sum ./go.sum
ADD ./pkg ./pkg

ARG COMPILE_FLAGS

RUN &lt;span class="nv"&gt;CGO_ENABLED&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;0 go &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-i&lt;/span&gt; &lt;span class="nt"&gt;-ldflags&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;COMPILE_FLAGS&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; ./pkg/...

ADD ./login-proxy ./login-proxy
RUN &lt;span class="nv"&gt;CGO_ENABLED&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;0 go &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-i&lt;/span&gt; &lt;span class="nt"&gt;-ldflags&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;COMPILE_FLAGS&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; ./login-proxy/...

ADD ./registry ./registry
RUN &lt;span class="nv"&gt;CGO_ENABLED&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;0 go &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-i&lt;/span&gt; &lt;span class="nt"&gt;-ldflags&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;COMPILE_FLAGS&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; ./registry/...

ADD ./sandbox ./sandbox
RUN &lt;span class="nv"&gt;CGO_ENABLED&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;0 go &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-i&lt;/span&gt; &lt;span class="nt"&gt;-ldflags&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;COMPILE_FLAGS&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; ./sandbox/...

ADD ./cluster-controller ./cluster-controller
RUN &lt;span class="nv"&gt;CGO_ENABLED&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;0 go &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-i&lt;/span&gt; &lt;span class="nt"&gt;-ldflags&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;COMPILE_FLAGS&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; ./cluster-controller/...

RUN &lt;span class="nb"&gt;mkdir&lt;/span&gt; /gobin
RUN &lt;span class="nb"&gt;cp&lt;/span&gt; /go/bin/cluster-controller /gobin/blimp-cluster-controller
RUN &lt;span class="nb"&gt;cp&lt;/span&gt; /go/bin/syncthing /gobin/blimp-syncthing
RUN &lt;span class="nb"&gt;cp&lt;/span&gt; /go/bin/init /gobin/blimp-init
RUN &lt;span class="nb"&gt;cp&lt;/span&gt; /go/bin/sbctl /gobin/blimp-sbctl
RUN &lt;span class="nb"&gt;cp&lt;/span&gt; /go/bin/registry /gobin/blimp-auth
RUN &lt;span class="nb"&gt;cp&lt;/span&gt; /go/bin/vcp /gobin/blimp-vcp
RUN &lt;span class="nb"&gt;cp&lt;/span&gt; /go/bin/login-proxy /gobin/login-proxy

FROM alpine

COPY &lt;span class="nt"&gt;--from&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;builder /bin/busybox.static /bin/busybox.static
COPY &lt;span class="nt"&gt;--from&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;builder /gobin/&lt;span class="k"&gt;*&lt;/span&gt; /bin/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;One final note: with the recent introduction of &lt;a href="https://docs.docker.com/develop/develop-images/multistage-build/" rel="noopener noreferrer"&gt;multi-stage&lt;br&gt;
builds&lt;/a&gt;, it's now possible to create Dockerfiles that have both good layering and small image sizes.  We won't discuss this in much detail in this post, other than to say that the Dockerfile shown above does just that, and as a result is used for both &lt;a href="https://kelda.io/blimp" rel="noopener noreferrer"&gt;Blimp&lt;/a&gt; development and production.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Solution: Use host volumes&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In general, the best option is to use a host volume to directly mount your code into the container. This gives you the speed of running your code natively, while still running in the Docker container containing its runtime dependencies.&lt;/p&gt;

&lt;p&gt;Host volumes mirror a directory on your laptop into a running container. When you edit a file in your text editor, the change is automatically synced into the container and then can be immediately executed within the container.&lt;/p&gt;

&lt;p&gt;Most languages have a way to watch your code and automatically re-run it when it changes. For example, &lt;a href="https://www.npmjs.com/package/nodemon" rel="noopener noreferrer"&gt;nodemon&lt;/a&gt; is the&lt;br&gt;
go-to for JavaScript.  Check out this &lt;a href="https://kelda.io/blog/docker-volumes-for-development/" rel="noopener noreferrer"&gt;post&lt;/a&gt; for a tutorial on how to set this up.&lt;/p&gt;

&lt;p&gt;It takes some work initially, but the result is that you can see the results of your code changes in 1-2 seconds, versus a Docker build which can take minutes.&lt;/p&gt;
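
&lt;p&gt;Put together, a development service using a host volume plus a file watcher might look like the sketch below (the paths and names are hypothetical):&lt;/p&gt;

```yaml
services:
  web:
    build: .
    # nodemon restarts the server whenever a synced file changes
    command: ./node_modules/.bin/nodemon server.js
    volumes:
      - "./src:/usr/src/app/src"   # host volume: edits on the laptop appear in the container
```
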
&lt;h2&gt;
  
  
  Mistake 2: Slow Host Volumes
&lt;/h2&gt;

&lt;p&gt;If you're already using host volumes, you may have noticed that reading and writing files can be painfully slow on Windows and Mac. This is a known issue for commands that read and write lots of files, such as Node.js and PHP applications with complex dependencies.&lt;/p&gt;

&lt;p&gt;This is because Docker runs in a VM on Windows and Mac.&lt;br&gt;
When you do a host volume mount, it has to go through lots of translation to get the folder on your laptop into the container, somewhat like a network file system.  This adds a great deal of overhead, which isn't present when running Docker natively on Linux.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Solution: Relax strong consistency&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;One of the key problems is that file-system mounts by default maintain strong consistency.  Consistency is a broad topic on which much ink has been spilled, but in short it means that all of a particular file's readers and writers agree on the order in which any modifications occurred, and thus agree on the contents of that file (eventually, sort of).&lt;/p&gt;

&lt;p&gt;The problem is that enforcing strong consistency is quite expensive, requiring coordination between all of a file's writers to guarantee they don't inappropriately clobber each other's changes.&lt;/p&gt;

&lt;p&gt;Strong consistency can be particularly important when, for example, running a database in production.  The good news is that in development, it's not required.  Your code files have a single writer (you) and a single source of truth (your repo).  As a result, conflicts aren't as big a concern as they are in production.&lt;/p&gt;

&lt;p&gt;For just this reason, Docker implemented the ability to relax consistency guarantees when mounting volumes.  In Docker Compose, you can simply add the &lt;code&gt;cached&lt;/code&gt; keyword to your volume mounts to get a significant performance boost.  (Don't do this in production ...)&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;./app:/usr/src/app/app:cached"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Solution: Code syncing&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Another approach is to setup code syncing. Instead of mounting a volume, you can use a tool that notices changes between your laptop and the container and copies files to resolve the differences (similar to rsync).&lt;/p&gt;

&lt;p&gt;The next version of Docker has &lt;a href="https://mutagen.io/" rel="noopener noreferrer"&gt;Mutagen&lt;/a&gt; built in as an alternative to cached mode for volumes. If you're interested, wait until Docker makes its next release and try that out, or check out the&lt;br&gt;
Mutagen project to use it without waiting.  &lt;a href="https://kelda.io/blimp" rel="noopener noreferrer"&gt;Blimp&lt;/a&gt;, our Docker Compose implementation, achieves something similar using &lt;a href="https://syncthing.net/" rel="noopener noreferrer"&gt;Syncthing&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Solution: Don't mount packages&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;With languages like Node, the bulk of file operations tend to be in the packages directory (like &lt;code&gt;node_modules&lt;/code&gt;).  As a result, excluding these directories from your volumes can yield a significant performance boost.&lt;/p&gt;

&lt;p&gt;In the example below, we have a volume mounting our code into a container, and then we &lt;em&gt;overwrite&lt;/em&gt; just the &lt;code&gt;node_modules&lt;/code&gt; directory with its own clean, dedicated volume.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;.:/usr/src/app"&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;/usr/src/app/node_modules"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This additional volume mount tells Docker to use a standard volume for the &lt;code&gt;node_modules&lt;/code&gt; directory so that when &lt;code&gt;npm install&lt;/code&gt; runs it doesn't use the slow host mount.  To make this work, when the container first boots up we do &lt;code&gt;npm install&lt;/code&gt; in the &lt;code&gt;entrypoint&lt;/code&gt; to install our dependencies and populate the &lt;code&gt;node_modules&lt;/code&gt; directory.  Something like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;entrypoint&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;sh"&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;-c"&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;npm&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;install&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;amp;&amp;amp;&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;./node_modules/.bin/nodemon&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;server.js"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Full instructions to clone and run the above example can be found &lt;a href="https://kelda.io/blimp/docs/examples/#nodejs" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Mistake 3: Brittle Configuration
&lt;/h2&gt;

&lt;p&gt;Most Docker Compose files evolve organically. We typically see tons of copy and pasted code, which makes it hard to make modifications. A clean Docker Compose file makes it easier to make regular updates as production changes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Solution: Use env files&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Env files separate environment variables from the main Docker Compose configuration. This is helpful for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Keeping secrets out of the git history&lt;/li&gt;
&lt;li&gt;Making it easy to have slightly different settings per developer. For
example, each developer may have a unique access key. Saving the
configuration in a &lt;code&gt;.env&lt;/code&gt; file means that they don't have to modify the
committed &lt;code&gt;docker-compose.yml&lt;/code&gt; file, and deal with conflicts as the file is
updated.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To use env files, just add a &lt;code&gt;.env&lt;/code&gt; file, or set the path explicitly with the &lt;a href="https://docs.docker.com/compose/environment-variables/#the-env_file-configuration-option" rel="noopener noreferrer"&gt;&lt;code&gt;env_file&lt;/code&gt; field&lt;/a&gt;.&lt;/p&gt;
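
&lt;p&gt;For example, each developer might keep a private &lt;code&gt;.env&lt;/code&gt; file next to the committed Compose file (the variable names here are hypothetical):&lt;/p&gt;

```yaml
# .env -- kept out of git; one copy per developer
#   ACCESS_KEY=my-personal-key

# docker-compose.yml -- committed; reads the variable at boot
services:
  web:
    image: myapp:dev
    environment:
      - ACCESS_KEY=${ACCESS_KEY}
```
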

&lt;p&gt;&lt;strong&gt;Solution: Use override files&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.docker.com/compose/extends/" rel="noopener noreferrer"&gt;Override files&lt;/a&gt; let you have a base configuration, and then specify the modifications in a different file. This can be really powerful if you use Docker Swarm, and have a production YAML file. You can store your production configuration in &lt;code&gt;docker-compose.yml&lt;/code&gt;, then specify any modifications needed for development, such as using host volumes, in an override file.&lt;/p&gt;
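
&lt;p&gt;As a sketch, the base file and override might look like this (&lt;code&gt;docker-compose up&lt;/code&gt; automatically merges &lt;code&gt;docker-compose.override.yml&lt;/code&gt; on top of &lt;code&gt;docker-compose.yml&lt;/code&gt;; the image name is illustrative):&lt;/p&gt;

```yaml
# docker-compose.yml -- production-oriented base
services:
  web:
    image: myapp:latest

# docker-compose.override.yml -- development-only modifications
services:
  web:
    volumes:
      - ".:/usr/src/app"   # host volume, never used in production
```
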

&lt;p&gt;&lt;strong&gt;Solution: Use &lt;code&gt;extends&lt;/code&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If you're using Docker Compose v2, you can use the &lt;code&gt;extends&lt;/code&gt; keyword to import snippets of YAML in multiple places. For example, you might have a definition that all services at your company will have these particular five configuration options in their Docker Compose file in development. You can define that once, and then use the &lt;code&gt;extends&lt;/code&gt; keyword to drop that everywhere it's needed, which gives you some modularity. It's painful that we have to do this in YAML but it's the best we have short of writing a program to generate it.&lt;/p&gt;

&lt;p&gt;Compose v3 removed support for the &lt;code&gt;extends&lt;/code&gt; keyword. However, you can achieve a similar result with &lt;a href="https://support.atlassian.com/bitbucket-cloud/docs/yaml-anchors/" rel="noopener noreferrer"&gt;YAML anchors&lt;/a&gt;.&lt;/p&gt;
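
&lt;p&gt;For example, an anchor lets you define a shared snippet once and reuse it across services (the settings below are illustrative):&lt;/p&gt;

```yaml
services:
  api:
    image: myapp/api
    environment: &amp;common-env   # define the anchor once...
      LOG_LEVEL: debug
      DEV_MODE: "true"
  worker:
    image: myapp/worker
    environment: *common-env   # ...and reuse it elsewhere
```
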

&lt;p&gt;&lt;strong&gt;Solution: Programmatically generate Compose files&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;We've worked with some engineering teams using &lt;a href="https://kelda.io/blimp" rel="noopener noreferrer"&gt;Blimp&lt;/a&gt; that have a hundred containers in their development Docker Compose file. If they were to use a single giant Docker Compose file it would require thousands&lt;br&gt;
of lines of unmaintainable YAML.&lt;/p&gt;

&lt;p&gt;As you scale, it's okay to write a script to generate Docker Compose files based on some higher-level specifications. This is common for engineering teams with really large development environments.&lt;/p&gt;
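
&lt;p&gt;A toy sketch of this approach, assuming a hypothetical list of services that all follow the same pattern (the service names and registry are made up):&lt;/p&gt;

```shell
# generate-compose.sh: emit a Compose file from a higher-level service list.
services="api worker scheduler"
out="docker-compose.gen.yml"

echo "version: '3'" > "$out"
echo "services:" >> "$out"
for svc in $services; do
  # every service gets the same image naming and host-volume pattern
  printf '  %s:\n' "$svc" >> "$out"
  printf '    image: registry.example.com/%s:latest\n' "$svc" >> "$out"
  printf '    volumes:\n' >> "$out"
  printf '      - "./%s:/usr/src/app"\n' "$svc" >> "$out"
done
```

A real generator would read the service list from a manifest rather than a hard-coded variable, but the shape is the same: loop over a spec, emit YAML.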

&lt;h2&gt;
  
  
  Mistake 4: Flaky Boots
&lt;/h2&gt;

&lt;p&gt;Does &lt;code&gt;docker-compose up&lt;/code&gt; only work half the time? Do you have to run&lt;br&gt;
&lt;code&gt;docker-compose restart&lt;/code&gt; to bring up crashed services?&lt;/p&gt;

&lt;p&gt;Most developers want to write code, not do DevOps work. Debugging a broken development environment is super frustrating.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;docker-compose up&lt;/code&gt; should just work, every single time.&lt;/p&gt;

&lt;p&gt;Most of the issues here are related to services starting in the wrong order. For example, your web application may rely on a database, and will crash if the database isn't ready when the web application boots.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Solution: Use &lt;code&gt;depends_on&lt;/code&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;depends_on&lt;/code&gt; lets you control startup order. By default, &lt;code&gt;depends_on&lt;/code&gt; only waits until the dependency is created, and doesn't wait for the dependency to be "healthy". However, Docker Compose v2 supports combining depends_on with&lt;br&gt;
healthchecks.  (Unfortunately, this feature was removed in Docker Compose v3; instead, you can manually implement something similar with a script like &lt;a href="https://github.com/vishnubob/wait-for-it" rel="noopener noreferrer"&gt;wait-for-it.sh&lt;/a&gt;.)&lt;/p&gt;
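
&lt;p&gt;A sketch of the Compose v2-style combination (the healthcheck command assumes a Postgres dependency):&lt;/p&gt;

```yaml
services:
  web:
    build: .
    depends_on:
      postgres:
        condition: service_healthy   # wait for the healthcheck, not just creation
  postgres:
    image: postgres:12
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      retries: 5
```
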

&lt;p&gt;The Docker documentation recommends against approaches like &lt;code&gt;depends_on&lt;/code&gt; and &lt;code&gt;wait-for-it.sh&lt;/code&gt;.  And we agree, in production, requiring a specific boot order for your containers is a sign of a brittle architecture.  However, as an individual developer trying to get your job done, fixing every single container in the entire engineering organization may not be feasible.  So, for development, we think it's OK.&lt;/p&gt;

&lt;h2&gt;
  
  
  Mistake 5: Poor Resource Management
&lt;/h2&gt;

&lt;p&gt;It can get tricky to make sure that Docker has the resources it needs to run smoothly, without completely overtaking your laptop. There are a couple of things you can look into if you feel like your development workflow is sluggish because Docker isn't running at peak capacity.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Solution: Change Docker Desktop allocations&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Docker Desktop needs a lot of RAM and CPU, particularly on Mac and Windows where it runs in a VM. The default Docker Desktop configuration tends not to allocate enough RAM and CPU, so we generally recommend tweaking the settings to over-allocate. I tend to allocate about 8GB of RAM and 4 CPUs to Docker when&lt;br&gt;
I'm developing (and I turn Docker Desktop off when not in use to make that workable).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Solution: Prune unused resources&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Frequently, people unintentionally leak resources when using Docker.  It's not uncommon for folks to have hundreds of volumes, old container images, and sometimes even running containers if they're not careful.  That's why we recommend&lt;br&gt;
occasionally running &lt;code&gt;docker system prune&lt;/code&gt;, which deletes the stopped containers, unused networks, and dangling images that aren't currently being used (add the &lt;code&gt;--volumes&lt;/code&gt; flag to reclaim unused volumes as well). That can free up a lot of resources.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Solution: Run in the cloud&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Finally, in some cases even with the above tips, it may be impossible to efficiently run all of the containers you need on your laptop.  If that's the case, check out &lt;a href="https://kelda.io/blimp" rel="noopener noreferrer"&gt;Blimp&lt;/a&gt;, an easy way to run Docker Compose files in the cloud.&lt;/p&gt;

&lt;h2&gt;
  
  
  What should you do?
&lt;/h2&gt;

&lt;p&gt;TL;DR: To improve the developer experience on Docker Compose, I'd encourage you to:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Minimize container rebuilds.&lt;/li&gt;
&lt;li&gt;Use host volumes.&lt;/li&gt;
&lt;li&gt;Strive for maintainable compose files, just like code.&lt;/li&gt;
&lt;li&gt;Make your boots reliable.&lt;/li&gt;
&lt;li&gt;Manage resources mindfully.&lt;/li&gt;
&lt;/ol&gt;

</description>
      <category>docker</category>
      <category>microservices</category>
      <category>devops</category>
    </item>
    <item>
      <title>What a mysterious bug taught us about how Docker stores registry credentials</title>
      <dc:creator>Ethan J. Jackson</dc:creator>
      <pubDate>Tue, 23 Jun 2020 14:44:29 +0000</pubDate>
      <link>https://dev.to/ethanjjackson/what-a-mysterious-bug-taught-us-about-how-docker-stores-registry-credentials-26c9</link>
      <guid>https://dev.to/ethanjjackson/what-a-mysterious-bug-taught-us-about-how-docker-stores-registry-credentials-26c9</guid>
      <description>&lt;p&gt;We recently ran into a mysterious bug that required hours of digging into the&lt;br&gt;
arcane details of Docker's registry credentials store to figure out.  Although&lt;br&gt;
in the end the fix turned out to be easy, we learned a thing or two along the&lt;br&gt;
way about the design of the credentials store and how, if you're not careful,&lt;br&gt;
it can be configured insecurely.&lt;/p&gt;

&lt;p&gt;&lt;a href="http://kelda.io/blimp"&gt;Blimp&lt;/a&gt; sometimes needs to pull private images from a&lt;br&gt;
Docker registry in order to boot those images in the cloud.  This typically&lt;br&gt;
works fine, but unfortunately, when some users started&lt;br&gt;
Blimp, they were getting the following error message:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;Get https://1234.dkr.ecr.us-east-1.amazonaws.com/v2/blimp/blimp/manifests/v0.1: no basic auth credentials
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;At first, we were completely baffled by this cryptic message and had no clue it&lt;br&gt;
was related to our handling of credentials.  To understand how we figured it&lt;br&gt;
out, first you need to know a little about how modern Docker credentials are&lt;br&gt;
handled.&lt;/p&gt;
&lt;h2&gt;
  
  
  Docker's External Credentials Store
&lt;/h2&gt;

&lt;p&gt;The recommended way to store your Docker credentials is in an external&lt;br&gt;
credentials store. In your Docker config file, which is usually located at&lt;br&gt;
&lt;code&gt;~/.docker/config.json&lt;/code&gt;, there are two fields you can use to configure how&lt;br&gt;
Docker gets and stores credentials: &lt;code&gt;credsStore&lt;/code&gt; and &lt;code&gt;credHelpers&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;credsStore&lt;/code&gt; tells Docker which helper program to use to interact with the&lt;br&gt;
credentials store. All helper programs have names that begin with&lt;br&gt;
&lt;code&gt;docker-credential-&lt;/code&gt; -- the value of &lt;code&gt;credsStore&lt;/code&gt; is the suffix of the helper&lt;br&gt;
program.&lt;/p&gt;

&lt;p&gt;For example, if you work on a Mac laptop, you might decide to use the Mac OS&lt;br&gt;
keychain. The name of the helper program to use the keychain is&lt;br&gt;
&lt;code&gt;docker-credential-osxkeychain&lt;/code&gt;. So your &lt;code&gt;config.json&lt;/code&gt; would include the&lt;br&gt;
following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"credsStore"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"osxkeychain"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;If you want to see what credentials Docker currently has for you, you can use &lt;code&gt;list&lt;/code&gt;. For example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker-credential-osxkeychain list
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The result is a list of pairs of servers and usernames. For example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"http://quay.io"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="s2"&gt;"kklin"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"https://index.docker.io"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="s2"&gt;"kevinklin"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;You may also notice &lt;code&gt;credHelpers&lt;/code&gt; in your &lt;code&gt;config.json&lt;/code&gt;. These helpers are similar&lt;br&gt;
to &lt;code&gt;credsStore&lt;/code&gt;, but are used to generate short-lived credentials. For example,&lt;br&gt;
if you use &lt;a href="http://gcr.io/"&gt;gcr&lt;/a&gt;, &lt;code&gt;gcloud&lt;/code&gt; installs a &lt;code&gt;credHelper&lt;/code&gt; that uses&lt;br&gt;
your Google login to get tokens. This way, Docker never has your Google&lt;br&gt;
credentials directly -- the &lt;code&gt;docker-credential-gcloud&lt;/code&gt; helper acts as a middleman&lt;br&gt;
between Docker and your Google credentials.&lt;/p&gt;
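
&lt;p&gt;A &lt;code&gt;config.json&lt;/code&gt; that combines both fields might look something like this (the ECR registry host is made up; &lt;code&gt;ecr-login&lt;/code&gt; and &lt;code&gt;gcloud&lt;/code&gt; refer to the &lt;code&gt;docker-credential-ecr-login&lt;/code&gt; and &lt;code&gt;docker-credential-gcloud&lt;/code&gt; helper programs):&lt;/p&gt;

```json
{
  "credsStore": "osxkeychain",
  "credHelpers": {
    "1234.dkr.ecr.us-east-1.amazonaws.com": "ecr-login",
    "gcr.io": "gcloud"
  }
}
```
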

&lt;p&gt;Once again, here's the error message our users were getting:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Get https://1234.dkr.ecr.us-east-1.amazonaws.com/v2/blimp/blimp/manifests/v0.1: no basic auth credentials
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;We were able to run the &lt;code&gt;docker-credential-osxkeychain&lt;/code&gt; &lt;code&gt;list&lt;/code&gt; and &lt;code&gt;get&lt;/code&gt;&lt;br&gt;
commands to see the credentials for &lt;code&gt;1234.dkr.ecr.us-east-1.amazonaws.com&lt;/code&gt;, so&lt;br&gt;
why were we getting an error that there weren't any credentials?&lt;/p&gt;
&lt;h2&gt;
  
  
  In the Beginning: Docker Stores Your Registry Password In Your Config File
&lt;/h2&gt;

&lt;p&gt;It turns out that external credentials stores weren't&lt;br&gt;
&lt;a href="https://github.com/moby/moby/pull/20107"&gt;added&lt;/a&gt; to Docker until version 1.11,&lt;br&gt;
in 2016.  Before 1.11, Docker stored credentials via a config field called&lt;br&gt;
&lt;code&gt;auths&lt;/code&gt;. This field is stored in the same file as the &lt;code&gt;credsStore&lt;/code&gt; setting: &lt;code&gt;~/.docker/config.json&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Whenever you logged into a registry, Docker would set the value of auths to&lt;br&gt;
your password. For&lt;br&gt;
&lt;a href="https://www.projectatomic.io/blog/2016/03/docker-credentials-store/"&gt;example&lt;/a&gt;,&lt;br&gt;
your config file might contain the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="s"&gt;"auths"&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="s"&gt;"https://index.docker.io/v1/"&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="s"&gt;"auth"&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="s"&gt;"YW11cmRhY2E6c3VwZXJzZWNyZXRwYXNzd29yZA=="&lt;/span&gt;
        &lt;span class="p"&gt;},&lt;/span&gt;
        &lt;span class="s"&gt;"localhost:5001"&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="s"&gt;"auth"&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="s"&gt;"aGVzdHVzZXI6dGVzdHBhc3N3b3Jk"&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;What we learned the hard way is that there's a quirk with Docker's &lt;code&gt;login&lt;/code&gt;&lt;br&gt;
command. When you log in using &lt;code&gt;docker login&lt;/code&gt;, Docker adds an entry both to the&lt;br&gt;
&lt;code&gt;credsStore&lt;/code&gt; &lt;strong&gt;and&lt;/strong&gt; to &lt;code&gt;auths&lt;/code&gt;, using slightly different server names. Your&lt;br&gt;
credentials are properly stored in the credentials store, but the entry in&lt;br&gt;
&lt;code&gt;auths&lt;/code&gt; doesn't contain the username or password. The result looks something&lt;br&gt;
like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;
&lt;span class="s"&gt;"auths"&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="s"&gt;"https://index.docker.io/v1/"&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The problem is that Blimp grabs credentials from both &lt;code&gt;auths&lt;/code&gt; &lt;strong&gt;and&lt;/strong&gt;&lt;br&gt;
&lt;code&gt;credsStore&lt;/code&gt;. So it was passing two copies of the credentials to the Docker&lt;br&gt;
image puller -- one with the correct username and password, and one without the&lt;br&gt;
password at all.&lt;/p&gt;

&lt;p&gt;Unfortunately, Docker preferred the &lt;code&gt;https://&lt;/code&gt; version of the credential, and&lt;br&gt;
attempted to pull the image with the empty credential. Thus, the &lt;code&gt;no basic auth&lt;br&gt;
credentials&lt;/code&gt; error.&lt;/p&gt;

&lt;p&gt;Once we figured out that the problem was that an empty duplicate entry was&lt;br&gt;
getting added to the insecure store, it was easy to &lt;a href="https://github.com/kelda/blimp/blob/master/cli/up/up.go"&gt;fix the&lt;br&gt;
problem&lt;/a&gt;. All we&lt;br&gt;
needed to do was add an &lt;code&gt;if&lt;/code&gt; statement to skip empty credentials:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight go"&gt;&lt;code&gt;    &lt;span class="n"&gt;addCredentials&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="k"&gt;func&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;authConfigs&lt;/span&gt; &lt;span class="k"&gt;map&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="kt"&gt;string&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="n"&gt;clitypes&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;AuthConfig&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;host&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;cred&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="k"&gt;range&lt;/span&gt; &lt;span class="n"&gt;authConfigs&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="c"&gt;// Don't add empty config sections.&lt;/span&gt;
            &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;cred&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Username&lt;/span&gt; &lt;span class="o"&gt;!=&lt;/span&gt; &lt;span class="s"&gt;""&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt;
                &lt;span class="n"&gt;cred&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Password&lt;/span&gt; &lt;span class="o"&gt;!=&lt;/span&gt; &lt;span class="s"&gt;""&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt;
                &lt;span class="n"&gt;cred&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Auth&lt;/span&gt; &lt;span class="o"&gt;!=&lt;/span&gt; &lt;span class="s"&gt;""&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt;
                &lt;span class="n"&gt;cred&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Email&lt;/span&gt; &lt;span class="o"&gt;!=&lt;/span&gt; &lt;span class="s"&gt;""&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt;
                &lt;span class="n"&gt;cred&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;IdentityToken&lt;/span&gt; &lt;span class="o"&gt;!=&lt;/span&gt; &lt;span class="s"&gt;""&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt;
                &lt;span class="n"&gt;cred&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;RegistryToken&lt;/span&gt; &lt;span class="o"&gt;!=&lt;/span&gt; &lt;span class="s"&gt;""&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
                &lt;span class="n"&gt;creds&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;host&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;types&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;AuthConfig&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
                    &lt;span class="n"&gt;Username&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;      &lt;span class="n"&gt;cred&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Username&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                    &lt;span class="n"&gt;Password&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;      &lt;span class="n"&gt;cred&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Password&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                    &lt;span class="n"&gt;Auth&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;          &lt;span class="n"&gt;cred&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Auth&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                    &lt;span class="n"&gt;Email&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;         &lt;span class="n"&gt;cred&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Email&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                    &lt;span class="n"&gt;ServerAddress&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="n"&gt;cred&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;ServerAddress&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                    &lt;span class="n"&gt;IdentityToken&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="n"&gt;cred&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;IdentityToken&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                    &lt;span class="n"&gt;RegistryToken&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="n"&gt;cred&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;RegistryToken&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                &lt;span class="p"&gt;}&lt;/span&gt;
            &lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
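&lt;p&gt;Stripped of the Blimp-specific types, the filter boils down to something like this (a simplified sketch; the real code uses Docker's &lt;code&gt;types.AuthConfig&lt;/code&gt;, which has more fields):&lt;/p&gt;

```go
package main

import "fmt"

// AuthConfig holds a subset of the fields Docker stores per registry
// (simplified for illustration).
type AuthConfig struct {
	Username, Password, Auth string
}

// nonEmpty drops credential entries that have no usable fields, the
// same idea as the if statement in the fix above.
func nonEmpty(in map[string]AuthConfig) map[string]AuthConfig {
	out := map[string]AuthConfig{}
	for host, cred := range in {
		if cred.Username != "" || cred.Password != "" || cred.Auth != "" {
			out[host] = cred
		}
	}
	return out
}

func main() {
	creds := nonEmpty(map[string]AuthConfig{
		"https://index.docker.io/v1/":          {}, // the empty duplicate from auths
		"1234.dkr.ecr.us-east-1.amazonaws.com": {Username: "AWS", Password: "token"},
	})
	fmt.Println(len(creds)) // only the real credential survives
}
```

&lt;p&gt;With the empty entry filtered out, only the credential from the &lt;code&gt;credsStore&lt;/code&gt; reaches the image puller, and the pull succeeds.&lt;/p&gt;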



&lt;h2&gt;
  
  
  A Potential Docker Credentials Security Risk
&lt;/h2&gt;

&lt;p&gt;In the process of uncovering this bug, we noticed a potential security risk&lt;br&gt;
that you may not be aware of.  As we learned, it's best practice to use an&lt;br&gt;
external credentials store for your registry credentials.  However, depending&lt;br&gt;
on how and when you installed Docker, it's possible you're still using the&lt;br&gt;
legacy &lt;code&gt;auths&lt;/code&gt; method.  If you are, your &lt;code&gt;~/.docker/config.json&lt;/code&gt; might look&lt;br&gt;
something like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="s"&gt;"auths"&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="s"&gt;"https://index.docker.io/v1/"&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="s"&gt;"auth"&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="s"&gt;"YW11cmRhY2E6c3VwZXJzZWNyZXRwYXNzd29yZA=="&lt;/span&gt;
        &lt;span class="p"&gt;},&lt;/span&gt;
        &lt;span class="s"&gt;"localhost:5001"&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="s"&gt;"auth"&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="s"&gt;"aGVzdHVzZXI6dGVzdHBhc3N3b3Jk"&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;This may look reasonably secure, since the passwords appear to be a garbled bunch of&lt;br&gt;
gibberish.  Surely those passwords are &lt;em&gt;encrypted&lt;/em&gt;, right?&lt;/p&gt;

&lt;p&gt;Guess again. All Docker did was encode the passwords using base64. And as David&lt;br&gt;
Rieger pointed out on &lt;a href="https://hackernoon.com/getting-rid-of-docker-plain-text-credentials-88309e07640d"&gt;Hacker&lt;br&gt;
Noon&lt;/a&gt;,&lt;br&gt;
base64&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;may look like encryption on first glance, but it's not. Base64 is a scheme&lt;br&gt;
for encoding, not encryption. You can simply copy the base64 string and&lt;br&gt;
convert it to ASCII in a matter of seconds.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;That seemingly secure password of &lt;code&gt;aGVzdHVzZXI6dGVzdHBhc3N3b3Jk&lt;/code&gt;? All you need to do to read the password is base64 decode it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ echo aGVzdHVzZXI6dGVzdHBhc3N3b3Jk| base64 -D
hestuser:testpassword
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
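&lt;p&gt;Programmatically, the decode is just as short. Here's a rough Go sketch of what the Docker CLI does internally when it reads an &lt;code&gt;auths&lt;/code&gt; entry (not the actual &lt;code&gt;docker/cli&lt;/code&gt; helper, but the same idea):&lt;/p&gt;

```go
package main

import (
	"encoding/base64"
	"fmt"
	"strings"
)

// decodeAuth splits a Docker config "auth" entry back into its
// username and password. The value is simply "username:password",
// base64-encoded.
func decodeAuth(auth string) (string, string, error) {
	raw, err := base64.StdEncoding.DecodeString(auth)
	if err != nil {
		return "", "", err
	}
	parts := strings.SplitN(string(raw), ":", 2)
	if len(parts) != 2 {
		return "", "", fmt.Errorf("malformed auth entry")
	}
	return parts[0], parts[1], nil
}

func main() {
	user, pass, err := decodeAuth("aGVzdHVzZXI6dGVzdHBhc3N3b3Jk")
	if err != nil {
		panic(err)
	}
	fmt.Println(user, pass) // hestuser testpassword
}
```

&lt;p&gt;Anything that can read &lt;code&gt;~/.docker/config.json&lt;/code&gt; can recover the password this way.&lt;/p&gt;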



&lt;h2&gt;
  
  
  The Moral of Our Story: Double Check Your Docker Credentials' Security
&lt;/h2&gt;

&lt;p&gt;So that's the bad news: if your Docker config file isn't properly set up, Docker is&lt;br&gt;
storing your registry passwords in plain text.&lt;/p&gt;

&lt;p&gt;The good news is that it's easy to fix the problem.&lt;/p&gt;

&lt;p&gt;All you and your team members need to do is take a quick look at&lt;br&gt;
&lt;code&gt;~/.docker/config.json&lt;/code&gt;. If it contains an &lt;code&gt;auths&lt;/code&gt; password, get rid of it and&lt;br&gt;
switch over to using a credentials store. To do so, just download the&lt;br&gt;
appropriate &lt;code&gt;docker-credential-&lt;/code&gt; helper for your system, and update the&lt;br&gt;
&lt;code&gt;credsStore&lt;/code&gt; field in &lt;code&gt;~/.docker/config.json&lt;/code&gt;.&lt;/p&gt;
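&lt;p&gt;For example, on a Mac with &lt;code&gt;docker-credential-osxkeychain&lt;/code&gt; installed, the cleaned-up config might look something like this (the exact helper name depends on your platform):&lt;/p&gt;

```json
{
    "credsStore": "osxkeychain"
}
```

&lt;p&gt;After that, &lt;code&gt;docker login&lt;/code&gt; will route credentials through the helper instead of writing passwords into the file.&lt;/p&gt;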

&lt;p&gt;Hope that helps!&lt;/p&gt;

&lt;p&gt;Read the &lt;a href="https://kelda.io/blog/common-docker-compose-mistakes/"&gt;Top 5 common mistakes when writing Docker Compose&lt;/a&gt;&lt;/p&gt;

</description>
      <category>docker</category>
      <category>devops</category>
      <category>microservices</category>
    </item>
    <item>
      <title>How to Develop Your Node.Js Docker Applications Faster</title>
      <dc:creator>Ethan J. Jackson</dc:creator>
      <pubDate>Wed, 10 Jun 2020 00:35:06 +0000</pubDate>
      <link>https://dev.to/ethanjjackson/how-to-develop-your-node-js-docker-applications-faster-dg</link>
      <guid>https://dev.to/ethanjjackson/how-to-develop-your-node-js-docker-applications-faster-dg</guid>
      <description>&lt;p&gt;Docker has revolutionized how Node.js developers create and deploy applications. But developing a Node.js Docker application can be slow and clunky. The main culprit: the process for testing your code in development.&lt;/p&gt;

&lt;p&gt;In this article, we'll show a tutorial and example on how you can use Docker's host volumes and nodemon to code faster and radically reduce the time you spend testing.&lt;/p&gt;

&lt;h2&gt;
  
  
  How Host Volumes and Nodemon Can Speed Up Your Node.js Development
&lt;/h2&gt;

&lt;p&gt;One of the irritating things about testing during development with Docker is that whenever you change your code, you have to wait for the container to rebuild. With many Node.js applications, this can chew up a lot of time.&lt;/p&gt;

&lt;p&gt;As a result, you end up with a development workflow that looks like this:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You make a change.&lt;/li&gt;
&lt;li&gt;You wait for the container to rebuild.&lt;/li&gt;
&lt;li&gt;You make another change.&lt;/li&gt;
&lt;li&gt;You wait some more.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And if you have CI/CD and are continually running your code through automated tests? You're going to be spending even more time waiting for the container to rebuild.&lt;/p&gt;

&lt;p&gt;This process gets pretty tedious. And it's hard to stay in the flow.&lt;/p&gt;

&lt;p&gt;But there's a way to change a container's code without having to rebuild it. The trick is to use a Docker host volume. &lt;/p&gt;

&lt;p&gt;Host volumes sync file changes between a local host folder and a container folder. If you use a host volume to mount the code you're working on into a container, any edits you make to your code on your laptop will automatically appear in the container.  And as you will see in the next section, you can use the nodemon package to automatically restart your application without having to rebuild the container -- a technique known as "live reloading."&lt;/p&gt;

&lt;p&gt;The result: instead of having to spend lots of time waiting, your code-test-debug loop is almost instantaneous.&lt;/p&gt;

&lt;h2&gt;
  
  
  Example: Using Host Volumes and Nodemon in Node.Js Docker Development
&lt;/h2&gt;

&lt;p&gt;The idea of using a host volume to speed up your Node.js coding might seem a little intimidating. But it's pretty straightforward to do.&lt;/p&gt;

&lt;p&gt;To demonstrate this, let's use a Node.js example:&lt;br&gt;
&lt;a href="https://github.com/kelda/node-todo"&gt;Node-todo&lt;/a&gt;, a simple to-do application&lt;br&gt;
 created by &lt;a href="https://github.com/scotch-io/node-todo"&gt;scotch.io&lt;/a&gt;. To clone the repo:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;git clone https://github.com/kelda/node-todo
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The repo assumes you are using Docker Compose. You can also use&lt;br&gt;
&lt;a href="https://kelda.io/blimp"&gt;Blimp&lt;/a&gt;, our alternative to Compose that runs in the cloud.&lt;/p&gt;

&lt;p&gt;Here's Node-todo's &lt;code&gt;docker-compose.yml&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;3'&lt;/span&gt;
&lt;span class="na"&gt;services&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;web&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;build&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;.&lt;/span&gt;
    &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;8080:8080"&lt;/span&gt;
    &lt;span class="na"&gt;depends_on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;mongo&lt;/span&gt;
    &lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;./app:/usr/src/app/app"&lt;/span&gt;
  &lt;span class="na"&gt;mongo&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;mongo"&lt;/span&gt;
    &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;27017:27017"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;This file tells Docker to boot two containers: the Node.js application, and a MongoDB database where the application stores the to-dos. It also tells Docker to mount a host volume:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;./app:/usr/src/app/app"&lt;/span&gt; 
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;As a result, Docker will mount the ./app directory on your laptop, which contains your code, into the container at /usr/src/app/app.&lt;/p&gt;

&lt;p&gt;Now, all you need to do is ensure that whenever you've edited your code, your Node.js application restarts so it's using your latest code. That's where nodemon comes in.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.npmjs.com/package/nodemon"&gt;nodemon&lt;/a&gt; is a Node.js package that automatically restarts an application when it detects file changes in one or more specified directories. Once you've changed your code on your laptop/desktop, nodemon detects that change and restarts the process without rebuilding the container. &lt;/p&gt;
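&lt;p&gt;nodemon's watch behavior can also be tuned with a &lt;code&gt;nodemon.json&lt;/code&gt; file in your project root. As a sketch (the exact paths depend on your project layout), assuming your code lives in the mounted &lt;code&gt;app&lt;/code&gt; directory:&lt;/p&gt;

```json
{
    "watch": ["app"],
    "ext": "js,html"
}
```

&lt;p&gt;One caveat: file-change events don't always propagate through volume mounts. If nodemon doesn't notice your edits, its &lt;code&gt;-L&lt;/code&gt; (legacy watch) flag falls back to polling, at the cost of a little CPU.&lt;/p&gt;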

&lt;p&gt;To make this happen, you need to tell Docker to set the entrypoint to nodemon instead of node. You do that in the Dockerfile:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; node:10-alpine&lt;/span&gt;
&lt;span class="k"&gt;ENV&lt;/span&gt;&lt;span class="s"&gt; PORT 8080&lt;/span&gt;
&lt;span class="k"&gt;WORKDIR&lt;/span&gt;&lt;span class="s"&gt; /usr/src/app&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; . /usr/src/app&lt;/span&gt;

&lt;span class="k"&gt;RUN &lt;/span&gt;npm &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-g&lt;/span&gt; nodemon
&lt;span class="k"&gt;RUN &lt;/span&gt;npm &lt;span class="nb"&gt;install&lt;/span&gt;

&lt;span class="k"&gt;ENTRYPOINT&lt;/span&gt;&lt;span class="s"&gt; ["nodemon", "/usr/src/app/server.js"]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;In short, by using a host volume and nodemon, you can set up your Node.js application's container so it automatically syncs code changes between the container and your laptop. If you didn't do this, you'd have to rebuild the container every single time you made a change to your code. &lt;/p&gt;

&lt;p&gt;Over time, this technique can substantially speed up your Node.js development. For example, we've heard from users that it's not uncommon for container rebuilds to take 5-30 minutes. With host volumes and nodemon, your code sync is almost instantaneous. Imagine what your day would look like if you could save yourself 5-30 minutes every time you change and test your code.&lt;/p&gt;

&lt;h2&gt;
  
  
  Syncing Your Own Code When Developing a Node.js Application
&lt;/h2&gt;

&lt;p&gt;Now that you've seen how it works in a sample application, let's walk through how to enable code syncing in one of your existing Node.js projects.&lt;/p&gt;

&lt;h3&gt;
  
  
  Prerequisites
&lt;/h3&gt;

&lt;p&gt;Just like the example above, before you get started, we recommend your Node.js project includes the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A git repo that contains your code&lt;/li&gt;
&lt;li&gt;A Dockerfile that builds that code into a working container&lt;/li&gt;
&lt;li&gt;A docker-compose.yml file you use to run that container&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  How to Configure Your Container to Automatically Sync Your Node.js Code
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;1) Locate the folder in your Docker container that has your code&lt;/strong&gt;. The easiest way to figure out where your code is stored in your container is to look at your Dockerfile's &lt;code&gt;COPY&lt;/code&gt; commands. In the Node-todo example, the Dockerfile tells Docker to put the code in &lt;code&gt;/usr/src/app&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;COPY . /usr/src/app
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;2) Find the path to the folder on your laptop that has the same Node.js code&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3) Add a host volume to your docker-compose file.&lt;/strong&gt; Find the container in your docker-compose file that you want to sync code with, and add a &lt;code&gt;volumes&lt;/code&gt; entry underneath that container:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;/path-to-laptop-folder:/path-to-container-folder"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;4) Switch from using node to nodemon&lt;/strong&gt;.  In the Node-todo example, this is implemented via Dockerfile commands:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;RUN &lt;/span&gt;npm &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-g&lt;/span&gt; nodemon
&lt;span class="k"&gt;RUN &lt;/span&gt;npm &lt;span class="nb"&gt;install&lt;/span&gt;

&lt;span class="k"&gt;ENTRYPOINT&lt;/span&gt;&lt;span class="s"&gt; ["nodemon", "/usr/src/app/server.js"]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;As a result, Docker will install nodemon with &lt;code&gt;npm install -g nodemon&lt;/code&gt; and change the entrypoint from&lt;br&gt;
&lt;code&gt;node&lt;/code&gt; to  &lt;code&gt;nodemon&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5) Run Docker Compose or Blimp&lt;/strong&gt;. Now all you need to do is either run docker-compose:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;docker-compose up
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Or if you're using &lt;a href="https://kelda.io/blimp"&gt;Blimp&lt;/a&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;blimp up
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Docker will mount the code on your laptop over the code in the container.&lt;/p&gt;

&lt;p&gt;Now that you've modified your project so it uses a host volume and nodemon, any changes you make to your Node.js code on your laptop will now automatically appear in the container.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Using host volumes to link your Node.js code on your laptop with your container can take a little getting used to. But it'll make developing your Node.js Docker apps easier and faster.&lt;/p&gt;

&lt;p&gt;Originally posted on: &lt;a href="https://kelda.io/blog/develop-nodejs-docker-applications-faster/"&gt;https://kelda.io/blog/develop-nodejs-docker-applications-faster/&lt;/a&gt;&lt;/p&gt;

</description>
      <category>tutorial</category>
      <category>docker</category>
      <category>node</category>
      <category>microservices</category>
    </item>
    <item>
      <title>10 DockerCon 2020 Talks for 5/28</title>
      <dc:creator>Ethan J. Jackson</dc:creator>
      <pubDate>Wed, 27 May 2020 14:37:23 +0000</pubDate>
      <link>https://dev.to/ethanjjackson/10-dockercon-2020-talks-for-5-28-594b</link>
      <guid>https://dev.to/ethanjjackson/10-dockercon-2020-talks-for-5-28-594b</guid>
      <description>&lt;p&gt;&lt;a href="https://docker.events.cube365.net/docker/dockercon"&gt;DockerCon Live 2020&lt;/a&gt; is a free event online on 5/28! We're really excited to learn more about the container ecosystem. &lt;/p&gt;

&lt;p&gt;In addition to Kelda Co-founder Ethan's talk about &lt;a href="https://docker.events.cube365.net/docker/dockercon/content/Videos/icm6kmb3P3ZT8vwt5"&gt;Docker Compose in the Cloud with Blimp&lt;/a&gt; (at 2:00 PDT/5:00 EDT), we handpicked 10 talks we're interested in watching for their practical knowledge!&lt;/p&gt;

&lt;h3&gt;
  
  
  1. &lt;a href="https://docker.events.cube365.net/docker/dockercon/content/Videos/K5D8qKJpX658yY8o9"&gt;Docker Desktop + WSL 2 Integration Deep Dive&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;10:30am-11:00am PDT&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Simon Ferquel, Senior Software Developer, Docker&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Have you ever wondered how Docker Desktop on Windows works with WSL 2 to provide a better developer experience? This talk will dive deep into the Docker Desktop and WSL architectures and show how they fit together.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. &lt;a href="https://docker.events.cube365.net/docker/dockercon/content/Videos/eWWPtj5dmHAmoYypc"&gt;Best Practices for Compose-managed Python Applications&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;11:00am-11:30am PDT&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Anca Lordache, Engineer, Docker&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;It can be tricky to get your multi-tier Python application up and running deterministically for development. Managing the versions, dependencies and configuration takes up time that you could be using to code. Containers and Docker Compose solve this and give you a deterministic development environment that's quick to get up and running and easy to move to production.&lt;/p&gt;

&lt;p&gt;This talk will show you some best practices for Python projects with Docker Compose, including:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;How to bootstrap your project&lt;/li&gt;
&lt;li&gt;An example development workflow with debugging and automated testing&lt;/li&gt;
&lt;li&gt;How to make your builds reproducible and optimized&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;All this should make your Python development experience quicker and better!&lt;/p&gt;

&lt;h3&gt;
  
  
  3. &lt;a href="https://docker.events.cube365.net/docker/dockercon/content/Videos/eQogQXrN4xCbSuzCt"&gt;How To Build and Run Node Apps with Docker and Compose&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;11:00am-11:30am PDT&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Kathleen Juell, Developer, Digital Ocean&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Containers are an essential part of today's microservice ecosystem, as they allow developers and operators to maintain standards of reliability and reproducibility in fast-paced deployment scenarios. And while there are best practices that extend across stacks in containerized environments, there are also things that make each stack distinct, starting with the application image itself.&lt;/p&gt;

&lt;p&gt;This talk will dive into some of these particularities, both at the image and service level, while also covering general best practices for building and running Node applications with database backends using Docker and Compose.&lt;/p&gt;

&lt;h3&gt;
  
  
  4. &lt;a href="https://docker.events.cube365.net/docker/dockercon/content/Videos/4YkHYPnoQshkmnc26"&gt;Become a Docker Power User With Microsoft Visual Studio Code&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;11:30am-12:00pm PDT&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Brian Christner, Docker Captain &amp;amp; Co-Founder, 56K.Cloud, host of @thebytepodcast&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In this session, we will unlock the full potential of using Microsoft Visual Studio Code (VS Code) and Docker Desktop to turn you into a Docker Power User. When we expand and utilize the VS Code Docker plugin, we can take our projects and Docker skills to the next level. In addition to using VS Code, we streamline our Docker Desktop development workflow with less context switching and built-in shortcuts. You will learn how to bootstrap new projects, quickly write Dockerfiles utilizing templates, build, run, and interact with containers all from VS Code.&lt;/p&gt;

&lt;h3&gt;
  
  
  5. &lt;a href="https://docker.events.cube365.net/docker/dockercon/content/Videos/DtqPfXhLoJGgA3HEW"&gt;Tinkertoys, Microservices, and Feature Management: How to Build for the Future&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;12:00pm-12:30pm PDT&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Heidi Waterhouse, Principal Developer Advocate, LaunchDarkly&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Lots of us aren’t developing tidy, discrete features that are easy to manage. How do you plan to move from a tangle of interconnected features to something that you can test and deploy each part of? How do you manage the combinatorial complexity of individual feature testing? Join us for an overview on the conceptual basis of designing for feature management. It sounds simple to say that we will build one feature at a time, give it an API interface and allow it to connect with other features and microservices. The implementation is anything but simple. This talk explores how you can start migrating your existing features and services to a more modular, testable, and resilient system. Since containers are not state-aware, how do you make changes to their presentation without needing to rebuild them entirely? With feature flags, your container can be stable and your presentation dynamic. How can you test a distributed architecture on your laptop? How can you simulate partial outages? This talk is going to touch on some of the best practices that you can use to bring new life to your brown fields.&lt;/p&gt;

&lt;h3&gt;
  
  
  6. &lt;a href="https://docker.events.cube365.net/docker/dockercon/content/Videos/92BAM7vob5uQ2spZf"&gt;New Docker Desktop Filesharing Features&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;1:00pm-1:30pm PDT&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Dave Scott, Technical Staff, Docker&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Developers need fast edit-compile-test cycles to maximise their productivity. Source code is often edited in an IDE on the Mac or Windows host and shared directly with containers where it can be executed. Since the containers are running in helper VMs, the files must be accessed remotely or copied, which can lead to performance problems, lengthening the edit-compile-test cycle and lowering developer productivity. In this talk I'll describe recent changes to Docker Desktop to make file sharing faster and more reliable. We have a completely new implementation on Windows which replaces CIFS / SMB and on Mac we have an integration of "mutagen" which performs automatic two-way synchronisation of source code and build artefacts. This talk will contain a deep dive into these new features and demonstrate how to use them effectively.&lt;/p&gt;

&lt;h3&gt;
  
  
  7. &lt;a href="https://docker.events.cube365.net/docker/dockercon/content/Videos/AG9iBqW3BdXTR9Zfh"&gt;Simplify All the Things with Docker Compose&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;1:30pm-2:00pm PDT&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Michael Irwin, Application Architect, Virginia Tech&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;As you probably know by now, containers have revolutionized the software industry. But, once you have a container, then what? How do you run it? How do you help someone else run it? There are so many flags and options to remember, ports to configure, volume mappings to remember, and don't even get me started with networking containers together! While it's possible to do all of this through the command line, don't do it that way! With Docker Compose, you can create an easily shareable file that makes all of this a piece of cake. And once you fully adopt containers in your dev environment, it lets you setup your code repos to allow the simplest dev onboarding experience imaginable: 1) git clone; 2) docker-compose up; 3) write code. In this talk, we'll talk about several tips to help make all of this a reality. We'll start with a few Docker Compose basics, but then quickly move into several advanced topics. We'll even talk about how to use the same Dockerfile for dev and prod (we've all been there by having two separate files)! As an added bonus, we'll look at how to use Docker Compose in our CI/CD pipelines to perform automated tests of the container images built earlier in the pipeline! We'll have a few slides (because we have to explain a few things), lots of live demos (show it in action!), and maybe a few other surprises as well! Let's have some fun and help simplify all the things with Docker Compose!&lt;/p&gt;

&lt;h3&gt;
  
  
  8. &lt;a href="https://docker.events.cube365.net/docker/dockercon/content/Videos/gXQAMy3hS8yHMbSKL"&gt;Dev and Test Agility for Your Database With Docker&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;2:00pm-2:30pm PDT&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Julie Lerman, Software Coach, The Data Farm&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Agile practices teach us how to deal with evolving applications, but so often the data store is overlooked as a component of your application lifecycle. Database servers are monolithic, resource-intensive, and mostly viewed as set in stone. Encapsulating your database server in a container and your database in a storage container can dramatically lighten the load and make your database as agile as your model and other processes. And you can even use a serious enterprise-class database like SQL Server this way. This session will show how to benefit from using a containerized version of SQL Server for Linux during development and testing. We'll also address concerns about data that needs to be persisted. You'll also get a peek at the DevOps side of this, including using images in your CI/CD process.&lt;/p&gt;
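&lt;p&gt;As a rough sketch of the idea (not taken from the session itself — the password and volume name are placeholders), SQL Server for Linux can run under Compose with a named volume so the database files survive container rebuilds:&lt;/p&gt;

```yaml
# Illustrative sketch: containerized SQL Server for Linux with persistent data.
version: "3.8"
services:
  mssql:
    image: mcr.microsoft.com/mssql/server:2019-latest
    ports:
      - "1433:1433"
    environment:
      ACCEPT_EULA: "Y"
      SA_PASSWORD: "YourStrong!Passw0rd"   # placeholder — use a secret in real pipelines
    volumes:
      - mssql-data:/var/opt/mssql          # database files live here, outside the container
volumes:
  mssql-data:
```

&lt;p&gt;Dropping the named volume gives you a throwaway database for tests; keeping it gives you persistence during development.&lt;/p&gt;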

&lt;h3&gt;
  
  
  9. &lt;a href="https://docker.events.cube365.net/docker/dockercon/content/Videos/f7WF5xt7jRsDJePCG"&gt;How to Use Mirroring and Caching to Optimize your Container Registry&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;3:30pm-4:00pm PDT&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Brandon Mitchell, Docker Captain and DevOps Solutions Architect, BoxBoat&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;How do you make your builds more performant? This talk looks at options for caching and mirroring the images you need, so you can save on bandwidth costs and keep your builds running even when an upstream registry goes down.&lt;/p&gt;
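&lt;p&gt;One common setup the talk's topic suggests (the mirror URL below is a hypothetical example): point the Docker daemon at a local pull-through cache via the &lt;code&gt;registry-mirrors&lt;/code&gt; key in &lt;code&gt;daemon.json&lt;/code&gt;:&lt;/p&gt;

```json
{
  "registry-mirrors": ["https://registry-mirror.internal.example.com"]
}
```

&lt;p&gt;The mirror itself can simply be the open-source &lt;code&gt;registry:2&lt;/code&gt; image configured as a pull-through cache, with &lt;code&gt;proxy.remoteurl&lt;/code&gt; pointing at &lt;code&gt;https://registry-1.docker.io&lt;/code&gt;.&lt;/p&gt;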

&lt;h3&gt;
  
  
  10. &lt;a href="https://docker.events.cube365.net/docker/dockercon/content/Videos/GZpzJAapdrSXohzNz"&gt;Your Container has Vulnerabilities. Now What?&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;3:30pm-4:00pm PDT&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Jim Armstrong, Product Marketing Director Container Security, Snyk&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Containers are an essential part of today's microservice ecosystem, as they allow developers and operators to maintain standards of reliability and reproducibility in fast-paced deployment scenarios. And while there are best practices that extend across stacks in containerized environments, there are also things that make each stack distinct, starting with the application image itself.&lt;/p&gt;

&lt;p&gt;This talk will dive into some of these particularities, both at the image and service level, while also covering general best practices for building and running Node applications with database backends using Docker and Compose.&lt;/p&gt;
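&lt;p&gt;At the image level, the kind of hardening this talk gestures at often boils down to a few Dockerfile habits: a multi-stage build, a slim base image (fewer OS packages means fewer vulnerabilities to patch), and a non-root user. A rough sketch for a Node app — file names and versions are assumptions, not from the talk:&lt;/p&gt;

```dockerfile
# Illustrative sketch of image-level hardening for a Node app.

# Build stage: full toolchain for installing dependencies.
FROM node:14 AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci --production
COPY . .

# Runtime stage: slim base image, so there's less OS surface to scan and patch.
FROM node:14-slim
WORKDIR /app
COPY --from=build /app /app
USER node            # the official node images ship a non-root "node" user
EXPOSE 3000
CMD ["node", "server.js"]
```

&lt;p&gt;Scanning the resulting image regularly (e.g. with Snyk) then catches the vulnerabilities that slip in through base-image and dependency updates.&lt;/p&gt;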




&lt;p&gt;Originally posted at: &lt;a href="https://kelda.io/blog/dockercon-2020-talks/"&gt;https://kelda.io/blog/dockercon-2020-talks/&lt;/a&gt;&lt;/p&gt;

</description>
      <category>docker</category>
      <category>watercooler</category>
      <category>devops</category>
      <category>techtalks</category>
    </item>
    <item>
      <title>Docker Experience</title>
      <dc:creator>Ethan J. Jackson</dc:creator>
      <pubDate>Mon, 04 May 2020 18:57:34 +0000</pubDate>
      <link>https://dev.to/ethanjjackson/docker-experience-g3i</link>
      <guid>https://dev.to/ethanjjackson/docker-experience-g3i</guid>
      <description>&lt;p&gt;I'm working on a project to reduce Docker’s resource usage.&lt;/p&gt;

&lt;p&gt;Please fill out this 5-min survey to help me improve the Docker development experience!&lt;/p&gt;

&lt;p&gt;As a thank-you, I'll be raffling off a $150 Amazon gift card.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://forms.gle/2ggShw236EjBdtxn7"&gt;https://forms.gle/2ggShw236EjBdtxn7&lt;/a&gt;&lt;/p&gt;

</description>
      <category>watercooler</category>
      <category>webdev</category>
      <category>docker</category>
    </item>
  </channel>
</rss>
