<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Seán Kelleher</title>
    <description>The latest articles on DEV Community by Seán Kelleher (@smortimerk).</description>
    <link>https://dev.to/smortimerk</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F359023%2Fd34b92ef-f001-4457-b5b4-296c92d863aa.jpg</url>
      <title>DEV Community: Seán Kelleher</title>
      <link>https://dev.to/smortimerk</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/smortimerk"/>
    <language>en</language>
    <item>
      <title>Live Stream: Writing a Programming Language from Scratch</title>
      <dc:creator>Seán Kelleher</dc:creator>
      <pubDate>Sat, 05 Jun 2021 22:03:57 +0000</pubDate>
      <link>https://dev.to/smortimerk/live-stream-writing-a-programming-language-from-scratch-56jo</link>
      <guid>https://dev.to/smortimerk/live-stream-writing-a-programming-language-from-scratch-56jo</guid>
      <description>&lt;p&gt;At 17:00 GMT tomorrow (June 6) I'll be be streaming a coding session from &lt;a href="https://www.twitch.tv/ezanmoto"&gt;Twitch&lt;/a&gt;, where I'll be implementing an interpreter for a custom programming language. This may be interesting to those who want to learn the basics of how programming languages are implemented, or for those who already know but haven't yet implemented one themselves. I'll also be providing commentary and answering any questions that may come up during the session.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Edit:&lt;/strong&gt; The stream is now available at &lt;a href="https://www.twitch.tv/videos/1047868433"&gt;https://www.twitch.tv/videos/1047868433&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;The Target Language&lt;/h2&gt;

&lt;p&gt;The language I'll be developing is a custom C-style interpreted language. I intend to use it as a skeleton for experimenting with different syntaxes and styles for a future programming language project that I plan to work on.&lt;/p&gt;

&lt;p&gt;The language itself will be a fairly typical C-like language, combining a number of common features from modern languages:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Basic types: Booleans, integers and strings&lt;/li&gt;
&lt;li&gt;Complex types: Lists, objects (maps) and functions/methods&lt;/li&gt;
&lt;li&gt;Python-like strong, runtime typing, with optional typing in future&lt;/li&gt;
&lt;li&gt;Python-like string/list access&lt;/li&gt;
&lt;li&gt;Python-like built-in functions like &lt;code&gt;len&lt;/code&gt;, &lt;code&gt;keys&lt;/code&gt;, etc.&lt;/li&gt;
&lt;li&gt;Rust-like control flow syntax (e.g. &lt;code&gt;if true { ... }&lt;/code&gt; and &lt;code&gt;for x in xs { ... }&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;Rust/JS-like array/object composition (spreading) and decomposition&lt;/li&gt;
&lt;li&gt;Go-like error handling&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Here are some example snippets:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;x = 1
if x &amp;lt; 2 {
    print('T')
} else {
    print('F')
}

# Output:
#
# T
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;i = 0;
while i &amp;lt; 3 {
    print(i);
    i += 1;
}

# Output:
#
# 0
# 1
# 2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;xs = ['b', 'c'];
ys = [xs.., 'a', xs..];
[_, _, ..zs] = ys
for z in zs {
    print(z);
}

# Output:
#
# a
# b
# c
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;fn half(n) {
    return n / 2
}

fn map(f, xs) {
    for i in 0..len(xs) {
        xs[i] = f(xs[i])
    }
    return xs
}

print(map(half, [4, 5, 6]))

# Output:
#
# [2, 2, 3]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;The Source Language&lt;/h2&gt;

&lt;p&gt;I'll be writing the interpreter in Rust, using the LALRPOP parser generator. The data structuring and pattern matching capabilities of Rust make it a perfect fit for building and walking ASTs.&lt;/p&gt;

&lt;h2&gt;Finally&lt;/h2&gt;

&lt;p&gt;Please feel free to join me and ask questions. Also, please let me know in the comments here if there's anything that you'd like to see and I'll do my best to cover it.&lt;/p&gt;

</description>
      <category>programming</category>
      <category>rust</category>
      <category>interpreter</category>
    </item>
    <item>
      <title>Docker `with_build_env.sh` Script</title>
      <dc:creator>Seán Kelleher</dc:creator>
      <pubDate>Sat, 29 May 2021 15:01:02 +0000</pubDate>
      <link>https://dev.to/smortimerk/docker-withbuildenv-sh-script-4o9a</link>
      <guid>https://dev.to/smortimerk/docker-withbuildenv-sh-script-4o9a</guid>
      <description>&lt;p&gt;In &lt;a href="https://seankelleher.ie/posts/docker_for_building"&gt;Docker for the Build Process&lt;/a&gt;, I introduced the idea of extracting the build phase of a project into a dedicated &lt;code&gt;build_img.sh&lt;/code&gt; script, which uses the project's Dockerised build environment to build the "run" image for the project, allowing a project to be built on a host environment without installing any extra tooling other than Docker. In this post I'll expand on this idea and show how we can use this script as a basis for a new, reusable script that allows us to easily work with the build environment of our project.&lt;/p&gt;

&lt;h2&gt;Basic Usage&lt;/h2&gt;

&lt;p&gt;The idea here is to create a &lt;code&gt;with_build_env.sh&lt;/code&gt; script which, at its most basic, takes a command and runs it in the build environment, with the local project directory mounted inside. This means that we could, for example, run &lt;code&gt;bash scripts/with_build_env.sh make&lt;/code&gt; without even installing &lt;code&gt;make&lt;/code&gt; locally, while still having all the artefacts output to our local project directory.&lt;/p&gt;

&lt;h2&gt;Headless Usage&lt;/h2&gt;

&lt;p&gt;The first way this script can be run is the "headless" way, which is the approach primarily used in the build pipeline. This simply runs a command in the build environment:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;bash scripts/with_build_env.sh make posts
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;More complicated commands involving Bash operators can also be performed using the likes of &lt;code&gt;bash -c&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;bash scripts/with_build_env.sh bash &lt;span class="nt"&gt;-c&lt;/span&gt; &lt;span class="s1"&gt;'make check &amp;amp;&amp;amp; make posts'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;Interactive Usage&lt;/h2&gt;

&lt;p&gt;The second way this script can be run is the "interactive" way, which will generally only be used locally. This typically involves running an interactive shell in the build environment, allowing you to run build environment tools on your project even if they're not installed in your local environment.&lt;/p&gt;

&lt;p&gt;This approach will usually be performed using &lt;code&gt;sh&lt;/code&gt;/&lt;code&gt;bash&lt;/code&gt; as the command, and using a flag to indicate that the current command should be run interactively:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;bash scripts/with_build_env.sh &lt;span class="nt"&gt;--dev&lt;/span&gt; bash
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This launches a Bash process in the build environment. This allows you to work within your local directory but use the tools from your build environment.&lt;/p&gt;

&lt;p&gt;The reason for using a flag to distinguish interactive use from headless use is that build pipelines generally don't provide a TTY, which interactive use requires, so attempting to run a command in interactive mode in the build pipeline will fail.&lt;/p&gt;

&lt;h2&gt;Basic Implementation&lt;/h2&gt;

&lt;p&gt;The following is a basic &lt;code&gt;with_build_env.sh&lt;/code&gt; script based on the ideas presented in &lt;a href="https://seankelleher.ie/posts/docker_for_building"&gt;Docker for the Build Process&lt;/a&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;org&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'ezanmoto'&lt;/span&gt;
&lt;span class="nv"&gt;proj&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'hello'&lt;/span&gt;
&lt;span class="nv"&gt;build_img&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$org&lt;/span&gt;&lt;span class="s2"&gt;/&lt;/span&gt;&lt;span class="nv"&gt;$proj&lt;/span&gt;&lt;span class="s2"&gt;.build"&lt;/span&gt;

bash scripts/docker_rbuild.sh &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$build_img&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="s2"&gt;"latest"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--file&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'build.Dockerfile'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nb"&gt;.&lt;/span&gt;

&lt;span class="nv"&gt;workdir&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'/go/src/github.com/ezanmoto/hello'&lt;/span&gt;

docker run &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--rm&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--mount&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"type=bind,src=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;pwd&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;,dst=&lt;/span&gt;&lt;span class="nv"&gt;$workdir&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--workdir&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$workdir&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$build_img&lt;/span&gt;&lt;span class="s2"&gt;:latest"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$@&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This just performs two actions: it rebuilds the image for the build environment, if necessary, and then runs the provided command in the build environment. Rebuilding the image isn't necessary if you're using a remote image, but this step is useful for keeping your image up-to-date if your project defines its own build environment, as it's common for the requirements of projects to grow beyond base images quite quickly.&lt;/p&gt;

&lt;p&gt;Using this script to build the current project can then be as straightforward as &lt;code&gt;bash scripts/with_build_env.sh make&lt;/code&gt;. However, there are a number of drawbacks to the script as it currently is, such as the fact that the build runs as &lt;code&gt;root&lt;/code&gt; and dependencies aren't cached. The following sections show optional, incremental steps that can be used to improve this basic implementation.&lt;/p&gt;

&lt;p&gt;One additional note about &lt;code&gt;with_build_env.sh&lt;/code&gt; is that, while the general idea and approach of the script is the same everywhere, each specific instance may vary, because the details of the script generally change from project to project and from language to language. For example, when a &lt;code&gt;with_build_env.sh&lt;/code&gt; script wants to keep the build cache between Docker runs, the specifics of where the cache lives are handled directly by the script, and these change depending on the language and build tooling being used.&lt;/p&gt;

&lt;h2&gt;Layers&lt;/h2&gt;

&lt;p&gt;The basic &lt;code&gt;with_build_env.sh&lt;/code&gt; script presented above gives a lot of the benefits touted in &lt;a href="https://seankelleher.ie/posts/docker_for_building"&gt;Docker for the Build Process&lt;/a&gt; right out of the box. However, you are likely to encounter different issues depending on exactly how you use the script. For example, the first issue you're likely to hit is running it in a build pipeline, which will probably cause Docker to fail with the error &lt;code&gt;the input device is not a TTY&lt;/code&gt;. Another problem is that the issued command runs as &lt;code&gt;root&lt;/code&gt; by default which, while not necessarily a problem in and of itself, can cause friction when files created in the build environment are owned by &lt;code&gt;root&lt;/code&gt; in the host directory. This section outlines the most common and problematic issues that may be encountered, along with possible resolutions.&lt;/p&gt;

&lt;h2&gt;Interactive Use&lt;/h2&gt;

&lt;p&gt;The first issue that is often encountered when working with the &lt;code&gt;with_build_env.sh&lt;/code&gt; script is that Docker needs the &lt;code&gt;--interactive&lt;/code&gt; and &lt;code&gt;--tty&lt;/code&gt; flags when running locally, but these must be removed when running in the build pipeline. To handle this, I generally introduce some basic argument parsing to the script so that it can optionally be run in "dev" (interactive) mode:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight diff"&gt;&lt;code&gt;&lt;span class="gi"&gt;+docker_flags=''
+case "$1" in
+    --dev)
+        docker_flags="$docker_flags --interactive --tty --publish=$2"
+        shift 2
+        ;;
+esac
+
&lt;/span&gt; org='ezanmoto'
 proj='hello'
 build_img="$org/$proj.build"

 bash scripts/docker_rbuild.sh \
     "$build_img" \
     "latest" \
     --file='build.Dockerfile' \
     .

 workdir='/go/src/github.com/ezanmoto/hello'

 docker run \
&lt;span class="gi"&gt;+    $docker_flags \
&lt;/span&gt;     --rm \
     --mount="type=bind,src=$(pwd),dst=$workdir" \
     --workdir="$workdir" \
     "$build_img:latest" \
     "$@"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Passing &lt;code&gt;--interactive&lt;/code&gt; and &lt;code&gt;--tty&lt;/code&gt; is sufficient to enable this functionality, but I have a convention of also having the &lt;code&gt;--dev&lt;/code&gt; option take a port-forwarding argument, which is used to expose a port from the container. I often use ports to communicate between the host and the container, such as when running the main service I'm working on in the build environment and accessing it from the host. This addition means that the command for launching an interactive Bash shell has to be modified slightly, but it also means that we avoid having to restart the session in order to expose a port to the host:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;bash scripts/with_build_env.sh &lt;span class="nt"&gt;--dev&lt;/span&gt; 3000:3000 bash
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
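&lt;p&gt;To see how this argument handling behaves on its own (a standalone sketch, with a hypothetical &lt;code&gt;parse_args&lt;/code&gt; function standing in for the real script, and an &lt;code&gt;echo&lt;/code&gt; standing in for &lt;code&gt;docker run&lt;/code&gt;), note how &lt;code&gt;shift 2&lt;/code&gt; consumes the flag and its port argument, leaving the remaining arguments as the command to run:&lt;/p&gt;

```shell
#!/bin/bash

# Standalone sketch of the `--dev` argument parsing used in
# `with_build_env.sh`; the `echo` stands in for the `docker run` call.
parse_args() {
    docker_flags=''
    case "$1" in
        --dev)
            # Interactive mode: allocate a TTY and forward a port.
            docker_flags="--interactive --tty --publish=$2"
            shift 2
            ;;
    esac
    # Whatever remains is the command to run in the build environment.
    echo "flags: [$docker_flags] cmd: [$*]"
}

parse_args --dev 3000:3000 bash
# flags: [--interactive --tty --publish=3000:3000] cmd: [bash]

parse_args make posts
# flags: [] cmd: [make posts]
```

&lt;p&gt;This sketch only demonstrates the argument handling; the real script passes &lt;code&gt;$docker_flags&lt;/code&gt; and the remaining arguments on to &lt;code&gt;docker run&lt;/code&gt;.&lt;/p&gt;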



&lt;h2&gt;User Mapping&lt;/h2&gt;

&lt;p&gt;This isn't as much of an issue in the build pipeline, but when using the &lt;code&gt;with_build_env.sh&lt;/code&gt; script as presented above, one issue is that the default user in the build environment is &lt;code&gt;root&lt;/code&gt;. This is fine in a functional sense - you'll generally be able to build the project without issue and won't run into ownership problems. However, it quickly becomes very tedious from a usability perspective - any files that are created in the container are owned by &lt;code&gt;root&lt;/code&gt;, requiring &lt;code&gt;sudo&lt;/code&gt; to remove them locally, and accidentally performing &lt;code&gt;git&lt;/code&gt; operations can result in pain down the line as some of the files in your &lt;code&gt;.git&lt;/code&gt; directory can have their ownership altered.&lt;/p&gt;

&lt;p&gt;As a more usable solution I usually pass &lt;code&gt;--user="$(id --user):$(id --group)"&lt;/code&gt; to the docker run command. This means that we're now running as our host user when using the build environment, so any files we create will have the desired ownership:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight diff"&gt;&lt;code&gt; docker run \
     $docker_flags \
     --rm \
&lt;span class="gi"&gt;+    --user="$(id --user):$(id --group)" \
&lt;/span&gt;     --mount="type=bind,src=$(pwd),dst=$workdir" \
     --workdir="$workdir" \
     "$build_img:latest" \
     "$@"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;User Mapping Caveats&lt;/h3&gt;

&lt;p&gt;One issue with mapping the user as presented is that, while we're using the correct user and group IDs in the container for local development, this user doesn't actually exist within the build environment. This means that any tools that rely on the user's &lt;code&gt;$HOME&lt;/code&gt;, including many Git and SSH-based commands, simply won't work. Such commands will either need to be run outside the build environment (such as &lt;code&gt;git commit&lt;/code&gt;), or else the build environment will need to be set up with a functional user to carry out specific commands with &lt;code&gt;sudo&lt;/code&gt;. It's not always necessary for the user to exist in the container, but if it is then &lt;a href="https://jtreminio.com/blog/running-docker-containers-as-current-host-user/#make-it-dynamic"&gt;that user can be created in the build environment with the desired user ID and group ID&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;Caching&lt;/h2&gt;

&lt;p&gt;A convention of &lt;code&gt;with_build_env.sh&lt;/code&gt; is to &lt;code&gt;--rm&lt;/code&gt; containers after each use, which is useful to avoid accidentally depending on ephemeral aspects of the build environment. However, this means that any project dependencies stored outside the project directory must be re-downloaded with each launch of the build environment.&lt;/p&gt;

&lt;p&gt;The solution to this is to cache the downloaded files. This is a big area where the script will change based on the programming language and tooling being used.&lt;/p&gt;

&lt;p&gt;The first step is to create an area to persist the cached directories. For this I create named volumes with open permissions:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;tmp_cache&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$org&lt;/span&gt;&lt;span class="s2"&gt;.&lt;/span&gt;&lt;span class="nv"&gt;$proj&lt;/span&gt;&lt;span class="s2"&gt;.tmp_cache"&lt;/span&gt;
&lt;span class="nv"&gt;pkg_cache&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$org&lt;/span&gt;&lt;span class="s2"&gt;.&lt;/span&gt;&lt;span class="nv"&gt;$proj&lt;/span&gt;&lt;span class="s2"&gt;.pkg_cache"&lt;/span&gt;
&lt;span class="nv"&gt;tmp_cache_dir&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'/tmp/cache'&lt;/span&gt;
&lt;span class="nv"&gt;pkg_cache_dir&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'/go/pkg'&lt;/span&gt;

docker run &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--rm&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--mount&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"type=volume,src=&lt;/span&gt;&lt;span class="nv"&gt;$tmp_cache&lt;/span&gt;&lt;span class="s2"&gt;,dst=&lt;/span&gt;&lt;span class="nv"&gt;$tmp_cache_dir&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--mount&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"type=volume,src=&lt;/span&gt;&lt;span class="nv"&gt;$pkg_cache&lt;/span&gt;&lt;span class="s2"&gt;,dst=&lt;/span&gt;&lt;span class="nv"&gt;$pkg_cache_dir&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$build_img&lt;/span&gt;&lt;span class="s2"&gt;:latest"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nb"&gt;chmod&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
        0777 &lt;span class="se"&gt;\&lt;/span&gt;
        &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$tmp_cache_dir&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
        &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$pkg_cache_dir&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The specific image being used here isn't important (it just needs to have &lt;code&gt;chmod&lt;/code&gt; present), but we use the build environment for simplicity. We give the directories open permissions because volumes are owned by &lt;code&gt;root&lt;/code&gt; when created by Docker, and we want any user to be able to download dependencies and run builds in the build environment.&lt;/p&gt;

&lt;p&gt;It's also useful to prefix the name of the volume with the name of the project (&lt;code&gt;ezanmoto.hello.&lt;/code&gt;, in this example) to help isolate builds across project boundaries. See the "Caveats" section, below, for more details.&lt;/p&gt;
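&lt;p&gt;As a small, standalone illustration of this naming convention (no Docker required), the volume names are simply derived from the organisation and project names:&lt;/p&gt;

```shell
#!/bin/bash

# Sketch of the project-prefixed volume naming used above.
org='ezanmoto'
proj='hello'

tmp_cache="$org.$proj.tmp_cache"
pkg_cache="$org.$proj.pkg_cache"

echo "$tmp_cache"
# ezanmoto.hello.tmp_cache
echo "$pkg_cache"
# ezanmoto.hello.pkg_cache
```

&lt;p&gt;A different project would derive different prefixes, so its cache volumes never collide with these ones.&lt;/p&gt;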

&lt;p&gt;The last piece of the puzzle is to mount the volumes when using the build environment, and to let the build tools we're using know about them:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight diff"&gt;&lt;code&gt; docker run \
     $docker_flags \
     --rm \
&lt;span class="gi"&gt;+    --env=XDG_CACHE_HOME="$tmp_cache_dir" \
+    --mount="type=volume,src=$tmp_cache,dst=$tmp_cache_dir" \
+    --mount="type=volume,src=$pkg_cache,dst=$pkg_cache_dir" \
&lt;/span&gt;     --user="$(id --user):$(id --group)" \
     --mount="type=bind,src=$(pwd),dst=$workdir" \
     --workdir="$workdir" \
     "$build_img:latest" \
     "$@"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here, the Go build tools are told about the cache directory through the &lt;code&gt;XDG_CACHE_HOME&lt;/code&gt; environment variable; when this variable is set, Go's build cache defaults to a subdirectory of it.&lt;/p&gt;

&lt;p&gt;This setup allows the persistence of project dependencies across &lt;code&gt;docker run&lt;/code&gt;s without baking them into the image or storing them locally. The cache can also be cleared by running &lt;code&gt;docker volume rm ezanmoto.hello.tmp_cache ezanmoto.hello.pkg_cache&lt;/code&gt;, and the cache area will be automatically remade the next time &lt;code&gt;with_build_env.sh&lt;/code&gt; runs.&lt;/p&gt;

&lt;h3&gt;Rust&lt;/h3&gt;

&lt;p&gt;The following shows how a similar caching mechanism could be implemented for Rust. This snippet is taken from another project of mine:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker run &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--rm&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--mount&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'type=volume,src=dpnd_cargo_cache,dst=/cargo'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$build_img&lt;/span&gt;&lt;span class="s2"&gt;:latest"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nb"&gt;chmod &lt;/span&gt;0777 /cargo

docker run &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--interactive&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--tty&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--rm&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--mount&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'type=volume,src=dpnd_cargo_cache,dst=/cargo'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--env&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'CARGO_HOME=/cargo'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--user&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;id&lt;/span&gt; &lt;span class="nt"&gt;-u&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;:&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;id&lt;/span&gt; &lt;span class="nt"&gt;-g&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--mount&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"type=bind,src=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;pwd&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;,dst=/app"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--workdir&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'/app'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$build_img&lt;/span&gt;&lt;span class="s2"&gt;:latest"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$@&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here, Cargo uses the &lt;code&gt;CARGO_HOME&lt;/code&gt; environment variable to locate its cache of registry indexes and downloaded crates.&lt;/p&gt;

&lt;h3&gt;Caveats of Cache Volumes&lt;/h3&gt;

&lt;p&gt;It should generally be fine to use the same cache across projects, but some tools can encounter difficulties when the same caching volume is shared between projects (I've experienced this when using Go in particular, where I've encountered issues with the checksum database).&lt;/p&gt;

&lt;p&gt;The simplest solution in this scenario is to prefix the volume name with the name of the project in order to isolate volumes per project. However, do note that this issue can still arise even within a single project - for example, when switching between branches with different lockfiles. In this scenario it's helpful to remove the cache volumes and allow &lt;code&gt;with_build_env.sh&lt;/code&gt; to recreate them:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker volume &lt;span class="nb"&gt;rm &lt;/span&gt;ezanmoto.hello.tmp_cache ezanmoto.hello.pkg_cache
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;Benefits&lt;/h2&gt;

&lt;p&gt;An immediate benefit of &lt;code&gt;with_build_env.sh&lt;/code&gt; is that, assuming direct Docker access is available in the build pipeline, we can use the same commands locally as are used in the build pipeline. This makes the pipeline easier to set up, and also means that we can run our code in the build environment before committing, helping to ensure that code that builds locally will also build in the build pipeline.&lt;/p&gt;

&lt;p&gt;Some other benefits:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;We have a simpler mechanism to open a shell in the build environment, which can be used for building, testing and debugging.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;We can more easily work with a project without having to manually install its dependencies and required tools, which helps us avoid issues when trying to run the project for the first time. This is particularly helpful for onboarding new developers.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Many of the benefits outlined in &lt;a href="https://seankelleher.ie/posts/docker_for_building"&gt;Docker for the Build Process&lt;/a&gt; also apply. In particular, it now becomes much easier to start the interactive build environment.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;Conclusion&lt;/h2&gt;

&lt;p&gt;The &lt;code&gt;with_build_env.sh&lt;/code&gt; script allows us to abstract the process of running commands inside the build environment, making it easier for us to build, run and debug code in the replicated build environment. This enables greater parity with the build pipeline, and further helps to avoid the classic "it works on my machine" issue.&lt;/p&gt;




&lt;p&gt;This article was originally &lt;a href="https://seankelleher.ie/posts/with_build_env/"&gt;published&lt;/a&gt; on &lt;a href="https://seankelleher.ie/"&gt;seankelleher.ie&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>docker</category>
      <category>script</category>
      <category>bash</category>
    </item>
    <item>
      <title>Docker for the Build Process</title>
      <dc:creator>Seán Kelleher</dc:creator>
      <pubDate>Sun, 25 Apr 2021 12:22:51 +0000</pubDate>
      <link>https://dev.to/smortimerk/docker-for-the-build-process-5ghb</link>
      <guid>https://dev.to/smortimerk/docker-for-the-build-process-5ghb</guid>
      <description>&lt;p&gt;This article covers the use of Docker as part of the build process, for both local development and continuous integration builds. In particular, it addresses the practice of building projects using &lt;code&gt;docker build&lt;/code&gt;, and the alternative approach of using &lt;code&gt;docker run&lt;/code&gt; to perform builds.&lt;/p&gt;

&lt;p&gt;Many articles already espouse the benefits of using Docker for local builds, as part of the "dev loop", but to recap a few:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Running your local builds in Docker gives you better parity with the central build pipeline when the latter is also using Docker.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Your local builds can run in a minimal environment, reducing dependencies and the risk of depending on tools and resources that aren't available in the central build pipeline. This also helps with the issue of using mismatched versions of tools.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;It's possible to work on a project without having any of the tools/languages/frameworks installed locally. This is particularly useful when working on projects for a short term, or for working on different projects that depend on conflicting versions of the same tools.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;Building in Docker Images&lt;/h2&gt;

&lt;p&gt;When using Docker as part of the build process, some projects build the project artefacts as part of an image build. A typical &lt;code&gt;Dockerfile&lt;/code&gt; to define a service may look like the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; golang:1.14.3-stretch&lt;/span&gt;

&lt;span class="k"&gt;RUN &lt;/span&gt;&lt;span class="se"&gt;\
&lt;/span&gt;    apt-get update &lt;span class="se"&gt;\
&lt;/span&gt;    &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; apt-get &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-y&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;        fortune-mod &lt;span class="se"&gt;\
&lt;/span&gt;    &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;ln&lt;/span&gt; &lt;span class="nt"&gt;-s&lt;/span&gt; /usr/games/fortune /bin/fortune

&lt;span class="k"&gt;ENV&lt;/span&gt;&lt;span class="s"&gt; GO111MODULE=on&lt;/span&gt;

&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; go.mod go.sum /go/src/github.com/ezanmoto/hello/&lt;/span&gt;

&lt;span class="k"&gt;WORKDIR&lt;/span&gt;&lt;span class="s"&gt; /go/src/github.com/ezanmoto/hello&lt;/span&gt;

&lt;span class="k"&gt;RUN &lt;/span&gt;go mod download

&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; . /go/src/github.com/ezanmoto/hello&lt;/span&gt;

&lt;span class="k"&gt;RUN &lt;/span&gt;go build ./...

&lt;span class="k"&gt;ENTRYPOINT&lt;/span&gt;&lt;span class="s"&gt; ["./hello"]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;One of the most obvious problems with this approach is that this image defines both the build environment and the run environment. This leads to a number of issues, such as increased image size, an increased number of attack vectors (because of all of the extra packages required for the build), and the more subtle issue of mixing contexts (for example, is a particular dependency required at build time or at run time?). Thankfully, Docker added multi-stage builds, which allow us to separate the definition of the build image from that of the run image, and to make the run image as minimal as possible.&lt;/p&gt;
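&lt;p&gt;As a sketch (not the article's own code), the &lt;code&gt;Dockerfile&lt;/code&gt; above could be split into a multi-stage build roughly as follows, assuming a single &lt;code&gt;main&lt;/code&gt; package at the module root:&lt;/p&gt;

```dockerfile
# Build stage: the full Go toolchain, named so later stages can copy from it.
FROM golang:1.14.3-stretch AS build

ENV GO111MODULE=on

WORKDIR /go/src/github.com/ezanmoto/hello

COPY go.mod go.sum ./

RUN go mod download

COPY . .

# Assumes a single `main` package at the module root.
RUN CGO_ENABLED=0 go build -o /hello .

# Run stage: only the compiled binary and its run-time dependencies remain.
FROM alpine:3.11.3

RUN apk add fortune

COPY --from=build /hello /bin/hello

ENTRYPOINT ["/bin/hello"]
```

The run image never sees the Go toolchain or the source tree, which addresses the image-size and attack-surface issues, though the &lt;code&gt;COPY&lt;/code&gt; problems discussed next still apply.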

&lt;p&gt;However, another problem exists in the form of the &lt;code&gt;COPY&lt;/code&gt; commands:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;COPY . /go/src/github.com/ezanmoto/hello&lt;/code&gt; means that a new image needs to be built every time we want to use Docker to test a code change.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;COPY . /go/src/github.com/ezanmoto/hello&lt;/code&gt; actually results in the entire codebase being copied every time that Docker is used to build the project (assuming something in the codebase has changed). This is more of an issue in bigger projects, where it can take a few seconds to copy all files, adding delays to the development loop.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;docker build&lt;/code&gt; has to be run a lot in this setup for debugging purposes; every time you change the &lt;code&gt;Dockerfile&lt;/code&gt; to try something different you'll need to rebuild the image, further compounding any delays encountered by re-downloading project dependencies and copying the codebase into the image. It's not very interactive.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;COPY go.mod go.sum /go/src/github.com/ezanmoto/hello/&lt;/code&gt; can be a pain point when working with Docker images. With dependency managers such as &lt;code&gt;npm&lt;/code&gt; and &lt;code&gt;go mod&lt;/code&gt; that can fetch packages automatically, it's useful to be able to quickly try out different combinations. However, because any change to these files invalidates the &lt;code&gt;COPY&lt;/code&gt; layer and causes all packages to be re-downloaded, instead of just the updates, this adds friction, and a resistance to using Docker in the development loop, when it comes to testing and updating dependencies.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Building in Docker Containers
&lt;/h2&gt;

&lt;p&gt;A straightforward alternative is to build artefacts in containers instead of in images, utilising bind-mounts instead of &lt;code&gt;COPY&lt;/code&gt;. For example, instead of the image-based build in the previous section, the following can be used:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;build.Dockerfile&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; golang:1.14.3-stretch&lt;/span&gt;

&lt;span class="k"&gt;RUN &lt;/span&gt;&lt;span class="se"&gt;\
&lt;/span&gt;    apt-get update &lt;span class="se"&gt;\
&lt;/span&gt;    &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; apt-get &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-y&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;        fortune-mod &lt;span class="se"&gt;\
&lt;/span&gt;    &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;ln&lt;/span&gt; &lt;span class="nt"&gt;-s&lt;/span&gt; /usr/games/fortune /bin/fortune

&lt;span class="k"&gt;ENV&lt;/span&gt;&lt;span class="s"&gt; GO111MODULE=on&lt;/span&gt;

&lt;span class="k"&gt;WORKDIR&lt;/span&gt;&lt;span class="s"&gt; /go/src/github.com/ezanmoto/hello&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;build_img.sh&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;proj&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'ezanmoto/hello'&lt;/span&gt;
&lt;span class="nv"&gt;build_img&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$proj&lt;/span&gt;&lt;span class="s2"&gt;.build"&lt;/span&gt;
&lt;span class="nv"&gt;run_img&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$proj&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;

bash scripts/docker_rbuild.sh &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$build_img&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="s2"&gt;"latest"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--file&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'build.Dockerfile'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nb"&gt;.&lt;/span&gt;

&lt;span class="nb"&gt;mkdir&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; target
docker run &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--rm&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--mount&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"type=bind,src=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;pwd&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;,dst=/go/src/github.com/ezanmoto/hello"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$build_img&lt;/span&gt;&lt;span class="s2"&gt;:latest"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    bash &lt;span class="nt"&gt;-c&lt;/span&gt; &lt;span class="s1"&gt;'
        set -o errexit

        go mod download
        CGO_ENABLED=0 go build -o target ./...
    '&lt;/span&gt;

bash scripts/docker_rbuild.sh &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$run_img&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="s1"&gt;'latest'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nb"&gt;.&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;Dockerfile&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; alpine:3.11.3&lt;/span&gt;

&lt;span class="k"&gt;RUN &lt;/span&gt;apk add fortune

&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; target/hello /bin&lt;/span&gt;

&lt;span class="k"&gt;ENTRYPOINT&lt;/span&gt;&lt;span class="s"&gt; ["/bin/hello"]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;There is a notable increase in the amount of code that's present here, but there are also some immediate benefits:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Fewer image builds: The build image only needs to be built once, until its actual definition changes, as opposed to being built every time there's a code change. The run image only needs to be built in the CI environment unless it's being tested locally. When debugging the run image, rebuilding the image multiple times has less friction because changing files in the project won't break the cache.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;No redundant copying: The build is being run using the host directory so the delay incurred from copying the build context over and over is avoided, without needing to play around with &lt;code&gt;.dockerignore&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Project dependencies that are located within the project directory (e.g. &lt;code&gt;node_modules&lt;/code&gt;) can be kept from previous runs, even if the dependency/lock files change, and even if the definition of the build environment changes. Volumes can be used to cache dependencies that exist outside the project directory.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
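&lt;p&gt;For example, in Go the module cache lives outside the project directory (under &lt;code&gt;/go/pkg/mod&lt;/code&gt; in the &lt;code&gt;golang&lt;/code&gt; images), so a named volume can be added to the &lt;code&gt;docker run&lt;/code&gt; invocation from &lt;code&gt;build_img.sh&lt;/code&gt; to keep downloads between runs; the volume name here is an assumption for illustration:&lt;/p&gt;

```shell
mkdir -p target
docker run \
    --rm \
    --mount="type=bind,src=$(pwd),dst=/go/src/github.com/ezanmoto/hello" \
    --mount='type=volume,src=ezanmoto_hello_gomod,dst=/go/pkg/mod' \
    "$build_img:latest" \
    bash -c '
        set -o errexit

        go mod download
        CGO_ENABLED=0 go build -o target ./...
    '
```

With this, &lt;code&gt;go mod download&lt;/code&gt; is close to a no-op on subsequent runs, even after the build image itself is rebuilt.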

&lt;p&gt;Here are some other small benefits that I like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Succinct definition of the build environment: This makes it easier for developers that prefer to develop locally to actually mirror the exact build environment that'll be used in the build pipeline, by following the steps outlined in &lt;code&gt;build.Dockerfile&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The definition of the build environment, run environment and build instructions are all cleanly separated.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;However, the biggest benefit that I get from this approach lies in the fact that I can use the build environment interactively, as outlined in the following section.&lt;/p&gt;

&lt;h2&gt;
  
  
  Interactive Build Environment
&lt;/h2&gt;

&lt;p&gt;Now that the definition of the build environment has been separated from the build instructions and the definition of the run environment, it's possible to work interactively within the build environment with the local project directory mounted:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker run &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--interactive&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--tty&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--rm&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--volume&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;pwd&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;:/go/src/github.com/ezanmoto/hello"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="s1"&gt;'ezanmoto/hello'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    bash
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This starts a Bash session "in" the build environment, but with the project directory bind-mounted within the container. This means that any changes made inside the container are reflected locally, and vice-versa. This setup has numerous advantages when used as part of the development loop:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Assuming that the build pipeline uses the same build image, there is now almost total parity between the development environment and the build pipeline environment, meaning that there's less variability after a developer pushes changes. A developer can easily build locally, without creating new Docker images, in almost the same conditions as the build will occur in the build pipeline.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;A subtle benefit of the previous point is that developers will be using the same version of dependencies as the build pipeline. For example, a new team member may use Go 1.14 locally but the build pipeline might still be on Go 1.12 for various reasons. New Go features will work locally but will break the build. Being able to run with the same versions of tools locally means that there is less chance of this occurring in practice.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Again following on from the previous point, updates are effortless, and safe across projects. For example, the image for the build pipeline may get updated from Go 1.12 to 1.15. It can often be a daunting task to update local installs, especially if there isn't a simple mechanism for removing old versions. With a specially-defined build image, local software doesn't need to be updated, but developers can instead simply work in the new build environment without installing the build tools locally. This also means that issues won't arise when a developer is working on two projects at the same time that require different versions of the same dependencies.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Developers don't have to install programs locally at all! As a somewhat extreme example, even though I work primarily in Go and Rust, I don't have them installed on my host. Instead, they're installed in my build images that I work in interactively. This also means that environments can be cleaned effortlessly after finishing a project - removing the build images for that project removes all the programming languages and tools that were being used for that project.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Setting up the development environment for a new developer is now handled automatically by the &lt;code&gt;docker build&lt;/code&gt; process. Developers that want to replicate the setup locally can follow the instructions defined in &lt;code&gt;build.Dockerfile&lt;/code&gt; manually.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Outside of aspects like bound directories, the build environment is decoupled from the host. This means that there's less risk of accidentally depending on things that are present in the host environment that aren't going to be present in the build pipeline. A simple example of this could be depending on Linux tools that are native to the Ubuntu development host when Debian is being used as the build environment.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Disadvantages
&lt;/h2&gt;

&lt;p&gt;While the approach outlined above is my personal approach and preference for managing Docker images in a project, there are some notable caveats to it:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;The presented approach depends on the use of bind-mounting volumes for the build environment. This can work well in practice when using Linux images on Linux hosts, but may be less practical on other platforms. Furthermore, build environments that nest Docker containers may encounter extra complexity when working with bind-mounted volumes, as paths are referenced relative to the host's filesystem.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;When working interactively in the build environment you may quickly realise that users that you have defined locally don't exist in the build environment. Furthermore, trying to map users between these environments isn't straightforward - for example, do you do it at build time or at run time? This is perhaps one of the trickier aspects of running builds in containers instead of images, and could be a big argument in favour of the image approach. This is because any required users are usually better-defined in the image approach, typically being set up using the likes of &lt;code&gt;adduser&lt;/code&gt;/&lt;code&gt;useradd&lt;/code&gt; at the start of the image definition.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Some people may consider the bind-mount approach to be less "pure" because the container is exposed to the host, and may worry about the reproducibility of the setup, since reproducibility is one of the main benefits of Docker. However, the approach is no less reproducible than the approach that performs &lt;code&gt;COPY . /src&lt;/code&gt;, as the entire host context is copied into the image. With both approaches, it is the responsibility of the build pipeline to ensure that the environment is clean and set up for reproducible results.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;As mentioned in the previous section, this approach achieves almost total parity between the development environment and the build pipeline environment, but that doesn't mean that subtle differences can't emerge between the two.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For example, I encountered an issue with this setup when using Samson for CI/CD. By default, Samson doesn't actually do a full clone of a repository when running a new build/deployment, but instead creates a new Git worktree using symbolic links. This meant that the Git repository mounted in the build environment was referencing a file on the host, which couldn't be resolved in the build environment. I wasn't using worktrees locally, so this issue wasn't occurring in my local environment.&lt;/p&gt;

&lt;p&gt;The resolution was straightforward, but less than ideal: to force a full checkout for projects that needed it. Still, it highlights the fact that differences between the local environment and the build pipeline can still manifest with this approach, and special attention should be paid when working with symbolic links in particular.&lt;/p&gt;
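&lt;p&gt;As for the user-mapping caveat above, one common run-time workaround (an assumption on my part, not something from this setup) is to run the build container as the host user, so that files created in the bind-mounted directory are owned by you; note that the resulting user has no &lt;code&gt;passwd&lt;/code&gt; entry inside the container, which some tools object to:&lt;/p&gt;

```shell
docker run \
    --rm \
    --user="$(id -u):$(id -g)" \
    --mount="type=bind,src=$(pwd),dst=/go/src/github.com/ezanmoto/hello" \
    'ezanmoto/hello.build:latest' \
    go build ./...
```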

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Building code in Docker containers using &lt;code&gt;docker run&lt;/code&gt; is generally faster, more space-efficient and more amenable to debugging than building code as part of a &lt;code&gt;docker build&lt;/code&gt;. Separating the build environment definition from the build instructions also allows for greater parity between the development environment and the build pipeline, and allows for easier management of project dependencies.&lt;/p&gt;




&lt;p&gt;This article was originally &lt;a href="https://seankelleher.ie/posts/docker_for_building/"&gt;published&lt;/a&gt; on &lt;a href="https://seankelleher.ie/"&gt;seankelleher.ie&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>docker</category>
      <category>devops</category>
      <category>script</category>
    </item>
    <item>
      <title>Docker Build `--replace`</title>
      <dc:creator>Seán Kelleher</dc:creator>
      <pubDate>Sat, 17 Apr 2021 13:32:24 +0000</pubDate>
      <link>https://dev.to/smortimerk/docker-build-replace-57gh</link>
      <guid>https://dev.to/smortimerk/docker-build-replace-57gh</guid>
      <description>&lt;p&gt;This article covers "docker build replace", a script that I use in projects that contain Dockerfiles, which aims to help overcome some of the main drawbacks I encounter when using Dockerfiles in a project.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;docker build&lt;/code&gt; command is great for helping to achieve reproducible builds for projects, where in the past developers had to rely on setting up the correct environment manually in order to get a successful build. One big drawback of &lt;code&gt;docker build&lt;/code&gt;, however, is that it can be very costly in terms of storage when running it multiple times, as each run of the command will generally leave unnamed images around. Cleanup can be straightforward, but requires continual pruning.&lt;/p&gt;

&lt;p&gt;The need to remove unused images is particularly felt when trying to develop and debug Dockerfiles. Trying to come up with a minimal set of instructions that will allow you to run your processes the way that you want can require several &lt;code&gt;docker build&lt;/code&gt; runs, even after you've narrowed down the scope with an interactive &lt;code&gt;docker run&lt;/code&gt; session. Such a sequence may well require a few Docker image purges over the course of a session as your disk continually fills up with old and redundant images. This is compounded further if your Docker image makes use of a command such as &lt;code&gt;COPY . /src&lt;/code&gt;, where each change to your root project will require a new image build.&lt;/p&gt;

&lt;p&gt;This is where &lt;code&gt;docker build --replace&lt;/code&gt; comes in: Docker automatically removes the old image with the same tag when a new copy is built, and skips the build entirely if it's up to date. The only problem is that this flag doesn't currently exist.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;code&gt;docker_rbuild.sh&lt;/code&gt;
&lt;/h2&gt;

&lt;p&gt;I wrote &lt;code&gt;docker_rbuild.sh&lt;/code&gt; ("Docker replace build") to approximate the idea of &lt;code&gt;docker build --replace&lt;/code&gt; by making use of the build cache:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# `$0 &amp;lt;img-name&amp;gt; &amp;lt;tag&amp;gt;` builds a docker image that replaces the docker image&lt;/span&gt;
&lt;span class="c"&gt;# `&amp;lt;img-name&amp;gt;:&amp;lt;tag&amp;gt;`, or creates it if it doesn't already exist.&lt;/span&gt;
&lt;span class="c"&gt;#&lt;/span&gt;
&lt;span class="c"&gt;# This script uses `&amp;lt;img-name&amp;gt;:cached` as a temporary tag and so may clobber&lt;/span&gt;
&lt;span class="c"&gt;# such existing images if present.&lt;/span&gt;

&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt; &lt;span class="nv"&gt;$# &lt;/span&gt;&lt;span class="nt"&gt;-lt&lt;/span&gt; 2 &lt;span class="o"&gt;]&lt;/span&gt; &lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
    &lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"usage: &lt;/span&gt;&lt;span class="nv"&gt;$0&lt;/span&gt;&lt;span class="s2"&gt; &amp;lt;img-name&amp;gt; &amp;lt;tag&amp;gt; ..."&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&amp;amp;2
    &lt;span class="nb"&gt;exit &lt;/span&gt;1
&lt;span class="k"&gt;fi

&lt;/span&gt;&lt;span class="nv"&gt;img_name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$1&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;span class="nb"&gt;shift
&lt;/span&gt;&lt;span class="nv"&gt;tag&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$1&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;span class="nb"&gt;shift

&lt;/span&gt;docker tag &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$img_name&lt;/span&gt;&lt;span class="s2"&gt;:&lt;/span&gt;&lt;span class="nv"&gt;$tag&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$img_name&lt;/span&gt;&lt;span class="s2"&gt;:cached"&lt;/span&gt; &amp;amp;&amp;gt;/dev/null
&lt;span class="k"&gt;if &lt;/span&gt;docker build &lt;span class="nt"&gt;--tag&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$img_name&lt;/span&gt;&lt;span class="s2"&gt;:&lt;/span&gt;&lt;span class="nv"&gt;$tag&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$@&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
    &lt;/span&gt;docker rmi &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$img_name&lt;/span&gt;&lt;span class="s2"&gt;:cached"&lt;/span&gt; &amp;amp;&amp;gt;/dev/null
    &lt;span class="c"&gt;# We return a success code in case `rmi` failed.&lt;/span&gt;
    &lt;span class="nb"&gt;true
&lt;/span&gt;&lt;span class="k"&gt;else
    &lt;/span&gt;&lt;span class="nv"&gt;exit_code&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nv"&gt;$?&lt;/span&gt;
    docker tag &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$img_name&lt;/span&gt;&lt;span class="s2"&gt;:cached"&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$img_name&lt;/span&gt;&lt;span class="s2"&gt;:&lt;/span&gt;&lt;span class="nv"&gt;$tag&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &amp;amp;&amp;gt;/dev/null
    &lt;span class="nb"&gt;exit&lt;/span&gt; &lt;span class="nv"&gt;$exit_code&lt;/span&gt;
&lt;span class="k"&gt;fi&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This tags the current copy of the image so that it can be reused for caching purposes, and then kicks off a new build. If the build was successful then the "cache" version is removed, theoretically meaning that only the latest copy of the image you're working on should be present in your system. If the build fails then the old tag is restored. If there are no updates then the cached layers are used to create a "new" image almost instantly to replace the old one.&lt;/p&gt;

&lt;p&gt;With this, local images are automatically "pruned" as newer copies are produced, saving time and disk space.&lt;/p&gt;
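&lt;p&gt;A side note on the script's argument handling: the first two arguments name the image and tag, and anything left after the two &lt;code&gt;shift&lt;/code&gt;s is forwarded to &lt;code&gt;docker build&lt;/code&gt; untouched, which is how &lt;code&gt;--file&lt;/code&gt; and the build context are passed in from the earlier scripts. A minimal, standalone illustration of the pattern:&lt;/p&gt;

```shell
# Simulate `docker_rbuild.sh ezanmoto/hello latest --file=build.Dockerfile .`
# by setting the positional parameters directly.
set -- ezanmoto/hello latest --file=build.Dockerfile .

img_name="$1"
shift
tag="$1"
shift

# Whatever remains in "$@" is passed straight through to `docker build`.
echo "image: $img_name:$tag"
echo "extra docker build args: $*"
```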

&lt;h2&gt;
  
  
  Idempotency
&lt;/h2&gt;

&lt;p&gt;One benefit of &lt;code&gt;docker_rbuild.sh&lt;/code&gt; is that, now that &lt;code&gt;docker build&lt;/code&gt; isn't leaving redundant images around with each build, it becomes practical to use it in scripts to rebuild our images before we run them. When a project defines local images, the image can be rebuilt every time it's used, so that we're always using the latest version of the image without having to update it manually.&lt;/p&gt;

&lt;p&gt;An example of where this can be convenient is when you want to use an external program or project written in a language that isn't otherwise used by your project. For example, the build process for this blog's content uses Node.js, but consider the case where I wanted to use a Markdown linter written in Ruby, such as &lt;a href="https://github.com/markdownlint/markdownlint"&gt;Markdownlint&lt;/a&gt;. One option is to add a Ruby installation directly to the definition of the build environment, but this has a few disadvantages:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;It adds an installation for a full new language to the build environment just to support the running of one program.&lt;/li&gt;
&lt;li&gt;It isn't clear, at a glance, that Ruby is only being installed to support one tool, and to someone new to the project it can look like the project is a combined Node.js/Ruby project.&lt;/li&gt;
&lt;li&gt;The above point lends itself to using more Ruby gems "just because" Ruby is available, meaning that removing the Ruby installation later becomes more difficult.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;One way to work around this is to encapsulate the usage with a Dockerfile, like &lt;code&gt;markdownlint.Dockerfile&lt;/code&gt;, and a script that runs the tool:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;markdownlint.Dockerfile&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; ruby:3.0.0-alpine3.13&lt;/span&gt;

&lt;span class="k"&gt;RUN &lt;/span&gt;gem &lt;span class="nb"&gt;install &lt;/span&gt;mdl

&lt;span class="k"&gt;ENTRYPOINT&lt;/span&gt;&lt;span class="s"&gt; ["mdl"]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;markdownlint.sh&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt; &lt;span class="nv"&gt;$# &lt;/span&gt;&lt;span class="nt"&gt;-ne&lt;/span&gt; 1 &lt;span class="o"&gt;]&lt;/span&gt; &lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
    &lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"usage: &lt;/span&gt;&lt;span class="nv"&gt;$0&lt;/span&gt;&lt;span class="s2"&gt; &amp;lt;md-file&amp;gt;"&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&amp;amp;2
    &lt;span class="nb"&gt;exit &lt;/span&gt;1
&lt;span class="k"&gt;fi
&lt;/span&gt;&lt;span class="nv"&gt;md_file&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$1&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;

&lt;span class="nv"&gt;proj&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'ezanmoto/hello'&lt;/span&gt;
&lt;span class="nv"&gt;sub_img_name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"markdownlint"&lt;/span&gt;
&lt;span class="nv"&gt;sub_img&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$proj&lt;/span&gt;&lt;span class="s2"&gt;.&lt;/span&gt;&lt;span class="nv"&gt;$sub_img_name&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;

docker run &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--rm&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--volume&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;pwd&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;:/app"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--workdir&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'/app'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$sub_img&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$md_file&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This addresses some of the above issues:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Ruby isn't installed directly into the build environment, meaning that the build environment is kept focused and lean.&lt;/li&gt;
&lt;li&gt;In &lt;code&gt;markdownlint.Dockerfile&lt;/code&gt;, the Ruby installation is kept with the program that it's used to run, making the association clear.&lt;/li&gt;
&lt;li&gt;The entire Ruby installation can be removed easily by deleting &lt;code&gt;markdownlint.Dockerfile&lt;/code&gt;. This can be useful if we decide to replace the tool with a different linter, like &lt;a href="https://www.npmjs.com/package/markdownlint-cli"&gt;this one written for Node.js&lt;/a&gt;. Another reason why we might remove &lt;code&gt;markdownlint.Dockerfile&lt;/code&gt; is if the external project starts maintaining its own public Docker image that can be used instead of managing a local version.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Despite the benefits, there are two subtle issues with this setup. The first is that &lt;code&gt;ezanmoto/hello.markdownlint&lt;/code&gt; will need to be built somehow before &lt;code&gt;markdownlint.sh&lt;/code&gt; can be run, which may be a manual process, and a missing image would be a surprising error for a script to fail with.&lt;/p&gt;

&lt;p&gt;The second issue is that if one developer builds the local image, and a second developer updates the image definition, the first developer will need to rebuild their copy of the local image before running &lt;code&gt;markdownlint.sh&lt;/code&gt; again, or risk unexpected results.&lt;/p&gt;

&lt;p&gt;We can solve both of these issues by running &lt;code&gt;docker_rbuild.sh&lt;/code&gt; before running &lt;code&gt;ezanmoto/hello.markdownlint&lt;/code&gt;:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;markdownlint.sh&lt;/code&gt;:&lt;/p&gt;




&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;bash scripts/docker_rbuild.sh &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$sub_img&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="s1"&gt;'latest'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--file&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$sub_img_name&lt;/span&gt;&lt;span class="s2"&gt;.Dockerfile"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nb"&gt;.&lt;/span&gt;

docker run &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--rm&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--volume&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;pwd&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;:/app"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--workdir&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'/app'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$sub_img&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$md_file&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
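&lt;p&gt;For reference, the snippet above assumes variables along these lines are defined at the top of &lt;code&gt;markdownlint.sh&lt;/code&gt; (the exact values here are assumptions for illustration, following the image naming used in this article):&lt;/p&gt;

```shell
# Hypothetical definitions for the variables used above; the image
# name is derived from the project name and the Dockerfile name.
proj='ezanmoto/blog_content'
sub_img_name='markdownlint'
sub_img="$proj.$sub_img_name"

# The Markdown file to lint is passed as the first argument.
md_file="$1"
```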




&lt;p&gt;This causes the image to always be rebuilt before it's used, meaning that we're always working with the latest version of the image, and this build step will most often be skipped due to caching (though attention should be paid to the commands used in the image build, as the use of commands like &lt;code&gt;COPY&lt;/code&gt; can limit the effectiveness of the cache).&lt;/p&gt;
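&lt;p&gt;As a hedged illustration of that caching caveat (the file names here are hypothetical, not from the project above), instructions that change rarely should come before any &lt;code&gt;COPY&lt;/code&gt; of frequently-edited files, so that the expensive layers stay cached:&lt;/p&gt;

```dockerfile
FROM ruby:3.1-alpine

# Stable layer: only rebuilt when the base image or the gem changes.
RUN gem install mdl

# Volatile layer: editing the copied file invalidates the cache from
# this instruction onwards, so it's kept as late as possible.
COPY .mdlrc /etc/mdlrc
```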
&lt;h2&gt;
  
  
  Use With &lt;code&gt;docker-compose&lt;/code&gt;
&lt;/h2&gt;

&lt;p&gt;I find &lt;code&gt;docker-compose&lt;/code&gt; particularly useful for modelling deployments. However, like developing Docker images, getting the &lt;code&gt;docker-compose&lt;/code&gt; environment correct can require continual fine-tuning of Docker images, especially for defining minimal environments. This can again result in lots of wasted space, especially when used with &lt;code&gt;docker-compose up --build&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;With that in mind, I now remove the &lt;code&gt;build&lt;/code&gt; property from services defined in &lt;code&gt;docker-compose.yml&lt;/code&gt;. This then requires the images to be built before &lt;code&gt;docker-compose&lt;/code&gt; is called, which I normally handle in a script that will build all of the images used in the &lt;code&gt;docker-compose.yml&lt;/code&gt; file before the file is called:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;docker-compose.yml&lt;/code&gt;:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;2.4'&lt;/span&gt;

&lt;span class="na"&gt;services&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="s"&gt;hello.seankelleher.local&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx:1.19.7-alpine&lt;/span&gt;
        &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;8080:8080&lt;/span&gt;
        &lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;./configs/hello.conf:/etc/nginx/conf.d/hello.conf:ro&lt;/span&gt;

    &lt;span class="na"&gt;hello&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ezanmoto/hello&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;scripts/docker_compose_up_build.sh&lt;/code&gt;:&lt;/p&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;set&lt;/span&gt; &lt;span class="nt"&gt;-o&lt;/span&gt; errexit

&lt;span class="nv"&gt;proj&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'ezanmoto/hello'&lt;/span&gt;
&lt;span class="nv"&gt;run_img&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$proj&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;

bash scripts/docker_rbuild.sh &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$run_img&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="s1"&gt;'latest'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nb"&gt;.&lt;/span&gt;

docker-compose up &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$@&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
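&lt;p&gt;Note the &lt;code&gt;"$@"&lt;/code&gt; on the last line of the script: it forwards any arguments given to &lt;code&gt;docker_compose_up_build.sh&lt;/code&gt; straight to &lt;code&gt;docker-compose up&lt;/code&gt;, so flags like &lt;code&gt;--detach&lt;/code&gt; still work. A minimal sketch of that forwarding, with &lt;code&gt;echo&lt;/code&gt; standing in for &lt;code&gt;docker-compose&lt;/code&gt; so it can run anywhere:&lt;/p&gt;

```shell
# `forward` mirrors the last line of `docker_compose_up_build.sh`,
# using `echo` in place of `docker-compose` for demonstration.
forward() {
    echo docker-compose up "$@"
}

forward --detach --remove-orphans
# Prints: docker-compose up --detach --remove-orphans
```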



&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Having an idempotent rebuild for Docker images means that it's more feasible to rebuild before each run, much in the same way that some build tools (e.g. &lt;code&gt;cargo&lt;/code&gt;) update any changed dependencies before attempting to rebuild the codebase. While Docker doesn't have native support for this at present, a script that takes advantage of the cache can be used to simulate such behaviour.&lt;/p&gt;




&lt;p&gt;This article was originally &lt;a href="https://seankelleher.ie/posts/docker_rbuild/"&gt;published&lt;/a&gt; on &lt;a href="https://seankelleher.ie/"&gt;seankelleher.ie&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>docker</category>
      <category>devops</category>
    </item>
  </channel>
</rss>
