<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Cristian Medina</title>
    <description>The latest articles on DEV Community by Cristian Medina (@tryexceptpass).</description>
    <link>https://dev.to/tryexceptpass</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F189691%2Fa688e9b2-c602-4d6d-90f9-775e284ca7cd.jpg</url>
      <title>DEV Community: Cristian Medina</title>
      <link>https://dev.to/tryexceptpass</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/tryexceptpass"/>
    <language>en</language>
    <item>
      <title>12 Trending Alternatives to Distribute Python Applications in 2020</title>
      <dc:creator>Cristian Medina</dc:creator>
      <pubDate>Tue, 10 Dec 2019 04:00:00 +0000</pubDate>
      <link>https://dev.to/tryexceptpass/12-trending-alternatives-to-distribute-python-applications-in-2020-2b12</link>
      <guid>https://dev.to/tryexceptpass/12-trending-alternatives-to-distribute-python-applications-in-2020-2b12</guid>
      <description>&lt;p&gt;One of the more prevalent topics in the Python ecosystem of 2019 was that of packaging and distribution. As the year comes to an end, I wanted to put together a summary of the many paths we currently have available to distribute apps built with Python. Though some of these also apply to any language.&lt;/p&gt;

&lt;p&gt;Whether delivering an executable, a virtual environment, your packaged code, or a full application, the following list includes both standard systems and some up-and-comers to keep in mind as we enter 2020.&lt;/p&gt;

&lt;h2&gt;
  
  
  Applications
&lt;/h2&gt;

&lt;p&gt;To distribute an application, you need more than just a library to pip-install from PyPI. You require a fool-proof way of installing it and its dependencies on all supported operating systems. These dependencies can include non-Python packages as well as external resources like binaries or images.&lt;/p&gt;

&lt;p&gt;Below are some options to help install and distribute code across platforms.&lt;/p&gt;

&lt;h3&gt;
  
  
  Docker
&lt;/h3&gt;

&lt;p&gt;Docker uses base functionality in an operating system to isolate a process in such a way that it's unaware of the rest of the system. Containers share the host's kernel, so you can run the userland of a different OS on your host, much like virtualization, but without emulating virtual hardware.&lt;/p&gt;

&lt;p&gt;Its file system is layered, giving it a minimal footprint that only incorporates the files needed to run, instead of the typical virtual disk, which also carries a virtual machine's free space along with it.&lt;/p&gt;

&lt;p&gt;With a minimally packaged root file system, it's not uncommon to see the image for an entire OS only occupy tens of MB, instead of the GB needed for a virtual machine. &lt;/p&gt;

&lt;p&gt;You can distribute these containers in a public registry like &lt;a href="https://hub.docker.com"&gt;DockerHub&lt;/a&gt; or a private one inside your org. Users install the Docker daemon on their own machines, then use it to pull your image and run it locally.&lt;/p&gt;

&lt;p&gt;Since Docker images are nothing more than a root file system, it's also possible to distribute them as a file, using Docker to import it.&lt;/p&gt;

&lt;p&gt;To build an image, you can start from a rootfs, or add on top of an existing registry image. Most operating system vendors maintain official stripped-down images in DockerHub, which are usually tiny.&lt;/p&gt;

&lt;p&gt;Other organizations also make official images for new builds of their applications, like the &lt;a href="https://hub.docker.com/_/python"&gt;Python&lt;/a&gt; image built on top of Debian. Postgres, MySQL, Redis, Nginx, and many other standard services do the same.&lt;/p&gt;

&lt;p&gt;Docker now runs on Linux, OSX, and Windows. This cross-platform support means that any image you build has a wide distribution with minimal complexity. You control not only the application but also the environment it runs on, making compatibility much less of an issue.&lt;/p&gt;

&lt;p&gt;However, complexity can exist when configuring the network or persistent storage. The typical application doesn't have to deal with anything more than port forwarding, but it's sometimes hard to visualize how the abstraction layers work. It helps to budget for better documentation around it.&lt;/p&gt;

&lt;p&gt;With Docker, you can control the Python distribution, the supporting OS packages - like C libraries needed for individual modules - and your virtual environment.&lt;/p&gt;

&lt;p&gt;Since it takes milliseconds for a container to start, it's perfectly acceptable, even encouraged, to run a fresh container every time you wish to execute your app.&lt;/p&gt;

&lt;p&gt;Some people even use Docker as a virtual environment replacement. Since it starts up quickly and can offer an interactive shell, it's not a bad idea to make a new container whenever you need to work on a particular project.&lt;/p&gt;

&lt;p&gt;Once you have a running container configured with the basics, you can save the image for reuse or even export it to a file.&lt;/p&gt;

&lt;p&gt;For example, the following command starts a new Python container, mounts the current working directory as &lt;code&gt;/work&lt;/code&gt;, and drops you into a bash shell. Changes made inside the container directory reflect in the host directory.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;docker run &lt;span class="nt"&gt;-it&lt;/span&gt; &lt;span class="nt"&gt;--name&lt;/span&gt; python-work &lt;span class="nt"&gt;-v&lt;/span&gt; &lt;span class="sb"&gt;`&lt;/span&gt;&lt;span class="nb"&gt;pwd&lt;/span&gt;&lt;span class="sb"&gt;`&lt;/span&gt;:/work python:3-slim bash
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Once inside the container, you can install whatever Apt or PyPI packages are needed.&lt;/p&gt;

&lt;p&gt;Exiting the shell returns you to the host, at which point running this next command saves any changes as a new Docker image that you can reuse later or push to DockerHub for distribution:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;docker commit python-work my-python-image
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;It's possible to share the new image in a private Docker Registry internal to your organization, or run the following to export the container as a file that anyone can download and import into their local Docker environment:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;docker &lt;span class="nb"&gt;export &lt;/span&gt;python-work &lt;span class="nt"&gt;-o&lt;/span&gt; my-python-work.dock
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;However, the real workflow for distribution is to create a &lt;code&gt;Dockerfile&lt;/code&gt; that anyone can use to create the image themselves. In other words, you give out a small text file with instructions for the Docker daemon, instead of a copy of the entire filesystem.&lt;/p&gt;

&lt;p&gt;Anyone could clone your repo with that file and run this command to create a local version of the image:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;docker build &lt;span class="nt"&gt;-t&lt;/span&gt; my-python-image &lt;span class="nb"&gt;.&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;A typical Dockerfile looks like the one below; please look through the Docker documentation for more info:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM Python:3-slim

COPY path/to/your/app /work

WORKDIR /work
RUN pip install -r requirements.txt
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;&lt;a href="https://docs.docker.com/"&gt;More details&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Virtual Machines and Vagrant
&lt;/h3&gt;

&lt;p&gt;The next step up from containers is to distribute a full virtual machine. This type of system has been around since virtualization support became widespread in hardware.&lt;/p&gt;

&lt;p&gt;Delivering a "virtual appliance" is attractive because you have close to total control of the environment in which your app runs. Everything is configurable: the operating system, its packages, disks, and networking, even the amount of free space.&lt;/p&gt;

&lt;p&gt;The drawback of using VMs is the size of your distribution, typically measured in gigabytes. Plus, you'll have to work out a mechanism for getting the image to your customers. Services like Amazon S3 or DigitalOcean Spaces are a great place to start.&lt;/p&gt;

&lt;p&gt;In the beginning, only server hardware supported running virtual machines, but these days most processors have the capability, and all major operating systems support it. There are also free applications to help you manage and configure VMs, like Oracle's VirtualBox.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://vagrantup.com"&gt;Vagrant&lt;/a&gt; is another system for configuring and running VMs on top of managers like VirtualBox. It functions a lot like Docker in that you specify everything the VM needs in a file, and it takes care of building and running it for you.&lt;/p&gt;

&lt;p&gt;Similar to the Dockerfile in the previous section, Hashicorp's Vagrant uses a &lt;code&gt;Vagrantfile&lt;/code&gt; with instructions on how to start and configure a virtual machine.&lt;/p&gt;

&lt;p&gt;Just like a Docker image provides the base file system to run a container, a Vagrant &lt;em&gt;box&lt;/em&gt; provides the basis for a virtual machine.&lt;/p&gt;

&lt;p&gt;The example Vagrantfile below does something similar to the Dockerfile in the previous section:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Vagrant.configure("2") do |config|
 config.vm.box = "bento/debian-10.2"
 config.vm.synced_folder "./", "/work"
 config.vm.provision "shell",
 inline: "pip install -r /work/requirements.txt
 keep_color: true
end
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Cloning your repository with this file and running these commands starts the VM and gets you into its shell:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;vagrant up
vagrant ssh
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Vagrant does help with the distribution problem by providing an experience similar to DockerHub with the &lt;a href="https://app.vagrantup.com/boxes/search"&gt;Vagrant Box catalog&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;With it, you can get your base images or upload new ones to share with others. You can even point a Vagrantfile to internal URLs to download a box.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.vagrantup.com/intro/index.html"&gt;More details&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  PyInstaller
&lt;/h3&gt;

&lt;p&gt;We've written about this module &lt;a href="https://dev.to/article/package-python-as-executable/"&gt;before&lt;/a&gt;. PyInstaller takes care of bundling all resources required to run your application, including the Python distribution. Crawling your code, it figures out which Python dependencies to package, while still allowing you to specify additional assets to include with the bundle.&lt;/p&gt;

&lt;p&gt;The result is an installable application for either Windows, Linux, or OSX. At startup, it unpacks itself into a folder along with the bundled interpreter and runs your entrypoint script.&lt;/p&gt;

&lt;p&gt;It's flexible enough to give you control over the Python distribution and the execution environment. I've even successfully used it to bundle browsers with my Python packages.&lt;/p&gt;

&lt;p&gt;But using it does bring a complication. When extracting itself, it changes the base directory from which your application runs. Meaning, any code dependent on &lt;code&gt;__file__&lt;/code&gt; to determine the current execution path now needs to use the internal environment that PyInstaller configures.&lt;/p&gt;
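&lt;p&gt;A common workaround is a small helper that checks the attribute PyInstaller sets at runtime before falling back to the source location. A minimal sketch, with a helper name of my own choosing:&lt;/p&gt;

```python
import os
import sys

def resource_path(relative):
    """Resolve a bundled resource both from source and from a frozen app."""
    # PyInstaller one-file builds extract to a temp folder and record its
    # path in sys._MEIPASS; outside a bundle, fall back to this file's dir.
    base = getattr(sys, "_MEIPASS", os.path.dirname(os.path.abspath(__file__)))
    return os.path.join(base, relative)

print(resource_path("logo.png"))
```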

&lt;p&gt;Distributing your bundle is entirely up to you. Most people choose object stores and CDNs for this type of setup.&lt;/p&gt;

&lt;p&gt;A point to keep in mind when distributing this way is to check whether your code needs to validate the OS environment it's running on.&lt;/p&gt;

&lt;p&gt;In other words, if you need a specific &lt;code&gt;apt&lt;/code&gt; package installed, unlike using Docker or Vagrant, there's no guarantee that the package is already there at the time of execution.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.pyinstaller.org/"&gt;More details&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Briefcase
&lt;/h3&gt;

&lt;p&gt;Briefcase is an up-and-comer in this category. It's part of the &lt;a href="https://beeware.org/"&gt;BeeWare&lt;/a&gt; project, which aims to enable packaging Python applications for distribution to all operating systems and devices, including Android and iOS.&lt;/p&gt;

&lt;p&gt;It's in a similar arena as PyInstaller, meaning it can bundle your module along with its dependencies into an installable application. But it also adds support for mobile devices and set-top boxes like the Apple TV running tvOS.&lt;/p&gt;

&lt;p&gt;Unfortunately, documentation is still sparse and mostly in the form of examples. However, the project shows promise and is under active development. The popular editor &lt;a href="https://codewith.mu/"&gt;Mu&lt;/a&gt; uses it for packaging.&lt;/p&gt;

&lt;p&gt;You can submit applications built with Briefcase to the Android and Apple App Stores for distribution.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://beeware.org/project/projects/tools/briefcase/"&gt;More details&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Virtual Environments
&lt;/h2&gt;

&lt;p&gt;Sometimes, it's possible to assume that your users have a standard operating system. Maybe they're all running from a stock image built by the IT department inside your company. Maybe your app is just simple enough to not worry about OS or interpreter complexities.&lt;/p&gt;

&lt;p&gt;If all you need to think about is your Python code and its dependent libraries, this category is for you.&lt;/p&gt;

&lt;h3&gt;
  
  
  Pex
&lt;/h3&gt;

&lt;p&gt;Built by the folks at Twitter, Pex is a way of distributing an entire virtual environment along with your Python application. Designed to use a pre-installed Python interpreter, it leverages the &lt;em&gt;Python Zip Applications&lt;/em&gt; standard outlined in &lt;a href="https://legacy.python.org/dev/peps/pep-0441/"&gt;PEP-441&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Since Python 2.6, the interpreter has the ability to execute directories or zip-format archives as scripts.&lt;/p&gt;
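&lt;p&gt;The standard library even ships a &lt;code&gt;zipapp&lt;/code&gt; module that builds these archives directly. A quick sketch of the mechanism Pex builds on (the file and directory names here are made up):&lt;/p&gt;

```python
import os
import subprocess
import sys
import tempfile
import zipapp

# Build a tiny app directory whose __main__.py acts as the entrypoint.
src = tempfile.mkdtemp()
with open(os.path.join(src, "__main__.py"), "w") as f:
    f.write("print('hello from a zip app')\n")

# Pack it into a PEP 441 archive, then let the interpreter run it.
target = os.path.join(tempfile.mkdtemp(), "hello.pyz")
zipapp.create_archive(src, target)

out = subprocess.run([sys.executable, target], capture_output=True, text=True)
print(out.stdout.strip())  # prints: hello from a zip app
```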

&lt;p&gt;Pex builds on top of that, simplifying distribution to the act of copying a single file. These files can work across platforms (OSX, Linux, Windows) and across different interpreters, though there are some limitations when using modules with C bindings.&lt;/p&gt;

&lt;p&gt;After installing Pex in your base system, you can use it to produce a single file containing your entire Python environment.&lt;/p&gt;

&lt;p&gt;Pass that file to your coworker's computer, and you'll be able to execute it there without installing anything other than the base Python interpreter.&lt;/p&gt;

&lt;p&gt;You can even run a file in interpreter mode, such that it opens a Python REPL with all the necessary modules in your environment available for import.&lt;/p&gt;

&lt;p&gt;Freezing your virtual environment into a &lt;code&gt;.pex&lt;/code&gt; file is easy enough:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;pex &lt;span class="nt"&gt;-r&lt;/span&gt; requirements.txt &lt;span class="nt"&gt;-o&lt;/span&gt; virtualenv.pex
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Then you can execute that file to open a REPL with your environment:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;./virtualenv.pex
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;You can also specify entrypoints when creating the file so that it executes a specific function in your module, behaving as if you run the python command directly.&lt;/p&gt;

&lt;p&gt;Here's a &lt;a href="https://m.youtube.com/watch?v=NmpnGhRwsu0"&gt;great 15 min video&lt;/a&gt; with an example that bundles a simple Flask app.&lt;/p&gt;

&lt;p&gt;There's no system to distribute Pex files for you, so just like the previous items, you're stuck with public object stores and CDNs.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/pantsbuild/pex"&gt;More details&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Shiv
&lt;/h3&gt;

&lt;p&gt;Similar to Pex, the folks at LinkedIn built a module called Shiv. Their main reason for building something new was to tackle a problem with the start time of Pex executables. Given the complexity of their repositories and dependencies, they wanted to handle the environment setup differently.&lt;/p&gt;

&lt;p&gt;Instead of bundling wheels along with the application, Shiv includes the entire site-packages directory as installed by pip. Everything works right out of the box, and it starts almost twice as fast as Pex.&lt;/p&gt;

&lt;p&gt;Here's an example of how to produce a &lt;code&gt;.pyz&lt;/code&gt; file with Shiv that does something similar to the Pex section:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;shiv &lt;span class="nt"&gt;--compressed&lt;/span&gt; &lt;span class="nt"&gt;-o&lt;/span&gt; virtualenv.pyz &lt;span class="nt"&gt;-r&lt;/span&gt; requirements.txt
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Which you can then execute directly with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;./virtualenv.pyz
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;It's important to note that packaging libraries with OS dependencies is not cross-compatible between platforms. As mentioned in the Pex section, it's mainly an issue for modules that depend on lower-level C libraries. You'll have to produce different files for each platform.&lt;/p&gt;

&lt;p&gt;Again, there's no system built for you to distribute these files, so you'll have to rely on AWS, DO, CDNs, or other artifact stores like JFrog's Artifactory or Sonatype's Nexus.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/linkedin/shiv"&gt;More details&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Pipx
&lt;/h3&gt;

&lt;p&gt;While not a method to build applications or distribute them, Pipx offers a different way to install them. It works with your OS to isolate virtual environments and their dependencies, closer to what a system like Homebrew does for OSX.&lt;/p&gt;

&lt;p&gt;Pipx provides an easy way to install packages into isolated environments and expose their command line entrypoints globally. It also provides a mechanism to list, upgrade, and uninstall those packages without getting into the virtual environment details.&lt;/p&gt;

&lt;p&gt;A good example is the use of a linter. Let's say you work on multiple Python applications, each with a separate virtualenv, and you wish to perform the same linting operations across all of them using &lt;code&gt;flake8&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Instead of installing the flake8 module into every virtualenv, you can use Pipx to install a system-wide &lt;code&gt;flake8&lt;/code&gt; command that's available from any of those projects but runs in its own isolated environment.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://pypi.org/project/pipx/"&gt;More details&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Single-file Executables
&lt;/h2&gt;

&lt;p&gt;Sometimes you want to give your customers an executable that doesn't depend on pre-installed software to run, as it would with Docker or Pex.&lt;/p&gt;

&lt;p&gt;The mechanisms described here help you accomplish that. And just like the previous category, they all require some form of object or artifact store to help with distribution.&lt;/p&gt;

&lt;h3&gt;
  
  
  PyInstaller
&lt;/h3&gt;

&lt;p&gt;While we already discussed PyInstaller, it's worth another mention in this category, because this is one of its primary functions.&lt;/p&gt;

&lt;p&gt;It can produce a single-file executable of your entire application with all its dependencies bundled. You can make one for each operating system, and it runs just like any other native application.&lt;/p&gt;

&lt;h3&gt;
  
  
  PyOxidizer
&lt;/h3&gt;

&lt;p&gt;One of the newest additions to the packaging and distribution arena, PyOxidizer, is very promising. It takes advantage of the packaging tools created for the Rust programming language.&lt;/p&gt;

&lt;p&gt;Much like PyInstaller, you have complete control of everything you want to bundle into it, but it also allows you to execute code much like Pex or Shiv. Meaning you can create your package so that it runs as a REPL with all dependencies pre-installed for you.&lt;/p&gt;

&lt;p&gt;Distributing an entire Python environment that includes the REPL makes for some exciting applications, especially with research teams or scientific computing that require several packages to do data exploration.&lt;/p&gt;

&lt;p&gt;One advantage over PyInstaller is that instead of extracting out to the file system, it loads everything into memory, making the start-up time of your actual Python application considerably faster.&lt;/p&gt;

&lt;p&gt;This feature comes with a similar drawback to PyInstaller. You have to adjust any internal references to &lt;code&gt;__file__&lt;/code&gt; or similar operations, so they rely on the environment configured by PyOxidizer at runtime.&lt;/p&gt;
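&lt;p&gt;One way to sidestep &lt;code&gt;__file__&lt;/code&gt; arithmetic altogether is to read package data through &lt;code&gt;importlib.resources&lt;/code&gt;, which works whether a module lives on disk or in memory. A sketch using a throwaway package as a stand-in for your app (the &lt;code&gt;mypkg&lt;/code&gt; name is hypothetical):&lt;/p&gt;

```python
import os
import sys
import tempfile
from importlib import resources

# Fabricate a small package with a data file to demonstrate the pattern.
root = tempfile.mkdtemp()
pkg = os.path.join(root, "mypkg")
os.makedirs(pkg)
open(os.path.join(pkg, "__init__.py"), "w").close()
with open(os.path.join(pkg, "config.json"), "w") as f:
    f.write('{"debug": false}')

sys.path.insert(0, root)

# resources.files() resolves data relative to the package, not __file__.
data = resources.files("mypkg").joinpath("config.json").read_text()
print(data)
```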

&lt;p&gt;More details available in their &lt;a href="https://pyoxidizer.readthedocs.io/en/latest/"&gt;official documentation&lt;/a&gt;, but we also wrote more about it in &lt;a href="https://dev.to/article/package-python-as-executable/"&gt;this article&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Nuitka
&lt;/h3&gt;

&lt;p&gt;Beyond executing your Python code with a bundled interpreter, you also have the option of compiling your code down to C. This comes with several advantages, including the possibility of faster execution given that the C compiler can perform optimizations that the interpreter cannot.&lt;/p&gt;

&lt;p&gt;Nuitka is a system built for compiling Python code to C. While the concept is similar to the more widely known Cython, it's not a separate language. And it's capable of doing things that Cython can't, like crawling your dependencies and compiling everything down to one binary.&lt;/p&gt;

&lt;p&gt;The resulting executable runs as native code, is much faster, and never needs extraction.&lt;/p&gt;

&lt;p&gt;Compilation can get complicated, especially when considering platform complexities. But if you budget the time for it, you'll be able to reap the benefits. I've done it successfully several times before.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://nuitka.net/"&gt;More details&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  App Store Experiences
&lt;/h2&gt;

&lt;p&gt;There's another kind of distribution system that we use very successfully almost every day: app stores, software that exists to install and maintain other applications.&lt;/p&gt;

&lt;p&gt;Just like the Apple App Store or the Google Play Store, there are similar mechanisms available in Linux that enable simple integrations.&lt;/p&gt;

&lt;h3&gt;
  
  
  Snapcraft
&lt;/h3&gt;

&lt;p&gt;Snapcraft provides the standard app store experience. You can publish your application to their store, where users can discover and install it.&lt;/p&gt;

&lt;p&gt;The installation is isolated to avoid conflicts with other applications, and it works across Linux distributions, including library dependencies. &lt;/p&gt;

&lt;p&gt;Once installed, the store automatically keeps your application at the latest stable version and provides a mechanism to revert to previous states while preserving data.&lt;/p&gt;

&lt;p&gt;Ubuntu manages the store, so after bundling your application (or &lt;em&gt;snap&lt;/em&gt; as they call it), you'll have to publish the snap to the store with a registered Ubuntu One account.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://snapcraft.io/"&gt;More details&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Flatpak
&lt;/h3&gt;

&lt;p&gt;Another very similar concept to Snapcraft is Flatpak.&lt;/p&gt;

&lt;p&gt;It also presents a store-like experience through FlatHub.org, using container technologies to isolate your application. But you can also host your own private hub, or distribute bundles in a single file.&lt;/p&gt;

&lt;p&gt;Flatpak bundles can also make use of some desktop integration capabilities. These provide things like locality detection, the ability to access resources external to the app (much like your phone asks for permission to open files or URLs), notifications, window decorations, and others.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://flatpak.org/"&gt;More details&lt;/a&gt; and the instructions to create your first Flatpak are &lt;a href="http://docs.flatpak.org/en/latest/first-build.html#"&gt;available here&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Summarizing
&lt;/h2&gt;

&lt;p&gt;We have a full, feature-rich ecosystem of application packaging and distribution mechanisms. Most of these are not specific to the Python language but easily integrate with it.&lt;/p&gt;

&lt;p&gt;While there are many options, hopefully the categorization applied here helps you pick what best suits your needs, given the pieces you can control.&lt;/p&gt;




&lt;h2&gt;
  
  
  Learn More!
&lt;/h2&gt;

&lt;p&gt;Subscribe to the &lt;a href="https://tinyurl.com/tryexceptpass-signup"&gt;tryexceptpass.org mailing list&lt;/a&gt; for more content about Python, Docker, open source, and our experiences with enterprise software engineering.&lt;/p&gt;

</description>
      <category>python</category>
      <category>continuousdelivery</category>
    </item>
    <item>
      <title>Unconventional Secure and Asynchronous RESTful APIs using SSH</title>
      <dc:creator>Cristian Medina</dc:creator>
      <pubDate>Fri, 15 Nov 2019 04:00:00 +0000</pubDate>
      <link>https://dev.to/tryexceptpass/unconventional-secure-and-asynchronous-restful-apis-using-ssh-1i11</link>
      <guid>https://dev.to/tryexceptpass/unconventional-secure-and-asynchronous-restful-apis-using-ssh-1i11</guid>
      <description>&lt;p&gt;Some time ago, in a desperate search for asynchronicity, I came across a Python package that changed the way I look at remote interfaces: &lt;a href="http://asyncssh.readthedocs.io/en/latest/"&gt;AsyncSSH&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Reading through their documentation and example code, you’ll find an interesting assortment of use cases. All of which take advantage of the authentication and encryption capabilities of SSH, while using Python’s &lt;code&gt;asyncio&lt;/code&gt; to handle asynchronous communications.&lt;/p&gt;

&lt;p&gt;Thinking about various applications I’ve developed over the years, many included functions that could benefit from decoupling into separate services. But at times, I would avoid it due to security implications.&lt;/p&gt;

&lt;p&gt;I wanted to build informative dashboards that optimize maintenance tasks. But they bypassed business logic, so I wouldn’t dare expose them over the same interfaces. I even looked at using HTTPS client certs, but support from REST frameworks seemed limited.&lt;/p&gt;

&lt;p&gt;I realized that &lt;code&gt;asyncssh&lt;/code&gt; could provide the extra security I was looking for over a well known key-based system. And in my never-ending quest to find what makes things tick, I decided to take a stab at writing a REST-ish service over SSH.&lt;/p&gt;

&lt;p&gt;It was a great way to familiarize myself with the library and the protocol, and it helped me learn more about building asynchronous apps while creating a small framework called &lt;a href="https://github.com/tryexceptpass/korv"&gt;korv&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://tryexceptpass.org/article/secure-asynchronous-apis-using-ssh/"&gt;Read On...&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Painless Status Reporting in GitHub Pull Requests - Designing CI/CD Systems</title>
      <dc:creator>Cristian Medina</dc:creator>
      <pubDate>Mon, 14 Oct 2019 04:00:00 +0000</pubDate>
      <link>https://dev.to/tryexceptpass/painless-status-reporting-in-github-pull-requests-designing-ci-cd-systems-42e2</link>
      <guid>https://dev.to/tryexceptpass/painless-status-reporting-in-github-pull-requests-designing-ci-cd-systems-42e2</guid>
      <description>&lt;p&gt;Continuing the build service discussion from the &lt;a href="https://dev.to/tryexceptpass/designing-continuous-build-systems-6d9"&gt;Designing CI/CD Systems&lt;/a&gt; series, we’re now at a good point to look at reporting status as code passes through the system.&lt;/p&gt;

&lt;p&gt;At the very minimum, you want to communicate build results to your users, but it's worth examining other steps in the process that also provide useful information.&lt;/p&gt;

&lt;p&gt;The code for reporting status isn’t a major feat. However, using it to enforce build workflows can get complicated when implemented from scratch.&lt;/p&gt;

&lt;p&gt;Since the pipeline covered so far in earlier articles already integrates with GitHub, it’s much easier to simplify things by taking advantage of GitHub’s features. Specifically, we can use the Status API to convey information directly into pull requests, and use repository settings to gate merges based on those statuses.&lt;/p&gt;

&lt;h2&gt;
  
  
  Reporting status to GitHub
&lt;/h2&gt;

&lt;p&gt;We had a brief discussion of this API in a previous article about &lt;a href="https://tryexceptpass.org/article/pytest-github-integration/"&gt;integrating pytest results with GitHub&lt;/a&gt;. It also covered GitHub Apps and how to authenticate them into the REST API. Today we’ll discuss more details about the Status API itself, keeping in mind a pre-existing App.&lt;/p&gt;

&lt;p&gt;Reporting pull request status is a simple HTTP POST request to the status endpoint of the relevant PR. You can find that URL as part of the webhook event details related to the PR.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://tryexceptpass.org/article/continuous-builds-reporting-status/"&gt;Read On...&lt;/a&gt;&lt;/p&gt;

</description>
      <category>python</category>
      <category>ci</category>
      <category>continuousdelivery</category>
      <category>github</category>
    </item>
    <item>
      <title>Command Execution Tricks with Subprocess - Designing CI/CD Systems</title>
      <dc:creator>Cristian Medina</dc:creator>
      <pubDate>Mon, 23 Sep 2019 04:00:00 +0000</pubDate>
      <link>https://dev.to/tryexceptpass/command-execution-tricks-with-subprocess-designing-ci-cd-systems-5fg3</link>
      <guid>https://dev.to/tryexceptpass/command-execution-tricks-with-subprocess-designing-ci-cd-systems-5fg3</guid>
      <description>&lt;p&gt;The most crucial step in any continuous integration process is the one that executes build instructions and tests their output. There’s an infinite number of ways to implement this step ranging from a simple shell script to a complex task system.&lt;/p&gt;

&lt;p&gt;Keeping with the principles of simplicity and practicality, today we’ll look at continuing the series on &lt;a href="https://tryexceptpass.org/designing-continuous-build-systems"&gt;Designing CI/CD Systems&lt;/a&gt; with our implementation of the execution script.&lt;/p&gt;

&lt;p&gt;Previous chapters in the series already established the &lt;a href="https://tryexceptpass.org/article/continuous-builds-parsing-specs/"&gt;build directives&lt;/a&gt; to implement. They covered the format and location of the build specification file. As well as the &lt;a href="https://tryexceptpass.org/article/continuous-builds-docker-swarm"&gt;docker environment&lt;/a&gt; in which it runs and its limitations.&lt;/p&gt;

&lt;h2&gt;
  
  
  Execution using subprocess
&lt;/h2&gt;

&lt;p&gt;Most directives supplied in the YAML spec file are lists of shell commands. So let's look at how Python's &lt;a href="https://docs.python.org/3/library/subprocess.html"&gt;subprocess&lt;/a&gt; module helps us in this situation.&lt;/p&gt;

&lt;p&gt;We need to execute a command, wait for it to complete, check the exit code, and print any output that goes to stdout or stderr. We have a choice between &lt;code&gt;call()&lt;/code&gt;, &lt;code&gt;check_call()&lt;/code&gt;, &lt;code&gt;check_output()&lt;/code&gt;, and &lt;code&gt;run()&lt;/code&gt;, all of which are wrappers around the lower-level &lt;code&gt;Popen&lt;/code&gt; class that provides more granular process control.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;run()&lt;/code&gt; function, added in Python 3.5, provides the execute, block, and check behavior we're looking for, raising a &lt;code&gt;CalledProcessError&lt;/code&gt; exception whenever a command fails.&lt;/p&gt;
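
&lt;p&gt;As a rough sketch (the commands below are placeholders, not the real build directives), a step wrapped with &lt;code&gt;run()&lt;/code&gt; might look like this:&lt;/p&gt;

```python
import subprocess
import sys

# Execute a command, block until it completes, and collect its output.
# check=True raises CalledProcessError on any non-zero exit code.
result = subprocess.run(
    [sys.executable, "-c", "print('build ok')"],
    stdout=subprocess.PIPE,
    stderr=subprocess.PIPE,
    universal_newlines=True,  # decode bytes to str (works on 3.5+)
    check=True,
)
print(result.stdout.strip())

# A failing command surfaces as an exception we can report on.
try:
    subprocess.run([sys.executable, "-c", "import sys; sys.exit(3)"], check=True)
except subprocess.CalledProcessError as err:
    failed_code = err.returncode
    print("step failed with exit code", failed_code)
```
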

&lt;p&gt;Also of note, the &lt;a href="https://docs.python.org/3/library/shlex.html"&gt;shlex&lt;/a&gt; module is a complementary library with utilities that aid in making subprocess calls. It provides a &lt;code&gt;split()&lt;/code&gt; function that's smart enough to properly turn a command-line string into an argument list, as well as &lt;code&gt;quote()&lt;/code&gt; to help &lt;em&gt;escape&lt;/em&gt; shell commands and avoid shell injection vulnerabilities.&lt;/p&gt;
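
&lt;p&gt;For example, a minimal look at both helpers:&lt;/p&gt;

```python
import shlex

# split() turns a command-line string into the argument list that
# subprocess expects, keeping quoted arguments together.
args = shlex.split("echo 'hello world'")
print(args)

# quote() escapes an untrusted value so it can't inject extra shell commands.
unsafe = "build; rm -rf /"
print(shlex.quote(unsafe))
```
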

&lt;h2&gt;
  
  
  Security considerations
&lt;/h2&gt;

&lt;p&gt;Think about this for a minute: you're writing an execution system that runs command-line instructions written by a third party. This has significant security implications and is the primary reason most online build services don't let you get down to this level of detail.&lt;/p&gt;

&lt;p&gt;So what can we do to mitigate the risks?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://tryexceptpass.org/article/continuous-builds-subprocess-execution/"&gt;Read On ...&lt;/a&gt;&lt;/p&gt;

</description>
      <category>python</category>
      <category>subprocess</category>
      <category>ci</category>
      <category>continuousdelivery</category>
    </item>
    <item>
      <title>Designing Continuous Build Systems: Docker Swarm Orchestration</title>
      <dc:creator>Cristian Medina</dc:creator>
      <pubDate>Fri, 23 Aug 2019 04:00:00 +0000</pubDate>
      <link>https://dev.to/tryexceptpass/designing-continuous-build-systems-docker-swarm-orchestration-5cbg</link>
      <guid>https://dev.to/tryexceptpass/designing-continuous-build-systems-docker-swarm-orchestration-5cbg</guid>
      <description>&lt;p&gt;Building code is sometimes as simple as executing a script. But a full-featured build system requires a lot more supporting infrastructure to handle multiple build requests at the same time, manage compute resources, distribute artifacts, etc.&lt;/p&gt;

&lt;p&gt;After our last chapter discussing build events, this next iteration in the &lt;a href="https://tryexceptpass.org/article/continuous-builds-1/"&gt;continuous builds&lt;/a&gt; series covers how to spin up a container inside Docker Swarm to run a build and test it.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Docker Swarm
&lt;/h2&gt;

&lt;p&gt;When running the Docker engine in &lt;a href="https://docs.docker.com/engine/swarm/"&gt;Swarm mode&lt;/a&gt; you're effectively creating a cluster. Docker manages a number of compute nodes and their resources, scheduling work across them that runs inside containers.&lt;/p&gt;

&lt;p&gt;It handles scaling across nodes while maintaining overall cluster state, such that you can adjust how many worker containers are running in the cluster, automatically failover when nodes go offline, etc.&lt;/p&gt;

&lt;p&gt;It also builds the networking hooks necessary so that containers can communicate with each other across multiple host nodes. It does load balancing, rolling updates, and a number of other functions you would expect from cluster technologies.&lt;/p&gt;

&lt;h3&gt;
  
  
  Cluster setup
&lt;/h3&gt;

&lt;p&gt;When compute hosts join a Docker Swarm, they run either as managers or as regular worker nodes. Worker nodes host containers as directed by the managers.&lt;/p&gt;

&lt;p&gt;One Swarm can have multiple managers, and the managers themselves can also host containers. Their job is to track the state of the cluster and spin up containers across nodes as needed. This allows for redundancy across the cluster, such that you can lose one or more managers or nodes and keep basic operations running. More details on this are available in the official &lt;a href="https://docs.docker.com/engine/swarm/key-concepts/"&gt;Docker Swarm documentation&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;To create a swarm you need to run the following command on your first manager (which also serves as your first node).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;docker swarm init
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The previous command will tell you what to run in each of your nodes in order to join that swarm. It usually looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;docker swarm &lt;span class="nb"&gt;join&lt;/span&gt; &lt;span class="nt"&gt;--token&lt;/span&gt; SOME-TOKEN SOME_IP:SOME_PORT
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The docker daemon listens on a &lt;code&gt;unix&lt;/code&gt; socket located at &lt;code&gt;/var/run/docker.sock&lt;/code&gt; by default. This is great for local access, but if you need remote access, you'll have to enable &lt;code&gt;tcp&lt;/code&gt; sockets explicitly.&lt;/p&gt;

&lt;h3&gt;
  
  
  Enabling remote access
&lt;/h3&gt;

&lt;p&gt;Docker Swarm provides a good API for managing services, but for our particular use case, we need a feature that's not available at the swarm level and requires individual access to the nodes. Part of the reason is that we're using Swarm for a special case it wasn't built for: running a one-off, short-lived container - more on this later.&lt;/p&gt;

&lt;p&gt;You'll have to enable remote access on your node daemons in order to connect to them directly. Doing so varies a bit across Linux distributions and the location of their docker config files. However, the main objective is the same: you must add a &lt;code&gt;-H tcp://IP_ADDRESS:2375&lt;/code&gt; option into the daemon service execution, where &lt;code&gt;IP_ADDRESS&lt;/code&gt; is the interface it listens on.&lt;/p&gt;

&lt;p&gt;You'll find that most examples set it to &lt;code&gt;0.0.0.0&lt;/code&gt; so that anyone can connect to it, but I would recommend limiting it to a specific address for better security - more below.&lt;/p&gt;

&lt;p&gt;I was on an Ubuntu image that used the file in &lt;code&gt;/lib/systemd/system/docker.service&lt;/code&gt; to define the daemon options. Just find the line that starts with &lt;code&gt;ExecStart=...&lt;/code&gt; or has a &lt;code&gt;-H&lt;/code&gt; in it, and add the extra &lt;code&gt;-H&lt;/code&gt; option mentioned previously.&lt;/p&gt;

&lt;p&gt;Don't forget you have to reload daemon configs and restart docker after making the change:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl daemon-reload
&lt;span class="nb"&gt;sudo &lt;/span&gt;service docker restart
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;I've seen other distributions that track these settings under &lt;code&gt;/etc/default/docker&lt;/code&gt;, and yet another that uses a file in &lt;code&gt;/etc/systemd/system/docker.service.d/&lt;/code&gt;. Search for &lt;code&gt;docker daemon enable tcp&lt;/code&gt; or &lt;code&gt;docker daemon enable remote api&lt;/code&gt; paired with your OS flavor to be sure.&lt;/p&gt;

&lt;h3&gt;
  
  
  Security implications
&lt;/h3&gt;

&lt;p&gt;Given the nature of what you can do with Docker, it's important to point out that enabling TCP sockets for remote access is a very serious security risk. It basically opens your system to remote code execution because anyone could connect to that socket and start or stop containers, view logs, modify network resources, etc.&lt;/p&gt;

&lt;p&gt;To mitigate this, you'll want to enable the use of certificate validation along with TCP sockets. This makes the daemon validate that the HTTPS certificate used by potential clients is signed by a pre-defined certificate authority (CA).&lt;/p&gt;

&lt;p&gt;You create the certificate authority and sign any client certs before distributing them to the compute systems that will perform the orchestration - usually your build service.&lt;/p&gt;

&lt;p&gt;Steps on how to generate the certificates, perform the signing and enable the verification options are available in the Docker documentation for &lt;a href="https://docs.docker.com/engine/security/https/"&gt;protecting the daemon&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Services, Tasks and Containers
&lt;/h3&gt;

&lt;p&gt;When running in swarm mode, Docker terminology changes a little. You're no longer concerned only with containers and images, but also with &lt;code&gt;tasks&lt;/code&gt; and &lt;code&gt;services&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;A service defines all the pieces that make up an application running in the cluster. These pieces are tasks, and each task is a definition for a container.&lt;/p&gt;

&lt;p&gt;For example, if you have Application ABC that runs a Flask API that you wish to load balance across two nodes, you define one ABC service with two tasks. The swarm takes care of keeping them running in two nodes (even if there are more nodes in the cluster) and also configures the network so that the service is available over the same port regardless of the node you're connected to.&lt;/p&gt;

&lt;p&gt;The number of tasks to run is part of a replication strategy that the swarm uses to determine how many copies of each task to keep running in the cluster. You can set it to a specific number, or configure a global mode that runs a copy of the task on every node of the swarm.&lt;/p&gt;

&lt;p&gt;These concepts are simple in principle, but can get tricky when you're trying to do more complicated things later. So I recommend you check out the &lt;a href="https://docs.docker.com/engine/swarm/how-swarm-mode-works/services/"&gt;service documentation&lt;/a&gt; for more information on how it all works.&lt;/p&gt;

&lt;p&gt;Using this terminology to describe our use case: for every new build request, you'll run a new service in the swarm that contains one instance of a task and one container that performs the build. The swarm scheduler will take care of provisioning it in whatever node is available. The container should delete itself once the work completes.&lt;/p&gt;

&lt;h3&gt;
  
  
  Alternatives
&lt;/h3&gt;

&lt;p&gt;As mentioned earlier, Docker Swarm and its concepts are meant for maintaining long-running replicated services inside a cluster. But we have the very specific case of single-copy, ephemeral services executed for every build.&lt;/p&gt;

&lt;p&gt;We don't care about high availability or load balancing features; we want it for its container scheduling capabilities.&lt;/p&gt;

&lt;p&gt;While simple to do, Swarm wasn't built for this, so it can feel like we're forcing things together. Another option is to build our own scheduler (or use an existing one) and have it execute work inside containers.&lt;/p&gt;

&lt;p&gt;This isn't hard with existing task systems like Celery or &lt;a href="https://dramatiq.io/"&gt;Dramatiq&lt;/a&gt;, which use work queues like RabbitMQ to distribute container management tasks.&lt;/p&gt;

&lt;p&gt;Along the same lines you can reuse distributed compute systems for the same goal. I've deployed &lt;a href="https://docs.dask.org/en/latest/"&gt;Dask&lt;/a&gt; successfully for this purpose. It even simplified some workflows and enabled others that wouldn't be possible otherwise.&lt;/p&gt;

&lt;p&gt;I know I've said this many times before, but I'll say it again. Like most choices in software engineering (and in life), you're always exchanging one set of problems for another. In this case, you exchange workflow complexity for infrastructure and maintenance complexity, because you now have to keep worker threads running and listening to queues, as well as handle updates to those threads as your software evolves. Docker Swarm abstracts all of this away for you.&lt;/p&gt;

&lt;h2&gt;
  
  
  Orchestration with Python
&lt;/h2&gt;

&lt;p&gt;The official Python library maintained by the Docker folks is the &lt;a href="https://docker-py.readthedocs.io/en/stable/"&gt;docker&lt;/a&gt; module. It wraps all the main constructs and is straightforward to use. I've been leveraging it for a while now.&lt;/p&gt;

&lt;p&gt;The library interacts with the docker daemon REST API. The daemon's command interfaces use HTTP verbs on resource URLs to transfer JSON data. For example: listing containers is a GET to &lt;code&gt;/containers/json&lt;/code&gt;, creating a volume is a POST to &lt;code&gt;/volumes/create&lt;/code&gt;, etc.&lt;/p&gt;

&lt;p&gt;If you're interested, visit the &lt;a href="https://docs.docker.com/engine/api/latest/"&gt;Docker API Reference&lt;/a&gt; for more details.&lt;/p&gt;
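
&lt;p&gt;To make the verb-on-a-URL idea concrete, here's a stdlib-only sketch that points an &lt;code&gt;http.client&lt;/code&gt; connection at the daemon's unix socket (the request itself is commented out since it needs a running daemon):&lt;/p&gt;

```python
import http.client
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    """HTTPConnection that connects over a unix socket instead of TCP."""

    def __init__(self, socket_path):
        super().__init__("localhost")  # host is only used for the Host header
        self.socket_path = socket_path

    def connect(self):
        sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        sock.connect(self.socket_path)
        self.sock = sock

# Listing containers is just a GET against the REST API.
conn = UnixHTTPConnection("/var/run/docker.sock")
# conn.request("GET", "/containers/json")   # requires a live docker daemon
# print(conn.getresponse().read())
```
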

&lt;h3&gt;
  
  
  APIClient vs DockerClient
&lt;/h3&gt;

&lt;p&gt;The &lt;code&gt;docker&lt;/code&gt; module itself exposes two "layers" of communication with the daemon. They manifest as different client classes: &lt;code&gt;APIClient&lt;/code&gt; and &lt;code&gt;DockerClient&lt;/code&gt;. The former is a lower-level wrapper around the interface endpoints directly, while the latter is an object-oriented layer of abstraction on top of that client.&lt;/p&gt;

&lt;p&gt;For our purpose today, we'll be able to stick with instances of &lt;code&gt;DockerClient&lt;/code&gt; to perform all operations. Going to the lower-level is rare, but sometimes required. Both interfaces are well documented in the link shared earlier.&lt;/p&gt;

&lt;p&gt;Creating a client is very simple. After installing the module with &lt;code&gt;pip install docker&lt;/code&gt;, you can import the client class and instantiate it without any parameters. By default it will connect to the unix socket mentioned earlier.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="nn"&gt;docker&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;DockerClient&lt;/span&gt;
&lt;span class="n"&gt;dock&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;DockerClient&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;DockerClient&lt;/code&gt; class follows a general "client.resource.command" architecture that makes it intuitive to use. For example: you can list containers with &lt;code&gt;client.containers.list()&lt;/code&gt;, or view image details with &lt;code&gt;client.images.get('python:3-slim')&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Each resource object has methods for generic actions like &lt;code&gt;list()&lt;/code&gt;, &lt;code&gt;create()&lt;/code&gt; or &lt;code&gt;get()&lt;/code&gt;, as well as those specific to the resource itself like &lt;code&gt;exec_run()&lt;/code&gt; for containers.&lt;/p&gt;

&lt;p&gt;All attributes are available with the &lt;code&gt;.attrs&lt;/code&gt; property in the form of a dictionary, and the &lt;code&gt;reload()&lt;/code&gt; method fetches the latest information on a resource and refreshes the instance.&lt;/p&gt;

&lt;p&gt;To accomplish our goals, you'll need to create a service for each build, find the node where its task executes, make some changes to the container inside that task and start the container. &lt;/p&gt;

&lt;h2&gt;
  
  
  The execution script
&lt;/h2&gt;

&lt;p&gt;Configuring a swarm service that executes a build is only half the battle. The other half is writing the code that follows the directives we defined in an &lt;a href="https://tryexceptpass.org/article/continuous-builds-parsing-specs/"&gt;earlier chapter&lt;/a&gt; to build, test, record results and distribute artifacts. This is what I call the execution script.&lt;/p&gt;

&lt;p&gt;We'll discuss specifics on how the script itself works in future articles. For now it's sufficient to know that it's the command that each build container runs whenever it starts. This is relevant because it brings up another issue we have to handle: the execution script is written in Python, but the build container is not required to have Python installed.&lt;/p&gt;

&lt;p&gt;I've designed and implemented continuous integration systems that require Python to execute, as well as those that don't. One adds complexity and constraints to the developers using the system, the other to the maintainers of the system.&lt;/p&gt;

&lt;p&gt;If it's a known fact that you'll only ever build Python code, then this doesn't really matter because the docker images used in the code repositories being built already have Python installed.&lt;/p&gt;

&lt;p&gt;If this is not the case, then you may see a substantial increase in build times and complexity, and maybe even a limit on supported images, because the developers have to install Python into the container as part of their build.&lt;/p&gt;

&lt;p&gt;My choice these days is to package the execution script such that it's runnable inside any docker image. I documented my attempts in a previous article about &lt;a href="https://tryexceptpass.org/article/package-python-as-executable/"&gt;packaging Python modules as executables&lt;/a&gt;, where I concluded on using PyInstaller to do the job. Details on how to produce a package are included there.&lt;/p&gt;

&lt;h2&gt;
  
  
  Building and Executing Tests
&lt;/h2&gt;

&lt;p&gt;With all the ingredients at hand, it's time to dive into the code that makes a new service and runs the build. As defined in earlier chapters of this series, the following steps assume that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The repository being built has a YAML file in its root directory with the build directives.&lt;/li&gt;
&lt;li&gt;This configuration file contains an &lt;code&gt;image&lt;/code&gt; directive defining the Docker image to use with the build.&lt;/li&gt;
&lt;li&gt;There's an &lt;code&gt;environment&lt;/code&gt; directive as well where the user can set environment variables.&lt;/li&gt;
&lt;li&gt;The webhook functions handling build events have retrieved build configuration info and stored it in a dictionary called &lt;code&gt;config&lt;/code&gt; and provide pull request info in a &lt;code&gt;pr&lt;/code&gt; dict.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Creating the service
&lt;/h3&gt;

&lt;p&gt;When making the service, you need to consider what to name it and how to find it when searching the swarm.&lt;/p&gt;

&lt;p&gt;Service names must be unique, and to simplify infrastructure maintenance, they should be descriptive. I prefer to use a suffix of &lt;code&gt;-{repo_owner}-{repo_name}-{pr_number}-{timestamp}&lt;/code&gt;. There are also size limits on these names, so be careful not to get too creative.&lt;/p&gt;
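
&lt;p&gt;As a small illustration (the owner, repo and PR number below are made-up values), the naming scheme might be assembled like this:&lt;/p&gt;

```python
from datetime import datetime

# Hypothetical repository and pull request details.
owner, repo, pr_number = "someowner", "somerepo", 42

# Unique and descriptive: a prefix, an owner/repo/PR suffix, plus a timestamp.
timestamp = datetime.now().strftime("%Y%m%dT%H%M%S")
service_name = f"forge-{owner}-{repo}-{pr_number}-{timestamp}"
print(service_name)

# Swarm resolves services by name over DNS, so staying within DNS-friendly
# lengths (63 characters is the usual label limit) is a safe bet.
assert len(service_name) <= 63
```
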

&lt;p&gt;Because you don't want duplicate builds of the same pull request, you also need the ability to programmatically search the swarm for a running build. In other words, if I'm executing a build for a given pull request, there's no point in allowing the build to complete if I committed new code in that same PR. Not only are you wasting resources, but even if the build finishes successfully, it's useless because it's already out of date.&lt;/p&gt;

&lt;p&gt;To handle this situation, I use &lt;code&gt;labels&lt;/code&gt;. A service can have one or multiple labels with metadata about what it is and what it's doing. The Docker API also provides methods for filtering based on this metadata.&lt;/p&gt;

&lt;p&gt;The code that follows leverages those functions to determine whether a build is already running and stops it before creating a new service.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="nn"&gt;logging&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="nn"&gt;docker&lt;/span&gt;

&lt;span class="p"&gt;...&lt;/span&gt;

&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;execute&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;pr&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;action&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;None&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;docker_node_port&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;None&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="p"&gt;...&lt;/span&gt;

    &lt;span class="c1"&gt;# Get environment variables defined in the config
&lt;/span&gt;    &lt;span class="n"&gt;environment&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s"&gt;'environment'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="s"&gt;'environment'&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;config&lt;/span&gt; &lt;span class="ow"&gt;and&lt;/span&gt; &lt;span class="nb"&gt;isinstance&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s"&gt;'environment'&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="nb"&gt;dict&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;else&lt;/span&gt; &lt;span class="p"&gt;{}&lt;/span&gt;

    &lt;span class="c1"&gt;# Get network ports definition from the config
&lt;/span&gt;    &lt;span class="n"&gt;ports&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;docker&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;types&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;EndpointSpec&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;ports&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s"&gt;'ports'&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt; &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="s"&gt;'ports'&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;config&lt;/span&gt; &lt;span class="ow"&gt;and&lt;/span&gt; &lt;span class="nb"&gt;isinstance&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s"&gt;'ports'&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="nb"&gt;dict&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;else&lt;/span&gt; &lt;span class="bp"&gt;None&lt;/span&gt;

    &lt;span class="n"&gt;environment&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;update&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
        &lt;span class="s"&gt;'FORGE_INSTALL_ID'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;forge&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;install_id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="s"&gt;'FORGE_ACTION'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s"&gt;'execute'&lt;/span&gt; &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;action&lt;/span&gt; &lt;span class="ow"&gt;is&lt;/span&gt; &lt;span class="bp"&gt;None&lt;/span&gt; &lt;span class="k"&gt;else&lt;/span&gt; &lt;span class="n"&gt;action&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="s"&gt;'FORGE_PULL_REQUEST'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;pr&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s"&gt;'number'&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
        &lt;span class="s"&gt;'FORGE_OWNER'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;owner&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="s"&gt;'FORGE_REPO'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;repo&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="s"&gt;'FORGE_SHA'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;sha&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="s"&gt;'FORGE_STATUS_URL'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;pr&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s"&gt;'statuses_url'&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
        &lt;span class="s"&gt;'FORGE_COMMIT_COUNT'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;pr&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s"&gt;'commits'&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;
    &lt;span class="p"&gt;})&lt;/span&gt;
    &lt;span class="n"&gt;logging&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;debug&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;f'Container environment&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;environment&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="c1"&gt;# Connect to the Docker daemon
&lt;/span&gt;    &lt;span class="n"&gt;dock&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;docker&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;DockerClient&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

    &lt;span class="c1"&gt;# Stop any builds already running on the same pr
&lt;/span&gt;    &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;service&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;dock&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;services&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nb"&gt;list&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;filters&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="s"&gt;'label'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s"&gt;f"forge.repo=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;owner&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;/&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;repo&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;f"forge.pull_request=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;pr&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s"&gt;'number'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]}):&lt;/span&gt;
        &lt;span class="n"&gt;logging&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;info&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;f"Found service &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;service&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt; already running for this PR"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

        &lt;span class="c1"&gt;# Remove the service
&lt;/span&gt;        &lt;span class="n"&gt;service&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;remove&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

    &lt;span class="c1"&gt;# Create the execution service
&lt;/span&gt;    &lt;span class="n"&gt;service_name&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;f"forge-&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;owner&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;-&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;repo&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;-&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;pr&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s"&gt;'number'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;-&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;datetime&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;now&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="n"&gt;strftime&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;'%Y%m%dT%H%M%S'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;"&lt;/span&gt;
    &lt;span class="n"&gt;logging&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;info&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;f"Creating execution service &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;service_name&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;..."&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="n"&gt;service&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;dock&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;services&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;create&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s"&gt;'image'&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
        &lt;span class="n"&gt;command&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s"&gt;f"/forgexec"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;service_name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;env&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s"&gt;f'&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;k&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;v&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;'&lt;/span&gt; &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;k&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;v&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;environment&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;items&lt;/span&gt;&lt;span class="p"&gt;()],&lt;/span&gt;
        &lt;span class="n"&gt;restart_policy&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;docker&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;types&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;RestartPolicy&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;'none'&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
        &lt;span class="n"&gt;labels&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="s"&gt;'forge.repo'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s"&gt;f'&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;owner&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;/&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;repo&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="s"&gt;'forge.pull_request'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;pr&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s"&gt;'number'&lt;/span&gt;&lt;span class="p"&gt;]),&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Note that before creating the service, we're not only grabbing the environment variables defined in the build config, but also adding extras that describe the action we're taking. These pass relevant information about the repository and pull request being built on to the execution script.&lt;/p&gt;

&lt;p&gt;Before creating a new service, we use &lt;code&gt;.services.list()&lt;/code&gt; to find any services currently running in the swarm, filtered with the labels we described earlier. If a service already exists, calling &lt;code&gt;service.remove()&lt;/code&gt; will also get rid of its containers.&lt;/p&gt;
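&lt;p&gt;The label filters are just a dictionary mapping &lt;code&gt;label&lt;/code&gt; to a list of &lt;code&gt;key=value&lt;/code&gt; strings. As a minimal sketch, the construction could be pulled into a small helper (the &lt;code&gt;build_filters&lt;/code&gt; name is hypothetical, not part of the project's code):&lt;/p&gt;

```python
def build_filters(owner, repo, pr_number):
    """Label filters that match the build services for one pull request."""
    return {
        'label': [
            f"forge.repo={owner}/{repo}",
            f"forge.pull_request={pr_number}",
        ]
    }

# The result plugs straight into dock.services.list(filters=...)
filters = build_filters('octocat', 'hello-world', 42)
```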

&lt;p&gt;Creating the service is a call to &lt;code&gt;.services.create()&lt;/code&gt; where we pass:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The container image that the service is based on - defined in our build config.&lt;/li&gt;
&lt;li&gt;A command to execute when it starts, which is the name of our execution script - &lt;code&gt;forgexec&lt;/code&gt; in this case.&lt;/li&gt;
&lt;li&gt;The service name.&lt;/li&gt;
&lt;li&gt;The environment variables, defined as a list of strings formatted as &lt;code&gt;NAME=VALUE&lt;/code&gt; - we convert them from our environment dict using a list comprehension.&lt;/li&gt;
&lt;li&gt;The restart policy - Docker's term for what to do with containers in the event of a host restart. We don't want them to come back online automatically, so we set it to &lt;code&gt;none&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;The labels with the metadata we described earlier.&lt;/li&gt;
&lt;/ul&gt;
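&lt;p&gt;The environment conversion mentioned in the list above is a one-line list comprehension. A quick standalone sketch, with made-up values for illustration:&lt;/p&gt;

```python
def to_env_list(environment):
    """Convert an environment dict into the NAME=VALUE strings docker expects."""
    return [f'{k}={v}' for k, v in environment.items()]

env = to_env_list({'FORGE_OWNER': 'octocat', 'FORGE_REPO': 'hello-world'})
# env is ['FORGE_OWNER=octocat', 'FORGE_REPO=hello-world']
```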

&lt;h3&gt;
  
  
  Getting task information
&lt;/h3&gt;

&lt;p&gt;Once the swarm creates the service, it takes a few seconds before it initializes its task and provisions the node and container that runs it. So we wait until it's available by checking the total number of tasks assigned to the service:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight python"&gt;&lt;code&gt;    &lt;span class="c1"&gt;# Wait for service, task and container to initialize
&lt;/span&gt;    &lt;span class="k"&gt;while&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="nb"&gt;len&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;service&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;tasks&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="n"&gt;task&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;service&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;tasks&lt;/span&gt;&lt;span class="p"&gt;()[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
            &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="s"&gt;'ContainerStatus'&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;task&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s"&gt;'Status'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="ow"&gt;and&lt;/span&gt; &lt;span class="s"&gt;'ContainerID'&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;task&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s"&gt;'Status'&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="s"&gt;'ContainerStatus'&lt;/span&gt;&lt;span class="p"&gt;]:&lt;/span&gt;
                &lt;span class="k"&gt;break&lt;/span&gt;
        &lt;span class="n"&gt;time&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;sleep&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;There's only one task in a build service, so we can always assume it's the first one in the list. Each &lt;code&gt;task&lt;/code&gt; is a dictionary with attributes describing the container that runs it.&lt;/p&gt;

&lt;p&gt;There are two delays here: one before the task is assigned and one before a container spins up for the task. So it's necessary to wait until container information is available within the task details before continuing.&lt;/p&gt;
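&lt;p&gt;One caveat: the loop above polls forever if the container never materializes. A more defensive variant - sketched here with a hypothetical &lt;code&gt;wait_for&lt;/code&gt; helper, not part of the project's code - adds a deadline so a stuck service can't hang the event server:&lt;/p&gt;

```python
import time

def wait_for(condition, timeout=60, interval=1):
    """Poll condition() until it returns a truthy value, or raise TimeoutError."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = condition()
        if result:
            return result
        time.sleep(interval)
    raise TimeoutError('condition was not met within the timeout')

def container_ready(task):
    """True once the swarm has assigned a container to the service's task."""
    status = task.get('Status', {})
    return 'ContainerID' in status.get('ContainerStatus', {})

# In the build flow this would look something like:
# def ready_task():
#     tasks = service.tasks()
#     return tasks[0] if tasks and container_ready(tasks[0]) else None
# task = wait_for(ready_task, timeout=120)
```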

&lt;h3&gt;
  
  
  Copying the execution script into the container
&lt;/h3&gt;

&lt;p&gt;As discussed earlier, we need to copy our packaged execution script into each build container before starting it - the equivalent of a &lt;code&gt;docker cp&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Copying data into a container requires the files to be tar'd and compressed, so at the very beginning of our event server script we create a &lt;code&gt;tar.gz&lt;/code&gt; file using the &lt;code&gt;tarfile&lt;/code&gt; Python module:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;__name__&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="s"&gt;'__main__'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="c1"&gt;# Setup the execution script tarfile that copies into containers
&lt;/span&gt;    &lt;span class="k"&gt;with&lt;/span&gt; &lt;span class="n"&gt;tarfile&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nb"&gt;open&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;'forgexec.tar.gz'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;'w:gz'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;tar&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="n"&gt;tar&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;add&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;'forgexec.dist/forgexec'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;'forgexec'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;This means we have a &lt;code&gt;forgexec.tar.gz&lt;/code&gt; file available to transfer with the &lt;code&gt;container.put_archive()&lt;/code&gt; function that the docker module provides. Do this every time the webhook event server starts, overwriting any existing file to make sure that you're not using stale code.&lt;/p&gt;
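&lt;p&gt;If you'd rather not leave a &lt;code&gt;tar.gz&lt;/code&gt; on disk, the same archive can be built in memory - &lt;code&gt;put_archive()&lt;/code&gt; just wants the raw bytes of a tar stream. A sketch with hypothetical names:&lt;/p&gt;

```python
import io
import tarfile

def make_archive_bytes(arcname, payload):
    """Build an in-memory tar.gz holding a single file, suitable for
    passing to container.put_archive(path='/', data=...)."""
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode='w:gz') as tar:
        info = tarfile.TarInfo(name=arcname)
        info.size = len(payload)
        info.mode = 0o755  # keep the script executable
        tar.addfile(info, io.BytesIO(payload))
    return buf.getvalue()
```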

&lt;p&gt;Transferring files into a container requires us to connect to the swarm node directly. There's no interface at the docker service level to help us do that. This is why we had to enable remote access earlier.&lt;/p&gt;

&lt;p&gt;First we get information about the docker node from the task and then we make a new &lt;code&gt;DockerClient&lt;/code&gt; instance that connects to the node:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight python"&gt;&lt;code&gt;    &lt;span class="n"&gt;node&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;dock&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;nodes&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;task&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s"&gt;'NodeID'&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;
    &lt;span class="n"&gt;nodeclient&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;docker&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;DockerClient&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;f"&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;node&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;attrs&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s"&gt;'Description'&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="s"&gt;'Hostname'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;:&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;docker_node_port&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;This time, the instantiation uses the TCP port on which the nodes are listening (configured during cluster setup) and the hostname of the node. Depending on your network and DNS setup, you may want to use the &lt;code&gt;socket&lt;/code&gt; module to help with domain name resolution. Something like &lt;code&gt;socket.gethostbyname(node.attrs['Description']['Hostname'])&lt;/code&gt; might be good enough.&lt;/p&gt;
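&lt;p&gt;As a small sketch of that idea - the &lt;code&gt;node_address&lt;/code&gt; helper is hypothetical, and whether you need the &lt;code&gt;tcp://&lt;/code&gt; scheme depends on how your docker clients are configured:&lt;/p&gt;

```python
import socket

def node_address(hostname, port):
    """Resolve a swarm node's hostname to an IP and build the address
    string used to instantiate a DockerClient against that node."""
    ip = socket.gethostbyname(hostname)
    return f'tcp://{ip}:{port}'
```

This could stand in for the hostname-based f-string when DNS resolution inside the swarm is unreliable.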

&lt;p&gt;At this point we can directly access the container, copy the file into it and start it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight python"&gt;&lt;code&gt;    &lt;span class="c1"&gt;# Get container object
&lt;/span&gt;    &lt;span class="n"&gt;container&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;nodeclient&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;containers&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;task&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s"&gt;'Status'&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="s"&gt;'ContainerStatus'&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="s"&gt;'ContainerID'&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;

    &lt;span class="c1"&gt;# Copy the file
&lt;/span&gt;    &lt;span class="k"&gt;with&lt;/span&gt; &lt;span class="nb"&gt;open&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;'forgexec.tar.gz'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;'rb'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;f&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="n"&gt;container&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;put_archive&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;path&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s"&gt;'/'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;f&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;read&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;

    &lt;span class="c1"&gt;# Start the container
&lt;/span&gt;    &lt;span class="n"&gt;container&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;start&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h2&gt;
  
  
  Putting it together
&lt;/h2&gt;

&lt;p&gt;Here's our new &lt;code&gt;execute()&lt;/code&gt; function merged with code from the &lt;a href="https://dev.to/aricle/continuous-builds-webhooks/"&gt;webhook event handling&lt;/a&gt; chapter:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;execute&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;pr&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;action&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;None&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;docker_node_port&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;None&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="s"&gt;"""Kick off .forge.yml test actions inside a docker container"""&lt;/span&gt;

    &lt;span class="n"&gt;logging&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;info&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;f"Attempting to run &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="s"&gt;''&lt;/span&gt; &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;action&lt;/span&gt; &lt;span class="ow"&gt;is&lt;/span&gt; &lt;span class="bp"&gt;None&lt;/span&gt; &lt;span class="k"&gt;else&lt;/span&gt; &lt;span class="n"&gt;action&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt; tests for PR #&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;pr&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s"&gt;'number'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="n"&gt;owner&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;pr&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s"&gt;'head'&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="s"&gt;'repo'&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="s"&gt;'owner'&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="s"&gt;'login'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
    &lt;span class="n"&gt;repo&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;pr&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s"&gt;'head'&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="s"&gt;'repo'&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="s"&gt;'name'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
    &lt;span class="n"&gt;sha&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;pr&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s"&gt;'head'&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="s"&gt;'sha'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;

    &lt;span class="c1"&gt;# Select the forge for this user
&lt;/span&gt;    &lt;span class="n"&gt;forge&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;forges&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;owner&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;

    &lt;span class="c1"&gt;# Get build info
&lt;/span&gt;    &lt;span class="n"&gt;config&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;get_build_config&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;owner&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;repo&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;sha&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;config&lt;/span&gt; &lt;span class="ow"&gt;is&lt;/span&gt; &lt;span class="bp"&gt;None&lt;/span&gt; &lt;span class="ow"&gt;or&lt;/span&gt; &lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;'image'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="ow"&gt;is&lt;/span&gt; &lt;span class="bp"&gt;None&lt;/span&gt; &lt;span class="ow"&gt;or&lt;/span&gt; &lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;'execute'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="ow"&gt;is&lt;/span&gt; &lt;span class="bp"&gt;None&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="n"&gt;logging&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;info&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;'Unable to find or parse the .forge.yml configuration'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt;

    &lt;span class="c1"&gt;# Get environment variables defined in the config
&lt;/span&gt;    &lt;span class="n"&gt;environment&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s"&gt;'environment'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="s"&gt;'environment'&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;config&lt;/span&gt; &lt;span class="ow"&gt;and&lt;/span&gt; &lt;span class="nb"&gt;isinstance&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s"&gt;'environment'&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="nb"&gt;dict&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;else&lt;/span&gt; &lt;span class="p"&gt;{}&lt;/span&gt;

    &lt;span class="c1"&gt;# Get network ports definition from the config
&lt;/span&gt;    &lt;span class="n"&gt;ports&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;docker&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;types&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;EndpointSpec&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;ports&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s"&gt;'ports'&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt; &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="s"&gt;'ports'&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;config&lt;/span&gt; &lt;span class="ow"&gt;and&lt;/span&gt; &lt;span class="nb"&gt;isinstance&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s"&gt;'ports'&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="nb"&gt;dict&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;else&lt;/span&gt; &lt;span class="bp"&gt;None&lt;/span&gt;

    &lt;span class="n"&gt;environment&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;update&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
        &lt;span class="s"&gt;'FORGE_INSTALL_ID'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;forge&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;install_id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="s"&gt;'FORGE_ACTION'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s"&gt;'execute'&lt;/span&gt; &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;action&lt;/span&gt; &lt;span class="ow"&gt;is&lt;/span&gt; &lt;span class="bp"&gt;None&lt;/span&gt; &lt;span class="k"&gt;else&lt;/span&gt; &lt;span class="n"&gt;action&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="s"&gt;'FORGE_PULL_REQUEST'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;pr&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s"&gt;'number'&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
        &lt;span class="s"&gt;'FORGE_OWNER'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;owner&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="s"&gt;'FORGE_REPO'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;repo&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="s"&gt;'FORGE_SHA'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;sha&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="s"&gt;'FORGE_STATUS_URL'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;pr&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s"&gt;'statuses_url'&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
        &lt;span class="s"&gt;'FORGE_COMMIT_COUNT'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;pr&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s"&gt;'commits'&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;
    &lt;span class="p"&gt;})&lt;/span&gt;
    &lt;span class="n"&gt;logging&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;debug&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;f'Container environment&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;environment&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="c1"&gt;# Connect to the Docker daemon
&lt;/span&gt;    &lt;span class="n"&gt;dock&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;docker&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;DockerClient&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;docker_host&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="c1"&gt;# Stop any builds already running on the same pr
&lt;/span&gt;    &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;service&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;dock&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;services&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nb"&gt;list&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;filters&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="s"&gt;'label'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s"&gt;f"forge.repo=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;owner&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;/&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;repo&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;f"forge.pull_request=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;pr&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s"&gt;'number'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]}):&lt;/span&gt;
        &lt;span class="n"&gt;logging&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;info&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;f"Found service &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;service&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt; already running for this PR"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

        &lt;span class="c1"&gt;# Remove the service
&lt;/span&gt;        &lt;span class="n"&gt;service&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;remove&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

    &lt;span class="c1"&gt;# Create the execution service
&lt;/span&gt;    &lt;span class="n"&gt;service_name&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;f"forge-&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;owner&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;-&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;repo&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;-&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;pr&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s"&gt;'number'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;-&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;datetime&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;now&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="n"&gt;strftime&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;'%Y%m%dT%H%M%S'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;"&lt;/span&gt;
    &lt;span class="n"&gt;logging&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;info&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;f"Creating execution service &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;service_name&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;..."&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="n"&gt;service&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;dock&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;services&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;create&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s"&gt;'image'&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
        &lt;span class="n"&gt;command&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s"&gt;f"/forgexec"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;service_name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;env&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s"&gt;f'&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;k&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;v&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;'&lt;/span&gt; &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;k&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;v&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;environment&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;items&lt;/span&gt;&lt;span class="p"&gt;()],&lt;/span&gt;
        &lt;span class="n"&gt;restart_policy&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;docker&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;types&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;RestartPolicy&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;'none'&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
        &lt;span class="c1"&gt;# mounts=[],
&lt;/span&gt;        &lt;span class="n"&gt;labels&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="s"&gt;'forge.repo'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s"&gt;f'&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;owner&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;/&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;repo&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="s"&gt;'forge.pull_request'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;pr&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s"&gt;'number'&lt;/span&gt;&lt;span class="p"&gt;]),&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="c1"&gt;# Wait for service, task and container to initialize
&lt;/span&gt;    &lt;span class="k"&gt;while&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="nb"&gt;len&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;service&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;tasks&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="n"&gt;task&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;service&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;tasks&lt;/span&gt;&lt;span class="p"&gt;()[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
            &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="s"&gt;'ContainerStatus'&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;task&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s"&gt;'Status'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="ow"&gt;and&lt;/span&gt; &lt;span class="s"&gt;'ContainerID'&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;task&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s"&gt;'Status'&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="s"&gt;'ContainerStatus'&lt;/span&gt;&lt;span class="p"&gt;]:&lt;/span&gt;
                &lt;span class="k"&gt;break&lt;/span&gt;
        &lt;span class="n"&gt;time&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;sleep&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="n"&gt;node&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;dock&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;nodes&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;task&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s"&gt;'NodeID'&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;
    &lt;span class="n"&gt;nodeclient&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;docker&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;DockerClient&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;f"&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;node&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;attrs&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s"&gt;'Description'&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="s"&gt;'Hostname'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;:&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;docker_node_port&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;container&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;nodeclient&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;containers&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;task&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s"&gt;'Status'&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="s"&gt;'ContainerStatus'&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="s"&gt;'ContainerID'&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;

    &lt;span class="k"&gt;with&lt;/span&gt; &lt;span class="nb"&gt;open&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;'forgexec.tar.gz'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;'rb'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;f&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="n"&gt;container&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;put_archive&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;path&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s"&gt;'/'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;f&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;read&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;

    &lt;span class="n"&gt;container&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;start&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
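
&lt;p&gt;The &lt;code&gt;put_archive()&lt;/code&gt; call above expects tar data, so you need a way to produce the &lt;code&gt;forgexec.tar.gz&lt;/code&gt; bundle it copies into the container. Here's a minimal stdlib-only sketch of building such an archive in memory with the &lt;code&gt;tarfile&lt;/code&gt; module; the file names and payload are illustrative, not the ones from this series:&lt;/p&gt;

```python
import io
import tarfile

def make_archive(files):
    """Build an in-memory tar.gz suitable for docker-py's Container.put_archive().

    `files` maps archive member names to bytes payloads.
    """
    buffer = io.BytesIO()
    with tarfile.open(fileobj=buffer, mode='w:gz') as tar:
        for name, payload in files.items():
            info = tarfile.TarInfo(name=name)
            info.size = len(payload)
            tar.addfile(info, io.BytesIO(payload))
    return buffer.getvalue()

# Illustrative payload: a tiny shell script packaged as the executable to run
archive = make_archive({'forgexec': b'#!/bin/sh\necho build\n'})
```

&lt;p&gt;Building the archive in memory avoids writing a temporary file to disk; you can pass the returned bytes straight to &lt;code&gt;put_archive(path='/', data=archive)&lt;/code&gt;.&lt;/p&gt;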



&lt;h2&gt;
  
  
  What's next?
&lt;/h2&gt;

&lt;p&gt;This article gave you the details needed to use Docker Swarm to provision compute that builds code inside a cluster. Along with the previous chapters on handling repository events and defining build directives, you're ready to move on to the next piece, which covers the execution script itself. There you'll configure the build environment, run commands, and execute tests for the different stages of the pipeline.&lt;/p&gt;




&lt;h2&gt;
  
  
  Learn More!
&lt;/h2&gt;

&lt;p&gt;Subscribe to the &lt;a href="https://tinyurl.com/tryexceptpass-signup"&gt;tryexceptpass.org mailing list&lt;/a&gt; for more content about Python, Docker, open source, and our experiences with enterprise software engineering.&lt;/p&gt;

</description>
      <category>python</category>
      <category>docker</category>
      <category>ci</category>
      <category>continuousdelivery</category>
    </item>
    <item>
      <title>Designing Continuous Build Systems: Handling Webhooks with Sanic</title>
      <dc:creator>Cristian Medina</dc:creator>
      <pubDate>Sat, 10 Aug 2019 04:00:00 +0000</pubDate>
      <link>https://dev.to/tryexceptpass/designing-continuous-build-systems-handling-webhooks-with-sanic-4120</link>
      <guid>https://dev.to/tryexceptpass/designing-continuous-build-systems-handling-webhooks-with-sanic-4120</guid>
      <description>&lt;p&gt;After covering how to &lt;a href="https://dev.to/tryexceptpass/designing-continuous-build-systems-6d9"&gt;design a build pipeline&lt;/a&gt; and &lt;a href="https://dev.to/tryexceptpass/designing-continuous-build-systems-parsing-the-specification-2b83"&gt;define build directives&lt;/a&gt; in the continuous builds series, it’s time to look at handling events from a code repository.&lt;/p&gt;

&lt;p&gt;As internet standards have evolved over the years, the HTTP protocol has become ever more prevalent: it’s easy to route, simple to implement, and reliable. This ubiquity makes it easier for applications that traverse or live on the public internet to communicate with each other, and out of it the idea of webhooks emerged as an “event-over-HTTP” mechanism.&lt;/p&gt;

&lt;p&gt;With GitHub as the repository management platform, we have the advantage of using their webhook system to communicate user actions over the internet and into our build pipeline.&lt;/p&gt;
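
&lt;p&gt;Whichever framework receives the webhook, a core part of handling GitHub deliveries is validating the &lt;code&gt;X-Hub-Signature&lt;/code&gt; header, an HMAC-SHA1 of the raw request body keyed with your webhook secret. A stdlib-only sketch of that check (the secret and payload here are illustrative):&lt;/p&gt;

```python
import hashlib
import hmac

def verify_signature(payload, secret, signature_header):
    """Check a GitHub X-Hub-Signature header ('sha1=hexdigest') against the raw body.

    `payload` and `secret` are bytes; `signature_header` is the header string.
    """
    digest = hmac.new(secret, payload, hashlib.sha1).hexdigest()
    expected = 'sha1=' + digest
    # compare_digest avoids leaking timing information during the comparison
    return hmac.compare_digest(expected, signature_header)
```

&lt;p&gt;Rejecting requests that fail this check keeps arbitrary internet traffic from triggering builds in your pipeline.&lt;/p&gt;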

&lt;p&gt;&lt;a href="https://tryexceptpass.org/article/continuous-builds-webhooks/"&gt;Read On ...&lt;/a&gt;&lt;/p&gt;

</description>
      <category>python</category>
      <category>github</category>
      <category>ci</category>
      <category>continuousdelivery</category>
    </item>
    <item>
      <title>4 Attempts at Packaging Python as an Executable</title>
      <dc:creator>Cristian Medina</dc:creator>
      <pubDate>Sun, 28 Jul 2019 04:00:00 +0000</pubDate>
      <link>https://dev.to/tryexceptpass/4-attempts-at-packaging-python-as-an-executable-45fc</link>
      <guid>https://dev.to/tryexceptpass/4-attempts-at-packaging-python-as-an-executable-45fc</guid>
      <description>&lt;p&gt;A few years back I researched how to create a single-file executable of a Python application. Back then, the goal was to make a desktop interface that included other files and binaries in one bundle. Using PyInstaller I built a single binary file that could execute across platforms and looked just like any other application.&lt;/p&gt;

&lt;p&gt;Fast forward until today and I have a similar need, but a different use case. I want to run Python code inside a Docker container, but the container image cannot require a Python installation.&lt;/p&gt;

&lt;p&gt;Instead of blindly repeating what I tried last time, I decided to investigate more alternatives and discuss them here.&lt;/p&gt;
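
&lt;p&gt;For reference, the PyInstaller approach from last time boils down to a single command; the entry-point script and application name below are illustrative:&lt;/p&gt;

```shell
# install the bundler, then produce a single-file executable
pip install pyinstaller
pyinstaller --onefile --name myapp main.py
# the resulting binary lands under dist/myapp
```

&lt;p&gt;Note that the binary only runs on the platform it was built on, which is part of why the Docker use case needed a fresh look.&lt;/p&gt;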

&lt;p&gt;&lt;a href="https://tryexceptpass.org/article/package-python-as-executable/"&gt;Read On ...&lt;/a&gt;&lt;/p&gt;

</description>
      <category>python</category>
      <category>packaging</category>
      <category>docker</category>
      <category>build</category>
    </item>
    <item>
      <title>Designing Continuous Build Systems - Parsing the Specification</title>
      <dc:creator>Cristian Medina</dc:creator>
      <pubDate>Mon, 15 Jul 2019 04:00:00 +0000</pubDate>
      <link>https://dev.to/tryexceptpass/designing-continuous-build-systems-parsing-the-specification-2b83</link>
      <guid>https://dev.to/tryexceptpass/designing-continuous-build-systems-parsing-the-specification-2b83</guid>
      <description>&lt;p&gt;Every code repository is different. The execution environment, the framework, the deliverables, or even the linters, all need some sort of customization. Creating a flexible build system requires a mechanism that specifies the steps to follow at different stages of a pipeline.As the next chapter in the series Designing Continuous Build Systems, this article examines which instructions you’ll want to convey to your custom system and how to parse them.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://tryexceptpass.org/article/continuous-builds-parsing-specs/"&gt;Read On ...&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ci</category>
      <category>continuousdelivery</category>
      <category>python</category>
      <category>docker</category>
    </item>
    <item>
      <title>PyCaribbean 2018 - Sofi Unity3d Lightning Talk</title>
      <dc:creator>Cristian Medina</dc:creator>
      <pubDate>Fri, 05 Jul 2019 04:40:03 +0000</pubDate>
      <link>https://dev.to/tryexceptpass/pycaribbean-2018-sofi-unity3d-lightning-talk-5be3</link>
      <guid>https://dev.to/tryexceptpass/pycaribbean-2018-sofi-unity3d-lightning-talk-5be3</guid>
      <description>&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/RV0CiNY6hDc"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;PyCaribbean 2018 lightning talk where I demo &lt;a href="https://github.com/tryexceptpass/sofi-unity3d"&gt;sofi-unity3d&lt;/a&gt; from a live Python interpreter. The talk starts at &lt;a href="https://www.youtube.com/watch?v=RV0CiNY6hDc&amp;amp;feature=youtu.be&amp;amp;t=1123"&gt;18:43&lt;/a&gt; and shows how to spawn, manipulate and animate objects in 3d space using Unity3D.&lt;/p&gt;

</description>
      <category>python</category>
      <category>gui</category>
      <category>unity3d</category>
      <category>pycaribbean</category>
    </item>
    <item>
      <title>PyCaribbean 2018 - Practicality Beats Purity</title>
      <dc:creator>Cristian Medina</dc:creator>
      <pubDate>Fri, 05 Jul 2019 04:28:32 +0000</pubDate>
      <link>https://dev.to/tryexceptpass/pycaribbean-2018-practicality-beats-purity-3l93</link>
      <guid>https://dev.to/tryexceptpass/pycaribbean-2018-practicality-beats-purity-3l93</guid>
      <description>&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/Ba2y1IOLiPw"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

</description>
      <category>python</category>
      <category>gui</category>
      <category>unity3d</category>
      <category>pycaribbean</category>
    </item>
    <item>
      <title>Running Enterprise Builds in the Cloud</title>
      <dc:creator>Cristian Medina</dc:creator>
      <pubDate>Fri, 28 Jun 2019 04:00:00 +0000</pubDate>
      <link>https://dev.to/tryexceptpass/running-enterprise-builds-in-the-cloud-4fjp</link>
      <guid>https://dev.to/tryexceptpass/running-enterprise-builds-in-the-cloud-4fjp</guid>
      <description>&lt;p&gt;There are many solutions to building code, some of them are available as cloud services, others run on your own infrastructure, on private clouds or all of the above. They make it easy to create custom pipelines, as well as simple testing and packaging solutions. Some even offer open source feature-limited “community editions” to download and run on-premises for free.&lt;/p&gt;

&lt;p&gt;What follows is my experience with the important aspects to consider when choosing a build system for your organization that depends on cloud services. The original intent was to include a detailed review of various online solutions, but I decided to leave that for another time. Instead, I’m speaking from an enterprise viewpoint, which is closer to the reality of a large organization than the usual how-tos: the implications and the practicality of making that choice.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://tryexceptpass.org/article/continuous-builds-2/"&gt;Read on ...&lt;/a&gt;&lt;/p&gt;

</description>
      <category>cloud</category>
      <category>ci</category>
      <category>continuousdelivery</category>
    </item>
    <item>
      <title>Designing Continuous Build Systems</title>
      <dc:creator>Cristian Medina</dc:creator>
      <pubDate>Wed, 24 Apr 2019 04:00:00 +0000</pubDate>
      <link>https://dev.to/tryexceptpass/designing-continuous-build-systems-6d9</link>
      <guid>https://dev.to/tryexceptpass/designing-continuous-build-systems-6d9</guid>
      <description>&lt;p&gt;Continuous integration and delivery is finally becoming a common goal for teams of all sizes. After building a couple of these systems at small and medium scales, I wanted to write down ideas, design choices and lessons learned. This post is the first in a series that explores the design of a custom build system created around common development workflows, using off-the-shelf components where possible. You’ll get an understanding of the basic components, how they interact, and maybe an open source project with example code from which to start your own.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://tryexceptpass.org/article/continuous-builds-1/"&gt;Read on ...&lt;/a&gt;&lt;/p&gt;

</description>
      <category>python</category>
      <category>ci</category>
      <category>engineering</category>
      <category>continuousdelivery</category>
    </item>
  </channel>
</rss>
