<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Devin H</title>
    <description>The latest articles on DEV Community by Devin H (@dhandspikerwade).</description>
    <link>https://dev.to/dhandspikerwade</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F5087%2F296abf91-3743-499f-b3fb-abd7a2c5be52.png</url>
      <title>DEV Community: Devin H</title>
      <link>https://dev.to/dhandspikerwade</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/dhandspikerwade"/>
    <language>en</language>
    <item>
      <title>Duct tape enough services together and you can cache APT packages</title>
      <dc:creator>Devin H</dc:creator>
      <pubDate>Mon, 23 Mar 2026 03:03:36 +0000</pubDate>
      <link>https://dev.to/dhandspikerwade/duct-tape-enough-services-together-and-you-can-cache-apt-packages-2iml</link>
      <guid>https://dev.to/dhandspikerwade/duct-tape-enough-services-together-and-you-can-cache-apt-packages-2iml</guid>
      <description>&lt;p&gt;Due to the current job market, I’ve had a bit more free time on my hands than usual so I have been spending more time on personal projects. Putting more effort into those projects sitting forever on the "I'll fix that eventually" list. The small kind of things that come up enough to notice but not important enough to focus on. &lt;/p&gt;

&lt;p&gt;One of those things was the APT cache I’ve been running. It generally works, but every so often it just randomly starts returning HTTP 500 errors without any useful logs, and it's usually fixed by restarting it.&lt;/p&gt;

&lt;h2&gt;How hard could it be?&lt;/h2&gt;

&lt;p&gt;After spending a couple of hours looking into it without making much progress, I saw that I am not the only person having reliability issues with &lt;code&gt;apt-cacher-ng&lt;/code&gt;. Scrolling through the Debian bug tracker, errors caused by race conditions, incomplete downloads, and corrupted files seem to have become a frequent issue&lt;a href="https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1022043" rel="noopener noreferrer"&gt;¹&lt;/a&gt;&lt;a href="https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1121610#63" rel="noopener noreferrer"&gt;²&lt;/a&gt;&lt;a href="https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1022043" rel="noopener noreferrer"&gt;³&lt;/a&gt;&lt;a href="https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1042807" rel="noopener noreferrer"&gt;⁴&lt;/a&gt;&lt;a href="https://www.reddit.com/r/debian/comments/klebea/psa_aptcacherng_is_a_buggy_pile_of_shit/" rel="noopener noreferrer"&gt;⁵&lt;/a&gt; in the last couple of years, with similar-sounding reports going back &lt;a href="https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=720167" rel="noopener noreferrer"&gt;over a decade&lt;/a&gt;. There are a few alternatives, but most seem to have either become unmaintained or attracted similar complaints.&lt;/p&gt;

&lt;p&gt;I ended up wondering why I was using it in the first place. APT repositories are just HTTP file servers, and APT already supports proxies natively. It didn’t seem like something that should require a custom piece of software. Being a web developer, my first instinct was to reach for Nginx. It already handles caching extremely well, and APT repositories are largely just static unsecured* files. It seemed like a reasonable fit.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://xkcd.com/356" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdr7mftgf27dchy5b3bn8.png" alt="XKCD " width="740" height="371"&gt;&lt;/a&gt;&lt;br&gt;
The goal changed from fixing my issues to finding a simpler solution using off-the-shelf Nginx container. My use case was quite simple, I just needed a pull-through cache so I wasn't downloading the same packages multiple times. I was mostly happy with apt-cacher-ng, didn't need any new features, but wanted something more reliable. I'm not running a business with it, the setup can be a bit jank as long as it stays running smoothly. &lt;/p&gt;

&lt;p&gt;So I set myself some goals before starting:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Can be used as a drop-in replacement for an existing apt-cacher-ng container, because I don't want to change how my machines are already configured.&lt;/li&gt;
&lt;li&gt;No custom software, use existing projects. I want a cache, not a whole new piece of software to maintain. &lt;/li&gt;
&lt;li&gt;Support both Debian and Ubuntu, assuming that if they work, other APT-based distributions are likely to work as well.&lt;/li&gt;
&lt;li&gt;Aggregate the various Debian mirrors into a single cache bucket to avoid duplication.&lt;/li&gt;
&lt;li&gt;Be able to cache other non-standard package repositories such as updates for Proxmox.&lt;/li&gt;
&lt;li&gt;Cannot require mirroring entire repositories. &lt;a href="https://packages.debian.org/stable/debmirror" rel="noopener noreferrer"&gt;debmirror&lt;/a&gt; is technically always a solution, but I don't consider it a &lt;em&gt;reasonable&lt;/em&gt; one.&lt;/li&gt;
&lt;li&gt;Packages should naturally fall out of a full cache or expire over time. &lt;/li&gt;
&lt;li&gt;Behave transparently to the system using it. Machines shouldn't need to know or care if their request was cached or not.
&lt;/li&gt;
&lt;li&gt;Support &lt;code&gt;auto-apt-proxy&lt;/code&gt; so laptops and CI pipelines which can't use a fixed configuration can auto-magically benefit from the cache. &lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;Why would you want or need an APT cache?&lt;/h2&gt;

&lt;p&gt;Okay, let's step back for a second: why am I bothering with this? There are two ways to look at it. There are the feel-good philosophical reasons, and there are the practical, self-serving ones.&lt;/p&gt;

&lt;p&gt;Part of the positive motivation here is just being considerate of the infrastructure these distributions rely on. Debian, like many open source projects, is largely supported by volunteers and donations. A local cache helps reduce the burden on these services when updating multiple systems or repeatedly installing the same packages as part of a CI/CD pipeline. Even though individual packages are small, the traffic can add up quickly. Basically: be kind to services that are provided for free.&lt;/p&gt;

&lt;p&gt;As for the more selfish reasons: the servers behind &lt;code&gt;deb.debian.org&lt;/code&gt; have never been particularly fast for me. Even on a gigabit connection, I tend to get speeds in the sub-megabit range. Since I run Debian across a few machines as well as in virtual environments for testing and development, a local cache means that once a package has been downloaded the first time, it can be served at full speed for every subsequent request. Updates are considerably faster. It also has the side effect of making the net-installer surprisingly fast compared to the full-size DVD installer when it comes time to do major version upgrades.&lt;/p&gt;

&lt;h2&gt;Nginx is great, this should be quick and easy&lt;/h2&gt;

&lt;p&gt;Nginx handled most of what I needed out of the box. There’s an official container image, it can listen on the same port as apt-cacher-ng, and its caching system is well documented and understood. I've used it for much more ridiculous things throughout my career. &lt;/p&gt;

&lt;p&gt;Putting together the initial configuration didn’t take long: a couple of &lt;code&gt;server&lt;/code&gt; blocks for the known mirrors, a catch-all to handle the non-standard repositories, and &lt;code&gt;proxy_pass&lt;/code&gt; sending traffic where it needs to go. Add some caching rules to aggregate the Debian mirrors, and I had something working. I started testing with copies of Debian and Ubuntu, and other than a few easily solvable issues, like some files needing to be excluded from caching and HTTP 206 responses being cached incorrectly and causing checksum failures, it was smooth sailing. I was ready to call it done in an afternoon.&lt;/p&gt;
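
&lt;p&gt;To give a feel for the shape of it, a setup like the one described would look roughly like the sketch below. This is a minimal illustration, not the project's actual configuration; the port, cache sizes, paths, and metadata regex are assumptions:&lt;/p&gt;

```nginx
# Minimal sketch of a pull-through APT cache; values are illustrative.
proxy_cache_path /var/cache/nginx/apt levels=1:2 keys_zone=apt:10m
                 max_size=20g inactive=30d use_temp_path=off;

server {
    # Same port apt-cacher-ng listens on, for drop-in use
    listen 3142;
    server_name deb.debian.org ftp.debian.org;

    location / {
        proxy_pass http://deb.debian.org;
        proxy_cache apt;
        # Shared cache key across mirrors so packages aren't duplicated
        proxy_cache_key "debian$request_uri";
        proxy_cache_valid 200 30d;
    }

    # Repository metadata changes constantly; never cache it
    location ~ (InRelease|Release|Packages|Sources)(\.(gz|xz|bz2))?$ {
        proxy_pass http://deb.debian.org;
        proxy_cache off;
    }
}
```

&lt;p&gt;The HTTP 206 problem mentioned above comes from range requests; disabling range handling toward the upstream (for example with &lt;code&gt;proxy_set_header Range "";&lt;/code&gt;) is one common way to avoid caching partial responses.&lt;/p&gt;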

&lt;p&gt;Feeling confident in what I had done, I switched my desktop over to the new cache for a more realistic test. It immediately failed trying to run &lt;code&gt;apt update&lt;/code&gt;.&lt;/p&gt;

&lt;h2&gt;Hmm...maybe it's not all HTTP&lt;/h2&gt;

&lt;p&gt;You may have noticed that there was an asterisk next to "unsecured files". There's a reason for that. APT repositories are mostly served over plain HTTP; Debian's and Ubuntu's are by default. The lack of HTTPS is not usually an issue: APT, being a creation of the 1990s, makes no assumptions about the transport medium and instead verifies file hashes and checks that the package lists are signed by a trusted GPG key.&lt;/p&gt;
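
&lt;p&gt;For reference, a typical default Debian source looks like this. Note the &lt;code&gt;http://&lt;/code&gt; scheme: integrity comes from the signed metadata, not the transport:&lt;/p&gt;

```
# /etc/apt/sources.list
deb http://deb.debian.org/debian stable main
deb http://security.debian.org/debian-security stable-security main
```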

&lt;p&gt;However, in a world that is increasingly SSL by default, many third-party repositories are HTTPS-only. My previous tests used clean installs, but testing on my desktop introduced the messier nature of the real world: notably the Microsoft, Mozilla, and Docker repositories, which all use HTTPS.&lt;/p&gt;

&lt;p&gt;As a general rule, HTTPS/SSL traffic is a black box that you cannot interfere with. I could add extra configuration to handle HTTP-to-HTTPS conversion specifically for these repositories. However, I wanted the proxy to be transparent, without the client system needing to be aware of the caching. Having to change repository settings on every machine to replace HTTPS URLs with HTTP ones is not transparent. It would also make unknown repositories unusable until the container is updated.&lt;/p&gt;

&lt;p&gt;Luckily, APT handles proxied HTTPS connections by first sending a CONNECT request over plain HTTP, which makes it possible to identify where the traffic needs to go without decrypting it. This should allow me to route the traffic properly. Unfortunately, Nginx doesn’t support handling those requests without additional modules that are only available in the paid version.&lt;/p&gt;
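
&lt;p&gt;Concretely, when a client behind a proxy opens an HTTPS connection, the very first thing the proxy sees is a plain-text request naming the destination, before any encryption begins:&lt;/p&gt;

```http
CONNECT packages.microsoft.com:443 HTTP/1.1
Host: packages.microsoft.com:443
```

&lt;p&gt;Everything after the proxy's &lt;code&gt;200&lt;/code&gt; response is opaque TLS, but that one line is enough to route on.&lt;/p&gt;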

&lt;h2&gt;Hello HAProxy&lt;/h2&gt;

&lt;p&gt;To work around the HTTPS limitations, I added HAProxy. The idea was to let it handle routing the requests: either to Nginx for caching, or forwarded along to the requested destination as needed. This idea was based on a misunderstanding of the documentation that led me to believe HAProxy could act as a forward proxy by setting some request variables.&lt;/p&gt;

&lt;p&gt;This mistake cost me a couple of hours of frustration trying to debug a setup that was never designed to work. It wasn't all for nothing, though. Besides being able to use ACLs to split the traffic more cleanly, the HAProxy dashboard gave me far more visibility into the traffic than the relatively bare Nginx status page.&lt;/p&gt;

&lt;p&gt;I could now route the requests more effectively, but I was still no closer to HTTPS working. &lt;/p&gt;
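
&lt;p&gt;The routing at this stage looked roughly like the sketch below. The backend names and ports are made up for illustration, not taken from the project's actual config; note the HTTPS side still pointed at nothing useful:&lt;/p&gt;

```
# haproxy.cfg sketch; names and ports are illustrative
frontend apt_proxy
    bind *:3142
    mode http
    # Proxied HTTPS traffic always begins with a CONNECT request
    acl is_connect method CONNECT
    use_backend https_forward if is_connect
    default_backend nginx_cache

backend nginx_cache
    mode http
    server nginx 127.0.0.1:8080

backend https_forward
    mode http
    # ...still needs something that can actually forward CONNECT requests
    server placeholder 127.0.0.1:8888
```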

&lt;h2&gt;TinyProxy to the rescue&lt;/h2&gt;

&lt;p&gt;At this point, I remembered SteamCache (now called LANCache) from the 2010s, a project for caching Steam game downloads at LAN events. It uses Nginx, similar to what I am attempting, so I decided to take a look at how they handle SSL. They use an "SNI proxy", since they intercept requests directly through DNS replacement. Unfortunately, that approach is useless for handling proxy requests.&lt;/p&gt;

&lt;p&gt;This left me at a bit of a dead end. It was time to take a step back and do something I should have done long ago instead of fighting with HAProxy: just Google it.&lt;/p&gt;

&lt;p&gt;The first result led me to &lt;a href="https://tinyproxy.github.io/" rel="noopener noreferrer"&gt;TinyProxy&lt;/a&gt;, which was exactly what I needed. It’s a small proxy server that handles forwarding HTTPS requests, requires almost zero configuration, and is still actively maintained. Adding it to the container and updating HAProxy to pass the appropriate traffic to it filled in the missing piece. It would handle HTTPS traffic while Nginx continued to handle caching.&lt;/p&gt;
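
&lt;p&gt;"Almost zero configuration" is not an exaggeration; a workable &lt;code&gt;tinyproxy.conf&lt;/code&gt; is only a handful of lines. The values below are assumptions for illustration, not the project's exact settings:&lt;/p&gt;

```
# tinyproxy.conf sketch; port and limits are assumed values
Port 8888
Listen 127.0.0.1
Timeout 600
MaxClients 20
# Only the local HAProxy should be able to reach it
Allow 127.0.0.1
```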

&lt;p&gt;After that, everything started behaving as expected. Testing with Debian and Ubuntu worked without issue, my desktop updated with both HTTP and HTTPS repositories flawlessly. &lt;/p&gt;

&lt;h2&gt;Quick pit-stop for auto-apt-proxy&lt;/h2&gt;

&lt;p&gt;I did not forget about wanting support for auto-apt-proxy. Since it's an open source tool, I was able to look at how it finds proxies, and it turns out the detection method is quite simple: it loops through a list of known endpoints, makes a request for the root, and searches the response for the string "Apt-cache".&lt;/p&gt;

&lt;p&gt;Because I had been jokingly referring to this project as "Apt-cacher-dt", or Apt Cacher Duct Tape, and was already serving an index page with that name and the config line needed to connect, I had unknowingly added support for detection.&lt;/p&gt;
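
&lt;p&gt;In other words, all detection needs is a root response containing the magic string. A hypothetical Nginx location like this would be enough (the text served here is illustrative, not the project's actual index page):&lt;/p&gt;

```nginx
# Hypothetical sketch: any root response containing "Apt-cache"
# satisfies auto-apt-proxy's string match.
location = / {
    default_type text/plain;
    return 200 "Apt-cacher-dt\n";
}
```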

&lt;h2&gt;So what have I done?&lt;/h2&gt;

&lt;p&gt;This was a silly little project to fix a personal gripe that ended up taking three different proxy servers and slapping them together into a single container to create, well, a proxy. It's not going to win any prizes for efficiency, but it does what I set out to do, and it has been running flawlessly for a couple of days now.&lt;/p&gt;

&lt;p&gt;It was a bit more involved than I originally expected, but it gave me a better understanding of how proxies interact and how APT distributes software. It was a fun detour that gave me something to write about, which is something else I have been looking to give a try.&lt;/p&gt;

&lt;p&gt;I've posted the source to &lt;a href="https://github.com/DHandspikerWade/apt-cacher-dt" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt; along with the container image, if anyone would like to take a look or give it a try.&lt;/p&gt;

</description>
      <category>docker</category>
      <category>debian</category>
      <category>ubuntu</category>
      <category>nginx</category>
    </item>
    <item>
      <title>Benefits of a Throwaway Environment</title>
      <dc:creator>Devin H</dc:creator>
      <pubDate>Fri, 05 Jun 2020 13:40:47 +0000</pubDate>
      <link>https://dev.to/dhandspikerwade/benefits-of-a-throwaway-environment-3i0p</link>
      <guid>https://dev.to/dhandspikerwade/benefits-of-a-throwaway-environment-3i0p</guid>
      <description>&lt;p&gt;The state of a developer's workstation can very easily be summarized as "works on my machine" - something different is installed or configured differently and I don't know what or how it got that way. Every machine is personal. Some try to minimize this by using Ansible or Chef to create the ability to spin up their environment cleanly. However at the end of the day, we're developers, sometimes we need to just try something and see what happens. Sometimes trying new things creates a mess that never really gets swept up.&lt;/p&gt;

&lt;p&gt;This is where Docker comes in handy! For the same reason that it's great for immutable deployments, it's great for creating an environment to play in and then toss away once you're done, with very little overhead. A container will only leak the changes that you tell it to: only changes to the files in a mounted volume will be preserved.&lt;/p&gt;
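
&lt;p&gt;A throwaway session is a one-liner. With &lt;code&gt;--rm&lt;/code&gt; the container is deleted on exit, and only the mounted directory survives; the image and paths here are just examples:&lt;/p&gt;

```shell
# Everything outside /work/src vanishes when the shell exits
docker run --rm -it -v "$PWD/src:/work/src" -w /work node:20 bash
```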

&lt;h3&gt;CI/CD&lt;/h3&gt;

&lt;p&gt;If you are using a CI/CD system like GitLab CI, Bitbucket Pipelines, or CircleCI, you may already be using a throwaway environment without realizing it. Continuous integration relies on the ability to run multiple builds simultaneously that may have differing requirements and toolchains. Containers and virtual machines are often used to isolate those build processes while allowing the development team to install any needed tools without affecting another team's workflow. We wouldn't want a build to start failing randomly just because another team decided to try the newest NodeJS beta.&lt;/p&gt;

&lt;h3&gt;Why would you use it?&lt;/h3&gt;

&lt;p&gt;This all sounds great, right? But how do you use it in practice? You use it to experiment! Whether that is trying your app on the newest NodeJS beta, testing a script in a sandbox, or updating an outdated library. All without permanently affecting your workstation; if anything goes wrong or doesn't turn out the way you wanted, delete the container and start again. No need to break your already working environment.&lt;/p&gt;

&lt;h3&gt;Reusability&lt;/h3&gt;

&lt;p&gt;While it's great to have a fresh sandbox each time you want to try something, sometimes you want some tools pre-installed. Using the same process that is used to create immutable deployments, we can create images that include the tools you commonly use. For example, I personally have an image that can build different PHP versions so that I can test different configs across versions. This can be done by finding a pre-made image on DockerHub or creating your own via a Dockerfile. If you are interested in creating your own, I'd recommend heading over to the &lt;a href="https://docs.docker.com" rel="noopener noreferrer"&gt;official documentation&lt;/a&gt;.&lt;/p&gt;
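
&lt;p&gt;As a small illustration (the base image and package list are made-up examples, not my actual image), such a Dockerfile might look like:&lt;/p&gt;

```dockerfile
# Hypothetical example image with everyday tools baked in
FROM debian:stable-slim
RUN apt-get update
RUN apt-get install -y --no-install-recommends git curl build-essential
RUN rm -rf /var/lib/apt/lists/*
WORKDIR /work
CMD ["bash"]
```

&lt;p&gt;Build it once with &lt;code&gt;docker build -t mytools .&lt;/code&gt; and every &lt;code&gt;docker run --rm -it mytools&lt;/code&gt; afterwards starts from the same clean state.&lt;/p&gt;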

&lt;p&gt;&lt;em&gt;Photo by &lt;a href="https://unsplash.com/photos/4gKHjKG7ty4" rel="noopener noreferrer"&gt;Ferenc Horvath&lt;/a&gt; on &lt;a href="https://unsplash.com/" rel="noopener noreferrer"&gt;Unsplash&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>docker</category>
      <category>productivity</category>
      <category>devops</category>
    </item>
  </channel>
</rss>
