<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Thomas H Jones II</title>
    <description>The latest articles on DEV Community by Thomas H Jones II (@ferricoxide).</description>
    <link>https://dev.to/ferricoxide</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F40027%2F4bab0e17-930c-4c24-958b-c9517b86b1db.jpeg</url>
      <title>DEV Community: Thomas H Jones II</title>
      <link>https://dev.to/ferricoxide</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/ferricoxide"/>
    <language>en</language>
    <item>
      <title>Crib Notes: Removing Un-Tracked "Junk" From Git Repositories</title>
      <dc:creator>Thomas H Jones II</dc:creator>
      <pubDate>Tue, 06 Jan 2026 18:43:00 +0000</pubDate>
      <link>https://dev.to/ferricoxide/crib-notes-removing-un-tracked-junk-from-git-repositories-4kp9</link>
      <guid>https://dev.to/ferricoxide/crib-notes-removing-un-tracked-junk-from-git-repositories-4kp9</guid>
<description>&lt;p&gt;I have a couple of git-managed projects where the projects' CI-configuration takes documentation-inputs — usually Markdown files — and renders those inputs into other formats (usually HTML for hosting on platforms like &lt;a href="https://about.readthedocs.com" rel="noopener noreferrer"&gt;Read The Docs&lt;/a&gt;). While the &lt;em&gt;documentation-inputs&lt;/em&gt; are tracked in git, the rendered outputs are not.&lt;/p&gt;

&lt;p&gt;Indeed, they're normally not even generated in contributors' local copies of the GitHub- or GitLab-hosted repositories. At best, the projects are adequately Docker-enabled to make it easy to generate "preview" renderings (to avoid uploading documentation-updates that contain errors, and to save the time and resources lost to server-side rendering of the contents).&lt;/p&gt;

&lt;p&gt;If one &lt;em&gt;does&lt;/em&gt; avail themselves of the "preview" capability, it can leave grumph in the local repository copies. This grumph can lead to non-representative (i.e., "stale") content being previewed. To avoid this, one generally wants to ensure that such "preview" content is cleaned up before the next (local) generation of "preview" content is performed.&lt;/p&gt;

&lt;p&gt;The git client provides a nifty little method for cleaning up such content: &lt;code&gt;git clean&lt;/code&gt;. Unfortunately, running &lt;em&gt;just&lt;/em&gt; &lt;code&gt;git clean&lt;/code&gt; typically won't produce the desired results; one needs to add further flags. I've found that, for my use-cases, the most appropriate/thorough invocation is &lt;code&gt;git clean -fdx&lt;/code&gt;. &lt;/p&gt;
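
&lt;p&gt;Before running the destructive form, a dry-run is a worthwhile habit. A minimal sketch (flag meanings per the git documentation):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Preview what would be removed: -n = dry-run, -d = include directories,
# -x = include ignored files, too
git clean -ndx

# Then actually remove it all: -f = force
git clean -fdx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;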

&lt;p&gt;This is also useful if, in the course of updating a repository — say, as part of a significant refactor — you find you've done a number of &lt;code&gt;mv &amp;lt;DIR&amp;gt;{,-OLD}&lt;/code&gt; types of operations (not exactly the "textbook" method to underpin refactors, but it provides an easy path for "before/after" comparisons). Such directories and similar content will also get wiped away by &lt;code&gt;git clean -fdx&lt;/code&gt;.&lt;/p&gt;

</description>
      <category>cleanup</category>
      <category>git</category>
      <category>stale</category>
    </item>
    <item>
      <title>WSL Space Recovery</title>
      <dc:creator>Thomas H Jones II</dc:creator>
      <pubDate>Wed, 20 Aug 2025 16:43:00 +0000</pubDate>
      <link>https://dev.to/ferricoxide/wsl-space-recovery-31dg</link>
      <guid>https://dev.to/ferricoxide/wsl-space-recovery-31dg</guid>
<description>&lt;p&gt;Recently, the backup client I use for my laptop started popping up "operation aborted: out of space" messages. I was confused because, the last time I'd looked — just a few days prior — I'd had over 100GiB of free space on my boot-drive. Yet, when I looked at my disk-utilization, I only had a few tens of MiB free. &lt;strong&gt;&lt;em&gt;WHAT??&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;So, I began the process of tracking down what was suddenly chewing up all my disk-space. Ultimately, I found a nearly 70GiB "ext4.vhdx" buried &lt;em&gt;deep&lt;/em&gt; within my Windows home-directory's "AppData" hierarchy.&lt;/p&gt;

&lt;p&gt;A quick web search turned up that what I was seeing was the virtual hard disk for my WSL2 instance. This confused me because I'd been pretty scrupulous about keeping my WSL2 instance's storage in check. In fact, when I checked, my WSL instance's &lt;em&gt;visible&lt;/em&gt; storage was only 23GiB — nowhere near the 70GiB+ of the "ext4.vhdx" file that was backing it.&lt;/p&gt;
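
&lt;p&gt;For reference, the two numbers being compared can be pulled like so (the search-path is illustrative — the "ext4.vhdx" location varies by distro and install-method):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Inside the WSL instance: the *visible* storage-usage
df -h /

# From PowerShell: the size of the backing virtual disk
Get-ChildItem -Recurse "$env:LOCALAPPDATA\Packages" -Filter ext4.vhdx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;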

&lt;p&gt;Further Googling turned up that WSL doesn't really reclaim freed storage. So, the differential between the visible storage and the size of the "ext4.vhdx" file was effectively wastage: space that had been allocated to the virtual disk, freed inside the instance, but never returned to the host.&lt;/p&gt;

&lt;p&gt;Next thing I looked up was "how to reclaim wasted space in a WSL drive." Ultimately discovered that I needed to:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Stop my WSL instance&lt;/li&gt;
&lt;li&gt;Back up my WSL instance&lt;/li&gt;
&lt;li&gt;"Optimize" my instance's hard drive&lt;/li&gt;
&lt;li&gt;Restart my WSL instance&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The first step was dead easy: just fire up a cmd.exe (or PowerShell) session, then issue a &lt;code&gt;wsl --shutdown&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;The second step was also fairly easy, as it was something I was doing every few months anyway: while my backup client should be backing up the virtual hard disk, I don't trust those backups to be anything beyond "crash-consistent". At any rate, I took the opportunity to do a:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;wsl --export &amp;lt;WSL_DISTRO&amp;gt; G:\WSL_Backups\&amp;lt;WSL_DISTRO&amp;gt;\backup-$( date '+%Y%m%d' ).tar
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once I found articles on the subject, the VHD compression was also pretty straight-forward (and, thus far, no need for the backups):&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Open a PowerShell window (the optimization-command is a cmdlet from the Hyper-V PowerShell module)&lt;/li&gt;
&lt;li&gt;Navigate to the directory hosting the VHD file &lt;/li&gt;
&lt;li&gt;Execute &lt;code&gt;Optimize-VHD -Path .\ext4.vhdx -Mode full&lt;/code&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The optimization crushed the VHD file back down to a hair larger than the (internal to the instance) disk-size. &lt;/p&gt;

&lt;p&gt;Restarting my WSL instance is just going into my Search box and typing in the name of my instance, then clicking on the menu-item that appears.&lt;/p&gt;

&lt;p&gt;Situation fixed, my next problem was, "why the hell did I end up with such a huge disk-image in the first place?" The answer to that didn't really come until a few days later, when my VHD blew up again.&lt;/p&gt;

&lt;p&gt;I primarily use my WSL instance for work-related stuff. Recently, I'd been using podman to do some — a &lt;em&gt;bunch&lt;/em&gt;, actually — of container work. Worse, some of that Podman-based container-work was resulting in Buildah images getting generated. Whenever I would run &lt;code&gt;podman system prune --all --volumes &amp;amp;&amp;amp; podman system prune --external&lt;/code&gt;, the tool would tell me I'd recovered 5-20GiB of space (particularly on the &lt;code&gt;--external&lt;/code&gt; runs). &lt;/p&gt;

&lt;p&gt;Those space-recovery numbers made me wonder, "are my podman activities blowing up my disk?" So, after a fresh &lt;code&gt;… Optimize-VHD …&lt;/code&gt; run, I decided to see if I could intentionally provoke a "VHD is a multiple of my visible usage" situation. And, yes, I could.&lt;/p&gt;

&lt;p&gt;Moral of the story: while you &lt;em&gt;can&lt;/em&gt; use WSL instances with things like Podman, doing so likely means you'll need to make a habit of more-frequent system-cleanup activities.&lt;/p&gt;

</description>
      <category>podman</category>
      <category>powershell</category>
      <category>vhd</category>
      <category>wsl</category>
    </item>
    <item>
      <title>Implementing (Psuedo) Profiles in Git (Part 2!)</title>
      <dc:creator>Thomas H Jones II</dc:creator>
      <pubDate>Wed, 03 Jul 2024 18:58:00 +0000</pubDate>
      <link>https://dev.to/ferricoxide/implementing-psuedo-profiles-in-git-part-2-27el</link>
      <guid>https://dev.to/ferricoxide/implementing-psuedo-profiles-in-git-part-2-27el</guid>
      <description>&lt;p&gt;As noted in my first &lt;a href="https://www.blogger.com/blog/post/edit/3054063691986932274/8861509966346951193" rel="noopener noreferrer"&gt;&lt;em&gt;Implementing (Psuedo) Profiles in Git&lt;/em&gt;&lt;/a&gt; post:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;I'm an automation consultant for an IT contracting company. Using git is a daily part of my work-life. … Then things started shifting, a bit. Some customers wanted me to use my corporate email address as my ID. Annoying, but not an especially big deal, by itself. Then, some wanted me to use their privately-hosted repositories and wanted me to use identities issued by them.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This led me down a path of setting up multiple git "profiles", which I captured in my first article on this topic. To better support such per-project identities, it's also a good habit to use per-project authentication methods. I generally prefer git-over-SSH – rather than git-over-http(s) – when interfacing with remote Git repositories. Because I don't like having to keep re-entering my password, I use an SSH agent to manage my keys. When one only has one or two projects to regularly interface with, the agent only needs to store a couple of authentication-keys.&lt;/p&gt;

&lt;p&gt;Unfortunately, if you have more than one key in your SSH agent, when you attempt to connect to a remote SSH service, the agent will iteratively present keys until the remote accepts one of them. If you've got three or more keys in your agent, the agent could present 3+ keys to the remote SSH server. By itself, this isn't a problem: the remote logs the earlier-presented keys as authentication failures, but otherwise lets you go about your business. However, if the remote SSH server is hardened, it very likely will be configured to lock your account after the third authentication-failure. As such, if you've got four or more keys in your agent and the remote requires a key that your agent doesn't present in the first three authentication-attempts, you'll find your account for that remote SSH service getting locked out.&lt;/p&gt;
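
&lt;p&gt;(As an aside: one can also limit which key gets offered to a given remote via per-host stanzas in &lt;code&gt;~/.ssh/config&lt;/code&gt; — the host-name and key-path below are illustrative:)&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Host git.customer1.example.com
    IdentityFile ~/.ssh/customer1_ed25519
    # Offer *only* the key named above, even if an agent holds others
    IdentitiesOnly yes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;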

&lt;p&gt;What to do? Launch multiple ssh-agent instantiations.&lt;/p&gt;

&lt;p&gt;Unfortunately, without modifying the default behavior, when you invoke the ssh-agent service, it will create a (semi) randomly-named UNIX domain-socket to listen for requests on. If you've only got a single ssh-agent instance running, this is a non-problem. If you've got multiple — particularly if you're using a tool like &lt;a href="https://github.com/direnv/direnv" rel="noopener noreferrer"&gt;direnv&lt;/a&gt; — setting up your &lt;code&gt;SSH_AUTH_SOCK&lt;/code&gt; in your &lt;code&gt;.envrc&lt;/code&gt; files is problematic if you don't have predictably-named socket-paths.&lt;/p&gt;

&lt;p&gt;How to solve this conundrum? Well, I finally got tired of having to run &lt;code&gt;eval $( ssh-agent )&lt;/code&gt; in per-project Xterms every time I rebooted my dev-console. So, I started googling and, ultimately, just dug through the man page for ssh-agent. In doing the latter, I found:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;DESCRIPTION
ssh-agent is a program to hold private keys used for public key authentication. Through use of environment variables the
agent can be located and automatically used for authentication when logging in to other machines using ssh(1).

The options are as follows:

-a bind_address
Bind the agent to the UNIX-domain socket bind_address. The default is $TMPDIR/ssh-XXXXXXXXXX/agent.&amp;lt;ppid&amp;gt;.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;So, now I can add appropriate command-aliases to my bash profile(s) (which I've already moved to &lt;code&gt;~/.bash_profile.d/&amp;lt;PROJECT&amp;gt;&lt;/code&gt;) that can be referenced based on where in my dev-console's filesystem hierarchy I am, and can set up my &lt;code&gt;.envrc&lt;/code&gt; files, too. Result: if I'm in &lt;code&gt;&amp;lt;CUSTOMER_1&amp;gt;/&amp;lt;PROJECT&amp;gt;/&amp;lt;TREE&amp;gt;&lt;/code&gt;, I get attached to an ssh-agent set up for &lt;em&gt;that&lt;/em&gt; customer's project(s); if I'm in &lt;code&gt;&amp;lt;CUSTOMER_2&amp;gt;/&amp;lt;PROJECT&amp;gt;/&amp;lt;TREE&amp;gt;&lt;/code&gt;, I get attached to an ssh-agent set up for &lt;em&gt;that&lt;/em&gt; customer's project(s); etc.&lt;/p&gt;
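
&lt;p&gt;A minimal sketch of such an alias-plus-&lt;code&gt;.envrc&lt;/code&gt; pairing (the names and paths are illustrative, not my actual setup):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# ~/.bash_profile.d/customer1: start an agent on a predictably-named socket
alias agent-customer1='[[ -S ~/.ssh/agent-customer1.sock ]] || \
    eval "$( ssh-agent -a ~/.ssh/agent-customer1.sock )"'

# &amp;lt;CUSTOMER_1&amp;gt;/&amp;lt;PROJECT&amp;gt;/.envrc (read by direnv)
export SSH_AUTH_SOCK=~/.ssh/agent-customer1.sock
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;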

</description>
      <category>authentication</category>
      <category>git</category>
      <category>ssh</category>
    </item>
    <item>
      <title>Keeping It Clean: EKS and `kubectl` Configuration</title>
      <dc:creator>Thomas H Jones II</dc:creator>
      <pubDate>Thu, 20 Jun 2024 14:25:00 +0000</pubDate>
      <link>https://dev.to/ferricoxide/keeping-it-clean-eks-and-kubectl-configuration-a7m</link>
      <guid>https://dev.to/ferricoxide/keeping-it-clean-eks-and-kubectl-configuration-a7m</guid>
<description>&lt;p&gt;&lt;a href="https://dev.to/ferricoxide/crib-notes-accessing-eks-cluster-with-kubectl-2h1g-temp-slug-7611332"&gt;Previously&lt;/a&gt;, I was worried about, "how do I make it so that &lt;code&gt;kubectl&lt;/code&gt; can talk to my EKS clusters". However, after several days of standing up and tearing down EKS clusters across several accounts, I discovered that my &lt;code&gt;~/.kube/config&lt;/code&gt; file had absolutely &lt;em&gt;exploded&lt;/em&gt; in size and its manageability had been reduced to nearly zero. And, while &lt;code&gt;aws eks update-kubeconfig --name &amp;lt;CLUSTER_NAME&amp;gt;&lt;/code&gt; is great, its lack of a &lt;code&gt;--delete&lt;/code&gt; suboption is kind of horrible when you want or need to clean out long-since-deleted clusters from your environment. So, on to the "next best thing", I guess…&lt;/p&gt;

&lt;p&gt;Ultimately, that "next best thing" was setting a &lt;code&gt;KUBECONFIG&lt;/code&gt; environment-variable as part of my configuration/setup tasks (e.g., something like &lt;code&gt;export KUBECONFIG=${HOME}/.kube/config.d/MyAccount.conf&lt;/code&gt;). While not as good as the &lt;code&gt;aws eks update-kubeconfig --name &amp;lt;CLUSTER_NAME&amp;gt; --delete&lt;/code&gt; I'd like to exist, it at least means that:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Each AWS account's EKS configuration-stanzas are kept wholly separate from each other&lt;/li&gt;
&lt;li&gt;Cleanup is reduced to simply overwriting – or straight-up nuking – the per-account &lt;code&gt;${HOME}/.kube/config.d/MyAccount.conf&lt;/code&gt; files&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;…I tend to like to keep my stuff "tidy". This kind of configuration-separation facilitates scratching that (OCDish) itch. &lt;/p&gt;
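
&lt;p&gt;In practice, the pattern looks something like the following (account- and cluster-names are placeholders):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Per-account setup (e.g., in a profile-snippet or direnv .envrc)
export KUBECONFIG=${HOME}/.kube/config.d/MyAccount.conf

# New cluster-stanzas now land only in that file…
aws eks update-kubeconfig --name &amp;lt;CLUSTER_NAME&amp;gt;

# …and cleanup is just:
rm -f ${HOME}/.kube/config.d/MyAccount.conf
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;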

&lt;p&gt;The above is derived, in part, from the &lt;em&gt;&lt;a href="https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/"&gt;Organizing Cluster Access Using kubeconfig Files&lt;/a&gt;&lt;/em&gt; document.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>cli</category>
      <category>eks</category>
      <category>kubernetes</category>
    </item>
    <item>
      <title>So You Work in Private VPCs and Want CLI Access to Your Linux EC2s?</title>
      <dc:creator>Thomas H Jones II</dc:creator>
      <pubDate>Thu, 16 May 2024 14:51:00 +0000</pubDate>
      <link>https://dev.to/ferricoxide/so-you-work-in-private-vpcs-and-want-cli-access-to-your-linux-ec2s-205g</link>
      <guid>https://dev.to/ferricoxide/so-you-work-in-private-vpcs-and-want-cli-access-to-your-linux-ec2s-205g</guid>
<description>&lt;p&gt;Most of the AWS projects I work on, both currently and historically, have deployed most, if not all, of their EC2s into private VPC subnets. This means that, if one wants to be able to directly log in to their Linux EC2s' interactive shells, they're out of luck. Historically, to get something akin to direct access, one had to set up bastion-hosts in a public VPC subnet, then jump through to the EC2s one actually wanted to log in to. How well one secured those bastion-hosts could make-or-break how well-isolated their private VPC subnets – and associated resources – &lt;em&gt;actually&lt;/em&gt; were.&lt;/p&gt;

&lt;p&gt;If you were the sole administrator or part of a small team, or were part of an arbitrary-sized administration-group that &lt;em&gt;all&lt;/em&gt; worked from a common network (i.e., from behind a corporate firewall or through a corporate VPN), keeping a bastion-host secure was fairly easy. All you had to do was set up a security-group that allowed only SSH connections and only allowed them from one or a few source IP addresses (e.g. your corporate firewall's outbound NAT IP address). For a bit of extra security, one could even prohibit password-based logins on the Linux bastions (instead, using SSH key-based login, SmartCards, etc. for authenticating logins). However, if you were a member of a team of non-trivial size and your team members were geographically-distributed, maintaining whitelists to protect bastion-hosts could become painful. That painfulness would be magnified if that distributed team's members were either frequently changing-up their work locations or were coming from locations where their outbound IP address would change with any degree of frequency (e.g., work-from-home staff whose ISPs would frequently change their routers' outbound IPs or people – like me – who habitually use VPN services).&lt;/p&gt;

&lt;p&gt;A few years ago, AWS introduced SSM and the ability to tunnel SSH connections through SSM (see the &lt;a href="https://repost.aws/knowledge-center/systems-manager-ssh-vpc-resources"&gt;re:Post article&lt;/a&gt; for more). With appropriate account-level security-controls, the need for dedicated bastion-hosts and maintenance of whitelists effectively vanished. Instead, all one had to do was:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Register an SSH key to the target EC2s' account&lt;/li&gt;
&lt;li&gt;Set up their local SSH client to allow SSH-over-SSM&lt;/li&gt;
&lt;li&gt;Then SSH "directly" to their target EC2s&lt;/li&gt;
&lt;/ul&gt;
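
&lt;p&gt;The "allow SSH-over-SSM" step typically amounts to a &lt;code&gt;~/.ssh/config&lt;/code&gt; stanza along the lines of the one AWS documents:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Route ssh connections to EC2 instance-IDs through an SSM session
Host i-* mi-*
    ProxyCommand sh -c "aws ssm start-session --target %h --document-name AWS-StartSSHSession --parameters 'portNumber=%p'"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;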

&lt;p&gt;SSM would, effectively, "take care of the rest" …including logging of connections. If one were feeling &lt;em&gt;really&lt;/em&gt; enterprising, one could enable key-logging for those SSM-tunneled SSH connections (a good search-engine query should turn up configuration guides; one such guide is &lt;a href="https://www.toptal.com/aws/ssh-log-with-ssm"&gt;toptal's&lt;/a&gt;). This would, undoubtedly, make your organization's IA team really happy (and may even be required, depending on the security-requirements your organization is legally required to adhere to) – especially if they don't yet have an enterprise session-logging tool in place.&lt;/p&gt;

&lt;p&gt;But what if your EC2s are hosting applications that &lt;em&gt;require&lt;/em&gt; GUI-based access to set up and/or administer? Generally, you have two choices:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;X11 display-redirection&lt;/li&gt;
&lt;li&gt;SSH port-forwarding&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Unfortunately, SSM is a fairly low-throughput solution. So, while doing X11 display-redirection from an EC2 in a public VPC subnet may be more than adequately performant, the same cannot be said when done through an SSH-over-SSM tunnel. Doing X11 display-redirection of a remote browser session – or, worse, an entire graphical desktop session (e.g., KDE or Gnome desktops) – is &lt;em&gt;paaaaaainfully&lt;/em&gt; slow. For my own tastes, it's &lt;em&gt;uselessly&lt;/em&gt; slow. &lt;/p&gt;

&lt;p&gt;Alternately, one can use SSH port-forwarding as part of that SSH-over-SSM session. Then, instead of trying to send rendered graphics over the tunnel, one only sends the pre-rendered data. It's a much lighter traffic load with the result being a &lt;em&gt;much&lt;/em&gt; quicker/livelier response. It's also pretty easy to set up. Something like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
ssh -L localhost:8080:$(
  aws ec2 describe-instances \
    --instance-ids &amp;lt;EC2_INSTANCE_ID&amp;gt; \
    --query 'Reservations[].Instances[].PrivateIpAddress' \
    --output text
  ):80 &amp;lt;USERID&amp;gt;@&amp;lt;EC2_INSTANCE_ID&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That's all you need. In the above, the argument to the &lt;code&gt;-L&lt;/code&gt; flag says, "set up a tcp/8080 listener on my local machine and forward connections to the remote machine's tcp/80". The local and remote ports can be varied for your specific needs. You can even set up dynamic-forwarding by creating a SOCKS proxy (but this document is meant to be a starting point, not a dive into the weeds).&lt;/p&gt;
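
&lt;p&gt;(For the dynamic-forwarding case, the invocation is even shorter — the port is an arbitrary choice — after which you point your browser's SOCKS5 proxy-setting at &lt;code&gt;localhost:1080&lt;/code&gt;:)&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ssh -D localhost:1080 &amp;lt;USERID&amp;gt;@&amp;lt;EC2_INSTANCE_ID&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;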

&lt;p&gt;Note that, while the above is using a subshell (via the $( … ) shell-syntax) to snarf the remote EC2's private IP address, one &lt;em&gt;should&lt;/em&gt; be able to simply substitute "localhost". I simply prefer to try to speak to the remote's ethernet, rather than loopback, interface, since doing so can help identify firewall-type issues that might interfere with others' use of the target service.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>connectivity</category>
      <category>ec2</category>
      <category>security</category>
    </item>
    <item>
      <title>ACTUALLY Deleting Emails in gSuite/gMail</title>
      <dc:creator>Thomas H Jones II</dc:creator>
      <pubDate>Fri, 06 Oct 2023 12:49:00 +0000</pubDate>
      <link>https://dev.to/ferricoxide/actually-deleting-emails-in-gsuitegmail-39oi</link>
      <guid>https://dev.to/ferricoxide/actually-deleting-emails-in-gsuitegmail-39oi</guid>
      <description>&lt;p&gt;Each month, I archive all the contents of my main gSuite account to a third-party repository. I do this via an IMAP-based transfer.&lt;/p&gt;

&lt;p&gt;Unfortunately, when you use an IMAP-based transfer to move messages, Google doesn't &lt;em&gt;actually&lt;/em&gt; delete the emails from your gMail/gSuite account. No, it simply removes all labels from them. This means that instead of getting space back – space that Google charges for in gSuite – the messages simply become not-easily-visible within gMail. None of your space is freed up and, thus, space-charges for those unlabeled emails continue to accrue.&lt;/p&gt;

&lt;p&gt;Discovered this annoyance a couple years ago, when my mail-client was telling me I was getting near the end of my quota. When I first got the quota-warning, I was like, "how??? I've offloaded all my old emails. There's only a month's worth of email in my Inbox, Sent folder and my per-project folders!" That prompted me to dig around and discover the de-labeled/not-deleted fuckery. So, I dug around further to find &lt;a href="https://raisedbyturtles.org/view-unlabeled-gmail#method1"&gt;a method&lt;/a&gt; for viewing those de-labeled/not-deleted emails. Turns out, putting:&lt;/p&gt;

&lt;blockquote&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-has:userlabels -in:sent -in:chat -in:draft -in:inbox
&lt;/code&gt;&lt;/pre&gt;

&lt;/blockquote&gt;

&lt;p&gt;into your webmail search-bar will show them to you …and allow you to delete them.&lt;/p&gt;

&lt;p&gt;My gSuite account was a couple years old when I discovered all this. So, when I selected all the unlabeled emails for deletion, it took a &lt;em&gt;while&lt;/em&gt; for Google to actually delete them. However, once the deletion completed, I recovered nearly 2GiB worth of space in my gSuite account.&lt;/p&gt;

</description>
      <category>gmail</category>
      <category>gsuite</category>
      <category>imap</category>
      <category>label</category>
    </item>
    <item>
      <title>TIL: I Am Probably Going To Come To Hate `fapolicyd`</title>
      <dc:creator>Thomas H Jones II</dc:creator>
      <pubDate>Tue, 11 Apr 2023 19:16:00 +0000</pubDate>
      <link>https://dev.to/ferricoxide/til-i-am-probably-going-to-come-to-hate-fapolicyd-4dd8</link>
      <guid>https://dev.to/ferricoxide/til-i-am-probably-going-to-come-to-hate-fapolicyd-4dd8</guid>
      <description>&lt;p&gt;One of the things I do in my role is write security automation. Part of that requires testing systems' hardening-compliance each time one of the security-benchmarks my customers use is updated.&lt;/p&gt;

&lt;p&gt;In a recent update, the benchmarks for Red Hat Enterprise Linux 8 (and derivatives) added the requirement to enable and run the application-whitelisting service, &lt;a href="https://github.com/linux-application-whitelisting/fapolicyd"&gt;fapolicyd&lt;/a&gt;. I didn't immediately notice this change …until I went to offload the security-scans from my test EC2 to an S3 bucket. The AWS CLI was suddenly broken.&lt;/p&gt;

&lt;p&gt;Worse, it was broken in an absolutely inscrutable way: if one executed &lt;em&gt;any&lt;/em&gt; AWS CLI command, even something as simple and basic as &lt;code&gt;aws help&lt;/code&gt;, it would &lt;em&gt;immediately&lt;/em&gt; return, having neither performed the requested action nor emitted an error. As an initial debug attempt, I did:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;echo $( aws help )$?
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Which got me the oh-so-useful &lt;code&gt;255&lt;/code&gt; for my troubles. But, it did at least give me something to Google. My first hit was &lt;a href="https://docs.aws.amazon.com/cli/latest/topic/return-codes.html#:~:text=255%20%2D%2D%20Command%20failed.,the%20request%20was%20made%20to."&gt;this guy&lt;/a&gt;. It had the oh-so-helpful guidance:&lt;/p&gt;

&lt;blockquote&gt;
&lt;ul&gt;
&lt;li&gt;255 -- Command failed. There were errors from either the CLI or the service the request was made to.&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;

&lt;p&gt;Like I said: "oh-so-helpful guidance".&lt;/p&gt;

&lt;p&gt;So, I opened a support request with Amazon. Initially, they were at least as in the dark as I was. &lt;/p&gt;

&lt;p&gt;Fortunately, another member of the team I work on noticed the support case when it came into his Inbox via our development account's auto-Cc configuration. Unlike me, he, apparently, hasn't deadmailed everything sent to that distro (which, given &lt;em&gt;how much&lt;/em&gt; is sent to that distro, deadmailing anything that arrived through the distro was the only way to preserve my sanity). He'd indicated that he had previously had similar issues and that he got around them by disabling the &lt;code&gt;fapolicyd&lt;/code&gt; service. I quickly tested stopping the service …and the AWS CLI happily resumed functioning as it had before the hardening tasks had executed.&lt;/p&gt;

&lt;p&gt;I knew that wholly disabling the service was &lt;em&gt;not&lt;/em&gt; going to be acceptable to our cyber-defense team. But, knowing &lt;em&gt;where&lt;/em&gt; the problem originated meant I at least now had a useful investigatory path.&lt;/p&gt;
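
&lt;p&gt;For anyone on a similar investigatory path: Red Hat's troubleshooting guidance boils down to stopping the service and running the daemon in the foreground, printing only the denials, while reproducing the failure from a second terminal:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;systemctl stop fapolicyd
fapolicyd --debug-deny
# …then, in another terminal, re-run the failing command (e.g., `aws help`)
# and watch which rule fires
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;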

&lt;p&gt;The first (seeming) solution I found was to execute something like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;fapolicyd-cli --file add /usr/local/bin/aws
fapolicyd -u
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This allowed both the AWS CLI to function and the &lt;code&gt;fapolicyd&lt;/code&gt; service to keep running.&lt;/p&gt;

&lt;p&gt;For better or worse, though, I'm a curious type. I wanted to see what the rule looked like that the &lt;code&gt;fapolicyd-cli&lt;/code&gt; utility had created. So, I dug around the documentation to find where I might be able to eyeball the exception.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# AUTOGENERATED FILE VERSION 2
# This file contains a list of trusted files
#
# FULL PATH SIZE SHA256
# /home/user/my-ls 157984 61a9960bf7d255a85811f4afcac51067b8f2e4c75e21cf4f2af95319d4ed1b87
/usr/local/aws-cli/v2/2.11.6/dist/aws 6658376 c48f667b861182c2785b5988c5041086e323cf2e29225da22bcd0f18e411e922
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Which immediately rang alarm bells in my skull (strong "Danger, Will Robinson" vibes). By making the exception conditional not only on the binary's (real) path, but &lt;em&gt;also&lt;/em&gt; on its size and, particularly, its SHA256 hash, I knew that if anyone ever updated the installed binary, the exception would no longer match. This, in turn, would mean that the utility would stop working. Not wanting to deal with tickets that I could easily prevent, I continued my investigation.&lt;/p&gt;

&lt;p&gt;Knowing that what I &lt;em&gt;actually&lt;/em&gt; wanted was to give a blanket exemption to everything under the &lt;code&gt;/usr/local/aws-cli/v2&lt;/code&gt; directory, I started investigating how to do that. Ultimately, I came up with an exception-set that looked like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;allow perm=any all : dir=/usr/local/aws-cli/v2/ type=application/x-sharedlib trust 1
allow perm=any all : dir=/usr/local/aws-cli/v2/ type=application/x-executable trust 1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I saved the contents as &lt;code&gt;/etc/fapolicyd/rules.d/80-aws.rules&lt;/code&gt; (the Red Hat documentation for &lt;code&gt;fapolicyd&lt;/code&gt; had given some general guidance around the naming of rule-files that this seemed to align with) and reloaded the &lt;code&gt;fapolicyd&lt;/code&gt; configuration. However, I was sadly disappointed to discover that the AWS CLI was still broken. On the plus side, it was broken &lt;em&gt;differently&lt;/em&gt;. Now, instead of immediately and silently exiting (with a &lt;code&gt;255&lt;/code&gt; exit-code), it was giving me:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[2030] Error loading Python lib '/usr/local/aws-cli/v2/2.11.6/dist/libpython3.11.so.1.0':
dlopen: /usr/local/aws-cli/v2/2.11.6/dist/libpython3.11.so.1.0: cannot open shared object
file: Operation not permitted
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Much more meat to chew on. Further searching later, I had two additional commands to help me in my digging:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ausearch --start today -m fanotify --raw | aureport --file --summary
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Which gave me:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;File Summary Report
===========================
total file
===========================
16 /usr/local/aws-cli/v2/2.11.6/dist/libz.so.1
16 /usr/local/aws-cli/v2/2.11.6/dist/libpython3.11.so.1.0
2 /usr/local/aws-cli/v2/2.11.6/dist/aws
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;fapolicyd-cli --list
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Which gave me:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-&amp;gt; %languages=application/x-bytecode.ocaml,application/x-bytecode.python,
application/java-archive,text/x-java,application/x-java-applet,application/javascript,
text/javascript,text/x-awk,text/x-gawk,text/x-lisp,application/x-elc,text/x-lua,
text/x-m4,text/x-nftables,text/x-perl,text/x-php,text/x-python,text/x-R,text/x-ruby,
text/x-script.guile,text/x-tcl,text/x-luatex,text/x-systemtap
 1. allow perm=any uid=0 : dir=/var/tmp/
 2. allow perm=any uid=0 trust=1 : all
 3. allow perm=open exe=/usr/bin/rpm : all
 4. allow perm=open exe=/usr/libexec/platform-python3.6 comm=dnf : all
 5. deny_audit perm=any pattern=ld_so : all
 6. deny_audit perm=any all : ftype=application/x-bad-elf
 7. allow perm=open all : ftype=application/x-sharedlib trust=1
 8. deny_audit perm=open all : ftype=application/x-sharedlib
 9. allow perm=execute all : trust=1
10. allow perm=open all : ftype=%languages trust=1
11. deny_audit perm=any all : ftype=%languages
12. allow perm=any all : ftype=text/x-shellscript
13. allow perm=any all : dir=/usr/local/aws-cli/v2/ ftype=application/x-sharedlib trust=1
14. allow perm=any all : dir=/usr/local/aws-cli/v2/ ftype=application/x-executable trust=1
15. deny_audit perm=execute all : all
16. allow perm=open all : all
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Which told me why the binary was at least &lt;em&gt;trying&lt;/em&gt; to work but was unable to load its shared-library: since I'd named the file &lt;code&gt;/etc/fapolicyd/rules.d/80-aws.rules&lt;/code&gt;, my exceptions were loading later than a rule that prevents access to shared libraries outside the standard trust-paths. In the listing above, that blocking rule is rule #8.&lt;/p&gt;
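&lt;p&gt;Finding which rules-file a given compiled rule comes from is a simple &lt;code&gt;grep&lt;/code&gt;. The following mock-up sketches the hunt — the file-names and directory here are invented stand-ins; on a real host, you'd point the &lt;code&gt;grep&lt;/code&gt; at the actual &lt;code&gt;/etc/fapolicyd/rules.d/&lt;/code&gt; contents:&lt;br&gt;&lt;/p&gt;

```shell
# Build a mock rules.d directory (file names are hypothetical), then find
# which file supplies the shared-library deny rule
rulesd="$(mktemp -d)"
echo 'deny_audit perm=open all : ftype=application/x-sharedlib' \
  > "$rulesd/40-deny-libs.rules"
echo 'allow perm=any all : dir=/usr/local/aws-cli/v2/ ftype=application/x-sharedlib trust=1' \
  > "$rulesd/30-aws.rules"
# Prints the path of the file contributing the deny rule
grep -l 'deny_audit perm=open' "$rulesd"/*.rules
```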

&lt;p&gt;I &lt;code&gt;grep&lt;/code&gt;ped through the &lt;code&gt;/etc/fapolicyd/rules.d/&lt;/code&gt; directory looking for the file that created rule #8. Having found it, I moved my rule-file upwards with a quick &lt;code&gt;mv /etc/fapolicyd/rules.d/{8,3}0-aws.rules&lt;/code&gt; and reloaded my rules. This time, my rules-list came up like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-&amp;gt; %languages=application/x-bytecode.ocaml,application/x-bytecode.python,
application/java-archive,text/x-java,application/x-java-applet,application/javascript,
text/javascript,text/x-awk,text/x-gawk,text/x-lisp,application/x-elc,text/x-lua,
text/x-m4,text/x-nftables,text/x-perl,text/x-php,text/x-python,text/x-R,text/x-ruby,
text/x-script.guile,text/x-tcl,text/x-luatex,text/x-systemtap
 1. allow perm=any uid=0 : dir=/var/tmp/
 2. allow perm=any uid=0 trust=1 : all
 3. allow perm=open exe=/usr/bin/rpm : all
 4. allow perm=open exe=/usr/libexec/platform-python3.6 comm=dnf : all
 5. allow perm=any all : dir=/usr/local/aws-cli/v2/ ftype=application/x-sharedlib trust=1
 6. allow perm=any all : dir=/usr/local/aws-cli/v2/ ftype=application/x-executable trust=1
 7. deny_audit perm=any pattern=ld_so : all
 8. deny_audit perm=any all : ftype=application/x-bad-elf
 9. allow perm=open all : ftype=application/x-sharedlib trust=1
10. deny_audit perm=open all : ftype=application/x-sharedlib
11. allow perm=execute all : trust=1
12. allow perm=open all : ftype=%languages trust=1
13. deny_audit perm=any all : ftype=%languages
14. allow perm=any all : ftype=text/x-shellscript
15. deny_audit perm=execute all : all
16. allow perm=open all : all
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With my &lt;code&gt;sharedlib&lt;/code&gt; allow-rule now ahead of the default-deny &lt;code&gt;sharedlib&lt;/code&gt; rule, I tested out the AWS CLI command again. Success!&lt;/p&gt;

&lt;p&gt;Unfortunately, while I solved the problem I set out to solve, my &lt;code&gt;ausearch&lt;/code&gt; output was telling me that a few other standard tools were also likely having similar whitelisting issues. Ironically, those "other standard tools" are &lt;em&gt;all&lt;/em&gt; security-related.&lt;/p&gt;

&lt;p&gt;Fun fact: a number of security-vendors write their products for Windows first and foremost. Their Linux tooling is almost an afterthought. As such, it's often not well-delivered: if they deliver their software in RPMs &lt;em&gt;at all&lt;/em&gt;, the RPMs are often poorly constructed. I almost never see &lt;strong&gt;signed&lt;/strong&gt; RPMs from security vendors. When I do actually get signed RPMs, they're very rarely signed in a way that's compatible with a Red Hat system that's configured to run in FIPS mode. So, I guess I shouldn't be super surprised that these same security-tools aren't aware of the need to work with fapolicyd, or of how to do so. Oh well, that's someone else's problem (realistically, probably "future me's").&lt;/p&gt;

</description>
      <category>aws</category>
      <category>fapolicyd</category>
      <category>rhel8</category>
    </item>
    <item>
      <title>Crib Notes: Assuming a Role</title>
      <dc:creator>Thomas H Jones II</dc:creator>
      <pubDate>Fri, 07 Apr 2023 16:21:00 +0000</pubDate>
      <link>https://dev.to/ferricoxide/crib-notes-assuming-a-role-ejo</link>
      <guid>https://dev.to/ferricoxide/crib-notes-assuming-a-role-ejo</guid>
      <description>&lt;p&gt;Several of my current customers leverage AWS IAM's role-assumption capability. In particular, one of my customers leverages it for automating the execution of the Terragrunt-based IaC. For the automated-execution, they run the Terragrunt code from an EC2 that has an attached IAM role that allows code executed on the hosting-EC2 to assume roles in other accounts.&lt;/p&gt;

&lt;p&gt;Sometimes, when writing updates to their Terragrunt code, it's helpful to be able to audit the target account's state before and after the execution, but outside the context of Terragrunt, itself. In these cases, knowing how to use the AWS CLI to switch roles can be quite handy. A quick one-liner template for doing so looks like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ eval "$(
aws sts assume-role \
  --role-arn "arn:&amp;lt;AWS_PARTITION&amp;gt;:iam::&amp;lt;TARGET_ACCOUNT_NUMBER&amp;gt;:role/&amp;lt;TARGET_ROLE_NAME&amp;gt;" \
  --role-session-name &amp;lt;userid&amp;gt; --query 'Credentials' | \
awk '/(Key|Token)/{ print $0 }' | \
sed -e 's/",$/"/' \
    -e 's/^\s*"/export /' \
    -e 's/": "/="/' \
    -e 's/AccessKeyId/AWS_ACCESS_KEY_ID/' \
    -e 's/SecretAccessKey/AWS_SECRET_ACCESS_KEY/' \
    -e 's/SessionToken/AWS_SESSION_TOKEN/'
)"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;What the above does is:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Opens a subshell to execute a series of commands in&lt;/li&gt;
&lt;li&gt;Executes &lt;code&gt;aws sts assume-role&lt;/code&gt; to fetch credentials, in JSON format, for accessing the target AWS account as the target IAM role&lt;/li&gt;
&lt;li&gt;Uses &lt;code&gt;awk&lt;/code&gt; to select which parts of the prior command's JSON output to keep (&lt;code&gt;grep&lt;/code&gt; or others are likely more computationally-efficient, but you get the idea)&lt;/li&gt;
&lt;li&gt;Uses &lt;code&gt;sed&lt;/code&gt; to convert the JSON parameter/value pair-strings into BASH-compatible environment-variable declarations&lt;/li&gt;
&lt;li&gt;Uses &lt;code&gt;eval&lt;/code&gt; to take the output of the subshell and read it into the current shell's environment&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Once this is executed, your shell will have credentials to execute commands in the target account – be that using the AWS CLI or any other tool that understands the "AWS_ACCESS_KEY_ID", "AWS_SECRET_ACCESS_KEY" and "AWS_SESSION_TOKEN" environment variables.&lt;/p&gt;

&lt;p&gt;Using &lt;code&gt;aws sts get-caller-identity&lt;/code&gt; will allow you to see your new IAM role.&lt;/p&gt;
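&lt;p&gt;If &lt;code&gt;jq&lt;/code&gt; is available, the &lt;code&gt;awk&lt;/code&gt;/&lt;code&gt;sed&lt;/code&gt; portion of the pipeline can be collapsed into a single expression. In the sketch below, the &lt;code&gt;aws sts assume-role&lt;/code&gt; output is mocked with fake credential-values so the transformation is visible without an AWS account:&lt;br&gt;&lt;/p&gt;

```shell
# Mock of the JSON that `aws sts assume-role ... --query 'Credentials'`
# returns (all values here are fake)
creds='{"AccessKeyId":"AKIDEXAMPLE","SecretAccessKey":"wJalrXUtnFEMI","SessionToken":"FQoGZXIvYXdzEXAMPLE","Expiration":"2023-04-07T17:21:00Z"}'

# In real use, replace the printf with the aws sts assume-role command
eval "$(
printf '%s' "$creds" |
jq -r '"export AWS_ACCESS_KEY_ID=\(.AccessKeyId) AWS_SECRET_ACCESS_KEY=\(.SecretAccessKey) AWS_SESSION_TOKEN=\(.SessionToken)"'
)"
echo "$AWS_ACCESS_KEY_ID"
```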

</description>
      <category>assumerole</category>
      <category>aws</category>
      <category>iam</category>
    </item>
    <item>
      <title>Why Prompt Uncomfortable Questions</title>
      <dc:creator>Thomas H Jones II</dc:creator>
      <pubDate>Wed, 08 Mar 2023 16:08:00 +0000</pubDate>
      <link>https://dev.to/ferricoxide/why-prompt-uncomfortable-questions-2jj1</link>
      <guid>https://dev.to/ferricoxide/why-prompt-uncomfortable-questions-2jj1</guid>
      <description>&lt;p&gt;I do automation work for a number of enterprise customers. Each of these customers deploys Red Hat and Red Hat derivatives in their hosting environments (be those environments physical, on-premises virtualized or "in the cloud"). Basically, they use "for real" Red Hat for their production systems, then use one of the free clones for the development efforts …until the CentOS 8 Core → CentOS 8 Stream debacle, they used Centos for the development efforts. My focus with such customers is almost always constrained to their cloud-hosted Enterprise Linux hosts.&lt;/p&gt;

&lt;p&gt;However, due to that CentOS 8 Stream debacle, one of my customers fell for Oracle's "use OL8: it's free" come-on. Now, if you've been in IT for even five minutes, you realize that anything "free" offered by Oracle is only so offered because Oracle sees it as a foot in the door. That said, with OL8, specifically, they try to add features to it to make it "a more compelling offering" (or something). Unfortunately, to date, the deltas between OL8 and RHEL8 are significantly greater than between RHEL8 and Rocky 8 or Alma 8 ...and not for the better. Automation that my team has written that "just works" for creating RHEL 8, Rocky 8 and Alma 8 AMIs and VM-templates doesn't "just work" for creating OL8 AMIs.&lt;/p&gt;

&lt;p&gt;For example, while Rocky 8, Alma 8 and CentOS 8 Stream (and CentOS 8 Core, before it) all know how to do weak dependency resolution, OL8 doesn't (or, at least, didn't until &lt;em&gt;maybe&lt;/em&gt; recently: a couple months after reporting the issue to Oracle via their bugzilla, they finally sent back a notification saying the problem's fixed, I just haven't had the opportunity to verify).&lt;/p&gt;

&lt;p&gt;At any rate, each time I've run into various brokenness within OL8, the customer's Oracle sales team keeps trying to sell support contracts.&lt;/p&gt;

&lt;p&gt;Similarly, when trying to tout the value of OL8, they tried to flog OL8's ksplice feature. When I asked "why would I use that when RHEL 8 and all of its non-Oracle derivatives have kpatch", the Oracle rep responded back with a list of things that ksplice is able to do but kpatch isn't. That said, the wording of his response elicited a "but you also seem to be saying that feature isn't in the free edition" response from me. Eventually, the representative replied back saying that my assessment was accurate – the flogged feature wasn't in the free edition.&lt;/p&gt;

&lt;p&gt;He also tried to salvage the "Oracle support" thread by pointing out that Oracle would also provide support for my customer's Red Hat systems under an uber-contract. Now, my customer uses pay as you go (PAYG) EC2s in AWS but wanted free EL8 alternatives – thus the consideration of OL8 – for their non-production workloads. As such, if they're doing PAYG RHEL instances and wanting free OL8 for their non-production workloads, why would suggesting my customer buy Oracle's support for all of them make any sense? I mean, if my customer were not doing PAYG RHEL instances, they'd presumably have bought instance-licenses (and support along with them) from Red Hat, so, again: why would they want to buy Oracle's support for them?&lt;/p&gt;

&lt;p&gt;…similarly, if they're already doing static licensing for RHEL instances, then they're probably also managing their instance-licenses through Satellite (etc.). As such, they'd then be able to take advantage of the "free for developers" licenses for their non-production EC2s …at which point the question would be "why would they even bother with OL8" let alone have to ask the "why would we buy Oracle's support for them" question?&lt;/p&gt;

&lt;p&gt;Yeah, I get that Oracle sees OL8 more as a foot-in-the-door than an actual product, but still: send me information that doesn't raise questions that are going to be uncomfortable for you to answer.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Code Explainer: Regex and Backrefs in Ansible Code</title>
      <dc:creator>Thomas H Jones II</dc:creator>
      <pubDate>Fri, 23 Sep 2022 18:43:00 +0000</pubDate>
      <link>https://dev.to/ferricoxide/code-explainer-regex-and-backrefs-in-ansible-code-gn7</link>
      <guid>https://dev.to/ferricoxide/code-explainer-regex-and-backrefs-in-ansible-code-gn7</guid>
      <description>&lt;p&gt;Recently, I'd submitted a code-update to a customer-project I was working on. I tend to write very dense code, even when using simplification frameworks like Ansible. As a result, I had to answer some questions asked by the person who did the code review. Ultimately, I figured it was worth writing up an explainer of what I'd asked them to review…&lt;/p&gt;

&lt;p&gt;The Ansible-based code in question was actually just &lt;em&gt;one&lt;/em&gt; task:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;---
- name: Remove sha512 from the LOG option-list
  ansible.builtin.lineinfile:
    backrefs: true
    line: '\g&amp;lt;log&amp;gt;\g&amp;lt;equals&amp;gt;\g&amp;lt;starttoks&amp;gt;\g&amp;lt;endtoks&amp;gt;'
    path: /etc/aide.conf
    regexp: '^#?(?P&amp;lt;log&amp;gt;LOG)(?P&amp;lt;equals&amp;gt;(\s?)(=)(\s?))(?P&amp;lt;starttoks&amp;gt;.*\w)(?P&amp;lt;rmtok&amp;gt;\+?sha512)(?P&amp;lt;endtoks&amp;gt;\+?.*)'
    state: present
...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The above is meant to ensure that the RHEL 7 config file, "/etc/aide.conf", sets the proper options for the defined scan-definition, "LOG". The original contents of the line were:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;LOG = p+u+g+n+acl+selinux+ftype+sha512+xattrsfor
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The STIGs were updated to indicate that the contents of that line should actually be:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;LOG = p+u+g+n+acl+selinux+ftype+xattrsfor
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The values of the task's regexp and backrefs attributes are designed to use the advanced line-editing afforded by the &lt;a href="https://docs.ansible.com/ansible/latest/collections/ansible/builtin/lineinfile_module.html"&gt;Ansible lineinfile module&lt;/a&gt;. Ansible is written in Python, and this module's advanced line-editing capabilities are implemented using Python's &lt;a href="https://docs.python.org/3/library/re.html"&gt;re module&lt;/a&gt;. The regexp attribute's value is written to make use of the re module's ability to do referenceable search-groupings. Search-groupings are specified using parenthesis-delimited search-rules (i.e., "(SEARCH_SYNTAX)").&lt;/p&gt;

&lt;p&gt;By default, a given search-grouping is referenced by a left-to-right index-number, starting at "1". These reference-IDs – also referred to as "backrefs" – can then be used in the replacement-string (the value of the task's line attribute) to help construct the replacement string's value. Using the index-number method, the replacement-string would be "\1\2\6\8" …which isn't exactly self-explanatory.&lt;/p&gt;

&lt;p&gt;To help with readability, each group can be explicitly-named. To assign a name to a search-group, one uses the syntax ?P&amp;lt;LABEL_NAME&amp;gt; at the beginning of the search-group. Once the group is assigned a name, it can subsequently be referenced by that name using the syntax "\g&amp;lt;LABEL_NAME&amp;gt;".&lt;/p&gt;
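&lt;p&gt;The same regexp and replacement-string can be exercised outside of Ansible. The following sketch runs them through Python's re module (the engine behind lineinfile's backrefs support) via a shell-invoked &lt;code&gt;python3&lt;/code&gt;, using the pre-change line from above as input:&lt;br&gt;&lt;/p&gt;

```shell
# Demonstrate the named-group substitution on the original aide.conf line
line='LOG = p+u+g+n+acl+selinux+ftype+sha512+xattrsfor'
new_line="$(python3 -c '
import re, sys
pattern = r"^#?(?P<log>LOG)(?P<equals>(\s?)(=)(\s?))(?P<starttoks>.*\w)(?P<rmtok>\+?sha512)(?P<endtoks>\+?.*)"
print(re.sub(pattern, r"\g<log>\g<equals>\g<starttoks>\g<endtoks>", sys.argv[1]))
' "$line")"
echo "$new_line"
```

&lt;p&gt;Note that the &lt;code&gt;rmtok&lt;/code&gt; group is matched but deliberately omitted from the replacement-string – that omission is what drops the "+sha512" token.&lt;/p&gt;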

&lt;p&gt;If one visits the &lt;a href="https://dev.to/scottw/-regex-101-ei9-temp-slug-1258619"&gt;Regex101&lt;/a&gt; web-site and selects the "Python" regex-type from the left menu, one can get a visual representation of how the above regexp gets interpreted. Enter the string to be evaluated in the "TEST STRING" section and then enter the value of the regexp parameter in the REGULAR EXPRESSION box. The site will then show you how the regex chops up the test string and tell you why it chopped it up that way:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--qILEZtoS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgRPrPUtCDftaVz2-_MvC5uFnP63LUX710XSRffuucoK-OT5jIO0rFTtWUrw4Mmei0sKX2fiecY8lJ5k1Uos8aHJwUA4m3Od29Z4hd2SA05sMxSrbqEczDgx9a5t9FTSNk2jKhfxAr8ghzjNUkLNGtFen6GTYL--nh1puCJsn7_cOt-euBUSrGL9-1L/w790-h479/Regex101.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--qILEZtoS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgRPrPUtCDftaVz2-_MvC5uFnP63LUX710XSRffuucoK-OT5jIO0rFTtWUrw4Mmei0sKX2fiecY8lJ5k1Uos8aHJwUA4m3Od29Z4hd2SA05sMxSrbqEczDgx9a5t9FTSNk2jKhfxAr8ghzjNUkLNGtFen6GTYL--nh1puCJsn7_cOt-euBUSrGL9-1L/w790-h479/Regex101.png" alt="Regex101 Screen-cap" width="" height=""&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ansible</category>
      <category>lineinfile</category>
      <category>regex</category>
    </item>
    <item>
      <title>Dense Coding and Code Explainers</title>
      <dc:creator>Thomas H Jones II</dc:creator>
      <pubDate>Tue, 23 Aug 2022 12:51:00 +0000</pubDate>
      <link>https://dev.to/ferricoxide/dense-coding-and-code-explainers-1bmo</link>
      <guid>https://dev.to/ferricoxide/dense-coding-and-code-explainers-1bmo</guid>
      <description>&lt;p&gt;My role with my employer is frequently best described as "smoke jumper". That is, a given project they're either priming or, more frequently, subbing on is struggling and the end customer has requested further assistance getting things more on the track they were originally expecting to be on. How I'm usually first brought onto such projects is automation-support "surge".&lt;/p&gt;

&lt;p&gt;In this context, "surge" means resources brought in to either fill gaps in the existing project-team created by turnover or augmenting that team with additional expertise. Most frequently, that's helping them either update and improve existing automation or write new automation. Either way, the code I tend to deliver tends to be both fairly compact and dense as well as flexible compared to what they've typically delivered to date.&lt;/p&gt;

&lt;p&gt;One of my first-principles in delivering new functionality is to attempt to do so in a way that is easily deactivated or backed out. This project team, like others I've helped, uses Terraform, but in a not especially modular or function-isolating way. All of the deployments consist of main.tf, vars.tf and outputs.tf files and occasional "template" files (usually simple &lt;a href="https://tldp.org/LDP/abs/html/here-docs.html"&gt;HERE documents&lt;/a&gt; with simple variable-substitution actions). While they &lt;em&gt;do&lt;/em&gt; (fortunately) make use of some data providers, they're not really disciplined about &lt;em&gt;where&lt;/em&gt; they implement them. They embed them in either or both of a given service's main.tf and vars.tf files. Me? I generally like all of my data-providers in data.tf type of files, as it aids consistency and keeps the &lt;em&gt;type&lt;/em&gt; of contents in the various, individual files "clean" from an offered-functionality standpoint.&lt;/p&gt;

&lt;p&gt;Similarly, if I'm using templated content, I prefer to deliver it in ways that sufficiently externalize the content to allow appropriate linters to be run on it. This kind of externalization not only allows such files to be more easily linted but, because it tends to remove encapsulation effects, also makes debugging or extending the externalized content easier.&lt;/p&gt;

&lt;p&gt;On a recent project, I was tasked with helping them automate the deployment of VPC endpoints into their AWS accounts. The customer was trying, to the greatest extent possible, to prevent their project-traffic from leaving their VPCs.&lt;/p&gt;

&lt;p&gt;When I started the coding-task, the customer wasn't able to tell me &lt;em&gt;which&lt;/em&gt; specific services they wanted or needed so-enabled. Knowing that each such service-endpoint comes with recurring costs and not wanting them to accidentally break the bank, I opted to write my code in a way that, absent operator input, would deploy &lt;em&gt;all&lt;/em&gt; AWS endpoint services into their VPCs but also allow them to easily dial things back when the first (shocking) bills came due.&lt;/p&gt;

&lt;p&gt;The code I delivered worked well. However, as familiar with the framework as the incumbent team was, they were left a bit perplexed by the code I delivered. They asked me to do a walkthrough of the code for them. Knowing the history of the project – both from a paucity-of-documentation and staff-churn perspective – I opted to write an explainer document. What follows is that explanation.&lt;/p&gt;

&lt;p&gt;Firstly, I delivered my contents as four additional files rather than injecting my code into their existing main.tf, vars.tf and outputs.tf file-set. Doing so allowed them to wholly disable the functionality simply by nuking the files I delivered rather than having to do file-surgery on their normal file-set. As my customer operates in multiple AWS partitions, this also makes it easier to roll back changes if a deployment-partition's APIs are older than the development-partition's. The file-set I delivered was an endpoint_main.tf, endpoint_data.tf, endpoint_vars.tf and an endpoint_services.tpl.hcl file. Respectively, these files encapsulate: primary functionality; data-provider definitions; definitions of variables used in the "main" and "data" files; and an HCL-formatted default-endpoints template-file.&lt;/p&gt;

&lt;p&gt;The most basic/easily-explained file is the default-endpoints template file, &lt;a href="https://github.com/ferricoxide/BloggerBits/blob/main/EndpointsArticle/endpoint_services.tpl.hcl"&gt;endpoint_services.tpl.hcl&lt;/a&gt;. The file consists of map-objects encapsulated in a larger list structure. The map-objects consist of name and type attribute-pairs. The name values were derived by executing:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws ec2 describe-vpc-endpoint-services \
  --query 'ServiceDetails[].{Name:ServiceName,Type:ServiceType[].ServiceType}' | \
sed -e '/\[$/{N;s/\[\n *"/"/;}' -e '/^[][]*]$/d' | \
tr '[:upper:]' '[:lower:]'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And then changing the literal region-names to the string "${endpoint_region}". This change allows Terraform's &lt;a href="https://www.terraform.io/language/functions/templatefile"&gt;templatefile()&lt;/a&gt; function to sub in the desired value when the template file is read – making the automation portable across both regions and partitions. The read of the template-file's contents is also wrapped in Terraform's &lt;a href="https://www.terraform.io/language/functions/jsondecode"&gt;jsondecode()&lt;/a&gt; function. This encapsulation is necessary to allow the templatefile() function to properly read the file in (so that the variable-substitution can occur).&lt;/p&gt;
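&lt;p&gt;That region-name substitution was a one-time, by-hand edit, but it's easy enough to sketch with &lt;code&gt;sed&lt;/code&gt; – the sample input-line and the "us-east-1" region below are invented for illustration:&lt;br&gt;&lt;/p&gt;

```shell
# Swap a hard-coded region in a captured service-name line for the
# template placeholder (input line and region are illustrative)
rendered="$(printf '%s' '"name": "com.amazonaws.us-east-1.s3",' |
  sed -e 's/us-east-1/${endpoint_region}/')"
echo "$rendered"
```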

&lt;p&gt;Because I wanted the use of this template-file to be the fallback (default) behavior, I needed to declare its use as a fallback. This was done in the &lt;a href="http://endpoint_data.tf"&gt;endpoint_data.tf&lt;/a&gt; file's locals {} section:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;locals {
  vpc_endpoint_services = length(var.vpc_endpoint_services) == 0 ? jsondecode(
    templatefile(
      "./endpoint_services.tpl.hcl",
      {
        endpoint_region = var.region
      }
    )
  ) : var.vpc_endpoint_services
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the above, we're using a &lt;a href="https://www.terraform.io/language/expressions/conditionals"&gt;ternary&lt;/a&gt; evaluation to set the value of the locally-scoped vpc_endpoint_services variable. If the size of the globally-scoped vpc_endpoint_services variable is "0", &lt;em&gt;then&lt;/em&gt; the template file is used; otherwise, the content of the globally-scoped vpc_endpoint_services variable is used. The template-file's use is effected by using the templatefile() function to read the file in while substituting all occurrences of "${endpoint_region}" in the file with the value of the globally-scoped "region" variable.&lt;/p&gt;

&lt;p&gt;Note: The surrounding jsondecode() function is used to convert the file-stream from the format previously set using the jsonencode() function at the beginning of the file. I'm not a fan of having to resort to this kind of kludgery, but, without it, the templatefile() function would error out when trying to populate the vpc_endpoint_services variable. If any reader has a better idea of how to attain the functionality desired in a less-kludgey way, please comment.&lt;/p&gt;

&lt;p&gt;Where my customer needed the most explanation was the logic in the section:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;data "aws_vpc_endpoint_service" "this" {
  for_each = {
    for service in local.vpc_endpoint_services :
      "${service.name}:${service.type}" =&amp;gt; service
  }

  service_name = length(
    regexall(
      var.region,
      each.value.name
    )
  ) == 1 ? each.value.name : "com.amazonaws.${var.region}.${each.value.name}"

  service_type = title(each.value.type)
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This section leverages Terraform's &lt;a href="https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/vpc_endpoint_service"&gt;aws_vpc_endpoint_service&lt;/a&gt; data-source. My code gives it the reference id "this". Not a terribly original or otherwise noteworthy label, but, absent the need for multiple such references, it will do.&lt;/p&gt;

&lt;p&gt;The for_each meta-argument iterates over the values stored in the locally-scoped vpc_endpoint_services object-variable. As it loops, it assigns each dictionary-object – the name and type attribute-pairs – to the service loop-variable. In turn, the loop iteratively exports an each.value.name and each.value.type variable.&lt;/p&gt;

&lt;p&gt;I &lt;em&gt;could&lt;/em&gt; have set the service_name variable to simply equal the each.value.name variable's value; however, I wanted to make life a bit less onerous for the automation-user: instead of needing to specify the &lt;em&gt;full&lt;/em&gt; service-name path-string, the short-name can be specified. Using the &lt;a href="https://www.terraform.io/language/functions/regexall"&gt;regexall()&lt;/a&gt; function to see whether the value of the globally-scoped region variable is present in the each.value.name variable's value allows the &lt;a href="https://www.terraform.io/language/functions/length"&gt;length()&lt;/a&gt; function to be used as part of a ternary definition for the service_name variable. If the returned length is "0", the operator-passed service-name is prepended with the fully-qualified service-path typically valid for the partition's region; if the returned length is "1", the value already stored in the each.value.name variable is used.&lt;/p&gt;

&lt;p&gt;Similarly, I didn't want the operator to need to care about the case of the service-type they were specifying. As such, I let Terraform's title() function take care of setting the proper case of the each.value.type variable's value.&lt;/p&gt;

&lt;p&gt;The service_type and service_name values are then returned when the data-provider is called from the &lt;a href="http://endpoint_main.tf"&gt;endpoint_main.tf&lt;/a&gt; file's locals {} block is processed:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;locals {
  […elided…]
  # Split Endpoints by their type
  gateway_endpoints = toset(
    [
      for e in data.aws_vpc_endpoint_service.this :
        e.service_name if e.service_type == "Gateway"
    ]
  )

  interface_endpoints = toset(
    [
      for e in data.aws_vpc_endpoint_service.this :
        e.service_name if e.service_type == "Interface"
    ]
  )
  […elided…]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The gateway_endpoints and interface_endpoints locally-scoped variables are each list-variables. Each is populated with the service-names returned from the data.aws_vpc_endpoint_service.this data-provider whose service_type value matches. These list-vars are then iteratively processed in the relevant resource "aws_vpc_endpoint" "interface_services" and resource "aws_vpc_endpoint" "gateway_services" stanzas.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>terraform</category>
      <category>vpc</category>
    </item>
    <item>
      <title>Hop, Skip and a Jump</title>
      <dc:creator>Thomas H Jones II</dc:creator>
      <pubDate>Fri, 03 Jun 2022 19:50:00 +0000</pubDate>
      <link>https://dev.to/ferricoxide/hop-skip-and-a-jump-lmh</link>
      <guid>https://dev.to/ferricoxide/hop-skip-and-a-jump-lmh</guid>
      <description>&lt;p&gt;A few weeks ago, I got assigned to a new project. Like a lot of my work, it's fully remote. Unlike most of my prior such gigs, while the customer &lt;em&gt;does&lt;/em&gt; implement network-isolation for their cloud-hosted resources, they aren't leveraging any kind of trusted developer desktop solution (virtualized – cloud-hosted or otherwise – or via customer-issued, hardened, VPN-enabled laptop). Instead, they have per-environment bastion-clusters and leverage IP white-listing to allow remote access to those bastions. To make managing that white-listing less onerous, they require each of their vendors to coalesce all of the vendor-employees behind a single origin-IP.&lt;/p&gt;

&lt;p&gt;Working for a small company, the way we ended up implementing things was to put a Linux-based EC2 (our "jump-box") behind an EIP. The customer adds &lt;em&gt;that&lt;/em&gt; IP to their bastions' whitelist-set. That EC2 is also configured with a default-deny security-group with each of the team members' "work-from" (usually "home") IP addresses whitelisted.&lt;/p&gt;

&lt;p&gt;Not wanting to incur pointless EC2 charges, the EC2 is in a single-node AutoScaling Group (ASG) with scheduled scaling actions. At the beginning of each business day, the scheduled scaling-action takes the instance-count from 0 to 1. Similarly, at the end of each business day, the scheduled scaling-action takes the instance-count from 1 to 0. &lt;/p&gt;

&lt;p&gt;This deployment-management choice not only reduces compute-costs but also ensures that there's no host available to attack outside of business hours (in case the default-deny + whitelisted source IPs isn't enough protection). Since the auto-scaled instance's launch-automation includes an "apply all available patches" action, each day's EC2 is fully updated with respect to security and other patches. Further, on the off chance that someone &lt;em&gt;had&lt;/em&gt; broken into a given instantiation, any beachhead they established goes "poof!" when the end-of-day scale-to-zero action occurs.&lt;/p&gt;

&lt;p&gt;Obviously, it's not an absolutely 100% bulletproof safety-setup, but it does raise the bar fairly high for would-be attackers.&lt;/p&gt;

&lt;p&gt;At any rate, beyond our "jump box" are the customer's bastion nodes and their trusted-IPs list. From the customer-bastions, we can then access the hosts that they have configured for running development activities. While they don't rebuild their bastions or their "developer host" instances as frequently as we do our "jump box", we have been trying to nudge them in a similar direction.&lt;/p&gt;

&lt;p&gt;For further fun, the customer-systems require using a 2FA token to access. Fortunately, they use PIN-protected SmartCards rather than something like RSA fobs. &lt;/p&gt;

&lt;p&gt;Overall, to get to the point where I'm able to either SSH into the customer's "developer host" instances or use VSCode's git-over-ssh capabilities, I have to go:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Laptop&lt;/li&gt;
&lt;li&gt;(Employer's) Jump Box&lt;/li&gt;
&lt;li&gt;(Customer's) Bastion&lt;/li&gt;
&lt;li&gt;(Customer's) Development host&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Wanting to keep my customer-work as completely separate from the rest of my laptop's main environment as possible, I use Hyper-V to run a purpose-specific RHEL8 VM. For next-level fun/isolation/etc., my VM's vDisk is LUKS-encrypted. I configure my VM to provide token-passthrough, to make it easy to do my SmartCard-authenticated access to the customer-system(s). But, still, there's a whole lot of hop-skip-jump just to be able to start running my code-editor and pushing commits to their git-based SCM host.&lt;/p&gt;

&lt;p&gt;During screen-sharing sessions, I've observed both my company's other consultants and the customer's other vendors' consultants executing these long-assed SSH commands. Basically, they do something like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ ssh -N -L &amp;lt;LOCAL_PORT&amp;gt;:&amp;lt;REMOTE_HOST&amp;gt;:&amp;lt;REMOTE_PORT&amp;gt; &amp;lt;USER&amp;gt;@&amp;lt;REMOTE_HOST&amp;gt; -i ~/.ssh/key.pub
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;…and they do it for each of hosts 2-4 (or just 3 &amp;amp; 4 for the consultants that are VPNing to a trusted network). Further, to keep each hop's connection open, they fire up &lt;code&gt;top&lt;/code&gt; (or similar) after each hop's connection is established.&lt;/p&gt;

&lt;p&gt;I'm a lazy typist. So, &lt;em&gt;just one&lt;/em&gt; of those ssh invocations makes my soul hurt. In general, I'm a big fan of the capabilities afforded by a suitably-authored ${HOME}/.ssh/config file. Prior to this engagement, I mostly used mine just to set up host-aliases and to ensure that things like SSH key and X11 forwarding were enabled. However, I figured there was a way to further configure things so as to require a &lt;em&gt;lot&lt;/em&gt; fewer key-strokes for this project's connection-needs. So, I started digging around.&lt;/p&gt;

&lt;p&gt;Ultimately, I found that OpenSSH's client-configuration offers a beautiful option for making my life require far fewer keystrokes and for eliminating the need to start "keep the session alive" processes. That option is the "ProxyJump" directive (combined with suitable "LocalForward" and, while we're at it, "User" directives). In short, I set up one stanza to define my connection to my "jump box". Then I added a stanza that defines my connection to the customer's bastion, using the "ProxyJump" directive to tell it "use the jump box to reach the bastion host". Finally, I added a stanza that defines my connection to the customer's development host, using the "ProxyJump" directive to tell it "use the bastion host to reach the development host". Since I've also added the requisite key- and X11-forwarding directives, as well as remote service-tunneling directives, all I have to do is type:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ ssh &amp;lt;CUSTOMER&amp;gt;-dev
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
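
&lt;p&gt;For the curious, a minimal sketch of what such a ${HOME}/.ssh/config might look like (all of the host names, aliases, users and ports below are illustrative placeholders, not the actual values):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Host jumpbox
  HostName jumpbox.example.com
  User &amp;lt;USER&amp;gt;
  ForwardAgent yes
  ForwardX11 yes

Host &amp;lt;CUSTOMER&amp;gt;-bastion
  HostName bastion.customer.example
  User &amp;lt;USER&amp;gt;
  ProxyJump jumpbox
  ForwardAgent yes

Host &amp;lt;CUSTOMER&amp;gt;-dev
  HostName dev01.customer.example
  User &amp;lt;USER&amp;gt;
  ProxyJump &amp;lt;CUSTOMER&amp;gt;-bastion
  ForwardAgent yes
  ForwardX11 yes
  LocalForward 8080 localhost:8080
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;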



&lt;p&gt;And, after the few seconds it takes the SSH client to negotiate three linked SSH connections, I'm given a prompt on the development host. No need to type "ssh …" multiple times and no need to start &lt;code&gt;top&lt;/code&gt; on each hop.&lt;/p&gt;
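
&lt;p&gt;Worth noting: for one-off connections, the same chaining is available without a config file via the SSH client's &lt;code&gt;-J&lt;/code&gt; flag, the command-line equivalent of the "ProxyJump" directive (host names here are, again, placeholders):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ ssh -J &amp;lt;USER&amp;gt;@jumpbox,&amp;lt;USER&amp;gt;@bastion &amp;lt;USER&amp;gt;@devhost
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;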

&lt;p&gt;Side note: since each of the hops also implement login banners, adding:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;LogLevel error
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To each stanza saves a &lt;em&gt;crapton&lt;/em&gt; of banner-text from flying by and polluting your screen (and preserves your scrollback buffer!).&lt;/p&gt;

&lt;p&gt;As a bit of a closing note: if any of the intermediary nodes are rebuilt with any frequency (in a way that causes a given remote's HostKey to change), adding:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;UserKnownHostsFile /dev/null
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To your config will save you from polluting your ${HOME}/.ssh/known_hosts file with no-longer-useful entries for a given SSH host-alias. Similarly, if you want to suppress the "unknown key" prompts, you can add:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;StrictHostKeyChecking false
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To a given host's configuration-stanza. &lt;strong&gt;Warning: the accorded convenience &lt;em&gt;does&lt;/em&gt; come with the potential cost of exposing you to undetected man-in-the-middle attacks.&lt;/strong&gt;&lt;/p&gt;
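
&lt;p&gt;Pulling those last few directives together, a given intermediary-host stanza might end up looking like the following (host names and user are placeholders):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Host &amp;lt;CUSTOMER&amp;gt;-bastion
  HostName bastion.customer.example
  User &amp;lt;USER&amp;gt;
  ProxyJump jumpbox
  LogLevel error
  UserKnownHostsFile /dev/null
  StrictHostKeyChecking no
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;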

</description>
      <category>remotedevelopment</category>
      <category>ssh</category>
      <category>tunnel</category>
    </item>
  </channel>
</rss>
