<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Alex Eversmeyer</title>
    <description>The latest articles on DEV Community by Alex Eversmeyer (@alexeversmeyer).</description>
    <link>https://dev.to/alexeversmeyer</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F562316%2F4e8efbd1-7e7f-42be-8757-803edbf93e65.jpg</url>
      <title>DEV Community: Alex Eversmeyer</title>
      <link>https://dev.to/alexeversmeyer</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/alexeversmeyer"/>
    <language>en</language>
    <item>
      <title>Demo Mode: Using State in Streamlit</title>
      <dc:creator>Alex Eversmeyer</dc:creator>
      <pubDate>Tue, 17 May 2022 23:08:56 +0000</pubDate>
      <link>https://dev.to/alexeversmeyer/demo-mode-using-state-in-streamlit-1482</link>
      <guid>https://dev.to/alexeversmeyer/demo-mode-using-state-in-streamlit-1482</guid>
      <description>&lt;p&gt;My &lt;a href="https://skyboy.app"&gt;Skyboy&lt;/a&gt; application is a niche product, but I think it's pretty cool! (Let's be honest, I'm a little biased. I did write it for myself, after all.) It's a Python web app using the &lt;a href="https://streamlit.io"&gt;Streamlit&lt;/a&gt; framework.&lt;/p&gt;

&lt;p&gt;Initially, there wasn't a way to preview what Skyboy can do with FPV drone flight telemetry data. While the file upload function works, it probably hasn't been generalized enough to handle telemetry logs from other users. Having demo data available lets users preview the app's functionality, which is particularly useful for anyone who doesn't have telemetry logs of their own.&lt;/p&gt;

&lt;h2&gt;
  
  
  Accessing Demo Data
&lt;/h2&gt;

&lt;p&gt;My first task was to decide where to store the demo telemetry file. I could have packaged it up into the app's Docker container, but I wanted to start integrating other AWS services and so I uploaded the file to an S3 bucket.&lt;/p&gt;

&lt;p&gt;I played around with trying to create a signed URL to grant the application access to the file, but was unable to make that work. My suspicion is that it had something to do with the file's access permissions. After several attempts, I backed up and decided to go a simpler route. I created an IAM user whose only permissions are for read access to that S3 bucket, and stored that user's credentials as Secrets on GitHub so they could be injected into the application when its image is built.&lt;/p&gt;

&lt;p&gt;From there, I wrote a simple function that creates an S3 client using boto3 and downloads the file to memory. I coded a 'Demo' button and set it to run the download function, and made sure the visualization section of the code's logic checked whether or not the button had been clicked.&lt;/p&gt;
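&lt;p&gt;The download function boils down to a single &lt;code&gt;get_object&lt;/code&gt; call. Here's a minimal sketch with the client passed in so it can be exercised without AWS credentials; the function, bucket, and key names are illustrative, not Skyboy's actual code:&lt;/p&gt;

```python
import io


def download_demo_telemetry(s3_client, bucket: str, key: str) -> io.BytesIO:
    """Fetch an S3 object and return its contents as an in-memory buffer."""
    response = s3_client.get_object(Bucket=bucket, Key=key)
    return io.BytesIO(response["Body"].read())
```

&lt;p&gt;In the app itself, the client would come from &lt;code&gt;boto3.client("s3")&lt;/code&gt;, configured with the read-only IAM user's credentials.&lt;/p&gt;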

&lt;h2&gt;
  
  
  Disappearing Act
&lt;/h2&gt;

&lt;p&gt;So far, so good: clicking the button downloaded the file, and the rest of the code read the file and produced its usual visualizations. Now, let's see those numbers in imperial units...&lt;/p&gt;

&lt;p&gt;&lt;em&gt;-selects the appropriate checkbox-&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;... wait a second. Where did all the fancy metrics and charts go?&lt;/p&gt;

&lt;p&gt;With an uploaded telemetry file, selecting that checkbox converts the units and displays feet, miles per hour, and so on. But with the demo data, the application reset itself to its initial appearance. Confused, I attempted to solve this bug by rearranging some of my logic, but after a few increasingly frustrating attempts, I shut my computer down and walked away for the night.&lt;/p&gt;

&lt;h2&gt;
  
  
  A Stateful Solution
&lt;/h2&gt;

&lt;p&gt;The next morning, I re-engaged with this problem by going back through Streamlit's &lt;a href="https://docs.streamlit.io"&gt;documentation&lt;/a&gt;. As I read the section about session state, it dawned on me that this feature was exactly what I needed.&lt;/p&gt;

&lt;p&gt;Streamlit applications re-run themselves from the top every time some input (like a button, or checkbox, or menu) changes. I knew about this behavior, but hadn't experienced any issues with it until attempting to implement demo mode. When I looked carefully at my code, I found the source of the bug: I had conditional logic checking to see if the Demo button had been clicked. Naturally, if some other input had changed, the app would re-run, but within that subsequent run, the button would not have been clicked and the app would not load the demo data. Therefore, I needed a way for the app to know if the demo mode had been activated that persisted between runs.&lt;/p&gt;
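&lt;p&gt;The failure mode is easy to reproduce without Streamlit at all. Here's a plain-Python sketch (names illustrative) that models &lt;code&gt;st.session_state&lt;/code&gt; as a dict surviving across top-to-bottom re-runs of the script:&lt;/p&gt;

```python
def run_app(session_state: dict, button_clicked: bool) -> str:
    """One 'run' of the script, as Streamlit would execute it on each input change."""
    if "demomode" not in session_state:
        session_state["demomode"] = False
    if button_clicked:                    # the Demo button was clicked on *this* run
        session_state["demomode"] = True
    return "visualizations" if session_state["demomode"] else "initial page"


state = {}                                # persists between runs, like st.session_state
run_app(state, button_clicked=True)       # demo mode activated
# A checkbox change triggers a re-run; the button is no longer "clicked",
# but the flag stored in session state keeps demo mode active:
assert run_app(state, button_clicked=False) == "visualizations"
```

&lt;p&gt;Checking &lt;code&gt;button_clicked&lt;/code&gt; alone would fail on that second run; checking the persisted flag does not.&lt;/p&gt;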

&lt;p&gt;I created a &lt;code&gt;demomode&lt;/code&gt; key in the session state and initialized it to &lt;code&gt;False&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;if 'demomode' not in st.session_state:
    st.session_state.demomode = False
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Clicking the Demo button changes this variable's value to &lt;code&gt;True&lt;/code&gt;, which then persists between runs as long as the application's browser tab remains open.  The app's conditionals reference this persistent variable to determine if the demo telemetry should be read or loaded from the cache:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;if not uploaded_file and not st.session_state.demomode:
    # display initial markdown with text and images
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;and&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;if st.session_state.demomode or (uploaded_file is not None):
    # read and/or load data from the appropriate source and
    # call the visualization functions
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With the application checking session state to see if demo mode has been activated, the other sidebar inputs to manipulate the visualizations work as expected!&lt;/p&gt;

&lt;h2&gt;
  
  
  Wrap-up
&lt;/h2&gt;

&lt;p&gt;Some lessons learned from implementing a demo mode in Skyboy:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;the benefit of reading and re-reading documentation;&lt;/li&gt;
&lt;li&gt;the value of being able to persistently store information in an application's state;&lt;/li&gt;
&lt;li&gt;and the power of walking away and creating processing time when faced with a difficult challenge.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Check out the Skyboy GitHub repository &lt;a href="https://github.com/aeversme/skyboy-app"&gt;here&lt;/a&gt;!&lt;/p&gt;

</description>
      <category>streamlit</category>
      <category>python</category>
      <category>aws</category>
    </item>
    <item>
      <title>CI/CD: Branch-based Terraform Deployment</title>
      <dc:creator>Alex Eversmeyer</dc:creator>
      <pubDate>Thu, 21 Apr 2022 03:08:43 +0000</pubDate>
      <link>https://dev.to/alexeversmeyer/cicd-branch-based-terraform-deployment-m5e</link>
      <guid>https://dev.to/alexeversmeyer/cicd-branch-based-terraform-deployment-m5e</guid>
      <description>&lt;p&gt;For my &lt;a href="https://dev.to/alexeversmeyer/introducing-the-skyboy-app-3h22"&gt;Skyboy&lt;/a&gt; project, I chose to use Terraform to provision the application's infrastructure on Amazon Web Services (AWS), both because Terraform is already familiar to me and because I wanted to practice coding a more complex modular configuration. This decision led to several challenges and lots of good learning!&lt;/p&gt;

&lt;h2&gt;
  
  
  Terraform Modules
&lt;/h2&gt;

&lt;p&gt;With a very simple set of resources, it might be appropriate to limit a Terraform configuration to one directory and the usual set of files (&lt;code&gt;main.tf&lt;/code&gt;, &lt;code&gt;providers.tf&lt;/code&gt;, &lt;code&gt;variables.tf&lt;/code&gt;, and so on). This project, however, would require several different categories of resources: a VPC; an ECS cluster, service, and task definition; some IAM roles and permissions; and a load balancer.&lt;/p&gt;

&lt;p&gt;I broke up these categories into a directory structure like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform/
  - containers/
      - task-definitions/
      - main.tf
      - ...
  - iam/
      - main.tf
      - ...
  - loadbalancing/
      - main.tf
      - ...
  - vpc/
      - main.tf
      - ...
  - main.tf
  - providers.tf
  - ...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;where the &lt;code&gt;...&lt;/code&gt; represents the other files needed within each module (&lt;code&gt;variables.tf&lt;/code&gt; and/or &lt;code&gt;outputs.tf&lt;/code&gt;, among others).&lt;/p&gt;

&lt;p&gt;To keep myself from getting too confused as my configuration grew, I added a comment at the top of every Terraform file, such as &lt;code&gt;loadbalancing/main.tf&lt;/code&gt;, with the path and file name.&lt;/p&gt;

&lt;p&gt;The VPC and IAM modules were straightforward and didn't require many inputs or variables. Things got more interesting as I started setting up my load balancer and ECS resources. These modules needed certain pieces of information from other modules - for example, the load balancer has to know about the VPC subnets, and the ECS task definition looks for the ARN of its IAM task and execution role(s).&lt;/p&gt;

&lt;p&gt;Setting an output for subnet IDs in the VPC module's &lt;code&gt;outputs.tf&lt;/code&gt; file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;output "lb_subnets" {
  value = [for subnet in aws_subnet.skyboy_public_subnet : subnet.id]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;allows the list of subnet IDs to be passed to the Containers module in the root &lt;code&gt;main.tf&lt;/code&gt; file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;module "containers" {
  service_subnets = module.vpc.lb_subnets
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;which then gets passed to an ECS service within the Containers module in the &lt;code&gt;main.tf&lt;/code&gt; file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_ecs_service" "skyboy_service" {
  network_configuration {
    subnets = var.service_subnets
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;with the additional requirement that &lt;code&gt;var.service_subnets&lt;/code&gt; is defined within the &lt;code&gt;variables.tf&lt;/code&gt; file in the Containers module as well. It can get a little tricky to keep track of what's been defined in which files; thankfully, my IDE of choice for this project (PyCharm) has a great Terraform plugin that detects the presence or absence of variable definitions between files and modules, which helped to keep things straight.&lt;/p&gt;
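&lt;p&gt;For completeness, a sketch of the matching declaration in the Containers module's &lt;code&gt;variables.tf&lt;/code&gt; (the description text is mine; the variable name matches the snippets above):&lt;/p&gt;

```hcl
# terraform/containers/variables.tf
variable "service_subnets" {
  description = "Public subnet IDs passed in from the VPC module"
  type        = list(string)
}
```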

&lt;h2&gt;
  
  
  Deployment Considerations
&lt;/h2&gt;

&lt;p&gt;As I was preparing to deploy my project, I created an AWS organization that oversees a development account and a production account. That meant I would need to figure out how to deploy the Terraform configuration to the appropriate account so that, once I had infrastructure spun up in production, I could spin up a new stack to test changes and not worry about any conflicts that might take the application down.&lt;/p&gt;

&lt;p&gt;Problems to solve included:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;storing dev and prod state files in separate locations;&lt;/li&gt;
&lt;li&gt;using the correct AWS account credentials;&lt;/li&gt;
&lt;li&gt;having a way to easily tear down provisioned infrastructure;&lt;/li&gt;
&lt;li&gt;passing the correct Docker image URI to Terraform;&lt;/li&gt;
&lt;li&gt;and creating the correct load balancer listeners.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;(The dev account does not use a Route 53 Hosted Zone with a registered domain for DNS routing to the load balancer, so that account only needs a listener on port 80; making an HTTP request to the load balancer endpoint is sufficient to ensure the infrastructure is set up correctly. The prod account, on the other hand, needs two listeners: one to redirect HTTP traffic on port 80 to HTTPS on port 443, and another to forward HTTPS traffic to the load balancer target group. Requests to the application's domain can verify that the domain's certificate is valid and then trigger the application to launch.)&lt;/p&gt;
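&lt;p&gt;As a sketch of what the prod-only HTTP-to-HTTPS listener might look like (resource names here are illustrative, and the HTTPS forward listener on port 443 would sit alongside it):&lt;/p&gt;

```hcl
# loadbalancing/main.tf (prod only) - redirect HTTP traffic to HTTPS
resource "aws_lb_listener" "http_redirect" {
  load_balancer_arn = aws_lb.skyboy_lb.arn
  port              = 80
  protocol          = "HTTP"

  default_action {
    type = "redirect"

    redirect {
      port        = "443"
      protocol    = "HTTPS"
      status_code = "HTTP_301"
    }
  }
}
```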

&lt;p&gt;The final consideration was that I wanted to do all of this with as little code repetition as possible.&lt;/p&gt;

&lt;p&gt;Since I had already set up a reusable GitHub Actions workflow for building and pushing the application image, I chose to stay consistent and do the same for Terraform.&lt;/p&gt;

&lt;h2&gt;
  
  
  Branch-based Actions
&lt;/h2&gt;

&lt;p&gt;I created three YAML files in the repository's &lt;code&gt;.github/workflows&lt;/code&gt; directory:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apply_terraform.yml
dev_apply_tf.yml
main_apply_tf.yml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The first file, &lt;code&gt;apply_terraform.yml&lt;/code&gt;, is the reusable workflow. In the &lt;code&gt;on:&lt;/code&gt; section, which defines the workflow's trigger(s), instead of a git action (push, pull_request, etc.), I used &lt;code&gt;workflow_call&lt;/code&gt;, which indicates that this workflow can be called by another workflow. Within &lt;code&gt;workflow_call&lt;/code&gt;, I defined &lt;code&gt;inputs&lt;/code&gt; and &lt;code&gt;secrets&lt;/code&gt; that would be passed into this workflow at calling time.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;jobs:&lt;/code&gt; section looks like any other GitHub Actions workflow, with one exception: where repository secrets might otherwise be called, the code instead references the secrets that are passed in via the &lt;code&gt;workflow_call&lt;/code&gt;. At one point, this led to several minutes of frustration as I attempted to pass a Terraform Cloud token directly into the reusable workflow but kept getting errors and aborted workflow runs. The solution, oddly, was to call the repository secret in the branch-based workflow and pass it into the reusable workflow.&lt;/p&gt;

&lt;p&gt;The two branch-based workflows are identical in structure and are both quite short (as workflows go):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;name: Call apply-terraform from dev branch

on:
  push:
    branches:
      - dev
    paths:
      - 'terraform/**'

jobs:
  apply-tf:
    uses: ./.github/workflows/apply_terraform.yml
    with:
      workspace_name: 'skyboy-dev'
      listeners: 'devlisteners'
    secrets:
      image_uri: ${{ secrets.DEV_IMAGE_URI }}
      tf_token: ${{ secrets.TERRAFORM_TOKEN }}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The workflow is triggered by a push to the &lt;code&gt;dev&lt;/code&gt; branch, but only when the push changes files within the &lt;code&gt;terraform/&lt;/code&gt; directory of the repository; I don't want changes to the application itself, which lives in the same repository, to trigger Terraform runs. (See the Wrap-up for more thoughts on this.)&lt;/p&gt;

&lt;p&gt;The single workflow job uses the reusable workflow, and passes in certain inputs and repository secrets defined in the GitHub web console.&lt;/p&gt;

&lt;h2&gt;
  
  
  Solving Those Problems
&lt;/h2&gt;

&lt;p&gt;So, how does all that help me solve my multi-account deployment problems?&lt;/p&gt;

&lt;p&gt;After checking out the repository's code, the next step in the reusable workflow is to run a short bash script:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: Run tf_files script
  env:
   WORKSPACE: ${{ inputs.workspace_name }}
   IMAGE: ${{ secrets.image_uri }}
   LISTENERS: ${{ inputs.listeners }}
  run: ../.github/scripts/tf_files.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The script can easily access the environment variables that this step sets up, and it performs some basic file manipulations using templates:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;sets the Terraform Cloud workspace name in &lt;code&gt;backends.tf&lt;/code&gt;, so that the dev and main branch states are stored separately. Every &lt;code&gt;terraform apply&lt;/code&gt; happens remotely on Terraform Cloud, giving me the opportunity to store credentials within each workspace, and to tear down the deployed infrastructure easily;&lt;/li&gt;
&lt;li&gt;inserts the correct Docker image URI from the correct Elastic Container Repository into a task definition;&lt;/li&gt;
&lt;li&gt;and appends the correct listener(s) to the load balancing module's &lt;code&gt;main.tf&lt;/code&gt; file.&lt;/li&gt;
&lt;/ul&gt;
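&lt;p&gt;As a flavor of the first manipulation, here is a self-contained sketch (not the actual &lt;code&gt;tf_files.sh&lt;/code&gt;; the template file and placeholder are made up) of rendering the workspace name into a backend file:&lt;/p&gt;

```shell
#!/usr/bin/env bash
# Sketch of the substitution tf_files.sh performs: render a Terraform
# backend file from a template, inserting the workspace name that the
# workflow step exported as the WORKSPACE environment variable.
set -euo pipefail

render_backend() {
  # $1 = template path, $2 = output path
  sed "s/__WORKSPACE__/${WORKSPACE}/" "$1" > "$2"
}
```

&lt;p&gt;The same pattern covers the image URI and listener manipulations: a template file per variant, plus one substitution or append per environment variable.&lt;/p&gt;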

&lt;p&gt;After the script has completed on the GitHub runner, the workflow logs in to Terraform Cloud, runs &lt;code&gt;terraform init&lt;/code&gt;, &lt;code&gt;fmt&lt;/code&gt;, and &lt;code&gt;validate&lt;/code&gt;, and finally &lt;code&gt;apply&lt;/code&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Wrap-up
&lt;/h2&gt;

&lt;p&gt;Addressing the path-based push trigger: why not separate the application and infrastructure into separate repositories? Answer: because that would be too easy! I recognize that having the two in the same repository might not be a best practice, and if the application grows, I may separate them out. The current setup does allow me to keep the entire project in one IDE window, and makes it easier for anyone interested to see all the work that has gone into launching the Skyboy app.&lt;/p&gt;

&lt;p&gt;I'm pleased that I was able to set up a modular Terraform configuration for my app. Despite seeming simple in retrospect, adding a script and performing file manipulations during the GitHub Action workflow was another good complexity challenge to overcome, and will still be applicable if I break the infrastructure off into a separate repository.&lt;/p&gt;

&lt;p&gt;This write-up is only intended to convey my thought process and an outline of my solutions, and doesn't present enough detail to function as a guided tutorial. Feel free to get in touch if you're attempting something similar and would like clarification about anything I did. I'll do my best to help!&lt;/p&gt;

&lt;p&gt;Related articles:&lt;br&gt;
&lt;a href="https://dev.to/alexeversmeyer/introducing-the-skyboy-app-3h22"&gt;Introducing the Skyboy App&lt;/a&gt;&lt;br&gt;
&lt;a href="https://dev.to/alexeversmeyer/cicd-github-actions-with-docker-and-amazon-ecr-nal"&gt;CI/CD: GitHub Actions with Docker and Amazon ECR&lt;/a&gt;&lt;br&gt;
&lt;a href="https://spacelift.io/blog/what-are-terraform-modules-and-how-do-they-work"&gt;What Are Terraform Modules and How to Use Them: Tutorial&lt;/a&gt; on the spacelift.io blog&lt;/p&gt;

</description>
      <category>github</category>
      <category>terraform</category>
      <category>aws</category>
    </item>
    <item>
      <title>Design Narrative: Event-Driven Python on AWS</title>
      <dc:creator>Alex Eversmeyer</dc:creator>
      <pubDate>Mon, 18 Apr 2022 17:26:10 +0000</pubDate>
      <link>https://dev.to/alexeversmeyer/design-narrative-event-driven-python-on-aws-bm3</link>
      <guid>https://dev.to/alexeversmeyer/design-narrative-event-driven-python-on-aws-bm3</guid>
      <description>&lt;p&gt;&lt;em&gt;This article was originally posted on my self-hosted blog on August 10, 2021.&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The Goal
&lt;/h2&gt;

&lt;p&gt;"Automate an ETL processing pipeline for COVID-19 data using Python and cloud services": &lt;a href="https://acloudguru.com/blog/engineering/cloudguruchallenge-python-aws-etl"&gt;#CloudGuruChallenge – Event-Driven Python on AWS&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I saw this challenge when it was first posted in September 2020, but my Python and AWS skills at the time were not nearly good enough to tackle it. Fast-forward ten months and I was finally ready to give it a shot.&lt;/p&gt;

&lt;p&gt;The idea is simple: download some data, transform and merge it, load it into a database, and create some sort of visualization for it. In practice, of course, there were lots of choices to make and plenty of new things I needed to learn to be successful.&lt;/p&gt;

&lt;h2&gt;
  
  
  First things first: Python
&lt;/h2&gt;

&lt;p&gt;The data sources are .csv files, updated daily, from the New York Times and Johns Hopkins University, and both are published on GitHub. I started by downloading the raw files locally, extracting them into dataframes with Pandas, and creating a separate module that would do the work of transforming and merging the data. For my local script, I created a container class to act as a database, into which I could write each row of the resulting dataframe. This allowed me to figure out the necessary logic to determine if there was data in the 'database' or not, and therefore whether to write in the entire dataset or just load any new data that wasn't already there.&lt;/p&gt;

&lt;p&gt;Along the way, I worked through my first major learning objective of this challenge: unit testing. Somewhat surprisingly, the online bootcamp I took during the winter didn't teach code testing at all, and I was intimidated by the idea. After some research, I chose to go with pytest for its simplicity and easy syntax relative to Python's built-in unittest. With a little experimentation, I was able to write some tests for many of the functions I had written, and even dabbled a bit with some test-first development.&lt;/p&gt;
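&lt;p&gt;For a flavor of the pytest style: a test is just a &lt;code&gt;test_&lt;/code&gt;-prefixed function full of bare &lt;code&gt;assert&lt;/code&gt;s, which pytest discovers and runs. A hypothetical transform helper and its test might look like this:&lt;/p&gt;

```python
def to_daily_change(cumulative: list) -> list:
    """Convert a cumulative case series into day-over-day changes."""
    return [b - a for a, b in zip([0] + cumulative, cumulative)]


def test_to_daily_change():
    # pytest collects test_* functions and treats a failed assert as a failure
    assert to_daily_change([10, 15, 15, 20]) == [10, 5, 0, 5]
    assert to_daily_change([]) == []
```

&lt;p&gt;Running &lt;code&gt;pytest&lt;/code&gt; in the project directory picks tests like this up automatically, with no boilerplate test classes required.&lt;/p&gt;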

&lt;h2&gt;
  
  
  Decisions, decisions...
&lt;/h2&gt;

&lt;p&gt;Once my Python function was working locally, I had to decide which step to take next, as there were a couple choices. After some thinking, and discussing my ideas with my mentor, I went with my second learning objective: Terraform. I've worked a little with Infrastructure as Code in the form of AWS CloudFormation and the AWS Serverless Application Model, but I'd been meaning to try the provider-agnostic Terraform for several months.&lt;/p&gt;

&lt;p&gt;I started a separate PyCharm project, wrote a quick little Lambda function handler, and dove into the Terraform tutorials. Once I got the hang of the basics, I found a Terraform Lambda module and started plugging my own values into the template. A sticking point here was figuring out how to get Pandas to operate as a Lambda Layer - after failing to correctly build a layer myself (thank you, Windows), I found a prebuilt layer that worked perfectly and added it to my Terraform configuration as an S3 upload.&lt;/p&gt;

&lt;p&gt;I proved that Terraform worked when deploying locally, and then turned my attention to setting up a GitHub Action for automatic deployment. I combined pytest and Terraform into one workflow, with Terraform being dependent upon all tests passing, so that I had a full CI/CD pipeline from my local computer to GitHub and on to AWS via Terraform Cloud.&lt;/p&gt;

&lt;h2&gt;
  
  
  Starting to come together
&lt;/h2&gt;

&lt;p&gt;With deployment just a &lt;code&gt;git push&lt;/code&gt; away, it was time to start utilizing other AWS resources. This brought me to my third big learning objective: boto3. I recall being a bit overwhelmed by boto3 and its documentation last fall when I was working on the Resume Challenge. Fortunately, lots of practice reading documentation in the intervening months paid off, as it wasn't nearly as scary as I'd feared once I actually got started. I added SNS functionality first, so that I would get an email any time the database was updated or an error occurred. With that working nicely, it was time for another decision: what database to use?&lt;/p&gt;

&lt;p&gt;I used DynamoDB for the Resume Challenge, but that was just one cell being atomically incremented. Much of my database experience since then has been with various RDS instances, so I wanted to gain some more experience with AWS's serverless NoSQL option. Back to the documentation I went, as well as to Google to figure out the best way to overcome the batch-writing limits. Before long, my Lambda function was behaving exactly how I wanted, with everything still being deployed by Terraform.&lt;/p&gt;
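&lt;p&gt;The batch-writing limit in question: &lt;code&gt;BatchWriteItem&lt;/code&gt; accepts at most 25 items per request. boto3's &lt;code&gt;Table.batch_writer()&lt;/code&gt; handles this for you, but a hand-rolled version just chunks the items first; a sketch:&lt;/p&gt;

```python
def chunk(items: list, size: int = 25) -> list:
    """Split items into DynamoDB-sized batches (BatchWriteItem max is 25)."""
    return [items[i:i + size] for i in range(0, len(items), size)]
```

&lt;p&gt;Each chunk then goes into one batch write call, and any &lt;code&gt;UnprocessedItems&lt;/code&gt; reported in the response should still be retried.&lt;/p&gt;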

&lt;h2&gt;
  
  
  Finishing touches
&lt;/h2&gt;

&lt;p&gt;At this point, I was cruising along, and it was a simple matter to create an EventBridge scheduled rule to trigger my Lambda function once a day. It took a few tries to get the permissions and attachments set up correctly in Terraform, and once that was completed, I had to figure out the data visualization solution. I could have gone with AWS QuickSight, but I explored a bit and settled on using a self-hosted instance of Redash. Since there was already an EC2 AMI with Redash installed, I was able to add that to my Terraform configuration (although I cheated a wee bit and created a security group and IAM role for the instance in the console, in the name of finally finishing this project).&lt;/p&gt;

&lt;p&gt;With Redash up and running, and some simple visualizations scheduled to update daily, I reached the end of the project requirements earlier today. Huzzah!&lt;/p&gt;

&lt;h2&gt;
  
  
  Room for growth
&lt;/h2&gt;

&lt;p&gt;I'm happy with how this project went. I invested nearly 50 hours to get it going, due to the number of topics I had to teach myself along the way - a hefty but worthwhile time commitment over the past two weeks. A few things I think could or would get better with more learning and practice:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;I suspect my Terraform configuration is a little rough around the edges, and could probably be refactored a bit.&lt;/li&gt;
&lt;li&gt;Because so many things were new to me, I spent a lot of time in the console, manually coding and testing functionality in an account separate from the one I used for the finished product. It struck me, after almost everything was done, that this would have been an opportunity to learn more about using environment variables to create development and production stages, perhaps. I'm not sure if that would have been useful for this application, or if using two accounts was the most sensible way to go about this, but my workflow felt a bit kludgy to me.&lt;/li&gt;
&lt;li&gt;I spent a solid three hours rewriting my Terraform script because of what turned out to be an IAM permission scoping issue - yikes! I ended up going back to the Terraform configuration I had already been using, albeit with the right IAM permissions, because the module I was using for Lambda was more efficient at packaging code than Terraform's native config.&lt;/li&gt;
&lt;li&gt;My mentor and I worked through a lot of the Python together, and I found myself getting frustrated at my very basic understanding of object-oriented programming. While I didn't end up using any of my own classes in the final product, I can see that's a subject I should spend more time learning.&lt;/li&gt;
&lt;li&gt;It might have been nice to figure out some more complex visualizations, such as daily changes, but I wasn't sure how to go about that. I suspect my choice of querying my DynamoDB table directly from Redash, as opposed to porting the data to S3 for consumption by Athena or some other service, may have played a role in how complex I could get.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Aaaaaand done
&lt;/h2&gt;

&lt;p&gt;Many long nights and many more rabbit holes later, I can finally present my finished product!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/aeversme/cloud-challenge-python-etl"&gt;Click here&lt;/a&gt; for the Github repository, and &lt;a href="http://ec2-54-197-44-253.compute-1.amazonaws.com/public/dashboards/je73Y3VVlNgkhhQzjEd0xo4wOI2rQwFINT4DhQh2?org_slug=default"&gt;click here&lt;/a&gt; for the dashboard.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>python</category>
      <category>terraform</category>
      <category>github</category>
    </item>
    <item>
      <title>Design Narrative: Cloud Resume Challenge</title>
      <dc:creator>Alex Eversmeyer</dc:creator>
      <pubDate>Mon, 18 Apr 2022 17:19:53 +0000</pubDate>
      <link>https://dev.to/alexeversmeyer/design-narrative-cloud-resume-challenge-3e34</link>
      <guid>https://dev.to/alexeversmeyer/design-narrative-cloud-resume-challenge-3e34</guid>
      <description>&lt;p&gt;&lt;em&gt;This article was originally posted on my self-hosted blog on April 13, 2021.&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Getting started
&lt;/h2&gt;

&lt;p&gt;Back in October of last year (2020), I studied for and earned the AWS Solutions Architect Associate certification. The next day, I thought, "Now what?" I did a little poking around online and discovered Forrest Brazeal's Cloud Résumé Challenge. By then, I had missed out on the code review, but I decided that the challenge would be a great exercise anyway, both to gain experience and expose my knowledge gaps. I was right about both!&lt;/p&gt;

&lt;h2&gt;
  
  
  Frontend
&lt;/h2&gt;

&lt;p&gt;Coding the frontend of the resume from scratch, given my long hiatus from web development at that point, seemed like an enormous task, and I felt my time would be better spent on the architecture-related steps. I found a freely-available template and modified it to suit my taste and experience. The template is written in HTML and CSS and proved to be easy to work with. It also includes a snippet of JavaScript to make the dynamic visitor counter function.&lt;/p&gt;

&lt;p&gt;Once my draft of the resume was completed, I uploaded it manually to an S3 bucket. I then set up a CloudFront distribution for fast content delivery with a certificate for HTTPS security, registered a domain in Route53, and pointed the resume's subdomain at the CloudFront distribution. So far, so good.&lt;/p&gt;

&lt;p&gt;A brand new concept for me was the continuous integration/continuous deployment (CI/CD) workflow. Since my site's version control was being handled by Git and GitHub, it was a matter of finding a GitHub Actions workflow that uploaded my code to S3 and invalidated the CloudFront distribution every time I pushed an update, so that the latest copy of my site would always be visible. I set up some Secrets for my AWS resource names and credentials, added the workflow to my next commit, and watched as the whole thing failed! The issue was with the bucket name I had stored; once it was entered properly, the workflow succeeded and the frontend was basically complete (other than content editing).&lt;/p&gt;
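&lt;p&gt;For anyone curious what such a workflow looks like, here is a rough sketch using the AWS CLI directly (the secret names and action version are illustrative, not my actual workflow):&lt;/p&gt;

```yaml
# Sketch: deploy the static site to S3 and invalidate CloudFront on push
on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    env:
      AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
      AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
      AWS_DEFAULT_REGION: us-east-1
    steps:
      - uses: actions/checkout@v3
      - name: Sync site files to the bucket
        run: aws s3 sync . "s3://${{ secrets.BUCKET_NAME }}" --delete --exclude ".git/*"
      - name: Invalidate the CloudFront cache
        run: aws cloudfront create-invalidation --distribution-id "${{ secrets.DISTRIBUTION_ID }}" --paths "/*"
```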

&lt;h2&gt;
  
  
  Backend
&lt;/h2&gt;

&lt;p&gt;The backend consists of an AWS Lambda function, triggered by the JavaScript code, that communicates with a DynamoDB table via AWS's API Gateway. At this point, I hadn't written much code at all, nor had I worked with any of these services in any appreciable way, so the struggle became real very quickly. With the assistance of a friend who has a lot of coding experience, I did attempt to read the Lambda and boto3 documentation and fudge my way through some code. It was soon apparent that I wasn't able to handle the code on my own, however, and I had to start reviewing other people's code to figure out a solution.&lt;/p&gt;

&lt;p&gt;I eventually came up with something that I thought might work, so I set up a DynamoDB table, an API Gateway, and the Lambda code. However, I was unable to get everything to talk to each other, largely due to misconfiguration of the API (I think). At this point, I was feeling pretty frustrated with my lack of knowledge and inability to figure these challenges out despite hours of internet searches and reading. I didn't want to outright 'cheat' by simply copying someone's repository, but I was at a loss for what to do, and took a day or two off to let my brain reset.&lt;/p&gt;

&lt;p&gt;Refreshed, I decided to look into another requirement of the project: using the Serverless Application Model to provision my backend resources as code. I found a marvelous blog post outlining a similar project, downloaded the AWS SAM CLI, and gave it a whirl. Lo and behold, it was like magic: Amazon did all the work of configuring resources to work with each other, and I had the ability to test my code. It took a few more hours (and some mind-melting frustration) to get my Lambda code to atomically increment a value in my database, but some methodical experimentation - a skill I improved later while formally learning Python over the winter - led to working code.&lt;/p&gt;
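
&lt;p&gt;In boto3 terms, the atomic increment boils down to a single &lt;code&gt;UpdateItem&lt;/code&gt; call with an &lt;code&gt;ADD&lt;/code&gt; expression. Here's a minimal sketch - the table, key, and attribute names are placeholders, not necessarily what my function uses:&lt;/p&gt;

```python
# Minimal sketch of an atomic-increment Lambda handler.
# Table name, key, and attribute names are illustrative.

def update_params(table_name, record_id):
    # ADD is atomic in DynamoDB: concurrent invocations cannot lose
    # updates, unlike a read-modify-write of the counter value.
    return {
        "TableName": table_name,
        "Key": {"id": {"S": record_id}},
        "UpdateExpression": "ADD visits :inc",
        "ExpressionAttributeValues": {":inc": {"N": "1"}},
        "ReturnValues": "UPDATED_NEW",
    }

def handler(event, context, client):
    # client would be boto3.client("dynamodb") in the real function
    result = client.update_item(**update_params("VisitorCount", "resume"))
    count = result["Attributes"]["visits"]["N"]
    return {"statusCode": 200, "body": count}
```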

&lt;p&gt;Another GitHub Actions workflow, this time triggering the SAM CLI to build and deploy my code, was set up in the backend repository, and I had now fulfilled (almost) all of the project requirements. One thing I did not get to was unit testing my Python code; learning unit testing is on my shortlist of future projects but hasn't happened yet.&lt;/p&gt;

&lt;h2&gt;
  
  
  Wrapping up
&lt;/h2&gt;

&lt;p&gt;I finally reached a point where I was ready to tidy up the content of my resume, as part of a bigger push to prepare myself for the impending job search. This blog post was the other as-yet incomplete part of the project, but now both it and my resume are live and ready for prime time! I'd like to think this challenge would go quite differently now than it did six months ago, since I have much more Python experience and more time playing with AWS resources. Nevertheless, it was incredibly instructive at the time and helped set my path forward as I continue learning and progressing towards a new career.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://resume.alexeversmeyer.com"&gt;Click here&lt;/a&gt; to see my resume!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/aeversme/resumestatic"&gt;Frontend GitHub repository&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/aeversme/resumecode"&gt;Backend GitHub repository&lt;/a&gt;&lt;/p&gt;

</description>
      <category>cloudresumechallenge</category>
      <category>python</category>
      <category>aws</category>
      <category>cloud</category>
    </item>
    <item>
      <title>Introducing the Skyboy App</title>
      <dc:creator>Alex Eversmeyer</dc:creator>
      <pubDate>Sun, 10 Apr 2022 23:48:27 +0000</pubDate>
      <link>https://dev.to/alexeversmeyer/introducing-the-skyboy-app-3h22</link>
      <guid>https://dev.to/alexeversmeyer/introducing-the-skyboy-app-3h22</guid>
      <description>&lt;h2&gt;
  
  
  Skyboy is live! 🎉
&lt;/h2&gt;

&lt;p&gt;It is with a great deal of satisfaction that I can introduce my latest coding/cloud/hobby project, the &lt;a href="https://skyboy.app"&gt;Skyboy App&lt;/a&gt;!&lt;/p&gt;

&lt;p&gt;Skyboy is an FPV quadcopter post-flight telemetry visualization tool, providing key flight metrics, a map for GPS-equipped quads, and graphs for analyzing related subsets of data. It brings together my love for flying FPV quadcopters with my passion for coding and cloud infrastructure.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is FPV? 🤔
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--OZE4PblX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/d32nom6yu78fqfkubqle.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--OZE4PblX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/d32nom6yu78fqfkubqle.png" alt="My first FPV quad" width="880" height="506"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;FPV stands for First-Person View, and in the context of "drones," is shorthand for referring to a certain type of quadcopter (a.k.a. "quad") that has a live video link between a camera on the quad and goggles worn by the pilot. Quad operators can be hobbyists, content creators, or paid professionals filming for commercials, television, and movies. Among the various styles of FPV flying are racing, freestyle, cinematic, and long range; regardless of how they're flying, the pilot gets a quad's eye view of the action and is typically in full control of the aircraft's movement. This is in contrast to how, for example, a commercially-produced DJI drone flies (slow and steady, or autonomously along a pre-defined flight path).&lt;/p&gt;

&lt;p&gt;The flight controller board on the quad can be configured to send telemetry data about the quad and its systems to the pilot's handheld radio transmitter, where the data can be logged for future reference. I initially set up telemetry logging on my radio to aid in recovery if my quad crashes. After downloading my first few flight logs, I started thinking about ways to use the captured data. I looked at several existing visualization and dashboard applications, but they were all geared towards DJI drones. So... what if I made something myself?&lt;/p&gt;

&lt;h2&gt;
  
  
  Data Visualization with Streamlit 📊
&lt;/h2&gt;

&lt;p&gt;A search for data visualization tools led me to &lt;a href="https://streamlit.io/"&gt;Streamlit&lt;/a&gt;, a browser-based Python framework with the flexibility to display data in multiple ways. It supports several Python graphing libraries, as well as mapping with &lt;a href="https://www.mapbox.com/"&gt;Mapbox&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--qWsNhktS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lnbs96nhef24j07f8gt5.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--qWsNhktS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lnbs96nhef24j07f8gt5.jpg" alt="Charts and sidebar in the Skyboy app" width="880" height="448"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I began by taking the imported telemetry log - a comma-separated values file - and transforming it using Pandas. With the data loaded into a DataFrame, I built multiple Plotly charts from selected columns. The GPS data was manipulated and loaded into a layer on top of satellite imagery to produce a traced flight path in an interactive map widget. I also extracted and calculated key flight metrics, such as time, distance, and battery consumption.&lt;/p&gt;
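
&lt;p&gt;As a rough sketch of that metrics step - with made-up column names, since real logs use the radio's own headers - the flight metrics come straight out of the DataFrame:&lt;/p&gt;

```python
# Simplified version of the metrics step: load a telemetry CSV into a
# DataFrame and derive a few flight metrics. Column names are illustrative.
import io

import pandas as pd

SAMPLE = io.StringIO(
    "time_s,alt_m,battery_v\n"
    "0,0,16.8\n"
    "30,42,16.1\n"
    "60,12,15.4\n"
)

df = pd.read_csv(SAMPLE)

metrics = {
    # elapsed time from first to last log row
    "flight_time_s": df["time_s"].max() - df["time_s"].min(),
    "max_altitude_m": df["alt_m"].max(),
    # voltage drop over the flight, as a proxy for battery consumption
    "battery_used_v": round(df["battery_v"].iloc[0] - df["battery_v"].iloc[-1], 2),
}
```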

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--gkO3k8St--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nkszfw6p141qqm9hphmf.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--gkO3k8St--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nkszfw6p141qqm9hphmf.jpg" alt="Metrics and map in the Skyboy app" width="880" height="415"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once the app was running from an IDE, it was time to decide how best to package it for cloud-based deployment. I settled on creating a Docker container for the application so that it could be run in a browser on any platform. Building the image went fairly smoothly, considering it was my first Docker project (after guided lessons and tutorials). I automated the build and repository upload process using GitHub Actions, which I wrote about in &lt;a href="https://dev.to/alexeversmeyer/cicd-github-actions-with-docker-and-amazon-ecr-nal"&gt;this blog post&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Deployment on AWS with Terraform 📡
&lt;/h2&gt;

&lt;p&gt;The final step to this initial product rollout was getting the application deployed to the cloud. I started by manually deploying the necessary infrastructure: a Virtual Private Cloud (VPC) environment; an Elastic Container Service (ECS) cluster, service, and task definition; a load balancer; and associated security groups and service roles. I also registered a domain and created a Route 53 hosted zone for DNS routing.&lt;/p&gt;

&lt;p&gt;This was my first time working with AWS's container orchestration services, so I kept it simple by using a serverless Fargate deployment (meaning no instances that I have to manage). Getting everything set up was fairly straightforward, with a big shout-out to &lt;a href="https://m-germanengineer.medium.com/tutorial-launch-saleable-streamlit-dashboards-aws-part-0-ba7098cc1c40"&gt;this tutorial series&lt;/a&gt; that I used as a guide. The manual deployment configuration became my reference as I started to build out the Terraform code to automate my infrastructure.&lt;/p&gt;

&lt;p&gt;I plan to write a post about the Terraform development process shortly, so I won't go into much detail here, except to say that this was a more complex project than I've previously tackled on my own. There was a lot to figure out on the Terraform side, as well as on the GitHub Actions side to create another set of branch-dependent workflows. After quite a bit of work - and a few this-is-driving-me-crazy moments - I tore down the manual deployment and my Terraform configuration spun up all of the necessary resources in the Skyboy production account to successfully redeploy the application.&lt;/p&gt;

&lt;h2&gt;
  
  
  MVP 🏆
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://skyboy.app"&gt;Skyboy&lt;/a&gt; is, at this point, an MVP (minimally viable product). It currently only handles telemetry logs with one specific set of headers. After changing some of the hardware on my latest quad build, I've found that some of the telemetry has either changed or is omitted from the log file. This throws errors in the application, and handling those errors to accommodate a variety of data sets is next on the roadmap.&lt;/p&gt;

&lt;p&gt;Other to-do items include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Authentication&lt;/li&gt;
&lt;li&gt;Storing telemetry logs on S3 instead of in memory&lt;/li&gt;
&lt;li&gt;Playing with mapping options&lt;/li&gt;
&lt;li&gt;Layout and aesthetics&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I have a lot more learning to do about cloud container orchestration with ECS and monitoring with one of the many available options (Datadog, perhaps). The process of developing and deploying my first home-grown application has been humbling and thrilling, and I am excited to see where I can take this project!&lt;/p&gt;

&lt;p&gt;All code (Python, Terraform, and GitHub Actions workflows) can be found in the &lt;a href="https://github.com/aeversme/skyboy-app"&gt;Skyboy GitHub repository&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>python</category>
      <category>aws</category>
      <category>terraform</category>
      <category>github</category>
    </item>
    <item>
      <title>CI/CD: GitHub Actions with Docker and Amazon ECR</title>
      <dc:creator>Alex Eversmeyer</dc:creator>
      <pubDate>Mon, 14 Mar 2022 03:36:56 +0000</pubDate>
      <link>https://dev.to/alexeversmeyer/cicd-github-actions-with-docker-and-amazon-ecr-nal</link>
      <guid>https://dev.to/alexeversmeyer/cicd-github-actions-with-docker-and-amazon-ecr-nal</guid>
      <description>&lt;h2&gt;
  
  
  The Goal 🥅
&lt;/h2&gt;

&lt;p&gt;Over the past month, I developed a containerized Streamlit webapp in Python that I then deployed manually to AWS. With a proof-of-concept in place, it is time to start automating the testing, building, and deployment of my application and its infrastructure.&lt;/p&gt;

&lt;p&gt;The goal of this automation step is to push a new container image to an Amazon Elastic Container Registry (ECR) repository whenever changes to the application files are committed and pushed to GitHub. There's a lot of new stuff to learn in this project, so I stuck with a familiar CI/CD platform: GitHub Actions.&lt;/p&gt;

&lt;h2&gt;
  
  
  Planning 📝
&lt;/h2&gt;

&lt;p&gt;From the manual deployment, I knew I would need to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;log in to Amazon ECR;&lt;/li&gt;
&lt;li&gt;build the application image from the &lt;code&gt;./app&lt;/code&gt; directory;&lt;/li&gt;
&lt;li&gt;tag the image with the correct registry and repository;&lt;/li&gt;
&lt;li&gt;and push the image so that it can be used in a cluster.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I started by logging into the AWS console and creating a public ECR repository in my dev account. (The POC is deployed in my production account.) This choice - a public repo vs. a private repo - will become important soon.&lt;/p&gt;

&lt;p&gt;I also created a 'dev' branch on my git repository, so that I could push and test the workflow without committing code to the 'main' branch.&lt;/p&gt;

&lt;h2&gt;
  
  
  Taking (ok, finding) Action(s) 🎯
&lt;/h2&gt;

&lt;p&gt;While it's possible to simply run bash commands on the virtual machine that is provisioned for an Action workflow run, there are a lot of community-authored actions available that abstract away complex API calls and simplify automated tasks. My first stop was the GitHub Marketplace to find actions related to Docker and ECR.&lt;/p&gt;

&lt;p&gt;After getting a handle on my (many) options, I searched for and found a couple of blog posts about this process, to see how other people had set up their workflows. I took some notes, discovered a few features I wanted to incorporate into my process, and read some documentation. The heavy lifting would be done with the &lt;code&gt;docker/build-push-action&lt;/code&gt;, which has a lot of features and builds images with Docker's Buildx. Down the rabbit hole I go...&lt;/p&gt;

&lt;h2&gt;
  
  
  Authentication and Authorization ✔️
&lt;/h2&gt;

&lt;p&gt;There are several ways to authenticate to AWS. The simplest of these is to provide an access key ID and a secret access key for programmatic CLI access. The challenges with this method are safeguarding the AWS credentials and ensuring least-privilege access for the entity using those credentials. I discovered an alternative way to grant access to AWS resources: assuming an IAM role using GitHub's OpenID Connect (OIDC) provider.&lt;/p&gt;

&lt;p&gt;Setting up the provider in the IAM console was not difficult. I did spend quite a few minutes figuring out how to get OpenSSL to output the thumbprint of GitHub's security certificate, since it was recommended to validate the thumbprint that the IAM console calculated. Having accomplished that, I created a role and attached a permissions policy and trust policy that limited that role's access. With the role's ARN entered as a GitHub repository secret, I was able to add a credentials block to my growing YAML workflow configuration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;      - name: Configure dev AWS credentials
        uses: aws-actions/configure-aws-credentials@v1
        if: github.ref == 'refs/heads/dev'
        with:
          role-to-assume: ${{ secrets.DEV_ROLE_ARN }}
          aws-region: ${{ secrets.AWS_REGION }}
          role-session-name: DevSession

      - name: Configure prod AWS credentials
        uses: aws-actions/configure-aws-credentials@v1
        if: github.ref == 'refs/heads/main'
        with:
          role-to-assume: ${{ secrets.PROD_ROLE_ARN }}
          aws-region: ${{ secrets.AWS_REGION }}
          role-session-name: ProdSession
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I used the &lt;code&gt;if&lt;/code&gt; conditional in the hopes of setting up this workflow to run on both the 'dev' and 'main' branches of my repo. While the condition works, I ended up not using these actions because of the second step in the authorization process: logging into the ECR repository.&lt;/p&gt;

&lt;p&gt;The action I initially chose for this seemed straightforward enough: without requiring any inputs, it would use the credentials passed back from the OIDC provider to grant access to the desired repo. However, after assembling the rest of the workflow and having a few runs fail, I dug into the action's open issues and discovered that it appears to have been written to access only private ECR repos. 🤦&lt;/p&gt;

&lt;p&gt;No matter: there are other actions to log into container image repositories. I followed a suggestion and looked at Docker's login action, which presented options for both private and public ECR repos. Unfortunately, the public repo option does not make use of the &lt;code&gt;configure-aws-credentials&lt;/code&gt; action, which meant that - for now - the work I did to set up OIDC was for naught. I created an IAM user with limited permissions in my dev account, passed the credentials into GitHub Secrets, and I was almost out of the woods:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;      - name: Login to Amazon Public ECR
        uses: docker/login-action@v1
        with:
          registry: public.ecr.aws
          username: ${{ secrets.DEV_ACCESS_KEY_ID }}
          password: ${{ secrets.DEV_SECRET_ACCESS_KEY }}
#        env:
#          AWS_REGION: us-east-2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As it turned out, this login action didn't work with the region input active; with no region specified, login worked and I could move on.&lt;/p&gt;

&lt;h2&gt;
  
  
  Secrets Are Hard 🥵
&lt;/h2&gt;

&lt;p&gt;I ran up against a seemingly intractable problem as I was going over the image build process: my application depends on an API token being passed in via a .toml file within the app's directory structure. TOML, a human-readable format for defining configuration, has no way to access environment variables. I didn't want to commit the config file to GitHub with my API token hard-coded, and after more than an hour of research, I was at a loss for how to insert that value appropriately before building the image.&lt;/p&gt;

&lt;p&gt;After sleeping on this problem, I came up with a simple solution that keeps the API key protected. In the same directory as the &lt;code&gt;config.toml&lt;/code&gt; file (which does not get committed or pushed, thanks to .gitignore), I created a copy of that file called &lt;code&gt;config.template&lt;/code&gt;. Where the hard-coded token would go, the &lt;code&gt;.template&lt;/code&gt; file reads 'TOKEN_PLACEHOLDER'. I passed the API token to the workflow runner as an environment variable, and used a &lt;code&gt;sed&lt;/code&gt; command to substitute in the token and create &lt;code&gt;config.toml&lt;/code&gt; in the directory structure on the runner before building the image:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;      - name: Create config.toml file
        env:
          TOKEN: ${{ secrets.MAPBOX_API_TOKEN }}
        run: sed "s/TOKEN_PLACEHOLDER/$TOKEN/g" ./app/.streamlit/config.template &amp;gt; ./app/.streamlit/config.toml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Almost Done... Don't Forget the Cache! 💵
&lt;/h2&gt;

&lt;p&gt;One of the interesting features of the &lt;code&gt;docker/build-push-action&lt;/code&gt; is the ability to cache the container layers. GitHub allows up to 10GB of cached data per repository, and persisting the container layers means faster build times after the first run.&lt;/p&gt;

&lt;p&gt;Fortunately, all of the necessary inputs and file paths have already been published by the action's authors, so setting up that process (and moving the cache after the build to keep it from growing to the maximum limit) was an easy addition to the build action:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;      - name: Set up Docker Buildx
        id: buildx
        uses: docker/setup-buildx-action@master

      - name: Cache Docker layers
        uses: actions/cache@v2
        with:
          path: /tmp/.buildx-cache
          key: ${{ runner.os }}-buildx-${{ github.sha }}
          restore-keys: |
            ${{ runner.os }}-buildx-

      - name: Build Docker image
        uses: docker/build-push-action@v2
        with:
          context: ./app
          builder: ${{ steps.buildx.outputs.name }}
          push: true
          tags: ${{ secrets.DEV_ECR_REGISTRY }}/skyboy:latest
          cache-from: type=local,src=/tmp/.buildx-cache
          cache-to: type=local,dest=/tmp/.buildx-cache-new

      - name: Move cache
        run: |
          rm -rf /tmp/.buildx-cache
          mv /tmp/.buildx-cache-new /tmp/.buildx-cache
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Wrap-up 🦥
&lt;/h2&gt;

&lt;p&gt;&lt;em&gt;(Yeah, it's a sloth. Why not?)&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Over the course of this process, the workflow ran ten times, with two successful runs. The first took 2m 37s to complete, and the second - after attempting and failing to re-implement use of the OIDC provider again - took only 1m 7s, proving the benefit of layer caching.&lt;/p&gt;

&lt;p&gt;The final modification I made to my workflow configuration was to modify the trigger:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;on:
  push:
    branches:
      - 'dev'
    paths:
      - 'app/**'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;paths&lt;/code&gt; syntax is a clever way to prevent this workflow from triggering unless changes are pushed to files in the &lt;code&gt;app&lt;/code&gt; directory. I tested this by editing and pushing the repo's &lt;code&gt;README.md&lt;/code&gt; file after adding that syntax. Since the README file is in the root directory, the workflow was not triggered.&lt;/p&gt;

&lt;p&gt;This was quite the journey, sending me down several deep rabbit holes and throwing plenty of errors to troubleshoot. I'd like to figure out how to make this single workflow configuration function on both the 'dev' and 'main' branches; I have a couple ideas to explore in that regard. I would also like to find a way to use the OIDC provider to authenticate to AWS. I imagine there are some other best practices that might be good to implement as well. For now, in the spirit of having an MVP, I'm pleased that this workflow runs successfully!&lt;/p&gt;

&lt;p&gt;Next up: provisioning the webapp's AWS infrastructure with Terraform.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Onward and upward!&lt;/em&gt;&lt;/p&gt;

</description>
      <category>docker</category>
      <category>github</category>
      <category>aws</category>
    </item>
    <item>
      <title>State of my Cloud Journey: Feb 5, 2022</title>
      <dc:creator>Alex Eversmeyer</dc:creator>
      <pubDate>Sun, 06 Feb 2022 02:43:18 +0000</pubDate>
      <link>https://dev.to/alexeversmeyer/state-of-my-cloud-journey-feb-5-2022-2ok8</link>
      <guid>https://dev.to/alexeversmeyer/state-of-my-cloud-journey-feb-5-2022-2ok8</guid>
      <description>&lt;h2&gt;
  
  
  So, Hi Again 👋
&lt;/h2&gt;

&lt;p&gt;It's been a little while.&lt;/p&gt;

&lt;p&gt;I haven't finished Advent of Code, and I won't finish any time soon. The last half-dozen unfinished puzzles are difficult. Really difficult. I lost some motivation on my holiday road trip, and lost most of the rest shortly thereafter: having to research things like Set Theory on my own just wasn't appealing.&lt;/p&gt;

&lt;p&gt;The puzzles aren't going anywhere, of course, so when I am more experienced and have had a chance to study more complex algorithms and programming concepts, I can mop up those last pesky stars. What a great experience it was, trying to keep up with each new day's puzzle and tackling problems I had never seen before.&lt;/p&gt;

&lt;h2&gt;
  
  
  And Then, January 🥶
&lt;/h2&gt;

&lt;p&gt;Last month (January) was a bit of a mess. It may have been a reaction to the prior 12+ months of intense studying and learning, or it might have been disappointment from a challenging job search in a difficult market for beginners; or, some of both, plus other stuff. Whatever the causes, I ended up backing away completely from job hunting, coding, cloud, and learning. Instead, I dove deep into my newest (as of last summer) passion: FPV, or first-person view, quadcopters.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--SfoyuYE0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/x7czpc8m5fd87zi9vuja.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--SfoyuYE0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/x7czpc8m5fd87zi9vuja.png" alt='Transformer Mini 4" quadcopter' width="192" height="114"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;That's my current vehicle, although it's about to get some new hardware soon. There were several topics related to FPV quads that I wanted to dig into, and so that's what January ended up being all about.&lt;/p&gt;

&lt;h2&gt;
  
  
  Which Leads To... 💡
&lt;/h2&gt;

&lt;p&gt;... a project idea!&lt;/p&gt;

&lt;p&gt;One of the things I learned to do last month was to set up my radio receiver (the big remote controller in my hands) to log telemetry transmitted from the quad during flight. There are some interesting parameters that get measured and sent to the radio, chief among them being GPS coordinates. This got me thinking about how I might be able to display and analyze the information from a flight.&lt;/p&gt;

&lt;p&gt;Enter &lt;a href="https://streamlit.io"&gt;Streamlit&lt;/a&gt;. This Python framework looks like a pretty simple way to put data visualization in a browser window. I'm excited to dig into the docs.&lt;/p&gt;

&lt;p&gt;My app is in its infancy (literally, I wrote the first few lines of code before work this morning), but I can't wait to flesh it out and do something interesting with it. I have a few big ideas that might be really cool, if I can figure out how to get them to work. So far:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--I7D-e0C1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/g1tfc5n8oqj8do021m2a.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--I7D-e0C1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/g1tfc5n8oqj8do021m2a.jpg" alt="skyboy-app001.jpg" width="832" height="804"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Wrap-up 🍩
&lt;/h2&gt;

&lt;p&gt;I don't know what my next job search steps are. I'm feeling a little gun-shy about applying for jobs, at least until I get some more outside perspectives (which I'm working on). I'm not putting everything on hold, because I really, &lt;em&gt;really&lt;/em&gt; want to get into tech and out of retail. I am, however, trying to be slightly more balanced in my approach. I've been neglecting some personal development things for far too long, and my mental health has suffered enough in the past two years that I recognize the need for more intentional self-care during this process.&lt;/p&gt;

&lt;p&gt;Someday, the right person will notice what I've been up to and will want to talk to me. Not sure how that will happen, but I'm going to keep moving forward in the meantime.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Onward and upward!&lt;/em&gt;&lt;/p&gt;

</description>
      <category>python</category>
      <category>cloud</category>
      <category>learning</category>
      <category>jobsearch</category>
    </item>
    <item>
      <title>Advent of Code, Day 16-19</title>
      <dc:creator>Alex Eversmeyer</dc:creator>
      <pubDate>Sun, 02 Jan 2022 05:14:38 +0000</pubDate>
      <link>https://dev.to/alexeversmeyer/advent-of-code-day-16-19-43a9</link>
      <guid>https://dev.to/alexeversmeyer/advent-of-code-day-16-19-43a9</guid>
      <description>&lt;p&gt;After a bit of a rough holiday season - travel and low motivation led to reduced coding time - I finally managed to solve the hardest of the Advent of Code puzzles so far: Day 19. Day 16 and 18 were pretty gnarly, too.&lt;/p&gt;

&lt;h3&gt;
  
  
  Day 16 &lt;a href="https://adventofcode.com/2021/day/16"&gt;(puzzle here)&lt;/a&gt;:
&lt;/h3&gt;

&lt;p&gt;This tricky puzzle required parsing a very long hexadecimal packet into its constituent binary sub-packets, interpreting each packet's header bits, and using that information to further (recursively) break down packets to perform mathematical and logical operations. Packets could be an expression of a literal value, or an operator containing sub-packets that could be either literal values or other operator packets.&lt;/p&gt;

&lt;p&gt;My solution &lt;a href="https://github.com/aeversme/adventofcode2021/tree/main/dec16"&gt;for day 16 on GitHub&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;What made this particularly tricky for me was figuring out how to pass values between stack layers in order to keep track of the length and/or count of sub-packets inside each operator packet. Once I managed to pass those values around, the problem became less intimidating and I was able to perform the required operations.&lt;/p&gt;
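
&lt;p&gt;The very first step - expanding the hex transmission into a bit string and reading a packet header - looks roughly like this (a simplified sketch, not my exact solution):&lt;/p&gt;

```python
# Expand a hex transmission into bits, then read the 3-bit version
# and 3-bit type ID from the packet header.

def to_bits(hex_string):
    # Each hex digit becomes exactly four bits; zfill keeps leading zeros.
    return "".join(bin(int(ch, 16))[2:].zfill(4) for ch in hex_string)

def read_header(bits):
    version = int(bits[0:3], 2)
    type_id = int(bits[3:6], 2)
    return version, type_id

# The puzzle's literal-value example, "D2FE28", has version 6 and type ID 4.
bits = to_bits("D2FE28")
version, type_id = read_header(bits)
```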

&lt;h3&gt;
  
  
  Day 18 &lt;a href="https://adventofcode.com/2021/day/18"&gt;(puzzle here)&lt;/a&gt;:
&lt;/h3&gt;

&lt;p&gt;This puzzle really tested my debugging skills, as there were a ton of edge cases to catch and account for. The premise was to add numbers, but each 'number' was actually a pair &lt;code&gt;[x,y]&lt;/code&gt;, where &lt;code&gt;x&lt;/code&gt; and &lt;code&gt;y&lt;/code&gt; are considered 'regular numbers.' Some examples of these 'numbers,' one per line:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[1,2]
[[1,2],3]
[9,[8,7]]
[[1,9],[8,5]]
[[[[1,2],[3,4]],[[5,6],[7,8]]],9]
[[[9,[3,8]],[[0,9],6]],[[[3,7],[4,9]],3]]
[[[[1,3],[5,3]],[[1,3],[8,7]]],[[[4,9],[6,9]],[[8,2],[7,3]]]]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As 'numbers' were added to each other, certain conditions triggered a reduction, which itself had two possible steps. And finally, once all of the 'numbers' were added up properly, a further calculation was done on the resulting 'number' to get its magnitude.&lt;/p&gt;
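
&lt;p&gt;The magnitude rule itself is simple to express recursively - three times the magnitude of the left element plus twice the magnitude of the right. A sketch using nested lists, rather than the string representation my solution works with:&lt;/p&gt;

```python
# Magnitude of a snailfish 'number': regular numbers are their own
# magnitude; a pair contributes 3 * left + 2 * right, recursively.

def magnitude(number):
    if isinstance(number, int):
        return number
    left, right = number
    return 3 * magnitude(left) + 2 * magnitude(right)

# The puzzle's worked example: magnitude of [[1,2],[[3,4],5]] is 143.
result = magnitude([[1, 2], [[3, 4], 5]])
```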

&lt;p&gt;My solution &lt;a href="https://github.com/aeversme/adventofcode2021/tree/main/dec18"&gt;for day 18 on GitHub&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I chose to mostly work with these 'numbers' as their string representation, which had upsides and downsides. Some checks, like whether a pair was nested a certain number of levels deep, were a little easier because I was able to keep a running tally of opening and closing brackets (&lt;code&gt;[&lt;/code&gt; and &lt;code&gt;]&lt;/code&gt;). However, performing operations on the regular numbers required type conversions into integers, as well as a number of checks to make sure two-digit numbers were parsed when necessary. There were also some nasty edge cases, like performing an operation on one particular string but leaving a matching string earlier or later in the sequence alone.&lt;/p&gt;
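
&lt;p&gt;That running bracket tally condenses into just a few lines - a simplified sketch of the nesting-depth check, separate from my actual solution:&lt;/p&gt;

```python
# Walk the string representation of a 'number' and track nesting depth
# with a running tally of opening and closing brackets.

def max_depth(number_string):
    depth = 0
    deepest = 0
    for ch in number_string:
        if ch == "[":
            depth += 1
            deepest = max(deepest, depth)
        elif ch == "]":
            depth -= 1
    return deepest
```

A check like "is any pair nested four levels deep" then becomes a single comparison against the returned tally.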

&lt;p&gt;After what felt like dozens of tests, run-throughs, and bugfixes, I did finally manage to arrive at the correct puzzle answer. I thought that was a pretty wild puzzle, until I moved on to...&lt;/p&gt;

&lt;h3&gt;
  
  
  Day 19 &lt;a href="https://adventofcode.com/2021/day/19"&gt;(puzzle here)&lt;/a&gt;:
&lt;/h3&gt;

&lt;p&gt;This puzzle required mapping 3D space, given scanners that didn't know their own position and the beacons scattered through space (in this case, the deep ocean) that each scanner could detect. The key was finding pairs of scanners that had overlapping detection ranges, meaning each scanner in a pair detected 12 or more of the same beacons (albeit from their own orientation, which could have been rotated any number of times by 90 degrees on each axis).&lt;/p&gt;

&lt;p&gt;My solution &lt;a href="https://github.com/aeversme/adventofcode2021/tree/main/dec19"&gt;for day 19 on GitHub&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I chose to keep track of scanners and beacons as class objects, making it easier to refer to specific properties of each object and to use object methods. The first task was to find a way to determine which scanners shared beacons with other scanners, and to do that, I matched scanners as parent and child (starting with scanner 0, at coordinates [0, 0, 0]). Part of this discovery was making sure no scanner was assigned to more than one parent; while many of the scanners shared beacons with multiple other scanners, limiting which ones were paired meant fewer conversions and comparisons later on.&lt;/p&gt;

&lt;p&gt;One of the toughest parts of this particular puzzle was accounting for mismatched scanner axes (one scanner's &lt;code&gt;x&lt;/code&gt; was another scanner's &lt;code&gt;y&lt;/code&gt; or &lt;code&gt;z&lt;/code&gt;), and also accounting for which direction a scanner faced on each of its axes relative to those of its parent.&lt;/p&gt;

&lt;p&gt;I came up with multiple versions of code to both try to match scanners with shared beacons, and to determine a consistent way to relate scanners' coordinate systems to each other. For matching, I ended up with a series of nested dictionaries, keeping track of the sums and differences of every beacon's &lt;code&gt;x&lt;/code&gt;, &lt;code&gt;y&lt;/code&gt;, and &lt;code&gt;z&lt;/code&gt; coordinate with those of every beacon from another scanner. The number of times a particular sum or difference showed up determined whether or not there were shared beacons between the two scanners, and my dictionary setup gave me enough information to determine a child scanner's position relative to its parent.&lt;/p&gt;
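&lt;p&gt;A minimal sketch of that counting idea for a single axis, using a &lt;code&gt;Counter&lt;/code&gt; rather than my nested dictionaries (the function name and threshold parameter are illustrative):&lt;/p&gt;

```python
from collections import Counter

def shared_axis_offset(coords_a, coords_b, threshold=12):
    # Tally the difference between every axis value from one scanner and
    # every axis value from another; a difference that shows up at least
    # `threshold` times suggests shared beacons, and that difference is
    # the offset between the two scanners along this axis.
    diffs = Counter(a - b for a in coords_a for b in coords_b)
    offset, count = diffs.most_common(1)[0]
    return offset if count >= threshold else None
```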

&lt;p&gt;That same information also lent itself to the creation of a transformation 'matrix' for each scanner, in order to convert the child scanner's beacon coordinates into its parent scanner's coordinate system. This took the form of&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[[2, -1], [0, 1], [1, -1]]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;where the three pairs related to the &lt;code&gt;x&lt;/code&gt;, &lt;code&gt;y&lt;/code&gt;, and &lt;code&gt;z&lt;/code&gt; coordinates of a beacon, and the values in each pair represented which parent axis that coordinate transformed to, and what orientation conversion to apply. For the 'matrix' above, a child beacon's &lt;code&gt;x&lt;/code&gt; coordinate is on the parent's &lt;code&gt;z&lt;/code&gt; axis, and the child scanner is facing away from the parent scanner on that axis.&lt;/p&gt;
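&lt;p&gt;Applying one of these 'matrices' to a beacon is then a small loop (a sketch based on my description above; the names are illustrative, and adding the child scanner's own position afterward is left out):&lt;/p&gt;

```python
def to_parent_frame(beacon, matrix):
    # beacon: (x, y, z) in the child scanner's frame.
    # matrix: one [parent_axis, sign] pair per child axis, e.g.
    # [[2, -1], [0, 1], [1, -1]] sends the child's x to the parent's
    # z axis with its direction flipped.
    parent = [0, 0, 0]
    for child_axis, (parent_axis, sign) in enumerate(matrix):
        parent[parent_axis] = beacon[child_axis] * sign
    return tuple(parent)
```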

&lt;p&gt;While this seems a little hack-y, I wasn't able to find or fathom a cleaner way to come up with these relationships. I did discover that the SciPy module contains actual matrices for doing transformation and rotation conversions, but applying these was beyond the ability of my tired brain as December wound down. So, I stuck with my simplistic method. A recursive call through the various chains of parent/child scanners gathered all the beacons into one list, converting them along the way so that a single dictionary of unique beacons could be compiled and counted.&lt;/p&gt;

&lt;p&gt;The second part of the puzzle required knowing the absolute positions of all the scanners. With most of the legwork done already, it was fairly straightforward to recursively apply transforms to each scanner's relative position to determine its absolute position relative to the origin scanner. From there, a simple calculation determined the distance between the two farthest-apart scanners.&lt;/p&gt;
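&lt;p&gt;That final calculation, the largest pairwise Manhattan distance between absolute scanner positions, is nearly a one-liner with &lt;code&gt;itertools.combinations&lt;/code&gt; (a sketch; the function name is mine):&lt;/p&gt;

```python
from itertools import combinations

def farthest_scanners(positions):
    # Largest Manhattan distance between any two scanner positions,
    # given as (x, y, z) tuples.
    return max(
        sum(abs(a - b) for a, b in zip(p, q))
        for p, q in combinations(positions, 2)
    )
```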

&lt;p&gt;&lt;em&gt;Whew!&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;I know there's one more really tough puzzle coming up on day 23, but finally solving day 19 today has given me extra resolve to tough it out and complete all of the Advent of Code puzzles. Here's hoping it doesn't take much longer!&lt;/p&gt;

</description>
      <category>adventofcode</category>
      <category>python</category>
    </item>
    <item>
      <title>Advent of Code, Day 12-15</title>
      <dc:creator>Alex Eversmeyer</dc:creator>
      <pubDate>Fri, 17 Dec 2021 05:55:16 +0000</pubDate>
      <link>https://dev.to/alexeversmeyer/advent-of-code-day-12-15-3din</link>
      <guid>https://dev.to/alexeversmeyer/advent-of-code-day-12-15-3din</guid>
      <description>&lt;p&gt;The puzzles to start this week have not disappointed! They included two different pathfinding exercises and an interesting string insertion problem.&lt;/p&gt;

&lt;p&gt;The first pathfinding puzzle was quite tricky. Given a set of relationships between big and little 'caves,' the goal was to count the number of paths from start to end that entered small caves only once, and then to also count the paths when allowed to enter one of the small caves twice.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Sample set of relationships:

start-A
start-b
A-c
A-b
b-d
A-end
b-end
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;My strategy was to build a dictionary where each key is a cave name ('A', 'c', 'end', etc.) and the associated value is a list of possible exits from that cave. I kept track of the path being followed, and after entering a cave, recursively iterated through its list of exits to find valid ones. If a valid exit was found, that cave was 'entered' and its list of exits evaluated. This was a tricky pattern to figure out, particularly with regard to what constituted a valid exit.&lt;/p&gt;
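&lt;p&gt;A compact sketch of that recursive exit search (not my exact code; it assumes, as in the puzzle, that big caves are uppercase and may be revisited, while small caves are entered at most once):&lt;/p&gt;

```python
from collections import defaultdict

def count_paths(edges):
    # Build the exits dictionary: each cave maps to a list of caves
    # reachable from it.
    graph = defaultdict(list)
    for a, b in edges:
        graph[a].append(b)
        graph[b].append(a)

    def walk(cave, visited):
        # Recursively try every valid exit; a path is complete at 'end'.
        if cave == 'end':
            return 1
        total = 0
        for nxt in graph[cave]:
            # Big (uppercase) caves may be revisited; small ones only once.
            if nxt.isupper() or nxt not in visited:
                total += walk(nxt, visited | {nxt})
        return total

    return walk('start', {'start'})
```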

&lt;p&gt;&lt;a href="https://github.com/aeversme/adventofcode2021/tree/main/dec12"&gt;My day 12 code on GitHub&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The other pathfinding exercise (day 15) introduced me to Dijkstra's Algorithm for shortest path finding. Given a grid of integer values, the goal was to come up with the lowest sum of values moving up/down/left/right between the upper left corner (start) and the bottom right corner (finish).&lt;/p&gt;

&lt;p&gt;At first, I was unsure how to go about solving the puzzle. After brainstorming, I decided to make use of Google, and quickly discovered a number of likely algorithm candidates. After reading about a few, it became clear that Dijkstra's Algorithm was likely my best bet.&lt;/p&gt;

&lt;p&gt;My next search was for a clear explanation of the steps to this algorithm. I didn't want to look at pseudocode; rather, I wanted an illustration of how the algorithm works, so I could try to code it myself. I found a great article with some images that explained each step, on which I based my implementation: &lt;a href="https://www.udacity.com/blog/2021/10/implementing-dijkstras-algorithm-in-python.html"&gt;Implementing Dijkstra's Algorithm in Python&lt;/a&gt;. Although this article goes on to code the algorithm, I stopped reading halfway through and headed into PyCharm to get coding.&lt;/p&gt;

&lt;p&gt;Because of the number of attributes I wanted to track for each node (location on the grid), I wrote a small class and created an object for each node. I was then able to easily track whether the node had been visited, its value and coordinates, and the sum of the shortest (lowest) path between that node and the start.&lt;/p&gt;

&lt;p&gt;At the start of each step of the algorithm, it is necessary to search for the node with the lowest distance value that hasn't already been visited. For a 100 x 100 grid, this was trivial, but the puzzle's second part expanded the grid to 500 x 500. Since this was too many nodes to search through in their entirety at each step, I chose to add every active node (any node whose distance had been evaluated but which hadn't been visited yet) to a list, and only searched through that list during each step. This list grew to 600-700 nodes during processing, but that's way better than 250,000 searches in each of the 250,000 steps!&lt;/p&gt;
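&lt;p&gt;That active-node list is essentially a hand-rolled priority queue; Python's &lt;code&gt;heapq&lt;/code&gt; module does the same minimum search in logarithmic time. A sketch of Dijkstra's Algorithm on a risk grid using it (not my original list-based code; names are mine):&lt;/p&gt;

```python
import heapq
from math import inf

def lowest_total_risk(grid):
    # Dijkstra's Algorithm: grid is a list of lists of int risk values;
    # returns the lowest total risk from the top-left to the bottom-right
    # corner, not counting the starting cell.
    rows, cols = len(grid), len(grid[0])
    dist = {(0, 0): 0}
    heap = [(0, 0, 0)]  # (distance so far, row, col)
    visited = set()
    while heap:
        d, r, c = heapq.heappop(heap)
        if (r, c) == (rows - 1, cols - 1):
            return d
        if (r, c) in visited:
            continue
        visited.add((r, c))
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if nr in range(rows) and nc in range(cols):
                nd = d + grid[nr][nc]
                # Only queue the neighbor if this route improves on the
                # best distance found so far.
                if dist.get((nr, nc), inf) > nd:
                    dist[(nr, nc)] = nd
                    heapq.heappush(heap, (nd, nr, nc))
    return None
```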

&lt;p&gt;&lt;a href="https://github.com/aeversme/adventofcode2021/tree/main/dec15"&gt;My day 15 code on GitHub&lt;/a&gt;&lt;/p&gt;

</description>
      <category>adventofcode</category>
      <category>python</category>
    </item>
    <item>
      <title>Advent of Code, Days 9-11</title>
      <dc:creator>Alex Eversmeyer</dc:creator>
      <pubDate>Sun, 12 Dec 2021 03:00:51 +0000</pubDate>
      <link>https://dev.to/alexeversmeyer/advent-of-code-days-9-11-5f1i</link>
      <guid>https://dev.to/alexeversmeyer/advent-of-code-days-9-11-5f1i</guid>
      <description>&lt;p&gt;It's been a very interesting few days in the Advent of Code challenge. Day 9 has probably been my favorite so far: the challenge was to find the low points on a heightmap (an array of numbers), and then find the associated basins (defined as any numbers less than 9 connected to the low point, excluding diagonals).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;2199943210
3987894921
9856789892
8767896789
9899965678

A sample heightmap.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The first part was not particularly difficult. It required coding some logic to check corners and edges, since some adjacent coordinates would be outside the map, as well as iterating through the inner parts of the map to locate the numbers lower than all four of their neighbors to the left, right, up, and down.&lt;/p&gt;

&lt;p&gt;The second part of the day's puzzle is where things really got fun: discovering the size of each basin connected to a low spot. To do so required a recursive function with a little twist to it (relative to the basic recursion I've used once or twice before). It felt really good to not only work out how to solve this problem, but to also code the solution. I also figured out how to refactor my function to avoid repetition and keep it readable, too!&lt;/p&gt;
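&lt;p&gt;A recursive flood fill of that kind can be sketched like this (a simplified version, not my exact solution; the function name is mine):&lt;/p&gt;

```python
def basin_size(heightmap, r, c, seen=None):
    # Recursively count every cell connected to (r, c) that isn't a 9,
    # stopping at the map edges and never counting a cell twice.
    if seen is None:
        seen = set()
    if (r, c) in seen:
        return 0
    if r not in range(len(heightmap)) or c not in range(len(heightmap[0])):
        return 0
    if heightmap[r][c] == 9:
        return 0
    seen.add((r, c))
    return 1 + sum(
        basin_size(heightmap, r + dr, c + dc, seen)
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
    )
```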

&lt;p&gt;Yesterday's and today's puzzles felt a little simpler, and today's used some code similar to the mapping functions from day 9. The trickiest part for me was working out how to reference all eight locations next to a given coordinate, including diagonals, without referencing the original coordinate.&lt;/p&gt;
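&lt;p&gt;One tidy way to reference all eight neighbors (a common idiom, not necessarily the code I ended up with) is a pair of offset loops that skip &lt;code&gt;(0, 0)&lt;/code&gt;:&lt;/p&gt;

```python
def neighbors(r, c, rows, cols):
    # Yield all eight cells adjacent to (r, c), including diagonals,
    # skipping (r, c) itself and anything off the grid.
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if (dr, dc) == (0, 0):
                continue
            nr, nc = r + dr, c + dc
            if nr in range(rows) and nc in range(cols):
                yield nr, nc
```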

&lt;p&gt;The Advent of Code is nearing the halfway point. Looking forward to seeing where the puzzles go from here, both in terms of theme as well as difficulty!&lt;/p&gt;

</description>
      <category>adventofcode</category>
      <category>python</category>
    </item>
    <item>
      <title>Advent of Code, Day 8</title>
      <dc:creator>Alex Eversmeyer</dc:creator>
      <pubDate>Thu, 09 Dec 2021 03:30:16 +0000</pubDate>
      <link>https://dev.to/alexeversmeyer/advent-of-code-day-8-28de</link>
      <guid>https://dev.to/alexeversmeyer/advent-of-code-day-8-28de</guid>
      <description>&lt;p&gt;No update for the past week. It was a slow one, marred by low motivation and energy. I am hoping this week is stronger. In the meantime...&lt;/p&gt;

&lt;p&gt;I &lt;em&gt;have&lt;/em&gt; found motivation in one new "project" that started this week: attempting to complete the entirety of this year's &lt;a href="https://adventofcode.com/"&gt;Advent of Code&lt;/a&gt;. I've taken on this challenge for three reasons:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;refresh my Python skills 🐍&lt;/li&gt;
&lt;li&gt;improve my problem-solving and algorithm-building abilities 🧮&lt;/li&gt;
&lt;li&gt;and have a little fun! 🎄&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I'll try to discuss my solutions here every day, if possible. I'm nowhere near fast enough to compete for the leaderboard; this is all about learning and enjoying the process of solving puzzles with code.&lt;/p&gt;

&lt;p&gt;Today's challenge is &lt;a href="https://adventofcode.com/2021/day/8"&gt;here&lt;/a&gt;. While the first part was quick and easy, the second part proved to be much trickier. Essentially, given a sequence of letters (input) that correspond to different segments of a seven-segment display (and are not in order), I had to determine which letter went to which segment in order to decode the four-digit output for each input.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Each entry consists of ten unique signal patterns, a | delimiter, and finally the four digit output value. Within an entry, the same wire/segment connections are used (but you don't know what the connections actually are). The unique signal patterns correspond to the ten different ways the submarine tries to render a digit using the current wire/segment connections.&lt;/p&gt;

&lt;p&gt;be cfbegad cbdgef fgaecd cgeb fdcge agebfd fecdb fabcd edb | fdgacbe cefdb cefbgd gcbe&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Fortunately, there was enough context to make this a logic puzzle. I came up with a reasonably solid method for decoding the input this morning, but settled on a different sequence of steps after I started coding.&lt;/p&gt;

&lt;h2&gt;
  
  
  My solution
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;First, I separated and alphabetically sorted each input and output string, and sorted the input by string length.&lt;/li&gt;
&lt;li&gt;Next, I started comparing some of the strings to find common and unique letters, which corresponded to the common and unique segments of certain numbers on the display. For example, after figuring out which six-letter string represented '9', I determined which letter was missing from that string. That missing letter corresponded to the bottom left segment, which I arbitrarily designated s5.&lt;/li&gt;
&lt;li&gt;By using the strings representing '0', '1', '4', '6', '7', and '9', I was able to definitively match a letter to each segment. After that, I used those segment characters to build a dictionary for decoding the output:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;code&gt;decoder_dict = {'0': 'abdefg', '1': 'be', ...}&lt;/code&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A quick loop through each of the four strings of the output and a type conversion later, and I had a list of integers for each of the outputs, and quickly summed them all up to solve the puzzle.&lt;/li&gt;
&lt;/ul&gt;
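&lt;p&gt;That decoding loop amounts to inverting the dictionary and sorting each output string before the lookup (a sketch; the helper name is mine):&lt;/p&gt;

```python
def decode_output(decoder_dict, output_strings):
    # decoder_dict maps a digit to its sorted segment string,
    # e.g. {'0': 'abdefg', '1': 'be', ...}; invert it so a sorted
    # pattern looks up its digit, then join the digits into one number.
    pattern_to_digit = {v: k for k, v in decoder_dict.items()}
    digits = ''.join(
        pattern_to_digit[''.join(sorted(s))] for s in output_strings
    )
    return int(digits)
```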

&lt;h2&gt;
  
  
  Take-aways
&lt;/h2&gt;

&lt;p&gt;I think what I did well in today's challenge was to break a seemingly big problem down into manageable chunks. While I bet there are faster ways to arrive at the solution than the one I took, I wanted to make sure my logic was sound after each step was coded. I also spent time diagramming and checking my logic on paper both before and during the coding process to make sure I stayed on the right track.&lt;/p&gt;

&lt;p&gt;This approach, along with plenty of print statements for debugging, helped me squash the bugs I encountered along the way. I didn't run into any major roadblocks, and it took me about 6 hours to solve from reading the problem to entering the correct solution. I'm pleased that I was able to finish and earn my second gold star for the day!&lt;/p&gt;

&lt;p&gt;See my solution code &lt;a href="https://github.com/aeversme/adventofcode2021/tree/main/dec08"&gt;in my GitHub repository&lt;/a&gt;. 🔥&lt;/p&gt;

&lt;p&gt;&lt;em&gt;So far: 8 days, 16 gold stars (2 stars possible each day)!&lt;/em&gt;&lt;/p&gt;

</description>
      <category>adventofcode</category>
      <category>python</category>
    </item>
    <item>
      <title>State of my Cloud Journey: Nov 30, 2021</title>
      <dc:creator>Alex Eversmeyer</dc:creator>
      <pubDate>Wed, 01 Dec 2021 05:16:56 +0000</pubDate>
      <link>https://dev.to/alexeversmeyer/state-of-my-cloud-journey-nov-30-2021-43cj</link>
      <guid>https://dev.to/alexeversmeyer/state-of-my-cloud-journey-nov-30-2021-43cj</guid>
      <description>&lt;p&gt;The Thanksgiving break here in the U.S. meant I had a few extra days off from work (in addition to the sick day I took as I recovered from the COVID booster and flu shot). I took advantage of this time to make more progress along my learning paths, and to review some suggestions I've received about how to improve my job prospects.&lt;/p&gt;

&lt;h2&gt;
  
  
  Learning Paths 📚
&lt;/h2&gt;

&lt;p&gt;&lt;em&gt;CS50x&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Algorithms! Seems like a big scary word, but they're just methods for solving problems. This week, the course covered search (linear and binary) and sort (selection, bubble, merge) algorithms, as well as covering recursion. Thankfully, I'd already seen some of this in the Stanford course I started last month, so this felt fairly familiar. I appreciated the different take on explaining big-O and Omega notation.&lt;/p&gt;

&lt;p&gt;The second half of this week's problem set gave me fits, as I had to implement a simplified &lt;a href="https://en.wikipedia.org/wiki/Tideman_alternative_method"&gt;Tideman-style&lt;/a&gt; election using the course's boilerplate code as a base. I chose to implement a bubble sort where a sorting method was called for, since the possible data set to be sorted maxed out at 36 items; had the data set been larger, I would have preferred a merge sort for its O(n log n) run time.&lt;/p&gt;

&lt;p&gt;The "locking in" of candidate pairs also required some recursion to avoid closing any loops of candidates (for example: A beats B, B beats C, C beats A). This last step took quite a lot of thinking and a pair programming session to figure out, partly because I hadn't thought to use recursion at that point. In retrospect, the solution seems obvious.&lt;/p&gt;

&lt;p&gt;For the curious, this problem's instructions are &lt;a href="https://cs50.harvard.edu/x/2021/psets/3/tideman/"&gt;on the CS50x website&lt;/a&gt; and my solution code (with a few extraneous notes and a number of debugging printf statements) is &lt;a href="https://github.com/aeversme/cs50x/tree/main/week3/pset3/tideman"&gt;in my CS50x GitHub repo&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;DevOps in the Cloud&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;With Ansible working, the course moved on to using Jenkins. After installing Jenkins on the Cloud9 instance (upon which everything has been developed) using Ansible, I got to see how to set up Jenkins manually. Given the integration with GitHub and Terraform Cloud, there were quite a few steps involved to get all the permissions ironed out! In the end, though, I had a Jenkins project configured to pull source files from a GitHub repository, run a &lt;code&gt;terraform apply&lt;/code&gt; and then execute an Ansible playbook. Next step: automating the pipeline!&lt;/p&gt;

&lt;p&gt;&lt;em&gt;GitHub Learning Labs&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;I also spent part of my weekend going through several of GitHub's own tutorial labs. In particular, I was interested in learning more about pull requests, branching, and merging. This will hopefully come in handy as I revisit my ETL project in the month of December with the aim of cleaning things up and improving it.&lt;/p&gt;

&lt;h2&gt;
  
  
  Job Search 🔎
&lt;/h2&gt;

&lt;p&gt;This was not a week of big progress on the job search front. For now, I think I may give up on 'resume roulette,' as it doesn't seem to be getting me anywhere (even with entry-level jobs that seem like a perfect fit for my knowledge and interests). It has been disheartening, to say the least, to only be rejected or ghosted. A new approach is clearly needed.&lt;/p&gt;

&lt;p&gt;I am still thinking about what that approach might entail beyond some resume cleanup and revisiting my 'best' project for some tweaking... 🤔&lt;/p&gt;

&lt;p&gt;&lt;em&gt;0 applications; 0 interviews; 0 rejections&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Wrap-up 🍩
&lt;/h2&gt;

&lt;p&gt;I am pleased with some of my progress, and I hope to capitalize on that momentum to bolster my efforts in other facets of this career change adventure.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Onward and upward!&lt;/em&gt;&lt;/p&gt;

</description>
      <category>cloud</category>
      <category>learning</category>
      <category>jobsearch</category>
    </item>
  </channel>
</rss>
