<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Michael Austin</title>
    <description>The latest articles on DEV Community by Michael Austin (@miketypes).</description>
    <link>https://dev.to/miketypes</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2673485%2Fa0a8b1e9-04e6-497a-ad8a-6afb399bcfaa.jpeg</url>
      <title>DEV Community: Michael Austin</title>
      <link>https://dev.to/miketypes</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/miketypes"/>
    <language>en</language>
    <item>
      <title>Day 3 -&gt; Athena empties the quiver</title>
      <dc:creator>Michael Austin</dc:creator>
      <pubDate>Thu, 09 Jan 2025 18:57:51 +0000</pubDate>
      <link>https://dev.to/miketypes/day-3-athena-empties-the-quiver-3kc2</link>
      <guid>https://dev.to/miketypes/day-3-athena-empties-the-quiver-3kc2</guid>
      <description>&lt;p&gt;&lt;strong&gt;Overview provided by&lt;/strong&gt;: &lt;a href="https://www.youtube.com/watch?v=RAkMac2QgjM" rel="noopener noreferrer"&gt;Alicia Ahl&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Github Link: &lt;a href="https://github.com/MikeAA97/DevOpsAllStarsChallenge2025/tree/main/Day3" rel="noopener noreferrer"&gt;https://github.com/MikeAA97/DevOpsAllStarsChallenge2025/tree/main/Day3&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;At a high level, this day's project centered on building an AWS Data Lake and understanding the components that make it up. I'd never used AWS Glue or Athena before, so this was a great introduction to those tools.&lt;/p&gt;

&lt;h3&gt;AWS Resources Used:&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;AWS Lambda&lt;/strong&gt;: Handles the logic for querying the API, transforming the data, and pushing it to S3.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;AWS S3&lt;/strong&gt;: Stores both the raw API data and the results of the Athena queries mentioned later in this section.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;AWS Glue Database&lt;/strong&gt;: A container that groups related AWS Glue Tables, segmenting them by project or dataset.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;AWS Glue Table&lt;/strong&gt;: Defines the location, schema, and format of the data so that other tools, such as Athena, know how to interpret it.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;AWS Athena Workgroup&lt;/strong&gt;: A container for queries and their shared settings. In this case I wanted to specify a custom output location instead of using the default.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;AWS Athena Queries&lt;/strong&gt;: A few template queries to start interacting with the dataset.&lt;/li&gt;
&lt;/ul&gt;
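
&lt;p&gt;To make the Glue pieces above concrete, here's a rough sketch of how the Glue Database and Table might be wired together in Terraform. The bucket reference, column names, and JSON SerDe are illustrative assumptions, not the exact code from the repo:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Hypothetical sketch -- resource names and schema are assumptions
resource "aws_glue_catalog_database" "day3" {
    name = "day3"
}

resource "aws_glue_catalog_table" "nba_players_table" {
    name          = "nba_players"
    database_name = aws_glue_catalog_database.day3.name
    table_type    = "EXTERNAL_TABLE"

    storage_descriptor {
        # Where the Lambda dropped the API data
        location      = "s3://${aws_s3_bucket.day3.bucket}/raw-data/"
        input_format  = "org.apache.hadoop.mapred.TextInputFormat"
        output_format = "org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat"

        # Assuming line-delimited JSON output from the Lambda
        ser_de_info {
            serialization_library = "org.openx.data.jsonserde.JsonSerDe"
        }

        columns {
            name = "firstname"
            type = "string"
        }
        columns {
            name = "team"
            type = "string"
        }
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Athena then only needs the database and table names to query the files in place.&lt;/p&gt;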

&lt;h3&gt;Terraform Experience&lt;/h3&gt;

&lt;p&gt;I initially struggled to manage Athena's query result output location because I was creating a new Athena database and table instead of simply referencing my Glue Catalog. I realized that the database creation existed only to manipulate the default (primary) workgroup's output location, which left me with a dummy database. Instead, I created a dedicated workgroup for my named queries and specified a custom output location within the workgroup resource.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_athena_workgroup" "output" {

    name = "output" #I'm bad with naming things

    configuration {
        result_configuration {
            output_location = "s3://${aws_s3_bucket.day3.bucket}/athena-results"
        }
    }
}



resource "aws_athena_named_query" "test-query" {

    name      = "EXAMPLE-QUERY"
    workgroup = aws_athena_workgroup.output.id
    database  = aws_glue_catalog_database.day3.name

    query = "SELECT * FROM ${aws_glue_catalog_database.day3.name}.${aws_glue_catalog_table.nba_players_table.name} limit 10;"

}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This gave me better control without needing a dummy database.&lt;/p&gt;

&lt;h3&gt;Key Takeaways&lt;/h3&gt;

&lt;p&gt;Terraform versus Boto3 is going to be an interesting battle as the architecture gets more complex. I'm already seeing how I have to refactor certain API calls to fit into a Terraform-centric model.&lt;br&gt;
Oddly enough, seeing the resource references within the Terraform resources helps me envision how everything relates to one another better than reading pure Python code does. It forces me to scrutinize every resource's place in the workflow.&lt;/p&gt;

&lt;h3&gt;Last Thoughts&lt;/h3&gt;

&lt;p&gt;I was able to reuse a lot of the logic from Days 1 and 2 to tackle this project, which saved a ton of time and let me redirect that energy toward understanding what I was trying to automate.&lt;br&gt;
I'll probably delay Day 4 to try out Splunk integrations for these previous projects, as well as work on my diagramming skills. Hopefully the entire process will become more streamlined and visually appealing going forward.&lt;/p&gt;

</description>
      <category>devopsallstarschallenge</category>
    </item>
    <item>
      <title>First Post!</title>
      <dc:creator>Michael Austin</dc:creator>
      <pubDate>Wed, 08 Jan 2025 05:42:23 +0000</pubDate>
      <link>https://dev.to/miketypes/first-post-5bn4</link>
      <guid>https://dev.to/miketypes/first-post-5bn4</guid>
      <description>&lt;p&gt;This is my first tech-related post ever, and probably my first social-media-esque post in over 2 years!&lt;/p&gt;

&lt;p&gt;Here are some thoughts on Day 2 of the 30 Day DevOps Challenge I signed up for. (I haven't fully digested Day 1 yet, but I'll come around to that.)&lt;/p&gt;

&lt;h3&gt;Day 2 -&amp;gt; Finally getting emails from the NBA&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Code and Architecture provided by&lt;/strong&gt;: &lt;a href="https://www.youtube.com/watch?v=09WfkKc0x_Q&amp;amp;t=1430s" rel="noopener noreferrer"&gt;YouTube Video&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The goal of this project was to familiarize myself with AWS tooling and working with external APIs. I utilized the &lt;strong&gt;SportsData.io API&lt;/strong&gt; for NBA game information for the day. The only prerequisite for using this API is signing up for an account.&lt;/p&gt;

&lt;h3&gt;AWS Resources Used:&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;AWS Lambda&lt;/strong&gt;: Handles the logic for querying the API and transforming the data into a human-readable format.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AWS SNS&lt;/strong&gt;: Sends the data to a specified email address.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AWS EventBridge&lt;/strong&gt;: Schedules the Lambda function invocation at set times.&lt;/li&gt;
&lt;/ul&gt;
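
&lt;p&gt;Since I deployed everything with Terraform, the EventBridge-to-Lambda wiring can be sketched roughly like this. The resource names and schedule are hypothetical, and the Lambda permission is the piece that's easy to forget:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Hypothetical sketch -- names and schedule are assumptions
resource "aws_cloudwatch_event_rule" "daily_games" {
    name                = "daily-games-trigger"
    schedule_expression = "cron(0 13 * * ? *)" # every day at 13:00 UTC
}

resource "aws_cloudwatch_event_target" "invoke_lambda" {
    rule = aws_cloudwatch_event_rule.daily_games.name
    arn  = aws_lambda_function.game_notifier.arn
}

# Without this, EventBridge has no permission to invoke the function
resource "aws_lambda_permission" "allow_eventbridge" {
    statement_id  = "AllowEventBridgeInvoke"
    action        = "lambda:InvokeFunction"
    function_name = aws_lambda_function.game_notifier.function_name
    principal     = "events.amazonaws.com"
    source_arn    = aws_cloudwatch_event_rule.daily_games.arn
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;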

&lt;h3&gt;Terraform Experience&lt;/h3&gt;

&lt;p&gt;To challenge myself, I decided to deploy all the AWS resources using &lt;strong&gt;Terraform&lt;/strong&gt;. While I use Terraform regularly at work, I had never set it up on my personal device, so bootstrapping it was an interesting experience.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Highlights of getting Terraform set up:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Terraform Version 1.10.0&lt;/strong&gt;: I opted for this version because it supports native state-locking in S3 buckets, which was a key feature I wanted to try out (I still use version 0.13 at work).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Installing Terraform&lt;/strong&gt;: When I ran &lt;code&gt;brew install terraform&lt;/code&gt;, it only installed version 1.5, which didn’t meet my needs. To resolve this, I installed &lt;strong&gt;tfenv&lt;/strong&gt;, a tool that helps manage multiple Terraform versions. After installing &lt;code&gt;tfenv&lt;/code&gt;, I ran &lt;code&gt;tfenv install 1.10.0&lt;/code&gt; and &lt;code&gt;tfenv use 1.10.0&lt;/code&gt; and was good to go.&lt;/li&gt;
&lt;/ul&gt;
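
&lt;p&gt;For anyone curious, the native S3 state locking mentioned above is a one-line backend setting in Terraform 1.10+. The bucket and key here are placeholders:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform {
    required_version = "&amp;gt;= 1.10.0"

    backend "s3" {
        bucket = "my-terraform-state-bucket" # placeholder
        key    = "devops-challenge/terraform.tfstate"
        region = "us-east-1"
        # Native S3 locking via a lock file object -- no DynamoDB table needed
        use_lockfile = true
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;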

&lt;h3&gt;Key Focus Areas for the Terraform Module&lt;/h3&gt;

&lt;p&gt;When converting the project from individual resources to a Terraform module, I focused on:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Clear and relevant resource naming&lt;/strong&gt; (I’m not great at naming things)&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Ensuring sensitive data was not committed to code&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Making the module generic and reusable&lt;/strong&gt; for others, with clear instructions on how to use it.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;Last Thoughts&lt;/h3&gt;

&lt;p&gt;It's been great to use my personal device for a change. I've definitely taken for granted how well set up my work environment is, so achieving some form of parity is a main focus during the early days of this challenge.&lt;/p&gt;

&lt;p&gt;When time permits, I’d like to explore the NBA data further, possibly ingesting it into &lt;strong&gt;Splunk&lt;/strong&gt; to create visualizations (and try not to incur any significant AWS fees in the process). It'd also be nice to refactor the logic into Golang to get more experience with a different programming language in a practical-ish way.&lt;/p&gt;

&lt;p&gt;Excited for what's in store for Day 3!&lt;/p&gt;

</description>
      <category>devopsallstarschallenge</category>
    </item>
  </channel>
</rss>
