<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Jay Viloria</title>
    <description>The latest articles on DEV Community by Jay Viloria (@jviloria96744).</description>
    <link>https://dev.to/jviloria96744</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F438384%2F25981ade-9d10-464d-860a-cb2f25a64b1c.png</url>
      <title>DEV Community: Jay Viloria</title>
      <link>https://dev.to/jviloria96744</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/jviloria96744"/>
    <language>en</language>
    <item>
      <title>Event-Driven Python ETL: ACloudGuru September 2020 Challenge</title>
      <dc:creator>Jay Viloria</dc:creator>
      <pubDate>Thu, 01 Oct 2020 20:32:37 +0000</pubDate>
      <link>https://dev.to/jviloria96744/event-driven-python-etl-acloudguru-september-2020-challenge-ofn</link>
      <guid>https://dev.to/jviloria96744/event-driven-python-etl-acloudguru-september-2020-challenge-ofn</guid>
      <description>&lt;p&gt;I just completed the ACloudGuru September 2020 challenge but in reality I feel like I have only reached one ending of a Choose-Your-Own-Adventure game.&lt;/p&gt;

&lt;p&gt;Basically, the challenge is to create an automated ETL process (run once daily) that takes two COVID-19 data sources, merges and cleans them, applies some transformations, saves the result to a database of our choosing and sends notifications about the results of the process.  A dashboard was then required that used the post-ETL data as a source.  Here is the official challenge &lt;a href="https://acloudguru.com/blog/engineering/cloudguruchallenge-python-aws-etl" rel="noopener noreferrer"&gt;announcement&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Although there were many individual steps listed in the instructions, my approach can be broken into the following parts which I'll expand on in later sections:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Analysis: Time spent exploring data sources, considering different solution routes&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;CI/CD&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;ETL Construction: Developing the Extract/Transform/Load logic&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;DB Choice: Choosing the right DB and how I went with a csv file and S3 as my "database"&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Notifications: SNS Notification set-up...and then tear-down...and then re-set-up...&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;ETL/Unit/Smoke Testing: Manually testing process, writing unit tests and trying to do a proper smoke test&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Data Consumption/Dashboard&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Analysis
&lt;/h2&gt;

&lt;p&gt;I spent a fair bit of time before doing any "work" just thinking about how I wanted to approach the challenge.  I read the instructions a few times and took a look at the data.  I'll list some of my initial thoughts/observations here and relate them to later choices:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;After looking over the QuickSight and Tableau Public sites, I made the decision to do the frontend/dashboard myself.  I wasn't sure at that time if I was going to use a framework or simply vanilla JS.  I was leaning towards React though.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;As for CI/CD of resources, I have been doing a lot with GitHub Actions, so I decided to continue using it instead of the Code* line of products that AWS offers.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;After glancing at the initial data, I knew I wanted to add fields, specifically incremental data; the existing fields were all cumulative cases/deaths/recoveries. I also noticed that at times, older data had their records updated (adjustments to cases/deaths/recoveries).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;My initial impressions were that I wanted a relational DB, although cost/complexity was a big concern. This was one of the considerations that made the analysis step take longer than expected.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Forrest listed some extra challenging steps, one of them being a smoke test that ran the process with dummy data and verified the notification message.  I knew I wanted to include this in my solution.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  CI/CD
&lt;/h2&gt;

&lt;p&gt;After deciding on using GitHub Actions for my workflow, I split my project into two repositories: back-end/ETL/Data API and front-end/dashboard.&lt;/p&gt;

&lt;h4&gt;
  
  
  ETL/API
&lt;/h4&gt;

&lt;p&gt;I used SAM to handle the back-end.  Because I decided to do the front-end myself, I didn't have native integrations into a dashboarding software/service, so I knew I needed an API for the data to be consumed.  I considered splitting the API into a separate SAM project but decided against it.  My rationale was that for a hypothetical ETL change, for example adding a data source, the API would probably change as well to make the new data available to the user.&lt;/p&gt;

&lt;h4&gt;
  
  
  Dashboard
&lt;/h4&gt;

&lt;p&gt;By this time, I decided to go with React for the dashboard so I chose CDK to structure my project/provision my resources.  This was a choice of convenience because I had just completed a template for a React project that is deployed using CDK. You can find the template repo &lt;a href="https://github.com/jviloria96744/react-aws-cdk-template" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  ETL Construction
&lt;/h2&gt;

&lt;p&gt;This was a fairly straightforward portion of the project. I split the Extract/Transform/Load steps into separate &lt;code&gt;.py&lt;/code&gt; files and had one Lambda handler control everything.  I used one Lambda for the core ETL process, though I considered an alternative approach with de-coupled Extract/Transform/Load parts.  In the end, I stuck with the single-Lambda approach mostly for simplicity, but I'll briefly describe the alternative.&lt;/p&gt;
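&lt;p&gt;As a rough illustration of that single-handler layout, here is a minimal sketch; the function names and payloads are mine, not the project's actual code:&lt;/p&gt;

```python
# Minimal sketch of the single-Lambda ETL layout described above.
# The extract/transform/load functions stand in for the separate
# .py modules; all names and payloads here are illustrative.

def extract():
    # download the raw COVID-19 data sources (details omitted)
    return ["source_a_rows", "source_b_rows"]

def transform(raw_sources):
    # merge/clean the sources and add derived (incremental) fields
    return {"rows": len(raw_sources)}

def load(clean_data):
    # persist the result to the "database" (a csv file in S3)
    return {"status": "SUCCESS", "rows_loaded": clean_data["rows"]}

def handler(event, context):
    # one handler chains all three steps, so a failure anywhere
    # fails the single invocation
    return load(transform(extract()))
```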

&lt;h4&gt;
  
  
  Alternative Approach
&lt;/h4&gt;

&lt;p&gt;I considered having an Extract Lambda save the downloaded data to a "raw data" S3 Bucket that triggered the Transform/Load Lambdas.  The Transform Lambda would have needed logic to ensure that all data sources were extracted before doing its thing.  I considered this approach when thinking about the scalability of a single ETL Lambda: if the data sets were larger or the data sources more numerous, the Extract step might take much longer and the penalty for a failure in the Transform/Load steps would be much more severe (restarting downloads).&lt;/p&gt;

&lt;p&gt;By the same logic, I considered a separate Transform Lambda/process: as the data gets bigger, the Transform step could exceed Lambda's capabilities (the 15-minute run time and memory constraints) and might need a Fargate/EC2 process to complete.&lt;/p&gt;

&lt;h4&gt;
  
  
  AWS Glue
&lt;/h4&gt;

&lt;p&gt;I didn't have experience with AWS Glue but it seemed like a viable option, especially with my "database" choice described later. Part of me wants to "choose another path" and re-do an alternate version of this project using Glue instead.&lt;/p&gt;

&lt;h2&gt;
  
  
  DB Choice
&lt;/h2&gt;

&lt;p&gt;I spent a lot of time going over this choice. My knee-jerk reaction when first looking at the data was a relational DB.  This feeling was strengthened by my desire to add fields as mentioned in the Analysis section.  &lt;/p&gt;

&lt;p&gt;Unfortunately, when I looked into RDS, it was expensive as well as more complex to integrate with AWS Lambda.  The main blocker was cost, though: incurring the costs of standing up an RDS instance for 200-300 records seemed inappropriate.  I briefly looked into Aurora Serverless but wasn't sure how scaling down to/up from 0 ACU worked.  This was a concern because my data was to be exposed through API Gateway/Lambda for consumption, so I wasn't sure how it would affect latency.&lt;/p&gt;

&lt;p&gt;My other choices were DynamoDB and the alternative that I ended up choosing: a csv file stored in an S3 Bucket.  I'll describe what went into my choice below.&lt;/p&gt;

&lt;h4&gt;
  
  
  Daily Data Diff
&lt;/h4&gt;

&lt;p&gt;I noticed during my initial analysis that when the data was updated, older records were at times updated/adjusted.  The DataFrame object has nice methods for finding the global differences between two objects, and &lt;code&gt;pandas&lt;/code&gt; can convert a csv file to a DataFrame in one operation.  With DynamoDB, the comparisons have to be done at the record/item level, or I suppose the result of a scan could be converted to a DataFrame.  Either way, it seemed as if more operations were necessary.&lt;/p&gt;
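&lt;p&gt;As a sketch of that diff idea (with made-up column names, assuming &lt;code&gt;pandas&lt;/code&gt; is available), an outer merge with &lt;code&gt;indicator=True&lt;/code&gt; surfaces both brand-new rows and adjusted historical rows in one pass:&lt;/p&gt;

```python
import io
import pandas as pd

# Hypothetical snapshots of yesterday's and today's csv files;
# the columns are illustrative, not the actual data sources.
yesterday = pd.read_csv(io.StringIO(
    "date,cases,deaths\n2020-09-01,10,1\n2020-09-02,15,2\n"))
today = pd.read_csv(io.StringIO(
    "date,cases,deaths\n2020-09-01,11,1\n2020-09-02,15,2\n2020-09-03,20,3\n"))

# An outer merge on all shared columns flags rows present only in
# today's file, catching new dates as well as adjusted old records.
merged = today.merge(yesterday, how="outer", indicator=True)
changed = merged[merged["_merge"] == "left_only"].drop(columns="_merge")
```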

&lt;h4&gt;
  
  
  Data Consumption Flexibility
&lt;/h4&gt;

&lt;p&gt;My initial leaning towards a relational DB was because of the ability to query the data flexibly.  In terms of scalability, I considered the case where this project was extended to include data at the state or county level, or where many more fields were added.  In that case, when consuming the data, I believe it should be filtered on the server(less) side first.  Also, flexible queries in DynamoDB require setting up Secondary Indexes, so I didn't like it as a choice.&lt;/p&gt;

&lt;p&gt;I felt that a csv file stored in S3, combined with a Lambda using the S3 Select functionality gave me the best options:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;In the case of new fields, there is a clear way to add new query flexibility in terms of query string parameters in the API endpoint.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;If the amount of data scaled up, allowing for different segmentations (country, state/province, county) and new consumers of this data, the cost of the data consumed by existing users would remain relatively predictable because, with S3 Select, you pay for the data after filtering takes place.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;If the amount of data scaled up enough to facilitate a migration to an RDS instance, the S3 Select logic should be mostly re-usable.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
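&lt;p&gt;The consumption path can be sketched roughly as follows; the field names and query-string mapping are my own assumptions, but &lt;code&gt;select_object_content&lt;/code&gt; is the boto3 call behind S3 Select:&lt;/p&gt;

```python
# Sketch of an API Lambda that maps query-string parameters onto an
# S3 Select expression. Field names (e.g. "country") are illustrative.

def build_expression(params):
    # turn optional query-string parameters into a WHERE clause
    clauses = [f"s.{field} = '{value}'" for field, value in sorted(params.items())]
    where = f" WHERE {' AND '.join(clauses)}" if clauses else ""
    return f"SELECT * FROM s3object s{where}"

def query_csv(bucket, key, params):
    import boto3  # imported lazily so build_expression is testable offline
    s3 = boto3.client("s3")
    resp = s3.select_object_content(
        Bucket=bucket, Key=key,
        ExpressionType="SQL", Expression=build_expression(params),
        InputSerialization={"CSV": {"FileHeaderInfo": "USE"}},
        OutputSerialization={"JSON": {}},
    )
    # the response is an event stream; collect the Records payloads
    return b"".join(
        event["Records"]["Payload"] for event in resp["Payload"] if "Records" in event
    )
```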

&lt;h2&gt;
  
  
  Notifications
&lt;/h2&gt;

&lt;p&gt;The DB choice was the most difficult choice I made in this project, but the choice on how to implement ETL notifications was by far the most frustrating choice.&lt;/p&gt;

&lt;p&gt;At first it seemed so simple: Lambda has a Destinations feature that allows automatic integration with a few different AWS services, one of them being SNS, for asynchronous Lambda invocations.  I thought it was perfect: I'd just hook that up and get automatic success/failure publishing to my subscribers (my email address).  Then I saw the notification...&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Frew3hap8rzdtj5et17q6.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Frew3hap8rzdtj5et17q6.jpg" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I looked into the Destinations documentation and the above was what I was stuck with (note: some content was blocked out of the screenshot above).  I understand that Destinations are probably meant more for machine triggers of services, so sending over a JSON-structured object makes sense.  For a human-readable notification, though, it was...less than ideal.  For a while I thought, well, technically it meets the requirements of the challenge, but it looked terrible and it made the smoke test much more difficult.  It was tempting to just leave that Destinations configuration in my SAM template and move on, but I didn't.&lt;/p&gt;

&lt;p&gt;I broke up the ETL Lambda-SNS connection and instead configured the Destination of my ETL process to be another Lambda function responsible for publishing to SNS. This accomplished two main things:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Better Structured/Formatted Notifications&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fvcpo4x9sbtugxo8nx6ri.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fvcpo4x9sbtugxo8nx6ri.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Filtered Notifications&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F86mbxbk1d4eg26wc5rwk.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F86mbxbk1d4eg26wc5rwk.PNG" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I was able to use the SNS message attributes to filter messages to subscribers.  I used this to create a subscriber that only received messages during my smoke-test runs, using an environment parameter to denote testing vs. production runs.&lt;/p&gt;
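&lt;p&gt;A sketch of the publishing side (the topic ARN, attribute name and values are placeholders); the message attribute is what a subscription filter policy matches on:&lt;/p&gt;

```python
import json

# Sketch of the filtered-notification idea. The publishing Lambda tags
# each message with an "environment" attribute; a subscription filter
# policy then routes testing messages only to the test subscriber.
# The attribute name and values are illustrative.

def build_publish_args(topic_arn, subject, body, environment):
    return {
        "TopicArn": topic_arn,
        "Subject": subject,
        "Message": body,
        "MessageAttributes": {
            "environment": {"DataType": "String", "StringValue": environment},
        },
    }

# The testing subscription (the SQS Queue) would carry a filter policy like:
TESTING_FILTER_POLICY = json.dumps({"environment": ["testing"]})

def publish(args):
    import boto3  # lazy import keeps build_publish_args testable offline
    boto3.client("sns").publish(**args)
```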

&lt;p&gt;I used an SQS Queue as the target of my testing notifications because it allowed for programmatic verification of received messages and consumed the message directly.  What I mean is that I had considered using another Lambda for the test messages, writing the message content to S3/DynamoDB to verify the SNS notifications were functioning; but then I would also be testing the Lambda -&amp;gt; {S3, DynamoDB} connection, which I felt muddied the waters of the test, so to speak.  This is one area where I would like to learn what standard practice is, because my solution felt very hacky.&lt;/p&gt;

&lt;h2&gt;
  
  
  ETL/Unit/Smoke Testing
&lt;/h2&gt;

&lt;p&gt;By this time I had three Lambda functions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Core ETL Function&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;SNS Publishing Function&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;API Gateway Proxy Function&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Most of my unit tests focused on the ETL function: I tested the Extract/Transform/Load modules separately and tried to cover different data problems that might come up.&lt;/p&gt;

&lt;p&gt;One unfortunate consequence of using S3/S3 Select as my database/data consumption mechanism is that &lt;code&gt;moto&lt;/code&gt;, the standard mocking library for AWS services, does not yet have support for it so I couldn't include a proper unit test for my API Gateway Proxy function.&lt;/p&gt;

&lt;h4&gt;
  
  
  Smoke Test
&lt;/h4&gt;

&lt;p&gt;Thanks to all the machinery I set up around the smoke test (SNS Message Attributes, the SQS Queue, the SNS publishing Lambda), the actual smoke test script was very straightforward. It invokes the ETL Lambda with a &lt;code&gt;testing&lt;/code&gt; environment value, polls an SQS Queue to determine whether the correct SNS message was sent, then cleans up any test ETL files created in the "database". In this case, I used S3 prefixes to separate production and testing data.&lt;/p&gt;
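&lt;p&gt;For concreteness, that flow can be sketched like this; all resource names, the &lt;code&gt;testing/&lt;/code&gt; prefix and the message format are assumptions on my part:&lt;/p&gt;

```python
import json

# Sketch of the smoke-test flow: invoke the ETL Lambda in testing mode,
# poll the SQS Queue for the expected SNS message, then clean up the
# testing/ prefix. Resource names and payload shapes are illustrative.

def message_matches(sqs_body, expected_status="SUCCESS"):
    # SNS-to-SQS delivery wraps the original message in a JSON envelope
    envelope = json.loads(sqs_body)
    return expected_status in envelope.get("Message", "")

def run_smoke_test(function_name, queue_url, bucket, attempts=12):
    import boto3  # lazy import so message_matches stays testable offline
    lam = boto3.client("lambda")
    sqs = boto3.client("sqs")
    s3 = boto3.client("s3")
    lam.invoke(FunctionName=function_name, InvocationType="Event",
               Payload=json.dumps({"environment": "testing"}))
    passed = False
    for _ in range(attempts):  # bounded long-polling of the queue
        resp = sqs.receive_message(QueueUrl=queue_url, WaitTimeSeconds=10)
        for msg in resp.get("Messages", []):
            passed = passed or message_matches(msg["Body"])
            sqs.delete_message(QueueUrl=queue_url,
                               ReceiptHandle=msg["ReceiptHandle"])
        if passed:
            break
    # clean up any test ETL files regardless of outcome
    listed = s3.list_objects_v2(Bucket=bucket, Prefix="testing/")
    for obj in listed.get("Contents", []):
        s3.delete_object(Bucket=bucket, Key=obj["Key"])
    return passed
```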

&lt;h2&gt;
  
  
  Data Consumption/Dashboard
&lt;/h2&gt;

&lt;p&gt;For this part of the project, I put together a quick React app and used the &lt;a href="https://recharts.org/en-US" rel="noopener noreferrer"&gt;Recharts&lt;/a&gt; library to create a graph out of the data returned by the API call I set up above.&lt;/p&gt;

&lt;p&gt;I included a few controls to toggle how the data is aggregated and whether the user is viewing Cases/Deaths/Recoveries. Although I initially wanted to add filters for some of the fields I added during the Transform process, I left them as possible future enhancements (translation: I got a little tired and wanted to wrap up this project). A screenshot of the dashboard is provided below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F0gev9n3q0arnkz6ax6eb.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F0gev9n3q0arnkz6ax6eb.PNG" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Real-Time Dashboard Updates
&lt;/h2&gt;

&lt;p&gt;I didn't implement this in my project but it was listed as an extra challenging step in the directions so I thought I would describe how I might approach that.&lt;/p&gt;

&lt;p&gt;Since both data sources were in public repos, I think I would create a function that hits the GitHub API endpoint for both sources at some frequency and does a diff comparison of those specific files. If there are changes to those files, it would trigger the ETL event.&lt;/p&gt;

&lt;p&gt;This technically wouldn't be real-time, but with a frequent enough polling period against the GitHub API it could be close. It would also avoid downloading the file(s) unless necessary, although I'm not sure whether there are quota restrictions on the GitHub API.&lt;/p&gt;
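&lt;p&gt;A rough sketch of that polling function (the repo/file paths are placeholders; note that unauthenticated GitHub API calls are rate-limited, so the polling period would need to respect that):&lt;/p&gt;

```python
import json
from urllib.request import urlopen

# Sketch of the near-real-time trigger: ask the GitHub API for the latest
# commit touching each source file and fire the ETL only when a sha changes.
# The repo and file paths used with these helpers are placeholders.

def latest_sha(repo, path):
    # the commits endpoint, filtered by path, returns newest-first
    url = f"https://api.github.com/repos/{repo}/commits?path={path}"
    with urlopen(url) as resp:
        commits = json.load(resp)
    return commits[0]["sha"] if commits else None

def should_trigger(previous_shas, current_shas):
    # trigger if any tracked file has a commit we haven't processed yet
    return any(current_shas.get(k) != previous_shas.get(k) for k in current_shas)
```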

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;I very much enjoyed this challenge. I still feel like there were a bunch of different ways of accomplishing certain steps and I am very tempted to do multiple iterations of everything.&lt;/p&gt;

&lt;p&gt;Feel free to check out my &lt;a href="https://acg-covid-challenge.jviloria.com/" rel="noopener noreferrer"&gt;dashboard&lt;/a&gt; or the two repos that make up the project: &lt;a href="https://github.com/jviloria96744/acg-covid-challenge-frontend" rel="noopener noreferrer"&gt;front-end&lt;/a&gt; and &lt;a href="https://github.com/jviloria96744/acg-covid-challenge-backend" rel="noopener noreferrer"&gt;back-end&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The overall back-end architecture is provided below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F6fwxuckwst7nc1cwxxn4.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F6fwxuckwst7nc1cwxxn4.PNG" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Overall, if this is what we can expect from the ACloudGuru Monthly Challenges, I am very much looking forward to them.&lt;/p&gt;

&lt;p&gt;P.S. I apologize for the length of this post; I just sat down and it all poured out.&lt;/p&gt;

</description>
      <category>serverless</category>
      <category>aws</category>
      <category>python</category>
    </item>
    <item>
      <title>Deploy React Site Using GitHub Actions &amp; AWS CDK</title>
      <dc:creator>Jay Viloria</dc:creator>
      <pubDate>Tue, 15 Sep 2020 19:19:37 +0000</pubDate>
      <link>https://dev.to/jviloria96744/deploy-react-site-using-github-actions-aws-cdk-5cbf</link>
      <guid>https://dev.to/jviloria96744/deploy-react-site-using-github-actions-aws-cdk-5cbf</guid>
      <description>&lt;h3&gt;
  
  
  My Workflow
&lt;/h3&gt;

&lt;p&gt;This is a React template that uses a GitHub Actions Workflow to deploy static files to AWS in multiple development environments with a custom domain and HTTPS.  The GH Actions Workflow leverages AWS CDK to create resources/provision services necessary for deployment.&lt;/p&gt;

&lt;p&gt;The workflow assumes a Release Flow-like branching strategy that allows for the following development workflow:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;automated testing for "feature/*" branch pushes&lt;/li&gt;
&lt;li&gt;automated testing/dev deployments on pushes and pull requests into master&lt;/li&gt;
&lt;li&gt;automated testing/stg deployments/artifact storage upon tag pushes&lt;/li&gt;
&lt;li&gt;manual workflow for prod deployments where tagged version is specified and deployed from artifact storage&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Submission Category:
&lt;/h3&gt;

&lt;p&gt;DIY Deployments&lt;/p&gt;

&lt;h3&gt;
  
  
  Yaml File or Link to Code
&lt;/h3&gt;


&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev.to%2Fassets%2Fgithub-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/jviloria96744" rel="noopener noreferrer"&gt;
        jviloria96744
      &lt;/a&gt; / &lt;a href="https://github.com/jviloria96744/react-aws-cdk-template" rel="noopener noreferrer"&gt;
        react-aws-cdk-template
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      React JS template using AWS Cloud Development Kit (CDK) to deploy static site to AWS with a custom domain name.
    &lt;/h3&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="MD"&gt;
&lt;div class="markdown-heading"&gt;
&lt;h1 class="heading-element"&gt;React/AWS CDK Template&lt;/h1&gt;
&lt;/div&gt;
&lt;p&gt;This project is a template used to deploy a static React site to an Amazon S3 Bucket.&lt;/p&gt;
&lt;p&gt;The GitHub Actions Workflow is responsible for the CI/CD portion of this project and leverages the AWS Cloud Development Kit to create the S3 Bucket, CloudFront Distribution, SSL Certificate and associated permissions/access policies and will deploy to multiple sub-domains given a custom domain that the user owns.&lt;/p&gt;
&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;Get Started&lt;/h2&gt;
&lt;/div&gt;
&lt;div class="markdown-heading"&gt;
&lt;h3 class="heading-element"&gt;Clone/Fork Project&lt;/h3&gt;
&lt;/div&gt;
&lt;p&gt;Within your own GitHub account, create a repo using a cloned local repository or by forking this repository. Clone the repository using the following command,&lt;/p&gt;
&lt;p&gt;&lt;code&gt;git clone https://github.com/jviloria96744/react-aws-cdk-template.git&lt;/code&gt;&lt;/p&gt;
&lt;div class="markdown-heading"&gt;
&lt;h3 class="heading-element"&gt;GitHub Secrets/AWS Prerequisites&lt;/h3&gt;

&lt;/div&gt;
&lt;p&gt;The GitHub Actions Workflow requires four secrets that are all associated with prerequisite activities that need to be completed in AWS &lt;a href="https://aws.amazon.com/premiumsupport/knowledge-center/create-and-activate-aws-account/" rel="nofollow noopener noreferrer"&gt;(AWS Account Set Up Tutorial)&lt;/a&gt;:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;AWS_ACCESS_KEY_ID&lt;/code&gt; / &lt;code&gt;AWS_SECRET_ACCESS_KEY&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;&lt;code&gt;AWS_DOMAIN_NAME&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;AWS_REGION&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The Access Key ID and Secret Access Key are the credentials of an AWS 'User' that…&lt;/p&gt;
&lt;/div&gt;
  &lt;/div&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/jviloria96744/react-aws-cdk-template" rel="noopener noreferrer"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;/div&gt;


</description>
      <category>actionshackathon</category>
    </item>
    <item>
      <title>Cloud Resume Challenge: My Experience &amp; My New Addiction to Green Check Marks</title>
      <dc:creator>Jay Viloria</dc:creator>
      <pubDate>Thu, 23 Jul 2020 04:20:43 +0000</pubDate>
      <link>https://dev.to/jviloria96744/cloud-resume-challenge-my-experience-my-new-addiction-to-green-check-marks-5h7o</link>
      <guid>https://dev.to/jviloria96744/cloud-resume-challenge-my-experience-my-new-addiction-to-green-check-marks-5h7o</guid>
      <description>&lt;p&gt;It was the latter half of May that my journey down the AWS rabbit hole began. I had just left my job as a Econ/Data Analyst turned Software Engineer doing web development and wanted to commit myself to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Learning desirable skills for my market&lt;/li&gt;
&lt;li&gt;Getting to some side projects that had been sitting collecting dust&lt;/li&gt;
&lt;li&gt;Getting &lt;em&gt;hungry&lt;/em&gt; again in terms of &lt;em&gt;wanting&lt;/em&gt; to do development and build things&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;AWS seemed to fit nicely for covering all three points. Over the next 6 weeks, I focused on two parallel efforts: learning AWS by way of certifications (Cloud Practitioner and Solutions Architect-Associate) and making headway on my side projects using any AWS services I could take advantage of. During that time, while looking for resources to learn from, I came across the Cloud Resume Challenge a few times but didn't think to attempt it. Once I attained the SA-Associate certification, I decided to revisit the Cloud Resume Challenge for the following reasons:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;I thought it was a good time to put together a portfolio and get back out on the market&lt;/li&gt;
&lt;li&gt;Because of the parallel nature of my approach, some of the hands-on AWS experience I had gained definitely didn't reflect best practices. I liked that the Cloud Resume Challenge required use of SAM which I had been wanting an excuse to learn.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The &lt;a href="https://cloudresumechallenge.dev/instructions/"&gt;Challenge&lt;/a&gt; is conceptually straightforward: create a portfolio/resume website, host it through AWS with a back-end that keeps track of visitors to the page, all using proper Source Control/CI/CD practices. I'll describe my experience roughly in the order in which I did things.&lt;/p&gt;

&lt;h1&gt;
  
  
  Front End Resume Website
&lt;/h1&gt;

&lt;p&gt;Part of my reason for taking on the challenge was to put together a portfolio, so I actually spent a good amount of time, relatively speaking, on this portion (although, looking at the finished product, it may not seem like it; a designer I am definitely not). What I most liked about this portion was the bare-bones nature of the website. I had web development experience but had definitely gotten used to relying on frameworks such as Material UI or Bootstrap. It was nice doing everything with only HTML and my own CSS.&lt;/p&gt;

&lt;h1&gt;
  
  
  Hosting/DNS/HTTPS
&lt;/h1&gt;

&lt;p&gt;As part of one of my side projects, I had gone through the AWS tutorials on S3 Static Site Hosting behind a CloudFront Distribution and Route 53 Domain Name so this part was relatively quick and straightforward. Most of it was just avoiding the stumbling blocks I ran into while doing this with my side projects, such as making sure the certificate is based in us-east-1 and getting all my ducks in a row with regards to redirecting non-root to root domains through CloudFront distributions and making sure everything was secure.&lt;/p&gt;

&lt;h1&gt;
  
  
  Back-End Visitor Count
&lt;/h1&gt;

&lt;p&gt;Most of my side projects/AWS dabbling used serverless pieces like Lambda and DynamoDB. To be honest, this was due more to cost than to any other architectural considerations. Fortunately, though, that experience meant the Python/JavaScript work needed to get the back-end running was nothing I hadn't seen before.&lt;/p&gt;

&lt;h1&gt;
  
  
  Infrastructure as Code &amp;amp; CI/CD
&lt;/h1&gt;

&lt;p&gt;This was the area where I felt I learned the most, and the area that really hit that third bullet point in my initial list of goals for wanting to learn AWS. In my side projects, I admit (embarrassingly) that there was definitely a lot of Lambda function code being edited in the console. In my mind I knew it wasn't best practice, but I almost treated it like a rite of passage in a sense; some type of hazing process I needed to go through to properly see the benefits of CloudFormation and SAM. As soon as I got my first Stack built from a commit through my GitHub Actions Workflow (which took much less effort than I was expecting), I thought to myself, "Why wasn't I doing &lt;em&gt;everything&lt;/em&gt; this way sooner?" I am now very excited to dive deep into CloudFormation/SAM and look into CDK, which, from what I have read, I think I could be very on board with.  I can't wait to see more green check marks and circles.&lt;/p&gt;

&lt;h1&gt;
  
  
  Final Thoughts
&lt;/h1&gt;

&lt;p&gt;This project was a nice challenge that forced me to update my &lt;a href="https://jviloria.com/"&gt;resume/portfolio&lt;/a&gt; and gave me a proper push into the world of Infrastructure as Code that I don't think I'll be leaving any time soon.  A huge thank you to Forrest Brazeal for putting together this challenge.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>python</category>
      <category>serverless</category>
      <category>devops</category>
    </item>
  </channel>
</rss>
