<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: bradenrichardson</title>
    <description>The latest articles on DEV Community by bradenrichardson (@bradenrichardson).</description>
    <link>https://dev.to/bradenrichardson</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F774076%2F2958d0ee-7466-4e95-8842-fe4fa21502d7.png</url>
      <title>DEV Community: bradenrichardson</title>
      <link>https://dev.to/bradenrichardson</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/bradenrichardson"/>
    <language>en</language>
    <item>
      <title>Creating an Automated Personal Finances Dashboard with AWS (Part 6) - CDK</title>
      <dc:creator>bradenrichardson</dc:creator>
      <pubDate>Sat, 08 Jan 2022 23:53:41 +0000</pubDate>
      <link>https://dev.to/bradenrichardson/creating-an-automated-personal-finances-dashboard-with-aws-part-6-cdk-4eip</link>
      <guid>https://dev.to/bradenrichardson/creating-an-automated-personal-finances-dashboard-with-aws-part-6-cdk-4eip</guid>
      <description>&lt;p&gt;The final installment of this series details wrapping our AWS resources in some CDK (Cloud Development Kit) magic.&lt;/p&gt;

&lt;p&gt;Disclaimer: the scope of the CDK work does not include the Athena workgroup or the Athena DynamoDB connector - these will be addressed at a later point.&lt;/p&gt;

&lt;p&gt;I'll be honest: before this blog series I had only used CloudFormation with YAML files, and while I knew about CDK, I didn't know just how good it would be.&lt;br&gt;
Developing in CloudFormation can be cumbersome, and it can impede the creative process that is so necessary when sandboxing. That's why I scheduled it at the end: after all the fun stuff is done, we go through and define our resources, making sure that everything deploys correctly.&lt;/p&gt;

&lt;p&gt;During this segment I realised something: I should have been using CDK to develop from the start. It actually reduces development time and complexity by implementing best-practice architecture by default and filling in the gaps for you. Let's have a look at the differences between declarative CloudFormation and CDK.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;CDK is object based; CloudFormation is declarative&lt;/li&gt;
&lt;li&gt;CloudFormation needs 100% of the information; CDK will fill in the gaps (with best-practice architecture)&lt;/li&gt;
&lt;li&gt;CloudFormation requires a deployment mechanism built around it for automated workloads; CDK has its own CLI and tooling&lt;/li&gt;
&lt;/ol&gt;
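&lt;p&gt;To make point 2 concrete, here's a sketch (not part of this project's stack, and assuming aws-cdk-lib v2) of how little CDK needs to stand up an S3 bucket - the unique physical bucket name and sensible default policies are filled in for you:&lt;/p&gt;

```python
# Sketch only - requires aws-cdk-lib, so it is shown commented out:
#
# import aws_cdk as cdk
# from aws_cdk import aws_s3 as s3
#
# class SketchStack(cdk.Stack):
#     def __init__(self, scope, construct_id, **kwargs):
#         super().__init__(scope, construct_id, **kwargs)
#         # One line; the equivalent CloudFormation template would spell
#         # out every property explicitly.
#         s3.Bucket(self, 'ExampleBucket')
```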
&lt;h3&gt;
  
  
  Process
&lt;/h3&gt;

&lt;p&gt;To turn our deployment into a CDK workload I did the following:&lt;/p&gt;

&lt;p&gt;First, I created a CDK project following this resource: &lt;a href="https://docs.aws.amazon.com/cdk/v2/guide/getting_started.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/cdk/v2/guide/getting_started.html&lt;/a&gt;. Then I:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Created a construct object for each required resource&lt;/li&gt;
&lt;li&gt;Assigned privileges to the resources&lt;/li&gt;
&lt;li&gt;Deployed! &lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;It was that easy. CDK handles unique bucket names, IAM roles and policies - a whole bunch of stuff that is normally tedious and boring to develop.&lt;/p&gt;

&lt;p&gt;Here's the code block as it stands:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;class CdkStack(cdk.Stack):
    def __init__(self, scope: cdk.Construct, construct_id: str, **kwargs) -&amp;gt; None:
        super().__init__(scope, construct_id, **kwargs)

        dynamodb = _dynamo.Table(self, 'cdk_test',
            partition_key=_dynamo.Attribute(name='TransactionID', type=_dynamo.AttributeType.STRING) 
        )

        requests_layer = _lambda.LayerVersion(self, 'requests_layer',
            code=_lambda.Code.from_asset('layers/_requests'),  # forward slashes work on any OS
            compatible_runtimes=[_lambda.Runtime.PYTHON_3_9],
            compatible_architectures=[_lambda.Architecture.X86_64],
            description='A layer to send API requests'
        )

        process_webhook = _lambda.Function(self, 'process_webhook',
            runtime=_lambda.Runtime.PYTHON_3_9,
            handler='lambda_function.handler',
            code=_lambda.Code.from_asset('lambdas/process_webhook'),
            layers=[requests_layer]
        )

        provision_user = _lambda.Function(self, "provision_user",
            runtime=_lambda.Runtime.PYTHON_3_9,
            handler='lambda_function.handler',
            code=_lambda.Code.from_asset('lambdas/provision_user'),
            layers=[requests_layer]
        )

        dynamodb.grant_read_write_data(process_webhook)

        dynamodb.grant_read_write_data(provision_user)

        process_webhook_integration = HttpLambdaIntegration("Process Webhook Integration", process_webhook)

        api = aws_apigatewayv2.HttpApi(self, "HttpApi")

        api.add_routes(
            path="/webhook",
            methods=[aws_apigatewayv2.HttpMethod.POST],
            integration=process_webhook_integration
        )
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Results
&lt;/h3&gt;

&lt;p&gt;Here's a screenshot of the CDK stack deploying successfully:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj6m1cb9itui72nvx5g09.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj6m1cb9itui72nvx5g09.jpeg" alt="CDK Deploy"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And here's the current state of what we've developed so far:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F766zc65ors3n03g6dj20.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F766zc65ors3n03g6dj20.png" alt="v0.1"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;That brings a wrap to this blog series. I hope that you learnt a few things about AWS and how you can apply it to the problems you encounter in your own life - I certainly learnt a lot along the way!&lt;/p&gt;

&lt;p&gt;I'll be continuing development of this project in &lt;a href="https://github.com/bradenrichardson/diy_dashboard" rel="noopener noreferrer"&gt;DIY Dashboard&lt;/a&gt;, an open source repo designed to help you quickly stand up serverless infrastructure to create your own automated dashboards. There are many planned features and a few refactors on the cards as well - I need to sort out that pesky provision_new_user lambda!&lt;/p&gt;

</description>
      <category>aws</category>
      <category>cdk</category>
      <category>lambda</category>
      <category>quicksight</category>
    </item>
    <item>
      <title>Creating an Automated Personal Finances Dashboard with AWS - Part 5 (Quicksight)</title>
      <dc:creator>bradenrichardson</dc:creator>
      <pubDate>Tue, 04 Jan 2022 07:04:02 +0000</pubDate>
      <link>https://dev.to/bradenrichardson/creating-an-automated-personal-finances-dashboard-with-aws-part-5-quicksight-18mb</link>
      <guid>https://dev.to/bradenrichardson/creating-an-automated-personal-finances-dashboard-with-aws-part-5-quicksight-18mb</guid>
      <description>&lt;p&gt;G'day everyone! In the last post, DynamoDB was configured and receiving realtime transaction history.&lt;/p&gt;

&lt;p&gt;In this post we get down to the really good stuff, creating a personal finances dashboard!&lt;/p&gt;

&lt;p&gt;Quick side note: The purpose of this guide is technical rather than financial, so the underlying financial data has been scrambled.&lt;/p&gt;

&lt;h3&gt;
  
  
  Prerequisite: Getting DynamoDB data into Quicksight
&lt;/h3&gt;

&lt;p&gt;For this section I will defer to a fantastic article from another dev.to poster: &lt;br&gt;
&lt;a href="https://dev.to/aws-builders/finally-dynamodb-support-in-aws-quicksight-sort-of-2lbl"&gt;Using Athena data connectors to visualize DynamoDB data with AWS QuickSight&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Quicksight Configuration
&lt;/h3&gt;

&lt;p&gt;Alright, now that our data from DynamoDB is accessible to Quicksight, it's time to create a dataset.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create a new data set from an Athena data source&lt;/li&gt;
&lt;li&gt;Select the catalog that you created in the Athena guide&lt;/li&gt;
&lt;li&gt;Select the table that we created in our DynamoDB guide&lt;/li&gt;
&lt;li&gt;Hit visualize&lt;/li&gt;
&lt;li&gt;Make sure to schedule a refresh of the data; the frequency should align with your use case&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fef1xayv470cpjf2qqs99.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fef1xayv470cpjf2qqs99.PNG" alt="Schedule data refresh"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now that we have a default analysis created from the dataset, the fun begins.&lt;/p&gt;

&lt;h3&gt;
  
  
  Review Use Cases
&lt;/h3&gt;

&lt;p&gt;It's a good time to review our use cases for the dashboard; these will drive the visualisations that we create.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;To deep dive historical data and identify correlations &lt;/li&gt;
&lt;li&gt;To gain insight into current financial state&lt;/li&gt;
&lt;li&gt;To forecast future financial state and identify trends&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;At first glance, I think the first two are achievable in a simplified state given our current technical capabilities; the forecasting aspect will need to be addressed in a future revision. Let's break down those first two with some ideas on how we will achieve them.&lt;/p&gt;

&lt;h4&gt;
  
  
  Use Case: Historical data
&lt;/h4&gt;

&lt;p&gt;Create visualisations &lt;br&gt;
    - Date, value, description, category&lt;br&gt;
    - 12 month and 1 week view&lt;/p&gt;

&lt;h4&gt;
  
  
  Use Case: Current financial state
&lt;/h4&gt;

&lt;p&gt;Create weekly report&lt;br&gt;
    - Week in review dashboard&lt;br&gt;
Create alert&lt;br&gt;
    - Daily spending threshold&lt;/p&gt;

&lt;h3&gt;
  
  
  Create Visualisations
&lt;/h3&gt;

&lt;p&gt;There are endless possibilities when it comes to visualising data, but only a few are useful.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Pie chart

&lt;ul&gt;
&lt;li&gt;Break down spending by category&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Line chart

&lt;ul&gt;
&lt;li&gt;Break down spending by month&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Table

&lt;ul&gt;
&lt;li&gt;List transactions, ordered by date&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;KPI

&lt;ul&gt;
&lt;li&gt;Track income/expenditure in key areas&lt;/li&gt;
&lt;li&gt;Conditional formatting indicates 'Good', 'Bad' and 'Warning' levels of spending&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;h4&gt;
  
  
  12 Month Dashboard
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqptjq9pc3edq1leh1lz9.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqptjq9pc3edq1leh1lz9.PNG" alt="12 Month Dashboard"&gt;&lt;/a&gt;&lt;br&gt;
In this image we can see a simple dashboard that I've created; I'll create another one for the 1 week view that we will use for the weekly report. Duplicating sheets and visualisations in Quicksight is quick and intuitive - you'll find yourself using it frequently, as it also copies across the filters.&lt;/p&gt;

&lt;h4&gt;
  
  
  1 Week Dashboard
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fne5acy96iwah4mmsdgx4.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fne5acy96iwah4mmsdgx4.PNG" alt="1 Week Dashboard"&gt;&lt;/a&gt;&lt;br&gt;
Not pictured - I included a daily KPI for spending that we will use for our spending alert.&lt;/p&gt;

&lt;h3&gt;
  
  
  Creating reports and alerts
&lt;/h3&gt;

&lt;p&gt;Now that we've created a baseline 12 month and 1 week dashboard, we can publish the analysis. This allows us greater functionality in the alerting and reporting space. &lt;/p&gt;

&lt;h4&gt;
  
  
  Reports
&lt;/h4&gt;

&lt;p&gt;To create a recurring report, navigate to the dashboard you published and select 'Share' &amp;gt; 'Email Report'&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmzc52pguc3lpksr1pdr3.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmzc52pguc3lpksr1pdr3.PNG" alt="Report Step 1"&gt;&lt;/a&gt;&lt;br&gt;
I've chosen to send a report weekly at 9am Monday morning&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnjdaj8s36on7xcnese1o.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnjdaj8s36on7xcnese1o.PNG" alt="Report Step 2"&gt;&lt;/a&gt;&lt;br&gt;
Formatted for desktop and selecting the weekly sheet&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdbzg749g9h2l9k97gmqw.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdbzg749g9h2l9k97gmqw.PNG" alt="Report Step 3"&gt;&lt;/a&gt;&lt;br&gt;
Select the email recipients&lt;/p&gt;

&lt;h4&gt;
  
  
  Alerts
&lt;/h4&gt;

&lt;p&gt;To create an alert you need to have a KPI visualisation; the rest is simple.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4kkqz8j3w5kldibqejz2.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4kkqz8j3w5kldibqejz2.PNG" alt="Create Alert"&gt;&lt;/a&gt;&lt;br&gt;
I've created an alert that emails me when my spending goes over $70/day in this example&lt;/p&gt;
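&lt;p&gt;The logic behind that alert is just a daily sum compared against a threshold. As an illustration only (Quicksight evaluates this internally, and the transaction shape below is an assumption), the check looks something like:&lt;/p&gt;

```python
import operator
from collections import defaultdict

DAILY_THRESHOLD = 70.0  # dollars per day, as configured in the alert above

def days_over_threshold(transactions, threshold=DAILY_THRESHOLD):
    """Return the days whose total spend exceeds the threshold."""
    totals = defaultdict(float)
    for txn in transactions:
        day = txn['createdAt'][:10]  # keep the YYYY-MM-DD prefix
        totals[day] += txn['value']
    # operator.gt(a, b) is a strictly-greater-than comparison
    return sorted(day for day, total in totals.items() if operator.gt(total, threshold))

alert_days = days_over_threshold([
    {'createdAt': '2022-01-03T09:15:00Z', 'value': 45.0},
    {'createdAt': '2022-01-03T18:40:00Z', 'value': 30.0},  # 75.0 for the day: alert
    {'createdAt': '2022-01-04T12:00:00Z', 'value': 12.5},
])
```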

&lt;p&gt;At this point you will have:&lt;br&gt;
    - Athena data source configured&lt;br&gt;
    - Quicksight configured&lt;br&gt;
    - Analysis and visualisations created&lt;br&gt;
    - Reporting and alerts configured&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy1xm85nwi855nig066oo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy1xm85nwi855nig066oo.png" alt="Updated Diagram"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Awesome, the dashboard works - the next step is to wrap everything in CDK to make sure that we can continue developing the platform in an efficient way.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>quicksight</category>
      <category>athena</category>
      <category>dashboard</category>
    </item>
    <item>
      <title>Creating An Automated Personal Finances Dashboard - Part 4 (DynamoDB)</title>
      <dc:creator>bradenrichardson</dc:creator>
      <pubDate>Mon, 20 Dec 2021 07:57:38 +0000</pubDate>
      <link>https://dev.to/bradenrichardson/creating-an-automated-personal-finances-dashboard-part-4-dynamodb-1emc</link>
      <guid>https://dev.to/bradenrichardson/creating-an-automated-personal-finances-dashboard-part-4-dynamodb-1emc</guid>
      <description>&lt;p&gt;I wrote some BAD code and redesigned on the fly...&lt;/p&gt;

&lt;p&gt;In the last post we had written the code to process a webhook event and set up the API infrastructure to support that.&lt;br&gt;
This part was supposed to cover writing to a CSV file in an S3 bucket - well that has definitely changed. I have redesigned to support DynamoDB and honestly, it was what I should have done from the start.&lt;/p&gt;

&lt;p&gt;Here's what the new system looks like:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--fcTy_B5o--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/t7pclpliiyg9l0zghrbx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--fcTy_B5o--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/t7pclpliiyg9l0zghrbx.png" alt="Before Diagram" width="880" height="262"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h4&gt;
  
  
  Why not CSV in S3?
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;It's very hard to append to CSV files in S3 - objects are immutable, so Boto3 has no append; you'd have to download, modify and re-upload the whole file&lt;/li&gt;
&lt;li&gt;The solution would have required more engineering and maintenance&lt;/li&gt;
&lt;li&gt;Not as cool&lt;/li&gt;
&lt;/ul&gt;
&lt;h4&gt;
  
  
  Why DynamoDB?
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Fully managed by AWS&lt;/li&gt;
&lt;li&gt;Fantastic Boto3 support&lt;/li&gt;
&lt;li&gt;Will scale with the system, CSV in S3 would have only been practical for so long&lt;/li&gt;
&lt;/ul&gt;
&lt;h4&gt;
  
  
  Setting up DynamoDB
&lt;/h4&gt;

&lt;p&gt;This was my first time using DynamoDB, and it was really easy to set up - let's go through it:&lt;/p&gt;

&lt;p&gt;Create a table in DynamoDB - this is all very self-explanatory; I'll be using the TransactionID as the partition key.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--1GbsBgmJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9iur1tsc5zecc7wquwu6.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--1GbsBgmJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9iur1tsc5zecc7wquwu6.PNG" alt="Create Table" width="880" height="622"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;... That's it - almost seems too easy, right? There are no tricks here; this is what a fully managed NoSQL database looks like. It's just very easy.&lt;/p&gt;
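&lt;p&gt;For anyone who prefers doing this outside the console, the same table can be sketched with Boto3. The request below mirrors the console setup above; the billing mode is an assumption (on-demand avoids capacity planning), and the actual call needs AWS credentials, so it's left commented out:&lt;/p&gt;

```python
# import boto3  # needed for the real call at the bottom

# TransactionID as the partition key, matching the console setup above
table_params = {
    'TableName': 'quicksightTest',
    'AttributeDefinitions': [
        {'AttributeName': 'TransactionID', 'AttributeType': 'S'},
    ],
    'KeySchema': [
        {'AttributeName': 'TransactionID', 'KeyType': 'HASH'},  # partition key
    ],
    'BillingMode': 'PAY_PER_REQUEST',  # on-demand: no capacity planning
}

# boto3.client('dynamodb').create_table(**table_params)
```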
&lt;h4&gt;
  
  
  Writing to DynamoDB
&lt;/h4&gt;

&lt;p&gt;This is where things get a little more interesting, although still quite simple. The objective is to modify both of our lambdas to write their respective payloads directly to DynamoDB.&lt;/p&gt;

&lt;p&gt;Before we can attempt to write to DynamoDB, we need to make sure we have the correct permissions. Now, someone will probably say "this is your bad code right here", and they would be correct. Blanket "allow all" policies are bad, and I should feel bad. There is a security review staged for the end of v0.1 where I'll go through and ensure all policies are least-privilege compliant - right now I'm purely focused on making things work.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;"Effect": "Allow",
"Action": [
    "dynamodb:*"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now that we have granted our lambdas supreme DynamoDB control, we can write some code. The Boto3 support for DynamoDB is great - it's really easy to use and super intuitive. One of my favourite things about DynamoDB is that you don't have to define a schema beforehand: at minimum you need a partition key, and then you can add any other key-value pairs that you want. 'S' and 'N' denote string and number values.&lt;/p&gt;
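&lt;p&gt;Those typed values can be tedious to write out by hand; a small helper can wrap a plain dictionary instead. This is a sketch - the field names are taken from this project's put_item calls, and note DynamoDB expects numbers serialised as strings inside the 'N' wrapper:&lt;/p&gt;

```python
# Map each field to its DynamoDB type descriptor: 'S' string, 'N' number.
FIELD_TYPES = {
    'TransactionID': 'S', 'Category': 'S', 'ParentCategory': 'S',
    'Value': 'N', 'Description': 'S', 'CreatedAt': 'S',
}

def to_dynamo_item(record):
    """Wrap each value in its DynamoDB type descriptor."""
    return {field: {dtype: str(record[field])} for field, dtype in FIELD_TYPES.items()}

item = to_dynamo_item({
    'TransactionID': 'abc-123', 'Category': 'groceries', 'ParentCategory': 'food',
    'Value': '42.50', 'Description': 'Supermarket', 'CreatedAt': '2021-12-20T07:00:00Z',
})
```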

&lt;p&gt;This is the function that the process webhook lambda uses:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def write_to_dynamo(dictionary):
    dynamodb = boto3.client('dynamodb')
    dynamodb.put_item(TableName='quicksightTest', Item={'TransactionID':{'S': dictionary['ID']},'Category':{'S': dictionary['Category']}, 
    'ParentCategory' : {'S' : dictionary['ParentCategory']}, 'Value' : {'N' : dictionary['Value']}, 'Description' : {'S' : dictionary['Description']}, 
    'CreatedAt' : {'S' : dictionary['CreatedAt']}})
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  The Bad Code
&lt;/h4&gt;

&lt;p&gt;Is code bad if it works? Probably yes.&lt;br&gt;
Why was it bad? A lack of concurrency and an overall inefficient design. &lt;/p&gt;

&lt;p&gt;I replicated the process webhook code for the provision new user lambda. Makes sense, right? It does basically the same thing, just on a larger scale.&lt;/p&gt;

&lt;p&gt;Here's the code block as it stands:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def write_to_dynamo(dictionary):
    dynamodb = boto3.client('dynamodb')
    a = 0
    for transaction in dictionary['id']:
        dynamodb.put_item(TableName='quicksightTest', Item={'TransactionID':{'S': dictionary['id'][a]},'Category':{'S': dictionary['category'][a]}, 
        'ParentCategory' : {'S' : dictionary['parentCategory'][a]}, 'Value' : {'N' : dictionary['value'][a]}, 'Description' : {'S' : dictionary['description'][a]}, 
        'CreatedAt' : {'S' : dictionary['createdAt'][a]}})
        a += 1


def lambda_handler(event, context):
    dictionary = create_Dictionary()
    write_to_dynamo(dictionary)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When I first tried running this, it timed out. Nothing to worry about - the lambda timeout was only set to one minute, and a large transaction history takes longer than that to download and write.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ZSq_zFm5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3nshv827dhcilbuz0gw9.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ZSq_zFm5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3nshv827dhcilbuz0gw9.PNG" alt="Lambda Timeout" width="202" height="60"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the end I had to increase the lambda timeout to its maximum of 15 minutes, and my lambda ran for ~13 minutes. This is clearly not ideal, as my transaction history will only grow in size, meaning there will come a time when this code just does not work.&lt;/p&gt;

&lt;p&gt;So what did I do?&lt;/p&gt;

&lt;p&gt;Nothing.&lt;/p&gt;

&lt;p&gt;If this was on the process webhook side I would have been forced to refactor and create a more efficient solution, but this function is only run once to provision a new user, it can afford to be inefficient for now.&lt;/p&gt;
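&lt;p&gt;For the record, one straightforward future improvement would be batching: DynamoDB's BatchWriteItem accepts up to 25 items per request, which would cut thousands of sequential put_item round trips down to a couple of hundred calls. A sketch (the real call needs credentials and retry handling, so it's commented out):&lt;/p&gt;

```python
def chunk(items, size=25):
    """Split a list into chunks of at most `size` - the BatchWriteItem limit."""
    return [items[i:i + size] for i in range(0, len(items), size)]

def batch_write(table_name, dynamo_items):
    """Write pre-typed DynamoDB items 25 at a time instead of one by one."""
    for batch in chunk(dynamo_items):
        request = {table_name: [{'PutRequest': {'Item': item}} for item in batch]}
        # Real call (also needs retry handling for UnprocessedItems):
        # boto3.client('dynamodb').batch_write_item(RequestItems=request)

batch_sizes = [len(batch) for batch in chunk(list(range(60)))]
```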

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--_SeUWyJd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qhs6mek6zsheo2nbj0w9.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--_SeUWyJd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qhs6mek6zsheo2nbj0w9.PNG" alt="Dynamo Count" width="717" height="218"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Even though it's ugly, it worked: I now have 4,418 records in DynamoDB, with more being added with every webhook event. It also got me thinking about how I'm handling the compute; there are many ways I can improve the whole package. Stay tuned for those updates.&lt;/p&gt;

&lt;h4&gt;
  
  
  Today's Progress
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--rEriEwj1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xmvuxzz5uzmydddn0z2b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--rEriEwj1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xmvuxzz5uzmydddn0z2b.png" alt="After Diagram" width="880" height="262"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Created a table in DynamoDB&lt;/li&gt;
&lt;li&gt;Wrote a function that writes to DynamoDB&lt;/li&gt;
&lt;li&gt;Imported my entire transaction history to DynamoDB!!!&lt;/li&gt;
&lt;/ol&gt;

&lt;h4&gt;
  
  
  Next Post
&lt;/h4&gt;

&lt;p&gt;Next up we will go through querying DynamoDB with Athena and bringing our financial records into Quicksight - this is the fun stuff!&lt;/p&gt;

</description>
      <category>aws</category>
      <category>api</category>
      <category>dynamodb</category>
      <category>lambda</category>
    </item>
    <item>
      <title>Creating An Automated Personal Finances Dashboard With AWS - Part 3 (Webhook Lambda)</title>
      <dc:creator>bradenrichardson</dc:creator>
      <pubDate>Wed, 15 Dec 2021 13:08:12 +0000</pubDate>
      <link>https://dev.to/bradenrichardson/creating-an-automated-personal-finances-dashboard-with-aws-part-3-webhook-lambda-1835</link>
      <guid>https://dev.to/bradenrichardson/creating-an-automated-personal-finances-dashboard-with-aws-part-3-webhook-lambda-1835</guid>
      <description>&lt;h4&gt;
  
  
  Recap
&lt;/h4&gt;

&lt;p&gt;Last session we:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Created a ProvisionNewUser lambda in AWS&lt;/li&gt;
&lt;li&gt;Created a requests layer and attached it to our lambda&lt;/li&gt;
&lt;li&gt;Wrote some Python code that sends a GET request to Up's API endpoint to download our historical data&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--jDxQI-iH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6k4b5x4qwqzq10jtuj92.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--jDxQI-iH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6k4b5x4qwqzq10jtuj92.png" alt="Current Progress" width="880" height="325"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now that we can successfully pull our historical data, we need to look to our realtime data. To do this we need to leverage Up's webhooks: in this scenario, Up sends a POST request to an API endpoint whenever there is a new transaction on the bank account.&lt;/p&gt;

&lt;h4&gt;
  
  
  Do the thing
&lt;/h4&gt;

&lt;p&gt;First up we need to create a new lambda that will handle our webhooks.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--VDg7GwxJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ksambnos2v230fdshjf1.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--VDg7GwxJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ksambnos2v230fdshjf1.PNG" alt="Lambda Dummy Code" width="880" height="696"&gt;&lt;/a&gt;&lt;br&gt;
Let's just put some dummy code in here that prints out the event; this will be helpful later when our lambda is being triggered and we want to see the structure of the request.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def lambda_handler(event, context):
print(event)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Create an API gateway&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--j2h2hiqB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yr9w3jkhgvcklfmrktr3.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--j2h2hiqB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yr9w3jkhgvcklfmrktr3.PNG" alt="Create API Gateway" width="880" height="457"&gt;&lt;/a&gt;&lt;br&gt;
We need to specify an integration for our API to communicate with; API Gateway has native support for lambda, so this whole process is really easy.&lt;/p&gt;

&lt;p&gt;Specify a gateway route&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--lQamgo2V--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7qldix3ak3ohibiwjoi4.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--lQamgo2V--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7qldix3ak3ohibiwjoi4.PNG" alt="Create Gateway Route" width="880" height="306"&gt;&lt;/a&gt;&lt;br&gt;
Our route needs to correspond to its functionality; we are expecting our API gateway to receive a POST request. The resource path is appended to our endpoint URL.&lt;/p&gt;

&lt;p&gt;Create a stage&lt;br&gt;
Default values for this one.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--o2FUKweK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kaw2a5ams0di9wtvbioq.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--o2FUKweK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kaw2a5ams0di9wtvbioq.PNG" alt="Create API Stage" width="880" height="317"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now things start to move around a bit: we need to register a webhook with Up's API - this sounds like a job for our ProvisionNewUser lambda:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def create_webhook(invoke_url):
    api_url = api_url_base + 'webhooks' 
    data_object = {"data": {"attributes": {"url" : invoke_url}}}
    response = requests.post(api_url, headers=headers, json=data_object)
    print(response.text)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Right now we don't have any way of actually telling the ProvisionNewUser lambda when to register a webhook and when to download a historical data report. This will all change; to start with, we will manually call the create_webhook() function from within the lambda.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--l3TdEJbe--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gyy4ugl7871gtc52vmyo.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--l3TdEJbe--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gyy4ugl7871gtc52vmyo.jpg" alt="CreateWebhook" width="862" height="420"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Our webhook has been configured; let's test it out by transferring some money within the app. This should send our API gateway a POST event, triggering our ProcessWebhook lambda.&lt;br&gt;
Here's the result:&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--a0KSQdfO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/scn0fve7dkp140s4ysjd.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--a0KSQdfO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/scn0fve7dkp140s4ysjd.jpg" alt="Post Event" width="856" height="1183"&gt;&lt;/a&gt;&lt;br&gt;
This is where our print statement comes in handy: the original Up webhook payload arrives wrapped in the API gateway metadata, and now we can build a statement to access the data we need.&lt;/p&gt;
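&lt;p&gt;As a rough sketch (the transaction ID and header values here are invented, and the exact wrapper fields depend on your API gateway payload version), the event our lambda receives looks something like this, with the Up payload tucked inside the body as a JSON string:&lt;/p&gt;

```python
import json

# Hypothetical wrapped event - the ID and headers are made up for illustration;
# the field names inside "body" follow the Up webhook payload printed above.
event = {
    "headers": {"content-type": "application/json"},
    "body": json.dumps({
        "data": {
            "relationships": {
                "transaction": {"data": {"id": "example-transaction-id"}}
            }
        }
    }),
}

# Peel off the API gateway wrapper, then walk down to the transaction ID
body = json.loads(event["body"])
transaction_id = body["data"]["relationships"]["transaction"]["data"]["id"]
print(transaction_id)  # example-transaction-id
```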

&lt;p&gt;The data we get from this payload isn't actually the information we want - it's just a transaction ID. We need to go back to Up's API and request the rest of the details using that transaction ID.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def retrieve_transaction(transaction_id):
    api_url = api_url_base + 'transactions/' + transaction_id 
    response = requests.get(api_url, headers=headers)
    data = response.json()
    dictionary = {
        'ID' : transaction_id,
        'Description' : data.get('data').get('attributes').get('description'),
        'Value' : data.get('data').get('attributes').get('amount').get('value')[1:],
        'Created At' : data.get('data').get('attributes').get('createdAt')
    }
    # The amount comes back as a string (e.g. "-7.88"), so convert before comparing
    if float(data.get('data').get('attributes').get('amount').get('value')) &amp;lt; 0:
        pass
    if data.get('data').get('relationships').get('category').get('data'):
        dictionary['Category'] = data.get('data').get('relationships').get('category').get('data').get('id')
    else:
        dictionary['Category'] = 'Uncategorized'
    if data.get('data').get('relationships').get('parentCategory').get('data'):
        dictionary['Parent Category'] = data.get('data').get('relationships').get('parentCategory').get('data').get('id')
    else:
        dictionary['Parent Category'] = 'Uncategorized'
    return dictionary
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We're requesting data again - this requires the requests layer we built previously, but this time attached to the ProcessWebhook lambda.&lt;/p&gt;

&lt;p&gt;Currently this function doesn't actually do anything; we need to insert the following into the lambda handler:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;transaction_id = event.get('body').get('data').get('relationships').get('transaction').get('data').get('id')
transaction = retrieve_transaction(transaction_id)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Rather than triggering the webhook every time, we captured the JSON payload once earlier, so we can simply reuse it. Create a test event in lambda and paste in the payload you previously received - this will make development much easier!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s---GqWvUak--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yfdtkxteqk4oqisp010x.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s---GqWvUak--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yfdtkxteqk4oqisp010x.jpg" alt="JSON Payload" width="880" height="774"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Test the lambda with the test event&lt;/p&gt;

&lt;p&gt;Finally, we're ready to see how our lambda reacts to a webhook:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--EnKD9YIv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/f6xq33yw7lligi2xol8h.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--EnKD9YIv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/f6xq33yw7lligi2xol8h.PNG" alt="Transaction Payload" width="409" height="187"&gt;&lt;/a&gt;&lt;br&gt;
Great stuff! It recognised that I transferred $7.88 to savings - this was the original webhook payload that came through earlier. Now, whenever a webhook payload hits our API, our lambda will automatically request the contents of that transaction ID and return them as a dictionary.&lt;/p&gt;

&lt;h4&gt;
  
  
  Summary
&lt;/h4&gt;

&lt;p&gt;In this post we covered:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Updating our ProvisionNewUser lambda to configure webhooks&lt;/li&gt;
&lt;li&gt;Creating a lambda to process a webhook &lt;/li&gt;
&lt;li&gt;Creating API infrastructure to trigger a lambda&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--aO8XkqBc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8k1aznt5jtunuhkui63a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--aO8XkqBc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8k1aznt5jtunuhkui63a.png" alt="Updated Progress" width="880" height="325"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Next Steps
&lt;/h4&gt;

&lt;p&gt;Up next we cover writing to a CSV file in an S3 bucket - not long now until we can visualize our data!&lt;/p&gt;

</description>
      <category>aws</category>
      <category>lambda</category>
      <category>python</category>
      <category>api</category>
    </item>
    <item>
      <title>Creating An Automated Personal Finances Dashboard With AWS - Part 2 (Request Lambda)</title>
      <dc:creator>bradenrichardson</dc:creator>
      <pubDate>Tue, 14 Dec 2021 12:12:32 +0000</pubDate>
      <link>https://dev.to/bradenrichardson/creating-an-automated-personal-finances-dashboard-with-aws-part-2-request-lambda-1chi</link>
      <guid>https://dev.to/bradenrichardson/creating-an-automated-personal-finances-dashboard-with-aws-part-2-request-lambda-1chi</guid>
      <description>&lt;p&gt;G'day folks, last time we spoke we covered the following:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Pain points of my manual solution&lt;/li&gt;
&lt;li&gt;Use cases of an automated solution&lt;/li&gt;
&lt;li&gt;Basic MVP design&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;In this post we're going to tackle the first obstacle, the Provision New User lambda.&lt;/p&gt;

&lt;p&gt;Now, these two lambdas could be packaged as a single lambda with separate functions for each use case; however, for the sake of simplicity we are going to split them out, and in the future they will probably end up together in some fashion.&lt;/p&gt;

&lt;h3&gt;
  
  
  Lambda Overview:
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Provision New User

&lt;ul&gt;
&lt;li&gt;Sends a GET request&lt;/li&gt;
&lt;li&gt;Manually triggered&lt;/li&gt;
&lt;li&gt;Normally only used once or when data needs to be completely refreshed&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Process Webhook

&lt;ul&gt;
&lt;li&gt;Receives a POST request&lt;/li&gt;
&lt;li&gt;Triggered every time a transaction is registered through a webhook (we'll get to that later)&lt;/li&gt;
&lt;li&gt;This is the core of the automated dashboard and will provide real time data&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Plan Of Action
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Create a lambda in AWS&lt;/li&gt;
&lt;li&gt;Write some python code that sends a GET request to Up's API endpoint and stores the transaction history in a dictionary&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Sounds simple enough right?&lt;/p&gt;

&lt;p&gt;Let's crack on with it.&lt;/p&gt;

&lt;h3&gt;
  
  
  Creating a lambda function in AWS
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Navigate to the AWS console and create a new Lambda function. We're choosing Python 3.9 as our runtime and leaving everything else at its default. Note the default execution role - this will come up later when we look at deploying our solution with Cloudformation!
&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--qF8oUowK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4k6c9qg0ulv1vo57xkox.PNG" alt="Create Lambda Function" width="880" height="658"&gt;
&lt;/li&gt;
&lt;li&gt;Write some code to hit the Up API endpoint

&lt;ul&gt;
&lt;li&gt;This code filters through the response, grabs all of the relevant info, and puts it into a dictionary of arrays
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import json
import os
import requests

api_token = os.getenv('api_token')
api_url_base = 'https://api.up.com.au/api/v1/'
headers = {'Authorization': 'Bearer {}'.format(api_token)}

def create_list(api_url):
    # Walk Up's paginated endpoint, following the 'next' link until it runs out
    response = requests.get(api_url, headers=headers)
    if response.status_code == 200:
        data = []
        data.append(response.json().get('data'))
        if response.json().get('links').get('next'):
            token = response.json().get('links').get('next')
            while token:
                response = requests.get(token, headers=headers)
                data.append(response.json().get('data'))
                token = response.json().get('links').get('next')
                if token:
                    print("Processing token: {}".format(token))
                else:
                    print("Finished processing tokens")
        return data
    else:
        print(response.status_code)

def create_csvDictionary():
    api_url = api_url_base + 'transactions'
    data = create_list(api_url)
    csvDictionary = {'id' : [], 'description' : [], 'value' : [], 'category' : [], 'parentCategory' : [], 'createdAt' : []}

    for array in data:
        for transaction in array:
            description = transaction.get('attributes').get('description')
            # Skip internal movements - transfers, covers and round ups aren't real spending
            if any(word in description for word in ('Transfer', 'transfer', 'Cover', 'Round Up')):
                continue
            # Skip credits - we only want outgoing transactions
            if float(transaction.get('attributes').get('amount').get('value')) &amp;gt; 0:
                continue
            csvDictionary['id'].append(transaction.get('id'))
            csvDictionary['description'].append(description)
            # Strip the leading '-' from the amount string
            csvDictionary['value'].append(transaction.get('attributes').get('amount').get('value')[1:])
            if transaction.get('relationships').get('category').get('data'):
                csvDictionary['category'].append(transaction.get('relationships').get('category').get('data').get('id'))
            else:
                csvDictionary['category'].append('Uncategorized')
            if transaction.get('relationships').get('parentCategory').get('data'):
                csvDictionary['parentCategory'].append(transaction.get('relationships').get('parentCategory').get('data').get('id'))
            else:
                csvDictionary['parentCategory'].append('Uncategorized')
            csvDictionary['createdAt'].append(transaction.get('attributes').get('createdAt'))

    print(csvDictionary)
    return csvDictionary


def lambda_handler(event, context):
    create_csvDictionary()



&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
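&lt;p&gt;If you want to see the pagination logic in isolation before deploying anything, here's a hedged sketch that stubs out the network call. The page names and transaction IDs are invented; only the response shape (a data array plus a links.next token) mirrors what Up returns:&lt;/p&gt;

```python
# Stubbed pages mimicking Up's response shape: each page carries a 'data'
# array and a 'links.next' token for the following page (None on the last).
pages = {
    'page1': {'data': ['txn-a', 'txn-b'], 'links': {'next': 'page2'}},
    'page2': {'data': ['txn-c'], 'links': {'next': None}},
}

def fetch(url):
    # Stand-in for requests.get(url, headers=headers).json()
    return pages[url]

def create_list(api_url):
    # Same walk as the lambda: collect each page, follow 'next' until it runs out
    response = fetch(api_url)
    data = [response.get('data')]
    token = response.get('links').get('next')
    while token:
        response = fetch(token)
        data.append(response.get('data'))
        token = response.get('links').get('next')
    return data

print(create_list('page1'))  # [['txn-a', 'txn-b'], ['txn-c']]
```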



&lt;p&gt;Our first roadblock! Our lambda doesn't have the requests library, how are we going to sort this one out?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--zO6xDRU---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wqq10wu6nto1k5vubhze.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--zO6xDRU---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wqq10wu6nto1k5vubhze.PNG" alt="Requests Error" width="775" height="155"&gt;&lt;/a&gt;&lt;br&gt;
There are a couple of ways to solve this problem; my favourite is to create a lambda layer, as layers are then reusable by other functions across your AWS account.&lt;/p&gt;

&lt;h3&gt;
  
  
  Creating a lambda layer
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Install the package locally (specify the --no-user flag if on Windows)
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pip3 install requests --target .\requests --no-user
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Zip the package&lt;/li&gt;
&lt;li&gt;Upload to the Lambda Layer console, ensuring your runtime is correct&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--jIWhopd6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2b1cpgqcnscp05kgjivz.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--jIWhopd6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2b1cpgqcnscp05kgjivz.PNG" alt="Upload Layer" width="880" height="835"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Add the layer to your lambda function&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--9YDmnv8f--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jg6qsvup5ytze6zde3kn.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--9YDmnv8f--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jg6qsvup5ytze6zde3kn.PNG" alt="Add Layer" width="880" height="728"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Okay, so now our lambda function has access to the requests library. Let's try this block of code out and see how it goes.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--KGoEMoEv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/99f5alqsgblc33dr8flh.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--KGoEMoEv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/99f5alqsgblc33dr8flh.PNG" alt="Processing Tokens" width="815" height="766"&gt;&lt;/a&gt;&lt;br&gt;
Awesome - it's hitting the Up API endpoint and processing tokens!&lt;/p&gt;

&lt;p&gt;Not awesome - the lambda initially timed out because there is so much data (two years' worth of transactions).&lt;/p&gt;

&lt;h2&gt;
  
  
  In Summary
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;We built a Lambda function that sends a GET request to an API endpoint and receives a response back&lt;/li&gt;
&lt;li&gt;We created a Requests lambda layer to enable the GET request and attached it to our lambda&lt;/li&gt;
&lt;li&gt;The Lambda function takes A LONG time to pull a historical dataset... let's revisit this in the future but for now - it works.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Next Time
&lt;/h2&gt;

&lt;p&gt;Next post will be covering the Webhook (event based) lambda function and the corresponding API gateway and endpoint that will be required. Can't wait!&lt;/p&gt;

</description>
      <category>aws</category>
      <category>python</category>
      <category>api</category>
      <category>lambda</category>
    </item>
    <item>
      <title>Creating An Automated Personal Finances Dashboard With AWS - Part 1 (Vision)</title>
      <dc:creator>bradenrichardson</dc:creator>
      <pubDate>Tue, 14 Dec 2021 11:09:34 +0000</pubDate>
      <link>https://dev.to/bradenrichardson/creating-an-automated-personal-finances-dashboard-with-aws-part-1-vision-5e24</link>
      <guid>https://dev.to/bradenrichardson/creating-an-automated-personal-finances-dashboard-with-aws-part-1-vision-5e24</guid>
      <description>&lt;p&gt;I like keeping track of things.&lt;/p&gt;

&lt;p&gt;In fact, I like keeping track of things so much that I joined a bank that gave me in depth analytics and insights into my spendings and savings (&lt;a href="https://up.com.au/"&gt;UP Bank&lt;/a&gt;).&lt;/p&gt;

&lt;p&gt;Up is a pretty new bank here in Australia, and they've managed to create a really cool banking platform with a nifty app. My biggest gripes, though, are that there is no desktop application, their insights and analytics aren't very customizable, and there are no out-of-the-box 3rd party integrations.&lt;/p&gt;

&lt;p&gt;I had caught the Up bug and wanted even deeper insight into my finances: I wanted to piece together correlations and look at long-term trends.&lt;/p&gt;

&lt;p&gt;And then Up released their API - an answer to my prayers. The seed was planted and it was time to get to work.&lt;/p&gt;

&lt;p&gt;Within days I had built a python script that downloaded my entire transaction history to a CSV file, which I would then import into Quicksight manually. This was amazing - the rush was unreal - but after repeatedly performing the same manual steps I knew it wouldn't work long term. It was crying out to be automated.&lt;/p&gt;

&lt;h3&gt;
  
  
  Why do I want an automated financial dashboard?
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;To gain insight into current financial state&lt;/li&gt;
&lt;li&gt;To deep dive historical data and identify correlations&lt;/li&gt;
&lt;li&gt;To forecast future financial state and identify trends&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  How am I going to do this?
&lt;/h3&gt;

&lt;p&gt;Good question - right now I actually don't know what the end product will look like.&lt;/p&gt;

&lt;p&gt;Instead, I've chosen to identify an MVP (minimum viable product) and to replicate my original workflow as closely as possible, substituting manual tasks with automated ones. This way I get the most value as quickly as possible. After delivering v0.1 I can look for the next jump that provides the most value for the least effort.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--J3DORMzV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pvdhp9ovr7w9qrpy371h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--J3DORMzV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pvdhp9ovr7w9qrpy371h.png" alt="v0.1 Diagram" width="880" height="324"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let's expand on a few core decisions that have been made:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;AWS - My cloud provider of choice&lt;/li&gt;
&lt;li&gt;Lambda - Serverless makes the most sense when lifting and shifting python scripts&lt;/li&gt;
&lt;li&gt;S3 - This also most closely represents the current workflow, perfect for an MVP&lt;/li&gt;
&lt;li&gt;Cloudformation - Baking everything into a Cloudformation template will let us grow the solution easily without having to worry about cleaning up leftover or misconfigured resources in AWS&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Next steps
&lt;/h3&gt;

&lt;p&gt;Stay tuned for the next part which will cover creating and configuring the Provision User lambda function!&lt;/p&gt;

</description>
      <category>aws</category>
      <category>python</category>
      <category>lambda</category>
      <category>api</category>
    </item>
  </channel>
</rss>
