<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Jason Shen</title>
    <description>The latest articles on DEV Community by Jason Shen (@timetxt).</description>
    <link>https://dev.to/timetxt</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1129362%2F553de093-d6bc-4223-9f8e-3b0b449ed0d5.jpg</url>
      <title>DEV Community: Jason Shen</title>
      <link>https://dev.to/timetxt</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/timetxt"/>
    <language>en</language>
    <item>
      <title>Monitoring SES Email Events through Configuration Set with SNS, SQS, Lambda and CloudWatch Log Groups</title>
      <dc:creator>Jason Shen</dc:creator>
      <pubDate>Fri, 01 Sep 2023 11:28:03 +0000</pubDate>
      <link>https://dev.to/timetxt/monitoring-ses-email-events-though-configuration-set-with-sns-sqs-lambda-and-cloudwatch-log-groups-3a8c</link>
      <guid>https://dev.to/timetxt/monitoring-ses-email-events-though-configuration-set-with-sns-sqs-lambda-and-cloudwatch-log-groups-3a8c</guid>
      <description>&lt;p&gt;When you are sending emails through Amazon SES service, you need to track sending events like Bounce and Complaint. You can use those events to adjust your mail list and eventually maintain reputation of your SES account, so you don't have to face unwanted status of your SES account like sending pause and review situation. &lt;/p&gt;

&lt;p&gt;Amazon SES delivers sending events through email feedback and event notifications. With the Configuration Set feature, you can publish even more events to multiple destinations. For example, you can publish the Subscription event to a CloudWatch metric. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--cPC3B52y--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/icafwroqvfkgb281wrns.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--cPC3B52y--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/icafwroqvfkgb281wrns.png" alt="SES Sending Notification" width="800" height="559"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In this article, I will show you how to set up a serverless workflow to monitor the sending events of your SES account using the following AWS services.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Amazon SES &lt;/li&gt;
&lt;li&gt;Amazon Simple Notification Service (SNS)&lt;/li&gt;
&lt;li&gt;Amazon Simple Queue Service (SQS)&lt;/li&gt;
&lt;li&gt;AWS Lambda&lt;/li&gt;
&lt;li&gt;Amazon CloudWatch log groups&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--w1JOAAs9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/eoh317md7w9iu5pcs8fx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--w1JOAAs9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/eoh317md7w9iu5pcs8fx.png" alt="SES Sending events through SNS and serverless" width="800" height="554"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;1. Follow the SES document "&lt;a href="https://docs.aws.amazon.com/ses/latest/dg/send-email.html"&gt;Set up email sending with Amazon SES&lt;/a&gt;" and set up your SES account. If your SES account is in the sandbox, you will need to verify both the sender and recipient email addresses.&lt;/p&gt;

&lt;p&gt;2. Follow the SES document "&lt;a href="https://docs.aws.amazon.com/ses/latest/dg/creating-configuration-sets.html"&gt;Creating configuration sets in SES&lt;/a&gt;" and create a Configuration Set.&lt;/p&gt;

&lt;p&gt;3. Follow the SQS document "&lt;a href="https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/step-create-queue.html"&gt;Create a queue (console)&lt;/a&gt;" and create an SQS queue of the standard queue type.&lt;/p&gt;

&lt;p&gt;4. Follow the SNS document "&lt;a href="https://docs.aws.amazon.com/sns/latest/dg/sns-getting-started.html#step-create-topic"&gt;Getting started with Amazon SNS&lt;/a&gt;" and create an SNS topic.&lt;/p&gt;

&lt;p&gt;5. Create an SNS subscription with the protocol type "SQS", and enter the ARN of the SQS queue created in step 3.&lt;/p&gt;

&lt;p&gt;6. Update the access policy of the SQS queue to allow the SNS topic to publish events.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "Version": "2012-10-17",
  "Id": "__default_policy_ID",
  "Statement": [
    {
      "Sid": "__owner_statement",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::&amp;lt;AWS_Account_ID&amp;gt;:root"
      },
      "Action": "SQS:*",
      "Resource": "arn:aws:sqs:us-west-2:&amp;lt;AWS_Account_ID&amp;gt;:&amp;lt;SQS_Queue_Name&amp;gt;"
    },
    {
      "Sid": "__sender_statement",
      "Effect": "Allow",
      "Principal": {
        "Service": "sns.amazonaws.com"
      },
      "Action": "SQS:SendMessage",
      "Resource": "arn:aws:sqs:us-west-2:&amp;lt;AWS_Account_ID&amp;gt;:&amp;lt;SQS_Queue_Name&amp;gt;"
    }
  ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;7. Create a Lambda function using a Python runtime of version 3.9 or later.&lt;/p&gt;

&lt;p&gt;The function is triggered by the SQS queue and stores events in a CloudWatch log group, using one log stream per SES message ID. The function creates the log group if it does not exist.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import json
import boto3
from datetime import datetime
from time import time
import random

def lambda_handler(event, context):
    #print(event)
    for record in event['Records']:
        print("record")
        payload = record["body"]
        print(payload)
        # get lambda running region
        region = context.invoked_function_arn.split(":")[3]
        # get lambda running version
        version = context.function_version
        # get lambda running name
        name = "ses-notification-" + region
        mail = json.loads(payload)['mail']
        #print("mail")
        #print(mail)
        messageID = mail['messageId']
        print(messageID)
        logGroupNamePrefix="/aws/lambda/" + name
        # define CloudWatch log group client in the region
        client = boto3.client('logs', region_name=region)
        # check cloudwatch log group with name "ses-notification-cw-logs" existence
        response = client.describe_log_groups(logGroupNamePrefix=logGroupNamePrefix)
        # if log group not found, create it
        if len(response["logGroups"]) == 0:
            client.create_log_group(logGroupName=logGroupNamePrefix)
            print("Log group created")
        # get current date in form as year/month/day
        currentDay = str(datetime.now().day)
        currentMonth = str(datetime.now().month)
        currentYear = str(datetime.now().year)
        date = currentYear + "/" + currentMonth + "/" + currentDay
        logStreamsName = date + "/[" + version + "]" + messageID
        # create CloudWatch log stream
        responseLogStream=client.describe_log_streams(logGroupName=logGroupNamePrefix,logStreamNamePrefix=logStreamsName)
        if len(responseLogStream["logStreams"]) == 0:
            client.create_log_stream(logGroupName=logGroupNamePrefix, logStreamName=logStreamsName)
            print("Log stream created")
        # put payload into CloudWatch log stream with current timestamp
        #payload = "this is test payload"
        client.put_log_events(logGroupName=logGroupNamePrefix, logStreamName=logStreamsName, logEvents=[{"timestamp": int(time() * 1000), "message": payload}])
        print("Payload put into log stream")


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;8. The IAM role of the Lambda function must have the following IAM managed policies attached.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;CloudWatchLogsFullAccess&lt;/li&gt;
&lt;li&gt;AWSLambdaSQSQueueExecutionRole&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;9. In the SQS queue, follow "&lt;a href="https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-configure-lambda-function-trigger.html"&gt;Configuring a queue to trigger an AWS Lambda function (console)&lt;/a&gt;" and add the Lambda function as a trigger.&lt;/p&gt;

&lt;p&gt;10. Create an Event Destination in the Configuration Set created in step 2 by following "&lt;a href="https://docs.aws.amazon.com/ses/latest/dg/event-publishing-add-event-destination-sns.html"&gt;Set up an Amazon SNS event destination for event publishing&lt;/a&gt;".&lt;/p&gt;

&lt;p&gt;11. Now you can send an email through your SES account by using &lt;a href="https://docs.aws.amazon.com/ses/latest/dg/send-email.html"&gt;SMTP, the AWS CLI or the API&lt;/a&gt;. Remember that you need to &lt;a href="https://docs.aws.amazon.com/ses/latest/dg/using-configuration-sets-in-email.html"&gt;specify the Configuration Set in your sending action&lt;/a&gt;.&lt;/p&gt;
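&lt;p&gt;As a minimal sketch of this sending step (the sender, recipient, region and Configuration Set name below are placeholder assumptions, not values from this article), the Boto3 &lt;code&gt;send_email&lt;/code&gt; parameters would carry the Configuration Set like this:&lt;/p&gt;

```python
# Sketch of an SES v1 send_email request that names a Configuration Set.
# All values are placeholder assumptions; replace them with your own
# verified identities and the Configuration Set created in step 2.
params = {
    "Source": "sender@example.com",
    "Destination": {"ToAddresses": ["recipient@example.com"]},
    "Message": {
        "Subject": {"Data": "SES event monitoring test"},
        "Body": {"Text": {"Data": "Hello from SES"}},
    },
    # this is what routes the sending events to the SNS event destination
    "ConfigurationSetName": "my-config-set",
}

# The actual call is commented out so the sketch runs without AWS credentials:
# import boto3
# ses = boto3.client("ses", region_name="us-west-2")
# response = ses.send_email(**params)
print(sorted(params))
```

&lt;p&gt;Any bounce or complaint generated by such a send then flows through SNS, SQS and the Lambda function into CloudWatch Logs.&lt;/p&gt;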

</description>
      <category>aws</category>
      <category>serverless</category>
      <category>sns</category>
      <category>lambda</category>
    </item>
    <item>
      <title>Amazon S3 - Web Based Upload Object with POST request and Presigned URL in Python Boto3</title>
      <dc:creator>Jason Shen</dc:creator>
      <pubDate>Tue, 22 Aug 2023 13:21:41 +0000</pubDate>
      <link>https://dev.to/timetxt/amazon-s3-web-based-upload-object-with-post-request-and-presigned-url-in-python-boto3-5be5</link>
      <guid>https://dev.to/timetxt/amazon-s3-web-based-upload-object-with-post-request-and-presigned-url-in-python-boto3-5be5</guid>
      <description>&lt;p&gt;In this article, I will show you how to generate S3 Presigned URL for HTTP POST request with AWS SDK for Boto3(Python). The unique part of this article is that I will show you how to apply Server Side Encryption with KMS key, Tagging objects, Updating Object Metadata and more with S3 Presigned URL for HTTP POST.&lt;/p&gt;

&lt;p&gt;When using S3, there is a scenario called &lt;a href="https://docs.aws.amazon.com/AmazonS3/latest/API/sigv4-UsingHTTPPOST.html#sigv4-UsingHTTPPOST-how-to"&gt;"Browser-Based Uploads Using HTTP POST"&lt;/a&gt;. However, following that section requires you to calculate the AWS SigV4 signature yourself.&lt;/p&gt;

&lt;p&gt;Instead of calculating the signature in your own code, you can use the AWS Boto3 SDK method &lt;a href="https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/s3/client/generate_presigned_post.html"&gt;"generate_presigned_post"&lt;/a&gt; to generate the S3 Presigned URL. This not only saves you the time of debugging "Signature Mismatch" errors in your own code; you also don't have to figure out which crypto modules your code needs to produce the right signature. It is all handled by the AWS SDK.&lt;/p&gt;

&lt;p&gt;For example, you own an S3 bucket in your account. One of your customers runs a business that lets its users upload images, and the images are uploaded from the customer's website directly into your S3 bucket. The customer is not familiar with Amazon S3 and does not own an AWS account, so you need to provide the customer with an easy method of uploading objects from their website directly into your S3 bucket. At the same time, you don't need to make your bucket public for uploading objects.&lt;/p&gt;

&lt;p&gt;This is where an S3 Presigned URL is needed. You can generate the S3 Presigned URL for HTTP POST from an AWS Lambda function, which brings these &lt;a href="https://docs.aws.amazon.com/whitepapers/latest/security-overview-aws-lambda/benefits-of-lambda.html"&gt;benefits&lt;/a&gt;. Then you can provide the S3 Presigned URL to your customer to integrate into their website.&lt;/p&gt;

&lt;p&gt;But you might ask this question:&lt;/p&gt;

&lt;h2&gt;Why not use an S3 Presigned URL for the PutObject API call?&lt;/h2&gt;

&lt;p&gt;An S3 Presigned URL for HTTP POST from browser-based uploads provides a unique feature: you can define "starts-with" conditions in the policy, so you and your customers both keep some control over the requirements on uploaded objects. &lt;/p&gt;

&lt;p&gt;For example, if you only want your customer to upload text files, you can use the following "starts-with" condition to require that the value of "Content-Type" in the upload request starts with "plain". The upload request is created from your customer's website, and the value of the "Content-Type" request header is set when a file is uploaded from the website using your S3 Presigned URL.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;["starts-with", "$Content-Type", "plain"],
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
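&lt;p&gt;To make the behaviour concrete, here is a small illustrative check of my own (not AWS's actual policy evaluator): a "starts-with" condition is satisfied when the submitted form value begins with the prefix declared in the policy.&lt;/p&gt;

```python
def starts_with_matches(condition, submitted_fields):
    # condition has the form ["starts-with", "$FieldName", "prefix"];
    # the submitted value of FieldName must begin with the prefix
    _, field, prefix = condition
    value = submitted_fields.get(field.lstrip("$"), "")
    return value.startswith(prefix)

condition = ["starts-with", "$Content-Type", "plain"]
print(starts_with_matches(condition, {"Content-Type": "plain/text"}))
print(starts_with_matches(condition, {"Content-Type": "application/octet-stream"}))
```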



&lt;p&gt;The AWS SDK for Python (Boto3) documentation does not share much information about how to use the "Fields" and "Conditions" parameters mentioned in &lt;a href="https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/s3/client/generate_presigned_post.html"&gt;"generate_presigned_post"&lt;/a&gt;. It took me some time to figure them out, so I have added my understanding to the code example.&lt;/p&gt;

&lt;p&gt;I hope it saves you time in your own development.&lt;/p&gt;

&lt;p&gt;Here is the Python code example. Before you test it, you will need to update the constants to match your resources.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import boto3
import requests
from botocore.config import Config

ACCESS_KEY="AKIAIOSFODNN7EXAMPLE"
SECRET_ACCESS_KEY="wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"

BUCKET_NAME="example-bucket-name"
OBJECT_NAME="example-key-name"
REGION_LOCATION="ap-southeast-2"

KMS_KEY_ARN="arn:aws:kms:&amp;lt;region&amp;gt;:&amp;lt;account-id&amp;gt;:key/&amp;lt;key-id&amp;gt;"

EXPIRATION_TIME = 60*60 * 12 # 12 hours

TEST_FILE_NAME="/Absolute/Path/To/Local/FileName"

my_config = Config(
    region_name = REGION_LOCATION,
    signature_version = 'v4',
    retries = {
        'max_attempts': 10,
        'mode': 'standard'
    }
)

# define the S3 client using the config above (region, SigV4 signing, retries)
s3=boto3.client('s3',
                 aws_access_key_id=ACCESS_KEY,
                 aws_secret_access_key=SECRET_ACCESS_KEY,
                 config=my_config)


fields={
    "tagging": "&amp;lt;Tagging&amp;gt;&amp;lt;TagSet&amp;gt;&amp;lt;Tag&amp;gt;&amp;lt;Key&amp;gt;type&amp;lt;/Key&amp;gt;&amp;lt;Value&amp;gt;test&amp;lt;/Value&amp;gt;&amp;lt;/Tag&amp;gt;&amp;lt;/TagSet&amp;gt;&amp;lt;/Tagging&amp;gt;",
    "x-amz-storage-class": "STANDARD_IA",
    "Cache-Control": "max-age=86400",
    "success_action_status": "200",
    "x-amz-server-side-encryption": "aws:kms",
    "x-amz-server-side-encryption-aws-kms-key-id": KMS_KEY_ARN,
    "x-amz-server-side-encryption-bucket-key-enabled": "True"
    # "acl": "public-read"
    }


conditions=[
    {
        "x-amz-storage-class": "STANDARD_IA"
    },
    ["starts-with", "$Content-Type", "plain"],
    {
        "tagging": "&amp;lt;Tagging&amp;gt;&amp;lt;TagSet&amp;gt;&amp;lt;Tag&amp;gt;&amp;lt;Key&amp;gt;type&amp;lt;/Key&amp;gt;&amp;lt;Value&amp;gt;test&amp;lt;/Value&amp;gt;&amp;lt;/Tag&amp;gt;&amp;lt;/TagSet&amp;gt;&amp;lt;/Tagging&amp;gt;"
    },
    {
        "Cache-Control": "max-age=86400"
    },
    {
        "success_action_status": "200"
    },
    {
        "x-amz-server-side-encryption": "aws:kms"
    },
    {
        "x-amz-server-side-encryption-aws-kms-key-id": KMS_KEY_ARN
    },
    # {
    #     "acl": "public-read"
    # },
    {
        "x-amz-server-side-encryption-bucket-key-enabled": "True"
    }
]

# generate S3 Presigned URL for HTTP POST Request
response_presigned_url_post=s3.generate_presigned_post(
    BUCKET_NAME,
    OBJECT_NAME,
    Fields=fields,
    Conditions=conditions,
    ExpiresIn=EXPIRATION_TIME
)
print(response_presigned_url_post)

# use requests.post to test the URL
post_fields=response_presigned_url_post['fields']

# With "Content-Type" set to "application/octet-stream", the POST fails with a
# 403 error because it violates the starts-with condition in the policy.
# Comment out the first line below and uncomment the second one to get a 200 success.

post_fields["Content-Type"]="application/octet-stream"
#post_fields["Content-Type"]="plain/text"

# file key must be the last key in the "files" parameter(form)
# it is defined at https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectPOST.html#RESTObjectPOST-requests-form-fields

post_fields["file"]=open(TEST_FILE_NAME, 'rb')
print(post_fields)

# making POST Request
response_post_request=requests.post(response_presigned_url_post['url'], files=post_fields)

# print the response; by default the status code is 204, and the field
# "success_action_status": "200" changes it to 200

print(f'Response Status of POST request with S3 Presigned URL: {response_post_request.status_code}')
print(f'Response Headers of POST request with S3 Presigned URL: {response_post_request.headers}')
print(f'Response Body of POST request with S3 Presigned URL: {response_post_request.text}')

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



</description>
      <category>aws</category>
      <category>boto3</category>
      <category>s3</category>
      <category>postobject</category>
    </item>
    <item>
      <title>CloudFront Edge Computing - Dynamic At Edge with CloudFront Functions</title>
      <dc:creator>Jason Shen</dc:creator>
      <pubDate>Mon, 14 Aug 2023 06:53:53 +0000</pubDate>
      <link>https://dev.to/timetxt/cloudfront-edge-computing-dynamic-at-edge-with-cloudfront-functions-374k</link>
      <guid>https://dev.to/timetxt/cloudfront-edge-computing-dynamic-at-edge-with-cloudfront-functions-374k</guid>
      <description>&lt;p&gt;Edge Computing feature is definitely one of the things that I like the most with CloudFront service. When &lt;a href="https://aws.amazon.com/blogs/aws/introducing-cloudfront-functions-run-your-code-at-the-edge-with-low-latency-at-any-scale/"&gt;CloudFront Functions&lt;/a&gt; is launched, it makes the feature even better. &lt;/p&gt;

&lt;p&gt;Recently I saw someone ask a question about the following scenario on &lt;a href="https://repost.aws/questions"&gt;re:Post&lt;/a&gt;. I think it is a perfect scenario for CloudFront Functions.&lt;/p&gt;

&lt;p&gt;Files in the S3 origin: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;JSON files are under the content folder (prefix), e.g. s3://example_bucket/content/a.json&lt;/li&gt;
&lt;li&gt;Image files are under a sub-folder of the content folder (prefix), e.g. s3://example_bucket/content/image/a.png&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The files are requested with viewer requests like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;JSON file: &lt;a href="https://text.example.com/abc/a.json"&gt;https://text.example.com/abc/a.json&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;PNG file: &lt;a href="https://image.example.com/a.png"&gt;https://image.example.com/a.png&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;What would happen if there were no edge computing in the CloudFront service? &lt;/p&gt;

&lt;p&gt;For the JSON file, there might be no option but to move the objects under the 'content/abc/' folder. The origin path would then be 'content/' in the origin settings of the distribution.&lt;/p&gt;

&lt;p&gt;The PNG file would require a separate origin setting in the distribution configuration, because it does not share the same origin path as the JSON file: the PNG file needs an origin path of 'content/image'.&lt;/p&gt;

&lt;p&gt;The JSON and PNG files would also need different behaviors, because they need to call different origin settings.&lt;/p&gt;

&lt;p&gt;That looks like a lot of work. &lt;/p&gt;

&lt;p&gt;With CloudFront Functions, a few lines of code resolve the problem.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;function handler(event) {
    var request = event.request;
    var uri = request.uri;
    var file_name = uri.split('/').pop();

    // Route the request based on the file extension.
    if ( file_name.endsWith('.json')) {
        request.uri = "/content/" + file_name;
        return request;
    } else if (file_name.endsWith('.png')) {
        request.uri = '/content/image/' + file_name;
        return request;
    }

    return request;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the behavior settings, the function is added as a Viewer Request trigger on the Default behavior, with an S3 origin pointing to "s3://example_bucket". &lt;/p&gt;

&lt;p&gt;That is it! &lt;/p&gt;

&lt;p&gt;And if any further requirement comes up, e.g. changing the path based on the requested domain name, only a few modifications to the code are needed.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;function handler(event) {
    var request = event.request;
    var uri = request.uri;
    var file_name = uri.split('/').pop();
    var host = request.headers.host.value;

    // Route the request based on the Host header.
    if ( host === "text.example.com" ) {
        request.uri = "/content/" + file_name;
        return request;
    } else if ( host === "image.example.com" ) {
        request.uri = '/content/image/' + file_name;
        return request;
    }

    return request;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
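&lt;p&gt;Since CloudFront Functions run at the edge, a quick way to sanity-check the routing logic before deploying is to mirror it outside CloudFront. Here is a small Python mirror of the host-based version above (purely illustrative, not how CloudFront executes the function):&lt;/p&gt;

```python
def rewrite_uri(uri, host):
    # mirror of the CloudFront Function: route by the Host header
    file_name = uri.split("/")[-1]
    if host == "text.example.com":
        return "/content/" + file_name
    if host == "image.example.com":
        return "/content/image/" + file_name
    return uri

print(rewrite_uri("/abc/a.json", "text.example.com"))   # /content/a.json
print(rewrite_uri("/a.png", "image.example.com"))       # /content/image/a.png
```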



&lt;p&gt;CloudFront Functions are perfect for processing those 'small' modifications to viewer requests and responses. &lt;/p&gt;

&lt;p&gt;The code I am using is modified from this &lt;a href="https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/example-function-add-index.html"&gt;code example&lt;/a&gt; in the CloudFront public documentation.&lt;/p&gt;

&lt;p&gt;I am seeing a trend that generative AI tools like ChatGPT or Amazon CodeWhisperer will make writing those scripts easier than ever! &lt;/p&gt;

&lt;p&gt;Like Lambda@Edge, there are some &lt;a href="https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/functions-javascript-runtime-features.html"&gt;restrictions&lt;/a&gt; on the CloudFront Functions runtime that you should read before writing your code. &lt;/p&gt;

</description>
      <category>aws</category>
      <category>cloudfront</category>
      <category>cdn</category>
    </item>
    <item>
      <title>The Differences In Sending Email Actions Between SES Version 1 and Version 2 APIs</title>
      <dc:creator>Jason Shen</dc:creator>
      <pubDate>Mon, 07 Aug 2023 03:05:28 +0000</pubDate>
      <link>https://dev.to/timetxt/the-differences-in-sending-email-actions-between-ses-version-1-and-version-2-apis-2o8n</link>
      <guid>https://dev.to/timetxt/the-differences-in-sending-email-actions-between-ses-version-1-and-version-2-apis-2o8n</guid>
      <description>&lt;p&gt;If you have been using Amazon SES service for a while, it might not be new to you that Amazon SES is having two versions of API actions available at the moment, &lt;a href="https://docs.aws.amazon.com/ses/latest/APIReference/index.html"&gt;API Reference (version 1)&lt;/a&gt; and &lt;a href="https://docs.aws.amazon.com/ses/latest/APIReference-V2/index.html"&gt;API v2 Reference&lt;/a&gt;. I tried to find the announcement of the API v2 in &lt;a href="https://aws.amazon.com/new/?whats-new-content-all.sort-by=item.additionalFields.postDateTime&amp;amp;whats-new-content-all.sort-order=desc&amp;amp;awsf.whats-new-categories=*all&amp;amp;whats-new-content-all.q=ses&amp;amp;whats-new-content-all.q_operator=AND&amp;amp;awsm.page-whats-new-content-all=3"&gt;What's New with AWS?&lt;/a&gt; page but I did not have the luck. &lt;/p&gt;

&lt;p&gt;Amazon SES service API calls can be separated into two types:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Management API actions, which can be recorded in &lt;a href="https://docs.aws.amazon.com/ses/latest/dg/logging-using-cloudtrail.html#service-name-info-in-cloudtrail"&gt;CloudTrail Event History&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Sending API actions, which are not recorded in &lt;a href="https://docs.aws.amazon.com/ses/latest/dg/logging-using-cloudtrail.html#service-name-info-in-cloudtrail"&gt;CloudTrail Event History&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In this article, you will read about how I took a peek at the sending API actions in Amazon SES API v1 and v2.&lt;/p&gt;

&lt;h2&gt;1. What are these API actions?&lt;/h2&gt;

&lt;p&gt;I can find the API action names in the SES API documents, but I found that I can also get the list of API calls by creating an example IAM policy in the IAM console. &lt;/p&gt;

&lt;p&gt;Here is the list of actions I got when creating an IAM policy with the service names 'SES' and 'SES v2' in the IAM console.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;v1 IAM actions&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"Action": [&lt;br&gt;
            "ses:SendBounce",&lt;br&gt;
            "ses:SendBulkTemplatedEmail",&lt;br&gt;
            "ses:SendCustomVerificationEmail",&lt;br&gt;
            "ses:SendEmail",&lt;br&gt;
            "ses:SendRawEmail",&lt;br&gt;
            "ses:SendTemplatedEmail"&lt;br&gt;
        ]&lt;/p&gt;
&lt;/blockquote&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;v2 IAM actions&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"Action": [&lt;br&gt;
            "ses:SendBulkEmail",&lt;br&gt;
            "ses:SendCustomVerificationEmail",&lt;br&gt;
            "ses:SendEmail"&lt;br&gt;
        ]&lt;/p&gt;
&lt;/blockquote&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;2. What are the differences?&lt;/h2&gt;

&lt;p&gt;I used the SES Boto3 SDK documentation for the following comparison.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;To use Amazon SES v2 APIs, the service client must be

&lt;code&gt;boto3.client('sesv2')&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;SES v2 API &lt;a href="https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/sesv2/client/send_email.html"&gt;"SendEmail"&lt;/a&gt; provides the same functions in v1 APIs &lt;a href="https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/ses/client/send_email.html"&gt;"SendEmail"&lt;/a&gt;, &lt;a href="https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/ses/client/send_raw_email.html"&gt;"SendRawEmail"&lt;/a&gt; and &lt;a href="https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/ses/client/send_templated_email.html"&gt;"SendTemplatedEmail"&lt;/a&gt;. But only the API &lt;a href="https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/sesv2/client/send_email.html"&gt;"SendEmail"&lt;/a&gt; in SES v2 supports the feature &lt;a href="https://docs.aws.amazon.com/ses/latest/dg/sending-email-list-management.html"&gt;"list management"&lt;/a&gt;. List Management can help to reduce the chance of get complaints or even hard bounces by sending emails to recipients who have opted out of their previous subscriptions to some email services, e.g. newsletter email.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;SES v2 API &lt;a href="https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/sesv2/client/send_bulk_email.html"&gt;"SendBulkEmail"&lt;/a&gt; has different names of parameters in sending requests with SES v1 &lt;a href="https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/ses/client/send_bulk_templated_email.html"&gt;"SendBulkTemplatedEmail"&lt;/a&gt;. But values of those parameters should be no difference. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The SES v2 API action &lt;a href="https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/sesv2/client/send_custom_verification_email.html"&gt;"SendCustomVerificationEmail"&lt;/a&gt; is no different from the &lt;a href="https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/ses/client/send_custom_verification_email.html"&gt;action&lt;/a&gt; in the SES v1 API.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The SES v2 API does not define a "SendBounce" action. The v1 action &lt;a href="https://docs.aws.amazon.com/ses/latest/APIReference/API_SendBounce.html"&gt;"SendBounce"&lt;/a&gt; is used to generate and send a bounce message to the sender of an email received through Amazon SES. There is a restriction: this API action can only be used on an email up to 24 hours after receiving it.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
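&lt;p&gt;To illustrate the parameter-name differences, here is a sketch of a v2 &lt;code&gt;send_email&lt;/code&gt; request body (the addresses below are placeholder assumptions, not values from this article): "Source" becomes "FromEmailAddress", and the message moves under a "Content" key with "Simple", "Raw" or "Template" variants.&lt;/p&gt;

```python
# Placeholder sketch of an SES v2 SendEmail request; the addresses are
# assumptions, not values from this article.
params = {
    "FromEmailAddress": "sender@example.com",
    "Destination": {"ToAddresses": ["recipient@example.com"]},
    "Content": {
        "Simple": {
            "Subject": {"Data": "Hello from SES v2"},
            "Body": {"Text": {"Data": "Sent with the sesv2 client"}},
        }
    },
}

# The actual call is commented out so the sketch runs without AWS access:
# import boto3
# sesv2 = boto3.client("sesv2")
# response = sesv2.send_email(**params)
print(sorted(params))
```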

&lt;h2&gt;3. How do I restrict usage of SES API v1 and v2?&lt;/h2&gt;

&lt;p&gt;The SES public documentation gives an example of &lt;a href="https://docs.aws.amazon.com/ses/latest/dg/control-user-access.html#iam-and-ses-examples-access-specific-ses-api-version"&gt;"Allowing Access to only SES API version 2"&lt;/a&gt;. Correspondingly, I could modify the condition to force the use of SES API v1. &lt;/p&gt;

&lt;p&gt;Why would I want to do that? One line in the same document provides an answer:&lt;br&gt;
&lt;br&gt;
 &lt;code&gt;The SES SMTP interface uses SES API version 2 of ses:SendRawEmail.&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;As an IAM user can &lt;a href="https://docs.aws.amazon.com/ses/latest/dg/smtp-credentials.html#smtp-credentials-convert"&gt;convert AWS credentials into an SMTP username and password&lt;/a&gt;, I can use this trick to allow or prevent the IAM user from using an SMTP client to send emails through Amazon SES.&lt;/p&gt;
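&lt;p&gt;Following the linked document, the restriction hinges on the &lt;code&gt;ses:ApiVersion&lt;/code&gt; condition key. A policy sketch along the lines of that example (allowing only API version 2) looks like this:&lt;/p&gt;

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "ses:SendRawEmail",
      "Resource": "*",
      "Condition": {
        "StringEquals": {
          "ses:ApiVersion": "2"
        }
      }
    }
  ]
}
```

&lt;p&gt;Changing the condition value to "1" would, conversely, force callers onto SES API v1.&lt;/p&gt;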

&lt;h2&gt;4. What about the message size?&lt;/h2&gt;

&lt;p&gt;One more difference is the &lt;a href="https://docs.aws.amazon.com/ses/latest/dg/quotas.html#quotas-message"&gt;message size&lt;/a&gt; limit between SES API v1 and v2. While the maximum message size (after base64 encoding and including attachments) is 10 MB with SES API v1, it is 40 MB with SES API v2.&lt;/p&gt;

</description>
      <category>ses</category>
      <category>aws</category>
      <category>amazon</category>
    </item>
    <item>
      <title>SES BYODKIM - Streamline Private and Public Key by Python Script</title>
      <dc:creator>Jason Shen</dc:creator>
      <pubDate>Tue, 01 Aug 2023 05:26:28 +0000</pubDate>
      <link>https://dev.to/timetxt/ses-byodkim-streamline-private-and-public-key-by-python-script-dpj</link>
      <guid>https://dev.to/timetxt/ses-byodkim-streamline-private-and-public-key-by-python-script-dpj</guid>
      <description>&lt;p&gt;In Amazon SES document, it states the following: &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;You have to delete the first and last lines (-----BEGIN PRIVATE KEY----- and -----END PRIVATE KEY-----, respectively) of the generated private(public) key. Additionally, you have to remove the line breaks in the generated private key. The resulting value is a string of characters with no spaces or line breaks.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/ses/latest/dg/send-email-authentication-dkim-bring-your-own.html#send-email-authentication-dkim-bring-your-own-configure-identity"&gt;source&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If the private or public key is not streamlined, you won't be able to use it with SES BYODKIM.&lt;/p&gt;
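&lt;p&gt;If you have not generated the key pair yet, the BYODKIM document uses OpenSSL. Assuming OpenSSL is installed, an RSA 2048-bit pair can be created like this (the file names are my own choice):&lt;/p&gt;

```shell
# generate a 2048-bit RSA private key
openssl genrsa -f4 -out private.key 2048
# derive the matching public key from the private key
openssl rsa -in private.key -pubout -out public.key
```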

&lt;p&gt;At the same time, you also need to generate a random string as the selector in the TXT DNS record that publishes the public key.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Name&lt;/th&gt;
&lt;th&gt;Type&lt;/th&gt;
&lt;th&gt;Value&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;selector._domainkey.example.com&lt;/td&gt;
&lt;td&gt;TXT&lt;/td&gt;
&lt;td&gt;p=yourPublicKey&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;blockquote&gt;
&lt;p&gt;Replace selector with a unique name that identifies the key.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/ses/latest/dg/send-email-authentication-dkim-bring-your-own.html#send-email-authentication-dkim-bring-your-own-update-dns"&gt;source&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here is a Python script that does both.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import sys, string, random

def streamline_key(keyLocation):
   keyLines=open(keyLocation).readlines()
   keyStream=[]
   for line in keyLines[1:-1]:
      keyStream.append(line.replace('\n', ''))
   key=''.join(keyStream)
   return key

print(sys.argv)

key1Location=sys.argv[-3]
print(key1Location)

key2Location=sys.argv[-2]
print(key2Location)

domain=sys.argv[-1]
print(domain)

key1Streamline=streamline_key(key1Location)
print(key1Location+" streamline:\n" + key1Streamline )
print("\n")
key2Streamline=streamline_key(key2Location)
print(key2Location+" streamline:\n" + key2Streamline)
print("\n")
selector = ''.join(random.choice(string.ascii_lowercase + string.digits) for i in range(32))
print("public key TXT record name:\n" + selector+'._domainkey.'+domain)

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Save the code as 'remove-newline-in-key.py' in the same folder that stores the private key and public key. Then run the script from the command line in the following format. My private key file is named private.key and my public key file is named public.key.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;% python remove-newline-in-key.py private.key public.key example.com
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You will get the following result:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;['remove-newline-in-key.py', 'private.key', 'public.key', 'example.com']&lt;br&gt;
private.key&lt;br&gt;
public.key&lt;br&gt;
example.com&lt;br&gt;
private.key streamline:&lt;br&gt;
MIICdgIBADANBgkqhkiG9w0BAQEFAASCAmAwggJcAgEAAoGBAMdPWfzVMYahtkvTvBfXsVr52O2CHUok8JzeoCt1Ou3t1SmDmV3755+ztxGj7nFwUCVFmrT5ZmaaDZ5u7Jd856KmejtlIeuPHBt9wuoaiwI1IohXWZAMGLi+qo+FX1kHk+nKj5nMLNq9dSOE8xXXfmtPcz+B4LACpuQRXNGhqCLlAgMBAAECgYEAkXTq8qdQtrXMSfij3C6xI/kVhPihkZv18jZTZIPw1vXszJhbVIjkWNwarggam7Vg+GKc7pjZT+X8LHU9u60Pio22vi6ZNBQwqe0DlpMx1MtJIht4EwH63CZDSU6jijZUjvdTyKqtoqMHiqUaLz2Iom8LYikmrKImMr6S9PqgBsECQQDsJ+8N4asakc0uUKZkxgQNpoM7fykuFmF9TJcq3K9JHfx8HpvMN9UWNyGDfQqIo/4oFD3LxeheeyltETCNqE91AkEA2A7OE2r+D9uMpAnNyt3SmIRQZzVn+ZHB+0fICFB8L17rt6TnuH2AU6ceuoVzr8vtSWGZ+/sotUGvaIbZvwkHsQJAfoTafv5i8+YfHewZaS3pKAMIlcyHnGhjLITnDBCVXD/TcA/Z+iwDXlaE/vPzu8bYOFK31L8fwdaMGCG4eHwurQJAO2CeO/Hsjrkcxrw3BWi/BtFeM28W+xyWvhM1IyvTZUVl7JtyX16GVPcZ19LzPz4BIWikY/7baiz6IvTkhL7bkQJAZhwo33EVKRcDoavSOWshcWEsp6SNychsdT9R17uEzsZq1RgrB9XNqskTveJvzLeT/aRtuoqB+mpJ1R3ux3VLYw==&lt;/p&gt;

&lt;p&gt;public.key streamline:&lt;br&gt;
MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQDHT1n81TGGobZL07wX17Fa+djtgh1KJPCc3qArdTrt7dUpg5ld++efs7cRo+5xcFAlRZq0+WZmmg2ebuyXfOeipno7ZSHrjxwbfcLqGosCNSKIV1mQDBi4vqqPhV9ZB5Ppyo+ZzCzavXUjhPMV135rT3M/geCwAqbkEVzRoagi5QIDAQAB&lt;/p&gt;

&lt;p&gt;public key TXT record name:&lt;br&gt;
w2hajm6q1zoe0gw1q993shhmqopj5auy._domainkey.example.com&lt;/p&gt;
&lt;/blockquote&gt;

</description>
    </item>
  </channel>
</rss>
