Amazon Simple Storage Service (Amazon S3) is an AWS service built, at its core, from two kinds of things: buckets and objects. A bucket’s main job is to provide a place to keep objects, and an object is really not much different from a file on your computer or in a file share at a company’s data center. On top of that, Amazon S3 has dozens of features that help companies store files as S3 objects in the cloud to meet all kinds of use cases and business requirements.
One of these really useful features is Amazon S3 event notifications. When an object gets added to a bucket, for example, an event notification can be sent to tell another resource that a new object just arrived. When the other resource receives the event notification, it can start an action to do something with the new object or tell yet another resource about it. For instance, it can retrieve the object and process the data inside it, such as kicking off an ETL process. Basically, an ETL process takes the data in the file, examines it to be sure it is valid, performs any modifications or enhancements that are needed, and then puts the resulting data into a database, data store or file.
Amazon S3 event notifications don’t capture everything that can happen to an object, just the top dozen or so types of actions that AWS thinks you’d be most interested in knowing about. The following chart lists what these actions are, but don’t worry too much about memorizing it. Just know that you may want to be alerted about something happening to an object that is not listed. In that case, you can create an AWS CloudTrail trail for the bucket or the objects you are interested in, and be sure to tell CloudTrail that you want to log “data events”. Next, send the CloudTrail logs to Amazon CloudWatch Logs. That lets you create a metric filter in CloudWatch for each action you care about and set an alarm when it happens, so you can do something about it. Maybe something like send an Amazon SNS notification, run an AWS Lambda function or publish an event to Amazon EventBridge.
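Just for reference, here is a minimal sketch of what turning on S3 data events for a single bucket could look like. It assumes a trail named my-trail already exists; the trail name and the bucket placeholder are just examples, not anything from this article.

```bash
# Log object-level (data event) API activity for one bucket on an existing trail.
# The trailing slash on the bucket ARN scopes logging to all objects in that bucket.
aws cloudtrail put-event-selectors --trail-name my-trail --event-selectors "[{\"ReadWriteType\":\"All\",\"IncludeManagementEvents\":true,\"DataResources\":[{\"Type\":\"AWS::S3::Object\",\"Values\":[\"arn:aws:s3:::<put your bucket name here>/\"]}]}]"
```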
This article isn’t about following that process; it’s mentioned in case the event you are interested in is not in the following chart of object events that work with Amazon S3 event notifications. This article is about taking one of the actions that does work with event notifications and showing how it can create a log entry in CloudWatch Logs so you know when something happened to an object in Amazon S3. Like I said earlier, you can do a lot of other things in addition to logging the event. For example, you can send the event to AWS Glue or AWS Step Functions and start something like an ETL job or the tasks in a Step Functions state machine.
The following chart is from AWS. It shows the event types that are available with event notifications for Amazon SQS, Amazon SNS and AWS Lambda. The event types for Amazon EventBridge are mostly the same, with just a few slight differences; if you want to know what they are, refer to the AWS documentation.
Event Types | Description |
---|---|
`s3:TestEvent` | When a notification is enabled, Amazon S3 publishes a test notification. This is to ensure that the topic exists and that the bucket owner has permission to publish to the specified topic. If enabling the notification fails, you don't receive a test notification. |
`s3:ObjectCreated:*`<br>`s3:ObjectCreated:Put`<br>`s3:ObjectCreated:Post`<br>`s3:ObjectCreated:Copy`<br>`s3:ObjectCreated:CompleteMultipartUpload` | Amazon S3 API operations such as PUT, POST, and COPY can create an object. With these event types, you can enable notifications when an object is created using a specific API operation. Alternatively, you can use the `s3:ObjectCreated:*` event type to request notification regardless of the API that was used to create an object. `s3:ObjectCreated:CompleteMultipartUpload` includes objects that are created using UploadPartCopy for Copy operations. |
`s3:ObjectRemoved:*`<br>`s3:ObjectRemoved:Delete`<br>`s3:ObjectRemoved:DeleteMarkerCreated` | By using the ObjectRemoved event types, you can enable notification when an object or a batch of objects is removed from a bucket. You can request notification when an object is deleted or a versioned object is permanently deleted by using the `s3:ObjectRemoved:Delete` event type. Alternatively, you can request notification when a delete marker is created for a versioned object by using `s3:ObjectRemoved:DeleteMarkerCreated`. For instructions on how to delete versioned objects, see Deleting object versions from a versioning-enabled bucket. You can also use the wildcard `s3:ObjectRemoved:*` to request notification anytime an object is deleted. These event notifications don't alert you for automatic deletes from lifecycle configurations or from failed operations. |
`s3:ObjectRestore:*`<br>`s3:ObjectRestore:Post`<br>`s3:ObjectRestore:Completed`<br>`s3:ObjectRestore:Delete` | By using the ObjectRestore event types, you can receive notifications for event initiation and completion when restoring objects from the S3 Glacier Flexible Retrieval storage class, S3 Glacier Deep Archive storage class, S3 Intelligent-Tiering Archive Access tier, and S3 Intelligent-Tiering Deep Archive Access tier. You can also receive notifications for when the restored copy of an object expires. The `s3:ObjectRestore:Post` event type notifies you of object restoration initiation. The `s3:ObjectRestore:Completed` event type notifies you of restoration completion. The `s3:ObjectRestore:Delete` event type notifies you when the temporary copy of a restored object expires. |
`s3:ReducedRedundancyLostObject` | You receive this notification event when Amazon S3 detects that an object of the RRS storage class is lost. |
`s3:Replication:*`<br>`s3:Replication:OperationFailedReplication`<br>`s3:Replication:OperationMissedThreshold`<br>`s3:Replication:OperationReplicatedAfterThreshold`<br>`s3:Replication:OperationNotTracked` | By using the Replication event types, you can receive notifications for replication configurations that have S3 Replication metrics or S3 Replication Time Control (S3 RTC) enabled. You can monitor the minute-by-minute progress of replication events by tracking bytes pending, operations pending, and replication latency. For information about replication metrics, see Monitoring replication with metrics, event notifications, and statuses. The `s3:Replication:OperationFailedReplication` event type notifies you when an object that was eligible for replication failed to replicate. The `s3:Replication:OperationMissedThreshold` event type notifies you when an object that was eligible for replication that uses S3 RTC exceeds the 15-minute threshold for replication. The `s3:Replication:OperationReplicatedAfterThreshold` event type notifies you when an object that was eligible for replication that uses S3 RTC replicates after the 15-minute threshold. The `s3:Replication:OperationNotTracked` event type notifies you when an object that was eligible for live replication (either Same-Region Replication [SRR] or Cross-Region Replication [CRR]) is no longer being tracked by replication metrics. |
`s3:LifecycleExpiration:*`<br>`s3:LifecycleExpiration:Delete`<br>`s3:LifecycleExpiration:DeleteMarkerCreated` | By using the LifecycleExpiration event types, you can receive a notification when Amazon S3 deletes an object based on your S3 Lifecycle configuration. The `s3:LifecycleExpiration:Delete` event type notifies you when an object in an unversioned bucket is deleted. It also notifies you when an object version is permanently deleted by an S3 Lifecycle configuration. The `s3:LifecycleExpiration:DeleteMarkerCreated` event type notifies you when S3 Lifecycle creates a delete marker when a current version of an object in a versioned bucket is deleted. |
`s3:LifecycleTransition` | You receive this notification event when an object is transitioned to another Amazon S3 storage class by an S3 Lifecycle configuration. |
`s3:IntelligentTiering` | You receive this notification event when an object within the S3 Intelligent-Tiering storage class is moved to the Archive Access tier or Deep Archive Access tier. |
`s3:ObjectTagging:*`<br>`s3:ObjectTagging:Put`<br>`s3:ObjectTagging:Delete` | By using the ObjectTagging event types, you can enable notification when an object tag is added to or deleted from an object. The `s3:ObjectTagging:Put` event type notifies you when a tag is PUT on an object or an existing tag is updated. The `s3:ObjectTagging:Delete` event type notifies you when a tag is removed from an object. |
`s3:ObjectAcl:Put` | You receive this notification event when an ACL is PUT on an object or when an existing ACL is changed. An event is not generated when a request results in no change to an object’s ACL. |
As was said a little earlier, the EventBridge event types that are available with S3 event notifications are slightly different. The link below spells out exactly what they are, but for what we are doing here, both lists include an object created event type, so the differences don’t matter.
Using Amazon EventBridge with Amazon S3 Event Notifications
The rest of this article walks through an example of an Amazon S3 event notification using the ObjectCreated event type, the second one in the chart above. I want to use the version with the asterisk so that no matter how the object is created, it will generate an event notification. The event notification, of course, needs a destination, and for this example that will be Amazon EventBridge. The other three possible destinations are Amazon SNS, Amazon SQS and AWS Lambda. If you would rather use one of those, consult the AWS documentation; last time I looked, it’s right after the charts from above. Go to the Amazon S3 user guide, click “Logging and Monitoring”, then “Amazon S3 Event Notifications”, and finally “Using SQS, SNS and Lambda”. If it hasn’t been changed, this link should also get you to the right place.
Using Amazon SQS, Amazon SNS, and Lambda with Amazon S3 Event Notifications
The first thing that needs to be done is to create three resources in AWS. The first is an Amazon S3 bucket, which is where the object gets created; the other two are an Amazon EventBridge rule and its target. After these three things are made, an event notification is configured on the bucket and linked to the Amazon EventBridge rule and target. To give the event notification a destination, and to see that it was actually passed through EventBridge, a CloudWatch Logs log group is created and a service resource policy is attached to it so EventBridge can write log messages to it. Then, whenever an object is added to the bucket, the event notification is sent to the Amazon EventBridge event bus, which delivers it to the CloudWatch Logs log group through the EventBridge target.
The basics of Amazon EventBridge are quite simple. One type of resource in Amazon EventBridge is the event bus. An event bus receives events and passes them along to subscribers. If you are familiar with the “publisher/subscriber” design pattern, that is exactly what is going on here. The range of subscribers that Amazon EventBridge event buses support is very large: about two hundred AWS services and their resources, lots and lots of third-party SaaS services and applications, plus custom applications that you may already be using and that contain complex algorithms still needed for your business requirements and use cases.
If you run into a use case where an object put into a bucket needs to be used by multiple departments at your company or organization, Amazon EventBridge is great for letting all of them know about the new Amazon S3 object. Another good use case is when the data in an object needs to be processed in multiple ways: maybe by several AWS services and resources, by third-party SaaS applications that have their own things to do with the data, and by legacy custom applications still running on-premises. Amazon EventBridge is easily the best AWS service in these use cases for making these kinds of notifications about a new object in an Amazon S3 bucket.
STEP ONE : CREATE A BUCKET IN AMAZON S3
The only two required decisions for making a bucket are which Region will hold the bucket and what the bucket’s name will be. The name seems easy enough, but it must be unique among all the buckets in the AWS partition being used, which I bet in the primary public aws partition is in the millions. So you’ll probably want some type of naming strategy. To build a unique bucket name, you can combine things like a company name, a department name, the date the bucket is created, the Region that contains the bucket and a random GUID. A random GUID can almost always make a bucket name unique all on its own, but you should still add some additional piece of text, even a very simple one. Also, this mostly goes without saying: don’t put sensitive information in the bucket name, like passwords, AWS account numbers or social security numbers.
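As a rough sketch of that kind of strategy (the “acme-ingest” prefix is just a made-up example, and this assumes the uuidgen tool is available on your machine), a name could be assembled like this:

```bash
# Combine a company/purpose prefix, the Region, and a lowercased random UUID.
# The result stays under the 63-character limit and uses only valid characters.
BUCKET_NAME="acme-ingest-us-east-1-$(uuidgen | tr '[:upper:]' '[:lower:]')"
echo "$BUCKET_NAME"
```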
Also, the name you want to use has to pass some validation before the bucket can be created. The following listing has the validation rules, so just be sure the name you want to use follows all of them (a quick sanity-check script follows the list).
- Bucket names must be between 3 (min) and 63 (max) characters long.
- Bucket names can consist only of lowercase letters, numbers, periods (.), and hyphens (-).
- Bucket names must begin and end with a letter or number.
- Bucket names must not contain two adjacent periods.
- Bucket names must not be formatted as an IP address (for example, `192.168.5.4`).
- Bucket names must not start with the prefix `xn--`.
- Bucket names must not start with the prefix `sthree-`.
- Bucket names must not start with the prefix `amzn-s3-demo-`.
- Bucket names must not end with the suffix `-s3alias`. This suffix is reserved for access point alias names. For more information, see Access point aliases.
- Bucket names must not end with the suffix `--ol-s3`. This suffix is reserved for Object Lambda Access Point alias names. For more information, see How to use a bucket-style alias for your S3 bucket Object Lambda Access Point.
- Bucket names must not end with the suffix `.mrap`. This suffix is reserved for Multi-Region Access Point names. For more information, see Rules for naming Amazon S3 Multi-Region Access Points.
- Bucket names must not end with the suffix `--x-s3`. This suffix is reserved for directory buckets. For more information, see Directory bucket naming rules.
- Bucket names must not end with the suffix `--table-s3`. This suffix is reserved for S3 Tables buckets. For more information, see Amazon S3 table bucket, table, and namespace naming rules.
- Buckets used with Amazon S3 Transfer Acceleration can't have periods (.) in their names. For more information about Transfer Acceleration, see Configuring fast, secure file transfers using Amazon S3 Transfer Acceleration.
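Here is the rough sanity check mentioned above that you could run on a candidate name. It only covers the length, allowed-character, start/end, and adjacent-period rules; the IP-address, prefix, and suffix rules still need to be checked separately. The bucket name in it is a made-up example.

```bash
# Hypothetical candidate name used only for illustration.
BUCKET_NAME="acme-ingest-us-east-1-example"

# Basic checks: 3-63 characters, lowercase letters/numbers/periods/hyphens,
# starts and ends with a letter or number, and no two adjacent periods.
if [[ ${#BUCKET_NAME} -ge 3 && ${#BUCKET_NAME} -le 63 \
      && "$BUCKET_NAME" =~ ^[a-z0-9][a-z0-9.-]*[a-z0-9]$ \
      && ! "$BUCKET_NAME" =~ \.\. ]]; then
  echo "passes the basic naming checks"
else
  echo "violates one of the basic naming rules"
fi
```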
Bonus Material :
So when you have a name you like for the bucket, you can find out if the name already exists by going to a web browser and putting in a URL in the following format.
https://<put your bucket name here>.s3.amazonaws.com/
If the bucket name is available, the request returns some XML with a few elements you want to look at. The first three elements are named “Code”, “Message” and “BucketName”, in that order. The “Code” element should say “NoSuchBucket”, the “Message” element should say “The specified bucket does not exist”, and the “BucketName” element will have the name you entered. If the bucket already exists, you’ll most likely get “access denied” type values for the first two elements instead (because public access to the bucket has been blocked). If something different happens, you probably are not following the bucket naming rules, or you have access to an existing bucket with the name you are checking. Now, when you have a bucket name, you just need to make it before someone else does!
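If you’d rather check from a terminal than a browser, here is a quick sketch that assumes curl is installed; the bucket name is just a placeholder.

```bash
# Hypothetical candidate name used only for illustration.
BUCKET_NAME="acme-ingest-us-east-1-example"

# A 404 response (NoSuchBucket) means the name is still available;
# a 403 (AccessDenied) usually means a bucket with that name already exists.
curl -s -o /dev/null -w "%{http_code}\n" "https://${BUCKET_NAME}.s3.amazonaws.com/"
```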
I’m going to use CLI commands and PowerShell commands for the resources that need to be created in this article. If you want to use the console for anything in this article, I’m sure you can find videos on YouTube, or other resources on the internet that will show or tell you what to do. Commands should be entered on a single line. The PowerShell commands are sometimes a set of commands.
CLI command :
aws s3 mb s3://<put your bucket name here> --region <put region for the bucket here>
If the command ran successfully, you’ll see a line under the command that starts with `make_bucket:` and then the name of your bucket. It should look like this:
make_bucket: <your bucket name here>
You can use the `aws s3 ls` or `aws s3api list-buckets` commands to see a listing of your buckets and verify that this bucket was added.
PowerShell command :
New-S3Bucket -BucketName <put your bucket name here> -Region <put your region for the bucket here>
If the PowerShell command was successful and the bucket was made, you’ll see a return message with the bucket name and the current date and time (by default UTC time).
You can also use the `Get-S3Bucket` PowerShell command to see a listing of your S3 buckets, or `Get-S3Bucket -BucketName <your bucket name here>` to confirm that this bucket was created.
The bucket’s event notification configuration needs to be updated so that it sends event notifications to Amazon EventBridge. By default, when an Amazon S3 bucket is created, it does not send event notifications to EventBridge. The following commands set the EventBridge configuration on the S3 bucket so event notifications get sent to the EventBridge event bus.
CLI command :
aws s3api put-bucket-notification-configuration --bucket <put your bucket name here> --notification-configuration "{\"EventBridgeConfiguration\": {}}" --region <put region for the bucket here>
This command doesn’t return anything. If you don’t see errors, then it should be set, but looking in the console just to be sure it is set for the bucket is a good idea. The PowerShell commands don’t return anything either, but it will be very clear if something went wrong. Not sure if I said this yet, but EventBridge gets either all of the event notifications from an S3 bucket or none of them. When you reset the event notification setting for the bucket, it just toggles the value, like a light switch flipping to the one other position. (If this gets difficult, you can just set it in the console and move on to the next step.)
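If you’d rather verify from the command line instead of the console, a quick check like the following should show the empty EventBridgeConfiguration block once it’s enabled (just a sketch; swap in your own bucket and Region):

```bash
# An "EventBridgeConfiguration": {} entry in the output means EventBridge delivery is on.
aws s3api get-bucket-notification-configuration --bucket <put your bucket name here> --region <put region for the bucket here>
```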
PowerShell command :
$BucketName = "<put your bucket name here>"
$eventBridgeConfiguration = New-Object Amazon.S3.Model.EventBridgeConfiguration
Write-S3BucketNotification -BucketName $bucketName -EventBridgeConfiguration $eventBridgeConfiguration -Region <put the region for the bucket here(if needed)>
STEP TWO : CREATE A RULE AND TARGET IN THE DEFAULT EVENT BUS IN AMAZON EVENTBRIDGE – THIS ALSO INCLUDES CREATING THE CLOUDWATCH LOGS GROUP AS THE TARGET DESTINATION AND REQUIRED SERVICE-BASED RESOURCE POLICY
Every AWS account has an event bus created in it when the account is made; its name is simply “default”. But for events to be passed through the default event bus, a rule needs to be created on it, and in addition to the rule a target needs to be available for the rule to pass the event to. A rule will not work without a target. So let’s have a go at getting a rule and a target made.
CLI command :
aws events put-rule --name "<put name for event bus rule here>" --event-pattern "{\"source\":[\"aws.s3\"],\"detail-type\":[\"Object Created\"]}" --region <region for the rule here>
If the command completes successfully, you will see the ARN of the rule that was just added.
{
"RuleArn": "<arn for rule just created is here>"
}
PowerShell command :
$BucketName = "<put your bucket name here>"
$ruleName = "<put your event bridge rule name here>"
$eventPatternString = '{"source":["aws.s3"],"detail-type":["Object Created"],"detail":{"bucket":{"name":["' + $BucketName + '"]}}}'
Write-EVBRule -Name $ruleName -EventPattern $eventPatternString -Region <put the region for the bucket here(if needed)>
If the command completes successfully, you will see the ARN of the rule that was just added.
<arn for rule just created is here>
Do take note that the PowerShell command differs from the CLI command: the CLI event pattern matches object created events from every bucket in the AWS account, while the PowerShell event pattern matches only the specified bucket. Rules are how you pick out just the events you want from the event bus; the rule then sends each matching event to one or more targets.
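The remaining wiring is the CloudWatch Logs log group, its resource policy, and the EventBridge target. The commands below are only a sketch of what that could look like, not the exact steps from this article: the log group name, policy name, and target Id are made-up examples, and you’ll need to substitute your own Region, account ID, and rule name. A log group name that starts with /aws/events/ is what the EventBridge console itself uses for this kind of target, which keeps the resource policy simple.

```bash
# 1. Create the log group that will receive the events.
aws logs create-log-group --log-group-name /aws/events/s3-object-created --region <put region here>

# 2. Allow EventBridge to write to log groups under /aws/events/.
aws logs put-resource-policy --policy-name EventBridgeToCWLogs --policy-document "{\"Version\":\"2012-10-17\",\"Statement\":[{\"Effect\":\"Allow\",\"Principal\":{\"Service\":[\"events.amazonaws.com\",\"delivery.logs.amazonaws.com\"]},\"Action\":[\"logs:CreateLogStream\",\"logs:PutLogEvents\"],\"Resource\":\"arn:aws:logs:<put region here>:<put account id here>:log-group:/aws/events/*:*\"}]}" --region <put region here>

# 3. Attach the log group to the rule as its target
#    (use the log group ARN without the trailing :* that describe-log-groups shows).
aws events put-targets --rule "<put your event bridge rule name here>" --targets "Id"="cw-logs-target","Arn"="arn:aws:logs:<put region here>:<put account id here>:log-group:/aws/events/s3-object-created" --region <put region here>
```

Once the target is attached, putting any object into the bucket should produce a new log stream with the Object Created event in the log group shortly afterward.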