1. Useful Resources & Links
2. Experiment Implementation
Docker Image Explanation
I created a custom Docker image for this tutorial. Its key components are described below.
script_session.py: Starts the job and downloads all necessary images.
script_session_process_images.py: Processes the images.
script_session_move_images.py: Moves the processed images to the /old folder.
- boto3 (AWS SDK)
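Since S3 has no rename operation, "moving" an object is a copy followed by a delete. A rough sketch of what the move script does with boto3 (the `moved_key` and `move_s3_object` helper names are mine, not from the actual image):

```python
def moved_key(key: str) -> str:
    """Map a key under the new/ prefix to its destination under old/."""
    assert key.startswith("new/"), "only objects under new/ are processed"
    return "old/" + key[len("new/"):]


def move_s3_object(bucket: str, key: str) -> None:
    """Move one object: copy it under old/ and delete the original.

    S3 has no rename, so copy + delete is the standard pattern.
    """
    import boto3  # imported here so moved_key stays testable without AWS

    s3 = boto3.client("s3")
    s3.copy_object(
        Bucket=bucket,
        CopySource={"Bucket": bucket, "Key": key},
        Key=moved_key(key),
    )
    s3.delete_object(Bucket=bucket, Key=key)
```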
Uploading sample images to the S3 Bucket
Create a bucket and create two folders, /new and /old. The application will download all files in the /new folder, process the images, and then move them to the /old folder. Upload some images to the /new folder.
Make sure the bucket is not public. A public bucket won't affect the tutorial, but keep it private for your own sake.
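Uploading the samples can also be scripted with boto3; a "folder" in S3 is just a key prefix. A minimal sketch (`upload_key` and `upload_samples` are hypothetical helper names, and the bucket name is whatever you chose above):

```python
import pathlib


def upload_key(filename: str) -> str:
    """Build the destination key: files land under the new/ prefix."""
    return f"new/{pathlib.PurePath(filename).name}"


def upload_samples(bucket: str, paths: list[str]) -> None:
    import boto3  # deferred so upload_key stays testable without AWS

    s3 = boto3.client("s3")
    for path in paths:
        s3.upload_file(path, bucket, upload_key(path))
```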
We don't need to worry about the Bucket Policy since we are going to use an IAM role instead.
Go to the IAM console and create a role:
- Add Elastic Container Service (ECS) as the Trusted Entity of the role.
- Attach the AmazonS3FullAccess permission (an AWS managed policy).
Perfect! We are going to attach the role to the ECS Task later. Attaching the role to tasks is very convenient because we no longer need to worry about how the application inside the ECS container authenticates to AWS services.
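If you prefer creating the role from code instead of the console, the trust relationship for an ECS Task Role uses the `ecs-tasks.amazonaws.com` service principal. A hedged boto3 sketch (the function name is mine; the attached policy is the standard AWS managed AmazonS3FullAccess):

```python
import json

# Trust policy letting ECS tasks assume the role; ecs-tasks.amazonaws.com
# is the service principal used by ECS Task Roles.
TRUST_POLICY = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "ecs-tasks.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}


def create_task_role() -> None:
    import boto3  # deferred so TRUST_POLICY stays inspectable without AWS

    iam = boto3.client("iam")
    iam.create_role(
        RoleName="DemoRoleForECSandS3",
        AssumeRolePolicyDocument=json.dumps(TRUST_POLICY),
    )
    iam.attach_role_policy(
        RoleName="DemoRoleForECSandS3",
        PolicyArn="arn:aws:iam::aws:policy/AmazonS3FullAccess",
    )
```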
Now we can move on to the Task Definition part.
- Name: Any
- Image URI:
- Port Mappings: None
- Environment Variables (required):
- App Environment: Fargate
- Operating system/Architecture: Linux/ARM64 (important)
- CPU: 1 vCPU (does not matter)
- Memory: 3GB (does not matter)
- Task Role: DemoRoleForECSandS3 (the one we created beforehand)
- ... and leave all optional settings as default
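The console settings above map onto the `register_task_definition` API roughly like this (the family name, account ID, subnet, and image URI are placeholders of mine, not from the article):

```python
TASK_DEFINITION = {
    "family": "demo-batch-task",           # assumed name
    "requiresCompatibilities": ["FARGATE"],
    "networkMode": "awsvpc",               # required for Fargate tasks
    "cpu": "1024",                         # 1 vCPU
    "memory": "3072",                      # 3 GB
    "runtimePlatform": {
        "operatingSystemFamily": "LINUX",
        "cpuArchitecture": "ARM64",        # must match the image architecture
    },
    # Account ID below is a placeholder; use your own role ARN.
    "taskRoleArn": "arn:aws:iam::123456789012:role/DemoRoleForECSandS3",
    "containerDefinitions": [{
        "name": "batch",
        "image": "<your image URI>",
    }],
}


def register() -> None:
    import boto3  # deferred so TASK_DEFINITION stays inspectable without AWS

    ecs = boto3.client("ecs")
    ecs.register_task_definition(**TASK_DEFINITION)
```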
Great! Now we are ready to run the ECS container.
Run the Task
- Go to ECS console and create a cluster.
- Go to the Tasks tab and click the "Run new task" button. Services and Tasks are a little different: since our ECS container will not run for very long, we are sticking with the Task option.
- Specify the Launch type: choose FARGATE.
- Application type is Task.
- Leave the desired tasks option at 1.
- Choose the Family and Revision for the Task Definition we created beforehand.
- Launch the Task
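The steps above can also be done from boto3 with `run_task`; note that a Fargate task needs an awsvpc network configuration. A sketch with placeholder cluster and subnet values:

```python
RUN_TASK_PARAMS = {
    "cluster": "demo-cluster",             # assumed cluster name
    "launchType": "FARGATE",
    "count": 1,                            # desired tasks
    "taskDefinition": "demo-batch-task",   # family (latest revision) or family:revision
    "networkConfiguration": {
        "awsvpcConfiguration": {
            "subnets": ["subnet-xxxxxxxx"],  # placeholder subnet ID
            # A public IP lets the task reach S3 without a NAT gateway
            # or VPC endpoint.
            "assignPublicIp": "ENABLED",
        }
    },
}


def run() -> str:
    import boto3  # deferred so RUN_TASK_PARAMS stays inspectable without AWS

    ecs = boto3.client("ecs")
    resp = ecs.run_task(**RUN_TASK_PARAMS)
    return resp["tasks"][0]["taskArn"]
```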
Task Running Process
The application will sleep for 5 minutes because of the script script_session_process_images.py. I put the sleep call in just to demo the behavior. Feel free to update the script and deploy a new Docker image if you want to use the application for a real-world project.
After 5 minutes, all the images in the /new folder will be moved to the /old folder. This can take a little bit of time.
CloudWatch Logs
Notice: the "Image processing is done!" log is printed earlier than the time when the actual job finishes. I think this is because of how Fargate works: it seems Fargate runs other threads to process the rest of the job while the current thread is sleeping (I'm not really sure if this explanation is right).
Using ECS can be very easy with Fargate and a Task Role. Try using ECS for your batch jobs that take a lot of time to process!