Faruq Abdulsalam

How to Simulate AWS S3 on Your Local Machine with LocalStack

In this article, I’ll walk you through the process of simulating an AWS S3 bucket in your local environment — no AWS account required. By the end, you’ll know how to create S3 buckets locally, upload files to them, and download the uploaded files with ease.

Prerequisites

  1. Docker
  2. Python 3.9+

LocalStack Setup

First, set up your LocalStack environment. If you’ve already done this, feel free to skip this section.

LocalStack CLI
The first step is to install the LocalStack CLI (Command Line Interface). This link contains an easy guide on how to install the CLI for your machine's operating system.

LocalStack account
After successfully installing the CLI, create a LocalStack account here. This will give you access to your own dashboard, where you can manage your account, retrieve your auth token, manage subscriptions, monitor stack insights, and explore other features.

Note: You don't have to actually pay for any subscription if you are not using LocalStack for commercial purposes. LocalStack offers a Hobby Subscription for enthusiasts like myself. If you're not building for commercial purposes, you can select this option through the Subscriptions page in your dashboard. This will give you access to a wide range of open source as well as pro services and features.

LocalStack Desktop
Next, I highly recommend installing the LocalStack Desktop application. It gives you a visual display of your created services, logs, uploaded files, etc., which is really helpful for conducting sanity checks.

LocalStack Authentication Token
You can find your authentication token on the Auth Tokens page of your dashboard. Make sure to set this variable before starting LocalStack:

export LOCALSTACK_AUTH_TOKEN="your-auth-token"

Configure your environment
Next, set the environment variables in your shell:

export AWS_ACCESS_KEY_ID="test"
export AWS_SECRET_ACCESS_KEY="test"
export AWS_DEFAULT_REGION="eu-west-1"

If your region is different (e.g., eu-central-1, us-east-1, etc.), be sure to update the AWS_DEFAULT_REGION to match.

Note: The variables set using this approach will only persist for the duration of the current shell session and will be cleared when the session ends. If you want to avoid repeating this process every time, you can configure a custom profile to use with LocalStack.

With everything in place, you're good to go! For a quick sanity check, make sure Docker is running on your machine (either using the CLI or Docker Desktop), and then run the following command in your shell to ensure everything is configured correctly:

DEBUG=1 localstack start

Setting DEBUG=1 enables debug-level logging for LocalStack, making it easy to monitor and troubleshoot what's happening behind the scenes. You'll see a lot of verbose output during startup; don't worry about it. Simply scroll to the bottom of the logs and look for this:

Ready.

If you see this message, it means LocalStack has started successfully. Now, open the LocalStack Desktop application, and you should see the new container in the list.

Local Environment Setup

Follow these steps to set up your local environment.
1. Create a new directory:

mkdir localstack

2. Move into the new directory:

cd localstack

3. Create a virtual environment:
It’s recommended to create a virtual environment to avoid installing dependencies globally, which might conflict with dependencies for other projects on your machine.

For mac/unix users:
python3 -m venv env

For windows users:
py -m venv env

After creating the environment, activate it by running:

For mac/unix users:
source env/bin/activate

For windows users:
.\env\Scripts\activate

4. Install the LocalStack AWS CLI (awslocal):

pip install 'awscli-local[ver1]'

This will install AWS CLI v1 in your new environment. Using a virtual environment ensures there’s no clash with an existing AWS CLI v2 installation (if you already have it installed, which is likely). The LocalStack documentation mentions certain limitations with AWS CLI v2, so sticking with v1 is recommended for this setup.

5. Verify the installation:
You can verify that the installation was successful by running:

aws --version
# OR
awslocal --version

You should see something similar to this:

aws-cli/1.36.34 Python/3.9.6 Darwin/24.1.0 botocore/1.35.93

If your output is similar, you're on the right track—let’s move on!

Create an S3 bucket in the LocalStack container

Even though we’re using LocalStack, remember that it’s designed to completely mock AWS services. This means we’ll still use AWS CLI commands to interact with our services.

The key difference is that instead of using the aws command, we’ll use the awslocal command. The reason for this is that awslocal is a thin wrapper around aws—it automatically appends the endpoint URL (your LocalStack URL) to every command you run.

If you decide to use the aws prefix instead, you’ll need to either configure the endpoint URL in your AWS profile or append it manually to every command. This approach is tedious and unnecessary, so I strongly recommend sticking with awslocal.

Create Bucket
Run the command below to create an S3 bucket named my-new-bucket:

awslocal s3 mb s3://my-new-bucket

You should see the following response:

make_bucket: my-new-bucket

Verify the Bucket Creation via the CLI
To ensure the bucket has been created, list all available buckets using this command:

awslocal s3 ls

The response should display the date, time, and name of the bucket, confirming its creation.

Verify Using LocalStack Desktop
For a visual confirmation, you can use LocalStack Desktop. After all, seeing is believing! 😊

  1. Open the LocalStack Desktop application.
  2. Look for your active container in the container list.
  3. Select the container.
  4. At the top center of the application, you’ll see four buttons. Hover over the last button to reveal its name: Resource Browser.
  5. Click Resource Browser, and a list of supported AWS services will appear. [Screenshot: LocalStack Desktop]
  6. Locate S3 in the list and select it.
  7. Choose your region from the bottom-right corner of the screen.
  8. Refresh the view using the button at the top-right corner. You should now see your created bucket listed. [Screenshot: S3 bucket list]

Congratulations! 🎉 You’ve just created an S3 bucket without needing an AWS account — pretty amazing, right? 😊

Next Steps

Now that your bucket is ready, let's test it by uploading a file to verify that it works as expected. We'll create a simple Python script that interacts with our S3 bucket using the boto3 package.

1. Create your script file in the localstack directory where your virtual environment was created:

touch base.py

2. Import required libraries:

import boto3
import logging
from io import BytesIO
from botocore.exceptions import ClientError

3. Configure the logger module:

logging.basicConfig(
    level=logging.DEBUG,  # Set the minimum log level (DEBUG, INFO, WARNING, etc.)
    format="%(asctime)s - %(name)s - %(levelname)s - %(message)s",  # Log format
)

logger = logging.getLogger(__name__)

4. Define Configuration Variables:

AWS_ACCESS_KEY_ID = "test"
AWS_SECRET_ACCESS_KEY = "test"
S3_BUCKET_NAME = "my-new-bucket"
LOCALSTACK_HOST = "http://localhost:4566"  # Default LocalStack endpoint

The AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY variables are required to create the boto3 client, but their values don't actually matter here, so you can assign any placeholder to them. The S3_BUCKET_NAME variable, however, must match the actual name of the bucket you created above. If it's wrong, you'll get this error:

botocore.errorfactory.NoSuchBucket: An error occurred (NoSuchBucket) when calling the PutObject operation: The specified bucket does not exist

Finally, ensure the LOCALSTACK_HOST variable is set to your LocalStack endpoint URL.

5. Create the S3 Client:
To run a sanity check, let's create a new boto3 client and use it to list our S3 buckets.

def create_S3_client():
    s3_client = boto3.client(
        "s3",
        aws_access_key_id=AWS_ACCESS_KEY_ID,
        aws_secret_access_key=AWS_SECRET_ACCESS_KEY,
        endpoint_url=LOCALSTACK_HOST,  # Point to LocalStack endpoint
    )
    return s3_client

def main():
    s3Client = create_S3_client()
    response = s3Client.list_buckets()
    print("Buckets:", response["Buckets"])

main()


Run the script using python3 base.py. The response should contain a list of your created buckets; in this case it should contain only one item, an object with the Name and CreationDate of the bucket. Now that we have validated this, let's finish up the Python script.
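
If you only care about the names, a tiny helper (my own addition, not part of the article's script) can pull them out of the list_buckets() response, which is a dict whose "Buckets" key holds a list of objects with Name and CreationDate:

```python
def bucket_names(response):
    """Extract only the bucket names from a boto3 list_buckets() response."""
    return [bucket["Name"] for bucket in response["Buckets"]]

# e.g. bucket_names(s3Client.list_buckets()) should yield ["my-new-bucket"]
```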

6. Create the upload method:
We will be uploading an image, so move any JPG image of your choice into the localstack directory where the base.py file is located, and rename it to file.jpg.

def upload_to_s3(file_bytes, filename, mimetype, object_name=None):
    """
    Uploads a file to an S3 bucket

    :param file_bytes: Bytes object of the file to be uploaded
    :param filename: Name of the file
    :param mimetype: MIME type of the file
    :param object_name: Name of the object in the bucket

    :return: True if the file was uploaded, else False
    """

    s3_client = create_S3_client()

    if object_name is None:
        object_name = filename

    try:
        # Wrap the bytes object in a BytesIO object
        file_obj = BytesIO(file_bytes)

        # Upload the file object to S3 bucket
        s3_client.upload_fileobj(
            file_obj, S3_BUCKET_NAME, object_name, ExtraArgs={"ContentType": mimetype}
        )
        # upload_fileobj returns None, so log our own confirmation instead
        logger.info(f"{object_name} uploaded to {S3_BUCKET_NAME} bucket")
        return True
    except ClientError as e:
        logger.error(e)
        logger.exception(e)
        return False

7. Read File Data:

def read_file():
    file_bytes = None
    filename = None
    mimetype = None
    with open("file.jpg", "rb") as file:
        file_bytes = file.read()
        filename = f"images/{file.name}"
        mimetype = "image/jpeg"

    return file_bytes, filename, mimetype

Note: Setting the file name to f"images/{file.name}" means our file will be uploaded under the images/ prefix in the S3 bucket. S3 has no real directories; keys containing / are simply rendered as folders by most tools, so the images/ "folder" appears automatically as soon as the object exists.
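
Since the "directory" is really just part of the object key, you can sketch the key construction and a prefix listing like this (both helpers are my own additions; list_objects_under assumes a client configured like create_S3_client above):

```python
def build_object_key(prefix, filename):
    """Join a prefix and a filename into an S3 object key.
    S3 stores a flat namespace; the '/' only looks like a folder."""
    return f"{prefix.rstrip('/')}/{filename}"

def list_objects_under(s3_client, bucket, prefix):
    """Return the keys of all objects stored under a given prefix."""
    response = s3_client.list_objects_v2(Bucket=bucket, Prefix=prefix)
    # "Contents" is absent when the prefix holds no objects
    return [obj["Key"] for obj in response.get("Contents", [])]
```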

8. Create the main method

def main():
    file_bytes, filename, mimetype = read_file()

    status = upload_to_s3(file_bytes, filename, mimetype)

    if status:
        logger.info("File uploaded successfully!")
    else:
        logger.error("File upload failed.")

if __name__ == "__main__":
    main()

The final code should look like this:

import boto3
import logging
from io import BytesIO
from botocore.exceptions import ClientError

logging.basicConfig(
    level=logging.DEBUG,  # Set the minimum log level (DEBUG, INFO, WARNING, etc.)
    format="%(asctime)s - %(name)s - %(levelname)s - %(message)s",  # Log format
)

logger = logging.getLogger(__name__)

AWS_ACCESS_KEY_ID = "test"
AWS_SECRET_ACCESS_KEY = "test"
S3_BUCKET_NAME = "my-new-bucket"
LOCALSTACK_HOST = "http://localhost:4566"  # Default LocalStack endpoint


def upload_to_s3(file_bytes, filename, mimetype, object_name=None):
    """
    Uploads a file to an S3 bucket

    :param file_bytes: Bytes object of the file to be uploaded
    :param filename: Name of the file
    :param mimetype: MIME type of the file
    :param object_name: Name of the object in the bucket

    :return: True if the file was uploaded, else False
    """

    s3_client = create_S3_client()

    if object_name is None:
        object_name = filename

    try:
        # Wrap the bytes object in a BytesIO object
        file_obj = BytesIO(file_bytes)

        # Upload the file object to S3 bucket
        s3_client.upload_fileobj(
            file_obj, S3_BUCKET_NAME, object_name, ExtraArgs={"ContentType": mimetype}
        )
        # upload_fileobj returns None, so log our own confirmation instead
        logger.info(f"{object_name} uploaded to {S3_BUCKET_NAME} bucket")
        return True
    except ClientError as e:
        logger.error(e)
        logger.exception(e)
        return False


def create_S3_client():
    s3_client = boto3.client(
        "s3",
        aws_access_key_id=AWS_ACCESS_KEY_ID,
        aws_secret_access_key=AWS_SECRET_ACCESS_KEY,
        endpoint_url=LOCALSTACK_HOST,  # Point to LocalStack endpoint
    )
    return s3_client


def read_file():
    file_bytes = None
    filename = None
    mimetype = None
    with open("file.jpg", "rb") as file:
        file_bytes = file.read()
        filename = f"images/{file.name}"
        mimetype = "image/jpeg"

    return file_bytes, filename, mimetype


def main():
    file_bytes, filename, mimetype = read_file()

    status = upload_to_s3(file_bytes, filename, mimetype)

    if status:
        logger.info("File uploaded successfully!")
    else:
        logger.error("File upload failed.")


if __name__ == "__main__":
    main()


9. Run the script:
Run the script using:

python3 base.py

If your file is uploaded successfully, you should see a confirmation in your logs.

[Screenshot: successful S3 upload log output]

10. Verify Upload:
Now that you’ve uploaded your file to the S3 bucket my-new-bucket, let’s verify that the file exists.

Option 1 - Verify via the CLI:
Run the following command in your terminal:

awslocal s3 ls s3://my-new-bucket/images/

This will return a list of all files in the images/ directory of the my-new-bucket bucket. You should see file.jpg in the response, similar to:

2025-01-07 12:00:00       12345 file.jpg

Option 2 - Verify via LocalStack Desktop:

  1. Open LocalStack Desktop and navigate to the S3 service.
  2. Select the bucket my-new-bucket.
  3. Refresh the view, and you should see the images/ directory. Inside, you’ll find your file.jpg file.
  4. To double-check, click on the file row to download it to your local machine. Open the file to ensure it matches the original image you uploaded.

Congratulations! 🎉
You’ve successfully created an S3 bucket, uploaded a file, and verified its existence—all without needing an AWS account. I hope you’ve experienced your wow moments already! This guide demonstrates how simple, straightforward, and fast it is to work with AWS services on your local machine using LocalStack.

With LocalStack, you can truly keep local development local, saving time and resources while ensuring a smoother development workflow.

In the next part of this series, I’ll show you how to mock AWS Lambda functions locally using the LocalStack platform. Stay tuned — it’s going to be another exciting dive into the world of local AWS development! 😊

If you have any questions, feel free to drop them as a comment or send me a message on LinkedIn and I'll ensure I respond as quickly as I can. Ciao 👋
