grant horwood

laravel: storing stuff in private s3 buckets

a couple of years ago, i did a rescue job on a project where the original developers had rolled their own 'know your customer' feature. it worked moderately well, but one of the more notable and alarming aspects of it was that users uploaded scans of their driver's licenses and these images were stored in a public s3 bucket.

needless to say, leaving your users' private data just drifting around on a fully-public s3 bucket is a Bad Thing, but it happens a lot. in the last five years, voting records, personal credit reports, and even resumes for pentagon spies have all leaked out of public buckets.

let's avoid that by using private s3 buckets in our laravel api.

the flyover

for this project, we're going to take our existing laravel api and bucket and:

  • add a file upload endpoint that pushes the file to our private s3 bucket
  • persist the data we need to retrieve that file in our database
  • build an endpoint that creates a time-limited, signed url to the file
  • design our file storage to be able to use both public and private buckets as appropriate

when we're done, we'll be able to take private user assets and store them more securely than verizon or the pentagon.

prerequisites

it's assumed that we already have a laravel project of at least version 8 with a restful api. we should also have a private s3 bucket and, optionally, a public one.

getting the requirements

this solution is going to be built around the flysystem-aws-s3-v3 package. this is a 'php league' package, so you know it's good. we'll add it to our project with composer:

composer require league/flysystem-aws-s3-v3:"~1.0"

configuration

next, we're going to set our configuration values in our .env file.

obviously, we're going to need our aws keys if we're going to access our bucket, and we will also need to set our default region (e.g. us-east-1). this all goes in our .env file, which we do not add to our repository.

additionally, we're going to add the names of two s3 buckets: our private one and a public one.

our .env file should have entries similar to this:

###
# AWS ACCESS
AWS_ACCESS_KEY_ID=<access key>
AWS_SECRET_ACCESS_KEY=<secret access key>
AWS_DEFAULT_REGION=<aws region, e.g. us-east-1>

###
# AWS BUCKETS
AWS_BUCKET_PUBLIC=example-assets-public
AWS_BUCKET_PRIVATE=example-assets-private
AWS_USE_PATH_STYLE_ENDPOINT=false

we are then going to load these environment values into our filesystems config file for easy access.

in our config/filesystems.php, we are going to find the array keyed disks and add this:

's3_private' => [
    'driver' => 's3',
    'key' => env('AWS_ACCESS_KEY_ID'),
    'secret' => env('AWS_SECRET_ACCESS_KEY'),
    'region' => env('AWS_DEFAULT_REGION'),
    'url' => env('AWS_URL'),
    'endpoint' => env('AWS_ENDPOINT'),
    'use_path_style_endpoint' => env('AWS_USE_PATH_STYLE_ENDPOINT', false),
    'bucket' => env('AWS_BUCKET_PRIVATE'),
],

's3_public' => [
    'driver' => 's3',
    'key' => env('AWS_ACCESS_KEY_ID'),
    'secret' => env('AWS_SECRET_ACCESS_KEY'),
    'region' => env('AWS_DEFAULT_REGION'),
    'url' => env('AWS_URL'),
    'endpoint' => env('AWS_ENDPOINT'),
    'use_path_style_endpoint' => env('AWS_USE_PATH_STYLE_ENDPOINT', false),
    'bucket' => env('AWS_BUCKET_PUBLIC'),
],

you will see that what we've done here is create two entries for two s3 buckets: one public and one private. this allows us to choose the bucket we want by the permissions it has. very handy. also note that we do not include any of our valuable data as defaults. it's much better for our software to error because we didn't configure our .env properly than to publish important data like our aws keys!

the upload endpoint

now that we have our configurations set up and have installed our dependencies, we're going to put together a fairly-straightforward endpoint that accepts an uploaded file and pushes it to our private s3 bucket.

first, we'll create a route.

Route::post('file', '\App\Http\Controllers\api\S3Controller@postFile');

this route points to the method postFile in the controller at app/Http/Controllers/api/S3Controller.php, so we will create that controller file and then paste in this code.

<?php
namespace App\Http\Controllers\api;

use Illuminate\Http\Request;
use Illuminate\Support\Facades\Storage;
use App\Http\Controllers\Controller as Controller;

/**
 * A basic controller for endpoints accessing the private s3
 *
 */
class S3Controller extends Controller
{
    /**
     * Accept one uploaded file as 'file' and push to private s3
     */
    public function postFile(Request $request)
    {

        /**
         * The s3 configuration for our private bucket. See config/filesystems.php
         */
        $s3config = 's3_private';

        /**
         * The directory path on the s3 bucket where we want our file to be
         */
        $s3directory = "sometests/";

        /**
         * Confirm s3 configuration exists
         */
        if (!config("filesystems.disks.$s3config")) {
            return response()->json("Configuration problem", 500);
        }

        /**
         * Validate file uploaded
         */
        $file = $request->file('file');
        if (!$file) {
            return response()->json("No file uploaded", 400);
        }

        /**
         * Push to the bucket
         */
        $filePathOnS3 = Storage::disk($s3config)->put($s3directory, $file);

        /**
         * Save this stuff to the database:
         * - $filePathOnS3
         * - $s3config
         * We will need this data to build a signed url later
         */

        /**
         * HTTP 201
         */
        return response()->json("Success", 201);
    } // postFile
}

let's look at what's going on here.

at the top, around line 5, we include the Storage facade with use Illuminate\Support\Facades\Storage;. this is what we will use to access the s3 itself.

the first thing to note in the postFile() method are the two variables we set:

  • $s3config. this is the configuration block that we set in config/filesystems.php that describes our bucket. here we use s3_private because we are pushing to the private bucket.
  • $s3directory. this is the directory path on the s3 bucket where we want to put our file. note that this is the directory only. it does not include a file name. this value should end with a slash.

next, we do some basic testing to confirm that our configuration block for our bucket actually exists, and to make sure the calling user uploaded a file.
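
for a production endpoint, we'd probably let laravel's validator handle that second check instead of testing by hand. a minimal sketch, where the rules are just examples, not requirements:

$request->validate([
    // 'file' confirms a successful upload; 'mimes' and 'max'
    // (in kilobytes) are illustrative constraints
    'file' => 'required|file|mimes:png,jpg,pdf|max:10240',
]);

on a json api, a failed validation here comes back as an HTTP 422 with the errors in the body, so we get our error handling for free.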

the actual push to the s3 is done with this:

$filePathOnS3 = Storage::disk($s3config)->put($s3directory, $file);

this is a fairly straightforward command. we get access to the s3 disk identified by our configuration block, and then we call the put method to send the file to the directory path on the bucket.

the thing to note here is that we do not specify the name of the file on the s3 bucket; we let the put() command do that for us. this helps ensure that we are not overwriting existing files. if forty people all upload a file called driverslicense.png, that's a problem. better to let the storage facade handle naming for us.
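
if we ever do need control over the file name, the storage facade's putFileAs() method accepts one; we just have to make it unique ourselves. a quick sketch, where the naming scheme is mine and purely illustrative:

use Illuminate\Support\Str;

// putFileAs() takes an explicit name as its third argument; a random
// prefix keeps forty driverslicense.png uploads from colliding
$name = Str::random(40).'.'.$file->getClientOriginalExtension();
$filePathOnS3 = Storage::disk($s3config)->putFileAs($s3directory, $file, $name);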

the return from this command is the full path of the file on the s3. it will look something like

sometests/a9Tk76KsJ3LNjRS59FAT0xvBTGH4mAUy43hUBEKN.png

we cannot access our file by using this path to build a url. that would defeat the point. this bucket is private.

however, we do need to store the path in our database as we will use it later to build a signed url to grant temporary access.

it is advisable to store both the path and the name of the s3 configuration in your database. both will be needed to build the signed url and, if you add a third or fourth (or twentieth) s3 configuration block later for more public and private buckets, you will be thankful that you kept track of your configuration blocks.
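
as a sketch of what that persistence might look like in postFile(), where the user_files table and its columns are hypothetical names of my own:

use Illuminate\Support\Facades\DB;

// persist everything we will need to build a signed url later
DB::table('user_files')->insert([
    'path'      => $filePathOnS3, // ie. 'sometests/a9Tk...N.png'
    's3_config' => $s3config,     // ie. 's3_private'
]);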

test it

now that we have the file upload endpoint written, let's see it work by hitting it with curl:

curl -v -F 'file=@/path/to/my/file.png' -H "Accept: application/json" http://api.example.ca/api/file

obviously you will need to change the domain here to the one you are serving your api on, and modify the file=@ path to the path of the file you want to upload. once you've done that, run the curl and then go check your private s3 bucket for your file. if everything is wired up, you should get back an HTTP 201 with the body "Success".

the signed url endpoint

now we can upload files and have them get pushed to our private s3 bucket. this is great, but for any of it to be useful, we are going to need some way to actually read those files.

amazon allows time-limited access to files on private buckets using presigned urls.

building these presigned urls manually is possible, but using the Storage facade is significantly easier. and we're all about the easy!

let's add this route

Route::get('file', '\App\Http\Controllers\api\S3Controller@getFile');

and then paste this method into our S3Controller file:

public function getFile(Request $request)
{
    /**
     * Get our file info from the database.
     * This is hardcoded here as a demonstration, normally you would select your s3config and the
     * path for your image here.
     */
    $s3config = 's3_private';
    $path = "sometests/9VMpuTEZ0KMqfl4DTiPv8IprN6ImLXBx2DqoSGSp.png";

    /**
     * Confirm s3 configuration exists
     */
    if (!config("filesystems.disks.$s3config")) {
        throw new \Exception("Unable to access storage bucket");
    }

    /**
     * How long your signed url is good for
     */
    $expiry = "+10 minutes";

    /**
     * Request signed url from aws
     */
    $s3 = Storage::disk($s3config);
    $client = $s3->getDriver()->getAdapter()->getClient();
    $command = $client->getCommand('GetObject', [
        'Bucket' => config("filesystems.disks.$s3config.bucket"),
        'Key'    => $path
    ]);

    // a new variable name here so we don't clobber laravel's $request
    $presignedRequest = $client->createPresignedRequest($command, $expiry);
    $signedUrl = (string)$presignedRequest->getUri();

    /**
     * HTTP 200
     */
    return response()->json($signedUrl, 200);
} // getFile

the first thing this method does is get the s3 configuration data and the path of the file we want on the s3 bucket from the database. well, actually it's hardcoded here for convenience, so we will just have to use our imaginations. after that, we confirm that a configuration block for this s3 bucket actually exists in config/filesystems.php.

then we set our expiry value. this determines how long the signed url is good for. once a signed url expires, we are no longer able to use it to access the file and will be served an HTTP 403 instead. it is generally a good idea to keep expiry times short. if the administrator of the site you're building needs to re-access that driver's license scan or whatever, they can always resubmit for a new url.

we should also note the format here. the example is '+10 minutes', but you can use 'seconds' or 'hours' and any integer you want.
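
under the hood, the aws sdk hands this string to strtotime(), so any relative time it understands will work:

$expiry = '+30 seconds';
$expiry = '+12 hours';
$expiry = '+1 day';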

then it's on to actually requesting the url. this is a bit of a convoluted path, requiring us to get a driver, then an adapter, then a client, then a command before doing the actual submission. no one said aws was easy.

at last, getUri() returns our presigned url. it will look something like this.

https://example-assets-private.s3.us-east-1.amazonaws.com/sometests/9VMptTEZ0KMqfl4DIjPv8IprN4ImLXBx2DqoSGSp.png?X-Amz-Content-Sha256=UNSIGNED-PAYLOAD&X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAZABMB2CUU5OBLXW7%2F20211026%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20211026T022032Z&X-Amz-SignedHeaders=host&X-Amz-Expires=600&X-Amz-Signature=8b8b7e63c2099d08c4992c16b429098ad074ca5c513dc65e3e5809c666fa2e8b

looking at this, we can see that the query string contains our aws access key in X-Amz-Credential, amazon's idea of what the date is in X-Amz-Date, and the life of the url in seconds in X-Amz-Expires. that's all validated by the hmac-sha256 signature in X-Amz-Signature.
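
as an aside, laravel can hide the whole driver-adapter-client dance from us: s3 disks support the temporaryUrl() method, which builds the same presigned GetObject request in one call.

// equivalent one-liner; laravel signs the GetObject request for us
$signedUrl = Storage::disk($s3config)->temporaryUrl($path, now()->addMinutes(10));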

finally

private buckets and signed urls are not the easiest things to work with, but if your users are uploading files that are intended to be seen only by themselves and site administrators, they're worth the hassle.

of course, not every user file is a private file. user avatars, for instance, can live safely in a public bucket. that's why it is a good idea to have two buckets, public and private, with two configuration blocks in config/filesystems.php, allowing you to plan your file storage strategy to hit that sweet spot between speed, convenience and security.
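
one way to make that choice explicit in code is a tiny helper that picks the disk by sensitivity. a sketch, where bucketFor() is a name i'm inventing for illustration:

use Illuminate\Support\Facades\Storage;
use Illuminate\Contracts\Filesystem\Filesystem;

// hypothetical helper: choose the s3 disk by how sensitive the asset is
function bucketFor(bool $private): Filesystem
{
    return Storage::disk($private ? 's3_private' : 's3_public');
}

// avatars can live in the public bucket; driver's license scans cannot
bucketFor(false)->put('avatars/', $avatarFile);
bucketFor(true)->put('licenses/', $licenseFile);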
