Tiago Martinho
Creating a backend API for uploading files to Amazon S3 and Google Cloud Storage

Introduction

In modern web development, it is common for users to upload files to websites or applications. Whether it's a profile picture, a document, or any other type of file, handling file uploads is an essential part of many web applications. In a previous blog post we saw how to use SparkJava to define routes for GET and POST requests. Now we will explore how to parse files sent from the frontend in a POST request and how to store them in cloud services like Amazon S3 and Google Cloud Storage.

Parsing Files with SparkJava

To handle file uploads in SparkJava, we first need to parse the incoming request to extract the uploaded file. SparkJava provides a convenient way to do this through the Request object: calling the raw method returns the underlying HttpServletRequest, from which we can retrieve the file part by its parameter name using getPart. Once we have the file part, we can perform various operations on it, such as retrieving its name, size, or content.

Here's an example of how to parse a file from the request in SparkJava:

post("/upload", (request, response) -> {
    // Required so Jetty knows where to buffer multipart form data
    request.attribute("org.eclipse.jetty.multipartConfig",
        new MultipartConfigElement("/temp"));
    final var filePart = request.raw().getPart("file");
    // Perform operations on the file part,
    // e.g. retrieve its name, size, or content
    client.put(filePart.getSubmittedFileName(), filePart.getInputStream());
    return "File uploaded successfully";
});

In the above code snippet, we define a POST endpoint at "/upload" that takes a Request object and a Response object as parameters. Inside the handler, we retrieve the uploaded file part by its parameter name and can then call methods such as getSubmittedFileName() to read its name and getInputStream() to read its content.
The client object is an interface that stores the file contents in the cloud provider of your choice. The concrete implementation can be instantiated in your resource or controller, depending on your application configuration.

Storage client interface

The following is an example of the client interface we described previously.

public interface StorageClient {

    void put(String filename, InputStream inputStream);
}

For simplicity's sake, we define the method's return type as void, but there are advantages to defining a result type that wraps the values returned by the cloud provider's client methods. For example, you can return the uploaded file's metadata to the caller. It also makes unit testing easier.
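To illustrate the testing point, here is a minimal in-memory implementation of the interface (the class name InMemoryStorageClient is our own). It lets you unit test any code that depends on StorageClient without touching a real cloud provider:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.UncheckedIOException;
import java.util.HashMap;
import java.util.Map;

// Repeated here so the example is self-contained
interface StorageClient {
    void put(String filename, InputStream inputStream);
}

// Test double: keeps "uploaded" files in a map instead of a cloud bucket
class InMemoryStorageClient implements StorageClient {

    private final Map<String, byte[]> files = new HashMap<>();

    @Override
    public void put(String filename, InputStream inputStream) {
        try {
            files.put(filename, inputStream.readAllBytes());
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    // Extra accessor for assertions in tests
    public byte[] get(String filename) {
        return files.get(filename);
    }
}
```

In a test, you pass this implementation to your resource or controller and assert on the stored bytes directly.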

Uploading Files to Amazon S3

After parsing the file in SparkJava, the next step is to upload it to a storage service. To upload files to S3, we need to create an S3 client and specify the bucket where we want to store the file. We can then use the client to upload the file by providing the bucket name, a unique key for the file, and the file's content.

Here's an example of how to upload a file to Amazon S3 using the AWS SDK for Java:

AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
     .withRegion(Regions.EU_WEST_1)
     .withCredentials(
         new AWSStaticCredentialsProvider(
         new BasicAWSCredentials(
                 s3BucketConfig.accessKey, 
                 s3BucketConfig.secretKey)))
     .build();

// The stream-based overload of PutObjectRequest also requires
// an ObjectMetadata (ideally with the content length set)
s3Client.putObject(
    new PutObjectRequest(
        s3BucketConfig.bucketName,
        filename, 
        inputStream,
        new ObjectMetadata()));

In the above code snippet, we first create an instance of the Amazon S3 client using the AmazonS3ClientBuilder class. We specify the desired region for the S3 bucket using the withRegion method and instantiate the credentials using the necessary keys from our config. Then, we build a PutObjectRequest from the bucket name in our s3BucketConfig, the filename, and the file's input stream, and pass it to the putObject method of the S3 client. This uploads the file to the specified bucket in Amazon S3.
The S3 client can be wrapped in your own client class that implements the StorageClient interface we defined previously.
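Such a wrapper could look like the following sketch (the class name S3StorageClient is our own, and we assume the configured client and bucket name are injected through the constructor; error handling is omitted):

```java
// Hypothetical wrapper putting the AWS SDK behind the StorageClient interface
public class S3StorageClient implements StorageClient {

    private final AmazonS3 s3Client;
    private final String bucketName;

    public S3StorageClient(AmazonS3 s3Client, String bucketName) {
        this.s3Client = s3Client;
        this.bucketName = bucketName;
    }

    @Override
    public void put(String filename, InputStream inputStream) {
        // Stream uploads require an ObjectMetadata argument
        s3Client.putObject(new PutObjectRequest(
            bucketName, filename, inputStream, new ObjectMetadata()));
    }
}
```

The rest of the application only depends on StorageClient, so swapping providers is a configuration change.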

Uploading Files to Google Cloud Storage

A similar process is used to store the file in Google Cloud storage as can be seen below:

final var storage = StorageOptions.newBuilder()
    .setCredentials(GoogleCredentials.fromStream(
        new FileInputStream(cloudStorageConfig.jwtPath)))
    .setProjectId(cloudStorageConfig.projectId)
    .build()
    .getService();

final var blobInfo = BlobInfo.newBuilder(
    BlobId.of(cloudStorageConfig.bucket, filename)).build();

storage.create(blobInfo, inputStream.readAllBytes());

First we instantiate a Storage object from our cloudStorageConfig, loading the service account credentials from the key file at jwtPath and setting our Google Cloud projectId.
Then we create a BlobInfo object using the BlobInfo.newBuilder method. We just need to provide a BlobId, which we can instantiate from the bucket where we want to store our files and the filename itself.
Finally, the Storage.create method uploads our file to the cloud storage bucket of our choice.
Like the S3 client, the Google Cloud Storage client can be wrapped in a class that implements the StorageClient interface.
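A parallel sketch for Google Cloud Storage might look like this (GcsStorageClient is a hypothetical name; the Storage service and bucket are injected, and readAllBytes loads the whole file into memory, which is fine for small uploads):

```java
// Hypothetical wrapper putting the GCS client behind the StorageClient interface
public class GcsStorageClient implements StorageClient {

    private final Storage storage;
    private final String bucket;

    public GcsStorageClient(Storage storage, String bucket) {
        this.storage = storage;
        this.bucket = bucket;
    }

    @Override
    public void put(String filename, InputStream inputStream) {
        try {
            final var blobInfo = BlobInfo.newBuilder(
                BlobId.of(bucket, filename)).build();
            storage.create(blobInfo, inputStream.readAllBytes());
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```

For large files, the writer-based streaming API would be a better fit than buffering the bytes, but this keeps the example close to the snippet above.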

Conclusion

Handling file uploads is a crucial aspect of web development, and with SparkJava and Amazon S3 or Google Cloud Storage, it becomes a seamless process. By parsing files with SparkJava and uploading them to the cloud provider of our choice, we can securely store files in the cloud and access them whenever needed. This combination of technologies provides a robust solution for managing file uploads in web applications.

So, the next time you need to implement file uploading in your web application, consider using SparkJava and Amazon S3 or Google Cloud Storage for a reliable and scalable solution. By following the examples outlined in this blog post, you'll be able to handle file uploads seamlessly and provide a great user experience.
