Introduction
In the world of web development, handling file uploads efficiently and securely is a common challenge. Whether you're building a social media platform, an e-commerce site, or any application that requires user-generated content, you'll likely need to implement a robust file upload feature. Node.js, with its non-blocking I/O and event-driven architecture, offers a solid foundation for building such features. However, when it comes to actually storing these files, external storage solutions like Digital Ocean Spaces become invaluable due to their scalability, reliability, and cost-effectiveness.
Digital Ocean Spaces is an S3-compatible alternative to AWS S3: it follows the same API format, which makes migrating data between the two straightforward if you ever need to. The documentation, however, can be vague on how to wire this up in Node.js. For a project I was working on, I wrote a middleware for uploading files from a frontend to Digital Ocean Spaces using Node.js, and this post walks through it.
This is going to be a short tutorial: more commands and code, less explanation.
Installing the packages
I will be using yarn, but you can use any package manager:
yarn add multer multer-s3 @aws-sdk/client-s3 uuid
This installs the packages we need for the middleware: multer to parse multipart/form-data requests, multer-s3 to stream the files to S3-compatible storage, the AWS SDK v3 S3 client, and uuid to generate unique file names.
Now for the main code of the middleware. It breaks down into a few steps.
Step 1 - Defining the S3 Client
import { S3Client } from '@aws-sdk/client-s3';

// Spaces is S3-compatible, so the AWS SDK v3 client works as-is;
// the endpoint points it at your Spaces region instead of AWS.
const s3Client = new S3Client({
  region: process.env.SPACES_REGION || '',
  credentials: {
    accessKeyId: process.env.SPACES_KEY || '',
    secretAccessKey: process.env.SPACES_SECRET || '',
  },
  endpoint: process.env.SPACES_ENDPOINT, // e.g. https://nyc3.digitaloceanspaces.com
  forcePathStyle: false, // use virtual-hosted (subdomain) style URLs
});
Here SPACES_KEY and SPACES_SECRET are the API key pair you generated in Digital Ocean for accessing your Spaces. It is recommended to store them as environment variables.
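For reference, a minimal .env file might look like the following. The values are placeholders; you would load the file with a package such as dotenv.

# .env (placeholder values)
SPACES_REGION=nyc3
SPACES_ENDPOINT=https://nyc3.digitaloceanspaces.com
SPACES_KEY=YOUR_SPACES_ACCESS_KEY
SPACES_SECRET=YOUR_SPACES_SECRET_KEY
SPACES_BUCKET=your-bucket-name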
The client can be exported and used anywhere else as well, since we only need to initialise the S3 client once throughout our codebase.
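As a quick sketch, assuming the client lives in a file called s3Client.js (the file name is just an example):

// s3Client.js
export default s3Client;

// elsewhere, e.g. in the middleware file
import s3Client from './s3Client.js';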
Step 2 - Writing the file upload code
import multer from 'multer';
import multerS3 from 'multer-s3';
import { v4 as uuid } from 'uuid';

const fileUpload = (req, res, next) => {
  try {
    const uploadFile = multer({
      storage: multerS3({
        s3: s3Client,
        bucket: process.env.SPACES_BUCKET || '', // bucket name
        acl: 'public-read', // uploaded files are publicly readable
        metadata: (req, file, cb) => {
          cb(null, { fieldname: file.fieldname });
        },
        key: (req, file, cb) => {
          // I am storing files under the /documents folder; use any
          // other prefix, or none at all, to store them elsewhere.
          const fileName =
            'documents/' +
            uuid() +
            '_' +
            file.fieldname +
            '_' +
            file.originalname;
          cb(null, fileName);
        },
      }),
      limits: {
        fileSize: 1024 * 1024 * 5, // 5 MB limit; increase according to your usage
      },
    }).single('document'); // expects a single file in the "document" field

    uploadFile(req, res, err => {
      if (err) {
        logger.error(err); // assumes an app-level logger; console.error works too
        return res.status(500).send({ message: 'File upload failed' });
      }
      next(); // upload complete, move on to the next middleware or handler
    });
  } catch (error) {
    logger.error(error);
    return res.status(500).send({ message: 'File upload failed' });
  }
};

export default fileUpload;
That's it. Now this function can be exported and used as a middleware like so:
documentRouter.post('/upload-files', fileUpload, async (req, res) => {
  // main code
});
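Inside the handler, multer-s3 exposes details of the uploaded file on req.file, including the object key and the public URL. A minimal sketch:

documentRouter.post('/upload-files', fileUpload, async (req, res) => {
  // multer-s3 populates req.file with, among other fields,
  // `key` (the object key) and `location` (the public URL)
  return res.status(200).send({ url: req.file.location });
});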
This assumes the files are being uploaded from the frontend using FormData, with the file appended under the document field (matching the .single('document') call above). Example frontend code:
const formData = new FormData();
formData.append("document", file); // assuming "file" holds a File or Blob object
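To actually send it, a fetch call along these lines would work; the /upload-files path matches the route defined above, and I am assuming the handler responds with JSON.

const response = await fetch('/upload-files', {
  method: 'POST',
  body: formData, // the browser sets the multipart/form-data headers automatically
});
const result = await response.json();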