Have you ever experienced a “file too large” error when uploading a file? As our presentations, PDFs, and videos get larger and larger, we are stretching remote servers’ ability to accept them. With just a few lines of JavaScript, we can make that error go away for our customers, no matter what they are trying to upload. Keep reading to learn more.
The most common error with large uploads is the server response: HTTP 413: Request Entity Too Large. The server is configured to accept files only up to a certain size and will reject anything larger. You can resolve this by editing your server settings to allow larger uploads, but sometimes that is not possible for security or other reasons. (If the server limit gets raised to 2 GB for videos, imagine the images that might end up getting uploaded!)
Further, if a large file fails during upload, you may have to start the whole upload over again. How many times have you gotten an “upload failed” at 95% complete? Utterly frustrating!
Segments/Chunks
Video streaming breaks large videos into smaller segments and plays them back in the correct order. What if we could do the same with our large file uploads: break the large file into smaller segments and upload each one separately? We can, and we can do it in a way that is seamless to our users!
Baked into JavaScript are the File API and the Blob API, with full support across modern browsers.
These APIs let us accept a large file from our customer and use the browser to break it up locally into smaller segments, with our customers being none the wiser!
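As a quick taste of what that looks like, here is a minimal sketch (assuming a hypothetical file input already exists on the page): slicing the first megabyte off a selected file takes only a couple of lines.

// minimal sketch: File inherits from Blob, so we can call slice() directly on it
const fileInput = document.querySelector('input[type="file"]');
fileInput.addEventListener('change', () => {
  const file = fileInput.files[0];
  const firstSegment = file.slice(0, 1000000); // a new Blob holding bytes 0-999999
  console.log(firstSegment.size + " bytes in the first segment");
});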
Let’s walk through how you might use this to upload a large video to api.video. To follow along, the code is all on GitHub, so feel free to clone the repo and run it locally.
To build your own uploader like this, you’ll need a free api.video account. Use this to create a delegated upload token. It takes just 3 steps to create using cURL and a terminal window.
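If you would rather script those steps than type them into a terminal, here is a rough sketch of the token creation in JavaScript. The sandbox endpoints and field names below are assumptions based on the api.video reference at the time of writing, so check the current docs before relying on them.

// rough sketch of the three token-creation steps
// the endpoints and field names are assumptions - verify against the api.video reference
const API_KEY = "your-sandbox-api-key"; // hypothetical placeholder

async function createUploadToken() {
  // step 1: exchange the API key for an access token
  const auth = await fetch("https://sandbox.api.video/auth/api-key", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ apiKey: API_KEY })
  }).then(r => r.json());

  // step 2: create the delegated upload token with the access token
  const tokenResp = await fetch("https://sandbox.api.video/upload-tokens", {
    method: "POST",
    headers: { Authorization: "Bearer " + auth.access_token }
  }).then(r => r.json());

  // step 3: use the returned token in the upload url
  return tokenResp.token;
}

createUploadToken().then(token => console.log(token));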
Now that you're back, we'll begin the process of uploading large files.
Markup
The HTML for our page is basic (we could pretty it up with CSS, but it's a demo 😛):
Add a video here:
<br>
<input type="file" id="video-url-example">
<br>
<br>
<div id="video-information" style="width: 50%"></div>
<div id="chunk-information" style="width: 50%"></div>
There is an input field for a video file, and then there are 2 divs where we will output information as the video uploads.
Next on the page is the script section - and here's where the heavy lifting will occur.
<script>
const input = document.querySelector('#video-url-example');
const url = "https://sandbox.api.video/upload?token=to1R5LOYV0091XN3GQva27OS";
//count of chunks created so far (incremented in createChunk)
var chunkCounter = 0;
//break into 1 MB chunks for demo purposes
const chunkSize = 1000000;
var videoId = "";
var playerUrl = "";
We begin by creating some JavaScript variables:
- input: the file input interface specified in the HTML.
- url: the delegated upload url to api.video. The token in the code above (and on Github) points to a sandbox instance, so videos will be watermarked and removed automatically after 72 hours. If you've created a delegated token, replace the url parameter with your token.
- chunkCounter: a running count of the chunks created so far.
- chunkSize: each chunk will be 1,000,000 bytes - not exactly 1 MiB, but close enough for testing. For production, we can increase this to 100 MB or similar.
- videoId: the delegated upload will assign a videoId on the api.video service. This is used on subsequent uploads to identify the segments, ensuring that the video is identified properly for reassembly at the server.
- playerUrl: Upon successful upload, this will output the playback url for the api.video player.
Next, we create an EventListener on the input - when a file is added, split up the file and begin the upload process:
input.addEventListener('change', () => {
const file = input.files[0];
var numberofChunks = Math.ceil(file.size/chunkSize);
document.getElementById("video-information").innerHTML = "There will be " + numberofChunks + " chunks uploaded."
var start = 0;
var chunkEnd = start + chunkSize;
//upload the first chunk to get the videoId
createChunk(videoId, start);
We name the uploaded file 'file'. To determine the number of chunks to upload, we divide the file size by the chunk size and round up, as a fraction of a chunk is still a chunk - just not a full-size one. This is then written onto the page for the user to see. (In a real product, your users probably do not care about this, but for a demo, it is fun to see.)
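As a quick worked example (using a made-up file size), the 4,582,884-byte video referenced later in this post breaks down like this:

// hypothetical 4,582,884-byte video with 1,000,000-byte chunks
const exampleFileSize = 4582884;
const exampleChunks = Math.ceil(exampleFileSize / chunkSize);
// 4.582884 rounds up to 5: four full chunks plus one final 582,884-byte chunk
console.log(exampleChunks); // 5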
Slicing up the file
Next, we begin to break the file into chunks. Since the file is zero-indexed, you might think that the last byte of the chunk we create should be chunkSize - 1, and you would be correct. However, we do not subtract one from chunkSize when slicing. The reason is found in a careful reading of the Blob.slice specification, which tells us that the end parameter is:
the first byte that will not be included in the new Blob (i.e. the byte exactly at this index is not included).
So, we must use chunkSize, as it will be the first byte NOT included in the new Blob.
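In other words (a small sketch using the file and chunkSize variables from above):

// end is exclusive: slice(0, chunkSize) returns bytes 0 through chunkSize - 1
const firstChunk = file.slice(0, chunkSize); // bytes 0-999999
const secondChunk = file.slice(chunkSize, chunkSize * 2); // bytes 1000000-1999999
console.log(firstChunk.size); // 1000000 (assuming the file is at least that large)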
Now we call the createChunk function:
function createChunk(videoId, start, end){
chunkCounter++;
console.log("created chunk: ", chunkCounter);
chunkEnd = Math.min(start + chunkSize, file.size);
const chunk = file.slice(start, chunkEnd);
console.log("created a chunk of video from bytes " + start + " to " + (chunkEnd - 1));
const chunkForm = new FormData();
if(videoId.length > 0){
//we have a videoId
chunkForm.append('videoId', videoId);
console.log("added videoId");
}
chunkForm.append('file', chunk);
console.log("added file");
//created the chunk, now upload it
uploadChunk(chunkForm, start, chunkEnd);
}
In the createChunk function, we determine which chunk we are uploading by incrementing the chunkCounter, and again calculate the end of the chunk (recall that the last chunk will be smaller than chunkSize, and only needs to go to the end of the file).
The actual slice command
The file.slice call creates the video 'chunk' for upload. We've begun the process of cutting up the file!
We then create a form to upload the video segment to the API. After the first segment is uploaded, the API returns a videoId that must be included in subsequent segments (so that the backend knows which video to add each segment to). On the first upload, videoId is an empty string, so it is left off the form. We add the chunk to the form and then call the uploadChunk function to send it to api.video. On subsequent uploads, the form will contain both the videoId and the video segment.
Uploading the chunk
Let's walk through the uploadChunk function:
function uploadChunk(chunkForm, start, chunkEnd){
var oReq = new XMLHttpRequest();
oReq.upload.addEventListener("progress", updateProgress);
oReq.open("POST", url, true);
var blobEnd = chunkEnd - 1;
var contentRange = "bytes " + start + "-" + blobEnd + "/" + file.size;
oReq.setRequestHeader("Content-Range", contentRange);
console.log("Content-Range", contentRange);
We kick off the upload by creating an XMLHttpRequest to handle it, and we add a listener so we can track the upload progress.
Adding a byte range header
We add a header to this request with the byte range of the chunk being uploaded.
Note that the end of the byte range in this header is the last byte included in the segment, so it is one less than the end value we passed to the slice command when creating the chunk.
The header will look something like this:
Content-Range: bytes 0-999999/4582884
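Following the same pattern for our hypothetical 4,582,884-byte file, the second chunk and the final (fifth) chunk would carry these headers:

Content-Range: bytes 1000000-1999999/4582884
Content-Range: bytes 4000000-4582883/4582884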
Upload progress updates
While the video chunk is uploading, we can update the upload progress on the page, so our user knows that everything is working properly. We created the progress listener at the beginning of the uploadChunk function. Now we can define what it does:
function updateProgress (oEvent) {
if (oEvent.lengthComputable) {
var percentComplete = Math.round(oEvent.loaded / oEvent.total * 100);
var totalPercentComplete = Math.round((chunkCounter - 1) / numberofChunks * 100 + percentComplete / numberofChunks);
document.getElementById("chunk-information").innerHTML = "Chunk # " + chunkCounter + " is " + percentComplete + "% uploaded. Total uploaded: " + totalPercentComplete + "%";
} else {
//unable to compute progress information since the total size is unknown
console.log("not computable");
}
}
First, we do a little bit of math to compute the progress. For each chunk we can calculate the percentage uploaded (percentComplete). Again, a fun value for the demo, but not useful for real users.
What our users want is the totalPercentComplete: the sum of the chunks already uploaded plus the portion of the current chunk uploaded so far.
For the sake of this demo, all of these values are written to the 'chunk-information' div on the page.
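To make that math concrete, here is a worked example with made-up numbers: while the third of five chunks is halfway done, the total works out to 50%.

// hypothetical snapshot: 5 chunks total, chunk 3 currently 50% uploaded
var exampleChunks = 5;
var exampleCounter = 3;
var examplePercent = 50;
var exampleTotal = Math.round((exampleCounter - 1) / exampleChunks * 100 + examplePercent / exampleChunks);
// (2/5)*100 + 50/5 = 40 + 10
console.log(exampleTotal); // 50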
Chunk upload complete
Once a chunk is fully uploaded, we run the following code (in the onload event):
oReq.onload = function (oEvent) {
//uploaded
console.log("uploaded chunk");
console.log("oReq.response", oReq.response);
var resp = JSON.parse(oReq.response);
videoId = resp.videoId;
console.log("videoId", videoId);
//now we have the videoId - loop through and add the remaining chunks
//we start one chunk in, as we have uploaded the first one
//the next chunk starts chunkSize bytes after start
start += chunkSize;
//if start is smaller than the file size - we have more to upload
if(start < file.size){
//create the next chunk
createChunk(videoId, start);
}
else{
//the video is fully uploaded. there will now be a player url in the response
playerUrl = resp.assets.player;
console.log("all uploaded! Watch here: ", playerUrl);
document.getElementById("video-information").innerHTML = "all uploaded! Watch the video <a href='" + playerUrl + "' target='_blank'>here</a>";
}
};
oReq.send(chunkForm);
When the file segment is uploaded, the API returns a JSON response with the videoId. We store this in the videoId variable so it can be included with subsequent uploads.
To upload the next chunk, we increment the byte range start variable by chunkSize. If we have not yet reached the end of the file, we call the createChunk function with the videoId and the new start. This recursively uploads each subsequent slice of the large file, continuing until we reach the end of the file.
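For our hypothetical 4,582,884-byte file, the sequence of calls would look like this:

// trace (hypothetical file of 4,582,884 bytes, 1,000,000-byte chunks)
// chunk 1: start = 0        -> uploads bytes 0-999999
// chunk 2: start = 1000000  -> uploads bytes 1000000-1999999
// chunk 3: start = 2000000  -> uploads bytes 2000000-2999999
// chunk 4: start = 3000000  -> uploads bytes 3000000-3999999
// chunk 5: start = 4000000  -> uploads bytes 4000000-4582883
// start then becomes 5000000, the start < file.size test fails, and no more chunks are created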
Upload complete
When start >= file.size, we know that the entire file has been uploaded to the server, and the server begins the process of reassembling it.
When the last segment is uploaded, the api.video response contains the full video response (similar to the get video endpoint). This response includes the player url that is used to watch the video. We add this value to the playerUrl variable, and add a link on the page so that the user can see their video. And with that, we've done it!
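Trimmed down to the two fields the demo actually reads, that final response looks roughly like this (the real payload contains many more fields, and the exact shape and url format may differ):

{
  "videoId": "viXXXXXXXXXXXXXXXXXXXX",
  "assets": {
    "player": "https://embed.api.video/vod/viXXXXXXXXXXXXXXXXXXXX"
  }
}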
Conclusion
In this post, we use a form to accept a large file from our user. We then use the file.slice API in the user's browser to break the file up locally, and upload each segment until the entire file has reached the server. All of this is done without any extra work from the end user. No more "file too large" error messages: we improve the customer experience by abstracting a complex problem behind an invisible solution!
Are you using the File and Blob APIs in your upload service? Let us know how! If this has helped you, leave a comment in our community forum.