<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Akshay Goyal</title>
    <description>The latest articles on DEV Community by Akshay Goyal (@akshaygoyal174).</description>
    <link>https://dev.to/akshaygoyal174</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F25370%2F66bff32d-7b77-4530-a1ee-f764d9f63eeb.jpg</url>
      <title>DEV Community: Akshay Goyal</title>
      <link>https://dev.to/akshaygoyal174</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/akshaygoyal174"/>
    <language>en</language>
    <item>
      <title>Increasing VirtualBox VDI storage capacity on Mac OS X</title>
      <dc:creator>Akshay Goyal</dc:creator>
      <pubDate>Wed, 21 Mar 2018 15:57:49 +0000</pubDate>
      <link>https://dev.to/akshaygoyal174/increasing-virtualbox-vdi-storage-capacity-on-mac-os-x-36k</link>
      <guid>https://dev.to/akshaygoyal174/increasing-virtualbox-vdi-storage-capacity-on-mac-os-x-36k</guid>
      <description>&lt;p&gt;&lt;em&gt;Originally published on &lt;a href="https://medium.com/@akshay.g174_68301/increasing-virtualbox-vdi-capacity-on-mac-os-x-ac4d52b63f0c" rel="noopener noreferrer"&gt;Medium&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1600%2F1%2Abn4isgq4L6kLoIBEn0kqkg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1600%2F1%2Abn4isgq4L6kLoIBEn0kqkg.png" title="Resizing you Virtual Box storage is as simple as resizing this image, not really" alt="Resizing you Virtual Box storage is as simple as resizing this image, not really&amp;lt;br&amp;gt;
"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I use a VirtualBox setup on my Mac to run Ubuntu as a guest OS. This setup lets me mock the production environment on my local dev machine. VirtualBox is a handy piece of software for anyone trying to run a guest OS. There is an initial learning curve while setting it up, but it’s fun. I have been using this setup for over a year now and it has served me well. Well, well until today…&lt;/p&gt;

&lt;p&gt;Today, while working on a legendary (read: legacy) piece of software that was no cakewalk to navigate through, I needed to download the latest DB dumps. So, like any good programmer, I tried to download the dumps, only to realise that I couldn’t… because my VM was out of space. I suddenly remembered the time when I allocated 20 gigs of space to the VDI, thinking that I’d never need more than that. Haha… was I wrong! Following the lazy dev protocol, I tried the quickest solution: cleared up the logs and removed the most unwanted files, but to no avail. Finally the time had come for me to get my hands dirty. A few search results later…&lt;/p&gt;

&lt;h2&gt;
  
  
  Steps to follow
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Take a backup of your VDI&lt;br&gt;
(&lt;a href="https://forums.virtualbox.org/viewtopic.php?f=2&amp;amp;t=58803" rel="noopener noreferrer"&gt;https://forums.virtualbox.org/viewtopic.php?f=2&amp;amp;t=58803&lt;/a&gt;)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Fetch the path to the VDI&lt;br&gt;
&lt;em&gt;Save this path; we’ll use it in the next steps&lt;/em&gt;&lt;br&gt;
Right-click on the VM -&amp;gt; Settings -&amp;gt; Storage -&amp;gt; select your VDI in the Storage Tree on the left -&amp;gt; Location will show the path to your .vdi&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1600%2F1%2Aigq15rMNn2aI61R5B903CA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1600%2F1%2Aigq15rMNn2aI61R5B903CA.png" title="Path to the VDI is highlighted" alt="Path to the VDI is highlighted&amp;lt;br&amp;gt;
"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Shut down the VM and quit VirtualBox&lt;/li&gt;
&lt;li&gt;Navigate to the VBoxManage utility folder:
open the Terminal app and run the following command
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd /Applications/VirtualBox.app/Contents/Resources/VirtualBoxVM.app/Contents/MacOS/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Modify the size allocated to the VDI:
use the VDI path from above in the command below
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;VBoxManage modifyhd — resize [new size in MB] [/path/to/vdi]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1600%2F1%2AOfT1MxmR0MwJE9oYKufVZQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1600%2F1%2AOfT1MxmR0MwJE9oYKufVZQ.png" title="Is it over yet?" alt="Is it over yet?"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Tip:&lt;/strong&gt; Notice the ‘\’ before the space in the screenshot above; it is used to escape the space character in the path.&lt;/p&gt;
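&lt;p&gt;As an aside, the escaping concern disappears entirely if you script the resize, because an argument list bypasses the shell. A minimal Python sketch (the path and size below are hypothetical):&lt;/p&gt;

```python
import shlex

# Hypothetical VDI path containing a space, like the one in the screenshot
vdi = "/Users/me/VirtualBox VMs/ubuntu/ubuntu.vdi"
new_size_mb = 40 * 1024  # 40 GB expressed in MB, the unit modifyhd expects

# shlex.quote produces a shell-safe form of the path, equivalent to
# backslash-escaping the space by hand
command = "VBoxManage modifyhd --resize {} {}".format(new_size_mb, shlex.quote(vdi))
print(command)
# → VBoxManage modifyhd --resize 40960 '/Users/me/VirtualBox VMs/ubuntu/ubuntu.vdi'
```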

&lt;ul&gt;
&lt;li&gt;Verify that the resize was successful
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;VBoxManage showhdinfo [/path/to/vdi]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1600%2F1%2A9hcwL4UFmM4FensdSyG7gw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1600%2F1%2A9hcwL4UFmM4FensdSyG7gw.png" title="Yay!!" alt="Yay!!"&gt;&lt;/a&gt; &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Launch VirtualBox and start the VM&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Tip:&lt;/strong&gt; You may need to resize the partition inside the guest OS before it can use the new space.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; I am using VirtualBox version 5.1.14. On newer versions you may want to use modifymedium instead of modifyhd, though modifyhd is still supported.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;A few search results later…&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;… I was happily downloading the DB dump on my VM and writing this blog post in the meantime. :)&lt;/p&gt;

&lt;p&gt;If you like this post, feel free to follow me on &lt;a href="https://twitter.com/akshaygoyal174" rel="noopener noreferrer"&gt;Twitter&lt;/a&gt;. You can also visit my &lt;a href="https://www.github.com/ninetyone" rel="noopener noreferrer"&gt;Github&lt;/a&gt; page to check out some interesting projects and contribute to them.&lt;/p&gt;

&lt;h2&gt;
  
  
  Resources
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.virtualbox.org/manual/ch08.html" rel="noopener noreferrer"&gt;https://www.virtualbox.org/manual/ch08.html&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="http://osxdaily.com/2015/04/07/how-to-resize-a-virtualbox-vdi-or-vhd-file-on-mac-os-x/" rel="noopener noreferrer"&gt;http://osxdaily.com/2015/04/07/how-to-resize-a-virtualbox-vdi-or-vhd-file-on-mac-os-x/&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>virtualbox</category>
      <category>virtualization</category>
      <category>storage</category>
      <category>oracle</category>
    </item>
    <item>
      <title>Simplifying Browser File Uploads, By Dodging The Middleman</title>
      <dc:creator>Akshay Goyal</dc:creator>
      <pubDate>Tue, 06 Feb 2018 06:08:56 +0000</pubDate>
      <link>https://dev.to/akshaygoyal174/simplifying-browser-file-uploads-by-dodging-the-middleman-3jhl</link>
      <guid>https://dev.to/akshaygoyal174/simplifying-browser-file-uploads-by-dodging-the-middleman-3jhl</guid>
      <description>&lt;p&gt;&lt;em&gt;Originally published on &lt;a href="https://medium.com/@akshay.g174_68301/simplifying-browser-uploads-by-bypassing-the-middleman-b7fdbab63d67" rel="noopener noreferrer"&gt;Medium&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Disclaimer:&lt;/strong&gt; This is my first blog post :). After years of procrastination, I have finally managed to sit down and write an article. If you find that the write-up is not good enough, then please let me know what I can improve on for future articles. Thanks, enjoy!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1600%2F1%2A3j7oRXpqinBlyydcf7U6-A.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1600%2F1%2A3j7oRXpqinBlyydcf7U6-A.png" title="Hassle Free Life" alt="Hassle Free Life"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I currently work for a media company focussing on sports news and entertainment. The majority of our readable content is crowdsourced. Our content creators get paid based on the revenue their articles generate. It is a very transparent system and one that scales well. This growing scale is good for business but brings a lot of challenges for us on the engineering team. We are therefore always focussed on building systems which can handle large volumes of traffic without any downtime for our users and content creators.&lt;/p&gt;

&lt;p&gt;Today, a significant percentage of content consumption has moved from text to video. Keeping up with changing times, we plan on extending our content creation platform to handle video uploads as well. Well… video is a different ball game altogether. Owing to their huge file sizes, videos need to be stored and managed separately.&lt;/p&gt;

&lt;p&gt;While brainstorming on the architecture of the system, we came up with a couple of approaches:&lt;/p&gt;

&lt;h2&gt;
  
  
  Approach 1:
&lt;/h2&gt;

&lt;p&gt;The user uploads the video to our server; we then upload it to S3 and send the S3 bucket URL onwards for encoding.&lt;br&gt;
This might sound simple enough, but the approach has an unnecessary overhead which could potentially lead to a major catastrophe (for us at least). How, you ask? Let me tell you. One of our microservices serves client-facing requests as well as content creation requests. In this architecture, it is possible that a lot of video upload requests choke up the server (by hogging CPU or bandwidth) and any further requests are timed out.&lt;/p&gt;

&lt;p&gt;But what if I have an autoscaled environment?&lt;br&gt;
For those who don’t know what autoscaling is, here’s an overview. If your server can handle 1000 requests/second and you start getting 2000 requests/second, the server will not be able to serve them all, and requests will time out for the user. To solve this, we can run 2 parallel servers and distribute traffic across them using a load balancer. A reasonable question here would be: why not always run 2 parallel servers? Well, we could, but if the number of requests grows to 5000/second we’ll need more than 2 servers, and there is a cost associated with each server that is running. The ideal solution is to spin up new servers only when there is an actual need for them. AWS provides such an autoscaling solution out of the box. You can specify one or more trigger conditions (e.g. CPU over 80% and/or RAM usage over 90%) which, when met, will autoscale your service.&lt;/p&gt;
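&lt;p&gt;The capacity arithmetic above is easy to sketch, assuming a single node can serve 1000 requests/second (the figure used in this example):&lt;/p&gt;

```python
import math

CAPACITY_PER_SERVER = 1000  # requests/second a single node can handle

# Minimum number of servers needed for each load level from the example
for load in (1000, 2000, 5000):
    servers = math.ceil(load / CAPACITY_PER_SERVER)
    print(f"{load} req/s needs {servers} server(s)")
```

This is exactly the calculation an autoscaling policy approximates indirectly through its trigger conditions.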

&lt;p&gt;Now that we know the autoscaling basics, let’s consider a probable scenario in our case. Suppose there are a lot of incoming requests to the server from both content creators (uploading videos) and end users. Serving both kinds of requests is critical for business. With autoscaling in place, an additional server will be added automatically when the trigger conditions are met. Once the new server (or node) is up and running, it starts receiving and serving requests. Now both nodes receive video uploads, store them locally on their respective HDDs, and ideally keep uploading them to S3. Works fine, right? Well, not really… If the traffic were to subside a bit and one server became sufficient to serve the requests, the other server would be killed while downscaling. When this happens, we can’t be 100% sure that every video on that node’s filesystem was uploaded to S3 yet.&lt;/p&gt;
&lt;h2&gt;
  
  
  Approach 2:
&lt;/h2&gt;

&lt;p&gt;In order to overcome the aforementioned potential issues, we thought of skipping the middleman and uploading videos directly to S3 from the client’s browser. With this approach, we don’t have to worry about requests choking up the server and users reporting issues at 3 o’clock in the night. Since we are moving everything to the client side, there is a new challenge: security. We don’t want our private S3 credentials to be exposed to the public. So the challenge is to upload files securely to S3 directly from the browser.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How to set up&lt;/strong&gt;&lt;br&gt;
&lt;em&gt;Create a new bucket&lt;/em&gt;&lt;br&gt;
A new bucket is preferred so that you can restrict the user’s access to your other buckets.&lt;br&gt;
&lt;em&gt;Create a new user with minimum privileges&lt;/em&gt;&lt;br&gt;
Create a new user who can write to your new bucket, and save the accessKey and secretKey; we’ll be using them later. The upload policy below restricts what a user can do even if they have your accessKey and secretKey, which is why it’s better to create a new user rather than use the root user.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Version&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;2012-10-17&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Statement&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
        &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Effect&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Allow&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Action&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
                &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;s3:PutObject&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;s3:PutObjectAcl&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
            &lt;span class="p"&gt;],&lt;/span&gt;
            &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Resource&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
                &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;arn:aws:s3:::YOUR_BUCKET_ID_HERE/*&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
            &lt;span class="p"&gt;]&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt; 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Handle CORS requests&lt;/strong&gt;&lt;br&gt;
Cross-Origin Resource Sharing (CORS) is a mechanism that uses additional HTTP headers to let a user agent gain permission to access selected resources from a server on a different origin (domain) than the site currently in use. Set a CORS configuration like the following on your bucket:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="p"&gt;?&lt;/span&gt;&lt;span class="nx"&gt;xml&lt;/span&gt; &lt;span class="nx"&gt;version&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;1.0&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="nx"&gt;encoding&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;UTF-8&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;?&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;CORSConfiguration&lt;/span&gt; &lt;span class="nx"&gt;xmlns&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;http://s3.amazonaws.com/doc/2006-03-01/&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
    &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;CORSRule&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
        &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;AllowedMethod&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="nx"&gt;POST&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="sr"&gt;/AllowedMethod&lt;/span&gt;&lt;span class="err"&gt;&amp;gt;
&lt;/span&gt;        &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;AllowedOrigin&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="nx"&gt;yourdomain&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;com&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="sr"&gt;/AllowedOrigin&lt;/span&gt;&lt;span class="err"&gt;&amp;gt;
&lt;/span&gt;        &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;AllowedHeader&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;*&amp;lt;&lt;/span&gt;&lt;span class="sr"&gt;/AllowedHeader&lt;/span&gt;&lt;span class="err"&gt;&amp;gt;
&lt;/span&gt;    &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="sr"&gt;/CORSRule&lt;/span&gt;&lt;span class="err"&gt;&amp;gt;
&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="sr"&gt;/CORSConfiguration&lt;/span&gt;&lt;span class="err"&gt;&amp;gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  How the upload works
&lt;/h2&gt;

&lt;p&gt;In HTTP terms, the upload is a simple POST request to an S3 endpoint. The request contains the file, a filename (key, in S3 terms), some metadata, a signed policy and a signature.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Order of flow&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The client sends an HTTP GET request with the filename and content type to your server. The actual file is not sent in this request.&lt;/li&gt;
&lt;li&gt;The server responds with the fields the client will use to make its request to S3. Based on your requirements, this is where you could restrict who can upload files to S3, or the number of uploads a person can make in, say, 24 hours.&lt;/li&gt;
&lt;li&gt;The client consumes the server response and sends an HTTP POST request to the S3 bucket. This request contains the actual file along with the various params in the request body.&lt;/li&gt;
&lt;li&gt;The request is authenticated at AWS. If successful, the file is uploaded; if not, an error is returned.
&lt;strong&gt;Note:&lt;/strong&gt; If your file is pretty large (&amp;gt;10MB), you can chunk the file and repeat the whole process for each chunk. When all chunks have been uploaded to S3, you can send a merge request to S3. This merge request tells AWS that all parts of the file have been received and the file can now be merged.&lt;/li&gt;
&lt;/ol&gt;
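&lt;p&gt;Step 2 — the server preparing the signed fields — can be sketched as follows. This is only a sketch using the Python standard library; the bucket, key and size limit below are placeholder values:&lt;/p&gt;

```python
import base64
import json
from datetime import datetime, timedelta, timezone

# Placeholder values; a real server would derive the key from the client's
# GET request (filename, content type) and the bucket from its config
bucket = "YOUR_BUCKET_ID_HERE"
key = "e26413ddf89eadcde3040f73d0c4e3f4"

# The policy document limits what the signed POST may do, and expires quickly
expiration = datetime.now(timezone.utc) + timedelta(minutes=15)
policy = {
    "expiration": expiration.strftime("%Y-%m-%dT%H:%M:%SZ"),
    "conditions": [
        {"bucket": bucket},
        {"key": key},
        {"acl": "public-read"},
        ["content-length-range", 0, 10 * 1024 * 1024],  # refuse uploads over 10 MB
    ],
}

# S3 expects the policy base64-encoded; the signature is computed over this string
policy_b64 = base64.b64encode(json.dumps(policy).encode()).decode()
print(policy_b64[:40] + "...")
```

Because the client only ever sees the encoded policy and its signature, it can upload the file but cannot loosen any of these conditions.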

&lt;p&gt;&lt;strong&gt;Sample server response&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;
   &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;endpoint_url&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;http://${your_bucket_name}.com.s3.amazonaws.com&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
   &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;params&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;key&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;e26413ddf89eadcde3040f73d0c4e3f4&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;acl&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;public-read&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;success_action_status&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;201&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;policy&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;eyJleHBpcmF0aW9uIjoiMjAxOC0wMi0wNFQ4OjE3OjAwWiIsImNvbmRpdGlvbnMiOlt7ImJ1Y2tldCI6InRlc3Qtdi5zcG9ydHNrZWVkYS5jb20ifSx7ImtleSI6ImUyNjQxM2RkZjg5ZWFkY2QlMzA0MGY3M2QwYzRlM2Y0LnBuZyJ9LHsiYWNsIjoicHVibGljLXJlYWQifSx7InN1Y2Nlc3NfYWN0aW9YXR1cyI6IjIwMSJ9PFsiY29udGVudC1sZW5ndGgtcmFuZ2UiLDBsMTA0ODU3NjBdLHsieC1hbXotYWxnb3JpdGhtIjoiQVdTNC1ITUFDLVNIQTI1NiJ9LHsieC1hbXotY3JlZGVudGlhbCI6IkFLSUFKR1RBMjVHSlJVVk4yNlJBXC8yMDE4MDIwNFwvYXAtc291dGhlYXN0LTFcL3MzXC9hd3M0X3JlcXVlc3QifSx7IngtYW16LWRhdGUiOiIyMDE4MDIwNFQwMDAwMDBaIn0sWyJzdGFydHMtd2l0aCIsIiRDb250ZW50LVR5cGUiLCIiXV10&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;content-type&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;image/png&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;x-amz-algorithm&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;AWS4-HMAC-SHA256&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;x-amz-credential&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;${your_access_key_id}/20180204/${your_s3_bucket_region}/s3/aws4_request&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;x-amz-date&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;20180204T000000Z&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;x-amz-signature&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;3787d8398c33548253ed102a8aas794cf2ebbfafad36251d46489513ef513d35&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
   &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  How the AWS request is authenticated
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;On Client side&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1600%2F1%2AtRuHjWXphDVm3P_dB4_CUw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1600%2F1%2AtRuHjWXphDVm3P_dB4_CUw.png" title="Client side" alt="Client Side"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;On the client side. Source: AWS documentation&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Construct a request to AWS.&lt;/li&gt;
&lt;li&gt;Calculate the signature using your secret access key.&lt;/li&gt;
&lt;li&gt;Send the request to Amazon S3. Include your access key ID and the signature in your request. Amazon S3 performs the next three steps.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;On AWS Server&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1600%2F1%2AfubNyDwu3D9mzBbgE7s4Vw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1600%2F1%2AfubNyDwu3D9mzBbgE7s4Vw.png" title="Server side" alt="Server Side"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;On the AWS server. Source: AWS documentation&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Amazon S3 uses the access key ID to look up your secret access key.&lt;/li&gt;
&lt;li&gt;Amazon S3 calculates a signature from the request data and the secret access key using the same algorithm that you used to calculate the signature you sent in the request.&lt;/li&gt;
&lt;li&gt;If the signature generated by Amazon S3 matches the one you sent in the request, the request is considered authentic. If the comparison fails, the request is discarded, and Amazon S3 returns an error response.&lt;/li&gt;
&lt;/ol&gt;
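&lt;p&gt;For the curious, the AWS4-HMAC-SHA256 signature seen in the sample response is just a short chain of HMACs over the parts of the credential scope, applied to the base64 policy. The secret key and policy string below are dummies, shown only to illustrate the derivation:&lt;/p&gt;

```python
import hashlib
import hmac

def _hmac(key: bytes, msg: str) -> bytes:
    return hmac.new(key, msg.encode(), hashlib.sha256).digest()

def signing_key(secret_key: str, date: str, region: str, service: str = "s3") -> bytes:
    # Signature Version 4 key derivation: chained HMACs over the
    # credential-scope parts (date / region / service / terminator)
    k = _hmac(("AWS4" + secret_key).encode(), date)
    k = _hmac(k, region)
    k = _hmac(k, service)
    return _hmac(k, "aws4_request")

# Dummy secret key and truncated policy string, for illustration only
secret_key = "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
policy_b64 = "eyJleHBpcmF0aW9uIjoi..."

key = signing_key(secret_key, "20180204", "ap-southeast-1")
signature = hmac.new(key, policy_b64.encode(), hashlib.sha256).hexdigest()
print(signature)  # 64 hex characters: the x-amz-signature field
```

S3 repeats exactly this computation with the secret key it looks up from the access key ID, which is why the secret itself never needs to leave your server.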

&lt;h2&gt;
  
  
  Show me the code
&lt;/h2&gt;

&lt;p&gt;You can find a working sample of the server and client at:&lt;br&gt;
&lt;a href="https://www.github.com/ninetyone/secureBrowserUploadsToS3" rel="noopener noreferrer"&gt;https://www.github.com/ninetyone/secureBrowserUploadsToS3&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you like this post, you can follow me on &lt;a href="https://twitter.com/akshaygoyal174" rel="noopener noreferrer"&gt;Twitter&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Additional Resources:
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingHTTPPOST.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingHTTPPOST.html&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/AmazonS3/latest/API/ErrorResponses.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/AmazonS3/latest/API/ErrorResponses.html&lt;/a&gt; (Useful while debugging)&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/AmazonS3/latest/dev/S3_Authentication2.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/AmazonS3/latest/dev/S3_Authentication2.html&lt;/a&gt;(Understanding AWS request authentication)&lt;/p&gt;

</description>
      <category>aws</category>
      <category>s3</category>
      <category>browser</category>
      <category>upload</category>
    </item>
  </channel>
</rss>
