<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Taufiq Abdullah</title>
    <description>The latest articles on DEV Community by Taufiq Abdullah (@taufiqtab).</description>
    <link>https://dev.to/taufiqtab</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F440365%2Fc88ba845-7f7c-4dfe-9068-30301a748021.jpg</url>
      <title>DEV Community: Taufiq Abdullah</title>
      <link>https://dev.to/taufiqtab</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/taufiqtab"/>
    <language>en</language>
    <item>
      <title>When Disk Space is Not Enough: A Lesson in Challenging MySQL Database Cleansing</title>
      <dc:creator>Taufiq Abdullah</dc:creator>
      <pubDate>Wed, 25 Feb 2026 21:31:11 +0000</pubDate>
      <link>https://dev.to/taufiqtab/when-disk-space-is-not-enough-a-lesson-in-challenging-mysql-database-cleansing-eoj</link>
      <guid>https://dev.to/taufiqtab/when-disk-space-is-not-enough-a-lesson-in-challenging-mysql-database-cleansing-eoj</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flbkbm1kg9ko6zlcviw85.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flbkbm1kg9ko6zlcviw85.webp" alt=" " width="640" height="640"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Database maintenance is rarely a walk in the park, especially when dealing with high-volume transactional data. Recently, we were tasked with cleansing a MySQL database that had ballooned to &lt;strong&gt;220GB&lt;/strong&gt;. The mission was simple: remove data from 2018 to 2023 across four tables to reclaim space and improve performance.&lt;/p&gt;

&lt;p&gt;However, as any developer knows, "simple" tasks often hide complex traps. Here is how we navigated storage limits and technical hurdles to get the job done.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Challenge: The 200GB "Elephant" in the Room
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqe0h2c5yz7u1fn4zqthx.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqe0h2c5yz7u1fn4zqthx.jpg" alt=" " width="800" height="538"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Out of the 220GB total size, one single table accounted for &lt;strong&gt;200GB&lt;/strong&gt;. This table was heavy with transactional records and—the real culprit—&lt;strong&gt;BLOB attachments&lt;/strong&gt;. Our goal was to keep only the data from 2024 onwards.&lt;/p&gt;

&lt;h3&gt;
  
  
  Phase 1: The "Clone and Swap" Strategy (The Ideal Plan)
&lt;/h3&gt;

&lt;p&gt;Our first approach was a classic "blue-green" table migration:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Clone:&lt;/strong&gt; Create a clone of the existing tables with a new name.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Filter:&lt;/strong&gt; Insert only the 2024–present data into the new tables.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Swap:&lt;/strong&gt; Rename the existing tables and swap them with the new ones.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cleanup:&lt;/strong&gt; Drop the old tables once verified.&lt;/li&gt;
&lt;/ol&gt;
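
&lt;p&gt;In SQL terms, the plan looked roughly like this (a sketch only; &lt;code&gt;orders&lt;/code&gt; and &lt;code&gt;created_at&lt;/code&gt; are placeholder names, not the real schema):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- 1. Clone the table structure, indexes included
CREATE TABLE orders_new LIKE orders;

-- 2. Copy over only the rows we want to keep
INSERT INTO orders_new
SELECT * FROM orders WHERE created_at &amp;gt;= '2024-01-01';

-- 3. Swap the tables in a single atomic statement
RENAME TABLE orders TO orders_old, orders_new TO orders;

-- 4. Drop the old data once everything is verified
DROP TABLE orders_old;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Until step 4, the original data is never touched, which is exactly the "checkpoint" property described below.&lt;/p&gt;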

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;The Pros:&lt;/strong&gt; It’s safe. If something goes wrong, the original data remains untouched as a "checkpoint."&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;The Result in Dev:&lt;/strong&gt; Flawless. Despite the time required for backup and restore, it worked exactly as expected.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  The Production Reality Check
&lt;/h3&gt;

&lt;p&gt;When we moved to production, we hit a brick wall. The production server had a &lt;strong&gt;500GB total capacity&lt;/strong&gt;, but only &lt;strong&gt;100GB of free space&lt;/strong&gt; remained. Trying to clone a 200GB table into a 100GB hole is mathematically impossible. We needed a Plan B.&lt;/p&gt;




&lt;h2&gt;
  
  
  Phase 2: The "Chunked Delete" Strategy (The Safe Bet?)
&lt;/h2&gt;

&lt;p&gt;We pivoted to a more granular approach: deleting data directly from the existing tables. To prevent long-running queries or locking the database, we followed these steps:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Deleted data in year-by-year ranges (2018, 2019, etc.).&lt;/li&gt;
&lt;li&gt;Used &lt;strong&gt;Bash scripts&lt;/strong&gt; to automate chunked deletions with &lt;code&gt;LIMIT&lt;/code&gt; clauses to keep the system responsive and avoid "hanging" the DB.&lt;/li&gt;
&lt;/ul&gt;
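
&lt;p&gt;The Bash loop looked something like this (an illustrative sketch; the table, column, and database names are placeholders):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/bin/bash
# Delete one year of rows in small chunks so the table is never locked for long.
# "orders", "created_at", and "mydb" are placeholders.
while true; do
  ROWS=$(mysql -N mydb -e "
    DELETE FROM orders
    WHERE created_at &amp;gt;= '2018-01-01' AND created_at &amp;lt; '2019-01-01'
    LIMIT 10000;
    SELECT ROW_COUNT();")
  if [ "$ROWS" -eq 0 ]; then
    break   # nothing left to delete in this range
  fi
  sleep 1   # breathing room for replication and other queries
done
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;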

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo7m13ik6cfq20j3hn5h3.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo7m13ik6cfq20j3hn5h3.jpeg" alt=" " width="792" height="966"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F62w0l4et4kzsd63hv1fi.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F62w0l4et4kzsd63hv1fi.jpeg" alt=" " width="792" height="234"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  The Trap: The &lt;code&gt;OPTIMIZE TABLE&lt;/code&gt; Paradox
&lt;/h3&gt;

&lt;p&gt;The deletion worked, but MySQL doesn't automatically shrink the file size on disk after a &lt;code&gt;DELETE&lt;/code&gt;: the freed pages stay allocated inside the tablespace as fragmentation. To actually reclaim the 100GB+ of space, we needed to run &lt;code&gt;OPTIMIZE TABLE&lt;/code&gt;.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;The Critical Oversight:&lt;/strong&gt; &lt;code&gt;OPTIMIZE TABLE&lt;/code&gt; works by creating a temporary copy of the table. Even though we had deleted half the data, the operation still required enough free space to rewrite the entire table. Once again, our 100GB of free space was the bottleneck. We were stuck.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  Phase 3: The "Off-Site Surgery" (The Final Solution)
&lt;/h2&gt;

&lt;p&gt;Desperate times call for creative architecture. We decided to perform the "heavy lifting" on a separate environment.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; &lt;strong&gt;Export:&lt;/strong&gt; We dumped the massive 200GB table and moved it to a secondary server with enough storage.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Cleanse &amp;amp; Shrink:&lt;/strong&gt; On the secondary server, we performed the deletions and ran &lt;code&gt;OPTIMIZE TABLE&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;The Result:&lt;/strong&gt; The optimized table dropped significantly below 100GB.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Import:&lt;/strong&gt; We transferred the now-compacted table back to the production server under a new name.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;The Final Swap:&lt;/strong&gt; Since the new table was now small enough to fit in the 100GB free space, we successfully swapped it with the original and dropped the old 200GB giant.&lt;/li&gt;
&lt;/ol&gt;
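
&lt;p&gt;The off-site round trip can be sketched with standard tools (host, database, and table names here are placeholders):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# 1. Dump the giant table without locking the database (InnoDB)
mysqldump --single-transaction mydb big_table | gzip &amp;gt; big_table.sql.gz

# 2. Ship it to the staging server that has spare disk
scp big_table.sql.gz staging:/data/

# 3. On the staging server: restore, cleanse, and rebuild
gunzip &amp;lt; /data/big_table.sql.gz | mysql staging_db
mysql staging_db -e "DELETE FROM big_table WHERE created_at &amp;lt; '2024-01-01';"
mysql staging_db -e "OPTIMIZE TABLE big_table;"

# 4. Dump the compacted table and load it back into production under a new name
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;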




&lt;h2&gt;
  
  
  Key Takeaways
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Free Space is Your Best Friend:&lt;/strong&gt; Always check &lt;code&gt;df -h&lt;/code&gt; before planning a migration. Table-rebuild operations in MySQL (like &lt;code&gt;OPTIMIZE&lt;/code&gt; or many &lt;code&gt;ALTER&lt;/code&gt;s) write a full temporary copy, so budget free space on the order of your largest table; &lt;strong&gt;2x&lt;/strong&gt; is a safe rule of thumb.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Optimize with Caution:&lt;/strong&gt; Remember that &lt;code&gt;OPTIMIZE TABLE&lt;/code&gt; is not an "in-place" fix; it is a rebuild.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Think Outside the Box:&lt;/strong&gt; If your local disk is full, use a secondary staging server to process the data before bringing it back home.&lt;/li&gt;
&lt;/ul&gt;
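
&lt;p&gt;One more habit worth adding: measure before you plan. A quick way to see which tables are eating the disk (the schema name is a placeholder):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;SELECT table_name,
       ROUND((data_length + index_length) / 1024 / 1024 / 1024, 1) AS size_gb
FROM information_schema.tables
WHERE table_schema = 'mydb'
ORDER BY size_gb DESC;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Keep in mind these numbers are estimates and don't account for fragmentation; &lt;code&gt;df -h&lt;/code&gt; has the final say.&lt;/p&gt;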

&lt;p&gt;Database cleansing is as much about managing infrastructure as it is about writing SQL. Have you ever faced a storage "deadlock" like this? I’d love to hear how you handled it in the comments!&lt;/p&gt;

</description>
      <category>mysql</category>
      <category>database</category>
      <category>devops</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>'No More Space' and Can't connect SSH to the Server</title>
      <dc:creator>Taufiq Abdullah</dc:creator>
      <pubDate>Tue, 01 Mar 2022 05:07:06 +0000</pubDate>
      <link>https://dev.to/taufiqtab/no-more-space-and-cant-connect-ssh-to-the-server-56d1</link>
      <guid>https://dev.to/taufiqtab/no-more-space-and-cant-connect-ssh-to-the-server-56d1</guid>
      <description>&lt;p&gt;Hello, I want to share my Experiment of this &lt;em&gt;'Little Nightmare'&lt;/em&gt; about insufficient Storage Space in running live Server, in this case I use Google Cloud Services : Compute Engine and Ubuntu as the Operating System.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Note:&lt;br&gt;
this post contains some story and trial &amp;amp; error; skip to the Solution section if you just want the answer.&lt;/em&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  The Problem
&lt;/h2&gt;

&lt;p&gt;I got a message from my friend; he sent me this error message:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhyju4kfqp4j8mdiz8qzr.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhyju4kfqp4j8mdiz8qzr.jpg" alt=" " width="770" height="95"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;He also couldn't execute commands over SSH on the server; everything returned this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-bash: cannot create temp file for here-document: No space left on device
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then I told him to run this command to check directory sizes:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;du -h --max-depth=1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;This is an example of the output of the command above; picture taken from &lt;a href="https://pbs.twimg.com/media/E_zsqSNUUAcsVLP.jpg" rel="noopener noreferrer"&gt;Twitter&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa74c0n8ual7hcm0n1fbd.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa74c0n8ual7hcm0n1fbd.jpg" alt=" " width="800" height="367"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;At this point he could run &lt;em&gt;rm -rf&lt;/em&gt; to clean up some files and directories and free up space.&lt;/p&gt;
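
&lt;p&gt;When hunting for things to delete, it also helps to sort directories by size so the biggest offenders surface first (a small sketch, assuming GNU coreutils):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Show the ten largest entries directly under /, biggest first
sudo du -xh --max-depth=1 / 2&amp;gt;/dev/null | sort -rh | head -n 10
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;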

&lt;p&gt;The server then ran well again and the website was back up.&lt;/p&gt;

&lt;p&gt;The next day he messaged me again: the disk on the server had run out of space once more, and he asked me to increase its size.&lt;/p&gt;

&lt;p&gt;This is where everything got worse, and where I made my mistake: I was in a rush, so I asked his permission to shut down the instance for a while, then increased the size of the disk from the Google Cloud Console page.&lt;/p&gt;

&lt;p&gt;After I booted the instance up again, we couldn't connect over SSH (using PuTTY / WinSCP); the server refused our public key and user profile:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa9txo3w4aevlp03y6fsu.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa9txo3w4aevlp03y6fsu.jpeg" alt=" " width="586" height="297"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The journey begins...&lt;/p&gt;
&lt;h2&gt;
  
  
  Attempt to Solve
&lt;/h2&gt;
&lt;h3&gt;
  
  
  1. Restart the Instance
&lt;/h3&gt;

&lt;p&gt;The first thing I did was restart the server, hoping this little bit of magic would fix the SSH connection issue. Unfortunately, it still didn't work.&lt;/p&gt;
&lt;h3&gt;
  
  
  2. Connect to SSH in Google Cloud Console
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fekmuaq5hmcw6rc2o6yye.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fekmuaq5hmcw6rc2o6yye.png" alt=" " width="152" height="147"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then I tried to connect over SSH from the Google Cloud Console instance page. Still no luck: Connection Failed.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi6sl1k868atptcf9tdi6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi6sl1k868atptcf9tdi6.png" alt=" " width="800" height="561"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  3. Connect to Serial Console
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffwll5ky9zmczr1q0gjmi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffwll5ky9zmczr1q0gjmi.png" alt=" " width="408" height="142"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So I started to google my problem, and found some answers on Server Fault and Stack Overflow suggesting I enable the serial console and try to log in from there using this startup script:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/bin/bash
# USERNAME and PASSWORD are placeholders.
# --disabled-password --gecos "" keeps adduser from prompting interactively.
adduser --disabled-password --gecos "" USERNAME
echo USERNAME:PASSWORD | chpasswd
usermod -aG google-sudoers USERNAME
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Unfortunately, the startup script failed to run, and I saw this error in the serial console:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;GCEGuestAgent[1121]: 2022-02-28T14:40:50.0037+07:00 GCEGuestAgent Error non_windows_accounts.go:144: Error creating user: useradd: /etc/passwd.1520: No space left on device#012useradd: cannot lock /etc/passwd; try again later.#012.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I started to PANIC... I thought we were locked out of the server and could do almost nothing.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;A moment of silence...&lt;/em&gt; &lt;/p&gt;

&lt;h3&gt;
  
  
  4. Create a new 'restore' disk from a snapshot of the old disk
&lt;/h3&gt;

&lt;p&gt;After reading some related answers I found through Google, I tried creating a new disk with a bigger size (from 10GB to 20GB), using a snapshot of the old disk as the source.&lt;/p&gt;

&lt;p&gt;I hoped that with this new, bigger disk everything would run smoothly once I attached it to the instance as the boot disk, without losing any existing data. But unfortunately, when the server came up, we were still refused by the server over PuTTY / WinSCP and over the Google Cloud Console SSH.&lt;/p&gt;

&lt;h2&gt;
  
  
  Solution
&lt;/h2&gt;

&lt;p&gt;I had almost given up, but I tried one last shot. And YEAH!! It worked. &lt;/p&gt;

&lt;p&gt;Here are the steps I took to solve the problem.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Create new Instance with new Disk and new OS Image
&lt;/h3&gt;

&lt;p&gt;I created a new instance with a new Ubuntu disk, started it, and made sure I could connect over SSH.&lt;/p&gt;

&lt;p&gt;The idea is to attach the old disk to this new instance as an additional disk.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Attach the old 'full' disk as an additional disk to the new Instance
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb1pwmuq3o4hsiya48mvv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb1pwmuq3o4hsiya48mvv.png" alt=" " width="442" height="237"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Go to the Compute Engine instance list, choose the new instance, then click the edit button.&lt;/p&gt;

&lt;p&gt;Find the Additional disks section, click Attach existing disk, then choose the old disk.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Make sure the old disk has been detached as the boot disk of the original instance first.&lt;/em&gt;&lt;/p&gt;
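
&lt;p&gt;For the record, the same attach can be done from the CLI with &lt;code&gt;gcloud&lt;/code&gt; (the instance, disk, and zone names here are placeholders):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;gcloud compute instances attach-disk rescue-instance \
    --disk my-old-disk \
    --zone asia-southeast1-a
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;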

&lt;h3&gt;
  
  
  3. Mount the disk to a filesystem folder
&lt;/h3&gt;

&lt;p&gt;After attaching the old disk, run this command to check whether the disk is visible in the instance:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo lsblk
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It will show the list of disks and their partitions.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs7bkl7zadpvv11uo4zte.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs7bkl7zadpvv11uo4zte.png" alt=" " width="502" height="256"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In my case the old disk was identified as &lt;strong&gt;sdb&lt;/strong&gt;, and the partition we needed to fix was &lt;strong&gt;sdb1&lt;/strong&gt;. Let's say we mount it to &lt;strong&gt;/myOldDisk&lt;/strong&gt; on the new instance.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;create the /myOldDisk folder by executing this command
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo mkdir -p /myOldDisk
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;mount sdb1 to /myOldDisk
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo mount -o discard,defaults /dev/sdb1 /myOldDisk
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;configure permissions
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo chmod a+w /myOldDisk
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;run sudo lsblk again; now sdb1 shows up mounted at /myOldDisk&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  4. Clean up files (or back them up to your local computer) to free up space on the mounted disk
&lt;/h3&gt;

&lt;p&gt;Since the old disk was mounted at &lt;strong&gt;/myOldDisk&lt;/strong&gt;, I could simply go into the directory and clean up some files and folders to get more free space:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd /myOldDisk
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then I could find some big files to delete, or archive them as a zip and download them to my local computer.&lt;/p&gt;

&lt;p&gt;I freed up 500MB+ of space, which I figured was safe enough to proceed.&lt;/p&gt;

&lt;h3&gt;
  
  
  5. Unmount the old disk
&lt;/h3&gt;

&lt;p&gt;Unmount sdb1 from the filesystem by running this command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;umount /dev/sdb1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Detach the old disk from the new instance on its detail page in the Google Cloud Console. After that, we can shut down and delete the new instance and its disk; we don't need them anymore.&lt;/p&gt;

&lt;h3&gt;
  
  
  6. Increase the old disk size at Google Cloud Console
&lt;/h3&gt;

&lt;p&gt;Go to the Google Cloud Console Disks page, select the old disk, increase its size (in my case from 10GB to 30GB), and save.&lt;/p&gt;
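
&lt;p&gt;The same resize is also available from the CLI (the disk name and zone are placeholders). Note that a persistent disk can be grown while it is attached and in use, which is exactly why the shutdown was unnecessary:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;gcloud compute disks resize my-old-disk --size=30GB --zone=asia-southeast1-a
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;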

&lt;h3&gt;
  
  
  7. Attach the old disk to the old Instance
&lt;/h3&gt;

&lt;p&gt;I needed to attach the old disk to the old instance as its boot disk: go to the Compute Engine instance detail page, edit the instance, add the boot disk, and select the old disk.&lt;/p&gt;

&lt;h3&gt;
  
  
  8. Run the Instance and grow the partition
&lt;/h3&gt;

&lt;p&gt;Turn the instance on. Finally I could log in to the server over SSH again, but when I checked the free storage with the &lt;strong&gt;df -h&lt;/strong&gt; command, the space was still 10GB.&lt;/p&gt;

&lt;p&gt;My next step was to grow the disk partition with Growpart from the existing 10GB to the new 30GB size.&lt;/p&gt;

&lt;p&gt;I could check the new size by running:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo lsblk
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It will show the new &lt;strong&gt;/dev/sda&lt;/strong&gt; storage size. All I needed to do was grow the partition by running this command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo growpart /dev/sda 1
CHANGED: partition=1 start=2048 old: size=419428319 end=419430367 new: size=2097149919
,end=2097151967
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then I needed to resize the filesystem to fill the grown partition by executing this command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo resize2fs /dev/sda1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It will return something like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resize2fs 1.42.13 (28-Feb-2022)
Filesystem at /dev/sda1 is mounted on /; on-line resizing required
old_desc_blocks = 13, new_desc_blocks = 63
The filesystem on /dev/sda1 is now 262143739 (4k) blocks long.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Finally done! Now the disk has extra space, and I can verify the available free storage by running &lt;strong&gt;df -h&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Time to rest and drink some coffee...&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;From this experience I learned:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Never shut down a server whose storage is full. Free up some space before increasing the disk size, and make sure everything is fine before the shutdown.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You don't need to shut down the server to increase the disk size.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I hope this post will be a guide for me, or someone else, facing the same problem in the future.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;If you have suggestions about this post, feel free to write them down in the comment section below :)&lt;/em&gt;&lt;/p&gt;

</description>
      <category>linux</category>
      <category>ubuntu</category>
      <category>ssh</category>
      <category>googlecloud</category>
    </item>
    <item>
      <title>Deploy ReactJS Production Build with PM2</title>
      <dc:creator>Taufiq Abdullah</dc:creator>
      <pubDate>Mon, 10 Aug 2020 04:45:41 +0000</pubDate>
      <link>https://dev.to/taufiqtab/deploy-reactjs-production-build-with-pm2-5dfo</link>
      <guid>https://dev.to/taufiqtab/deploy-reactjs-production-build-with-pm2-5dfo</guid>
      <description>&lt;p&gt;Hello, This is my very first Post in Dev.to and i want to share about how to deploy ReactJS Production Build using VPS with PM2&lt;/p&gt;

&lt;h3&gt;
  
  
  Server Environment
&lt;/h3&gt;

&lt;p&gt;OS : Ubuntu 18.04.4 LTS&lt;br&gt;
NodeJS : 10.19.0&lt;br&gt;
NPM : 6.14.2&lt;/p&gt;
&lt;h2&gt;
  
  
  1. Build it
&lt;/h2&gt;

&lt;p&gt;Make sure you have built the app (using &lt;code&gt;yarn build&lt;/code&gt; or &lt;code&gt;npm run build&lt;/code&gt;).&lt;/p&gt;
&lt;h2&gt;
  
  
  2. Upload build file to VPS
&lt;/h2&gt;

&lt;p&gt;In this step, upload the build output to your VPS. In my case I put it in /var/www/myReactApp.&lt;/p&gt;
&lt;h2&gt;
  
  
  3. Install PM2
&lt;/h2&gt;

&lt;p&gt;You need PM2 to serve the app. Install it with this command in the terminal:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo npm install pm2 -g
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  4. Run PM2 Command
&lt;/h2&gt;

&lt;p&gt;This is the general form of the PM2 serve command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pm2 serve &amp;lt;path&amp;gt; &amp;lt;port&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now we plug our project into the command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pm2 serve myReactApp/ 3000 --name "my-react-app" --spa
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;myReactApp/ : the folder containing the built app&lt;/li&gt;
&lt;li&gt;3000 : the port to serve on&lt;/li&gt;
&lt;li&gt;"my-react-app" : the name of the PM2 process, visible in "pm2 list"&lt;/li&gt;
&lt;li&gt;--spa : flag for Single Page Applications; redirects unknown routes to index.html so client-side routing works&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  5. Apps Running
&lt;/h2&gt;

&lt;p&gt;Now your app is running on port 3000. You can open a browser and access yourdomain.com:3000 or your-ip:3000.&lt;/p&gt;

&lt;p&gt;We can also set up an Apache virtual host in sites-enabled to hide the port from the URL, using ProxyPreserveHost and ProxyPass.&lt;/p&gt;
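
&lt;p&gt;A minimal sketch of such a virtual host, assuming &lt;code&gt;mod_proxy&lt;/code&gt; and &lt;code&gt;mod_proxy_http&lt;/code&gt; are enabled (&lt;code&gt;sudo a2enmod proxy proxy_http&lt;/code&gt;) and the domain name is a placeholder:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;VirtualHost *:80&amp;gt;
    ServerName yourdomain.com
    # Forward every request to the PM2-served app on port 3000
    ProxyPreserveHost On
    ProxyPass / http://127.0.0.1:3000/
    ProxyPassReverse / http://127.0.0.1:3000/
&amp;lt;/VirtualHost&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;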

&lt;h2&gt;
  
  
  6. Monitoring Running Apps with PM2
&lt;/h2&gt;

&lt;p&gt;We can see all of our PM2 processes by running this command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pm2 list
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We can start, stop, or delete a process from the list by using:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pm2 &amp;lt;start/stop/delete&amp;gt; &amp;lt;process id/process name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pm2 stop my-react-app
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And we can view the PM2 dashboard for monitoring all running processes by using:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pm2 monit
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
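
&lt;p&gt;One optional extra: to have the app come back automatically after a server reboot, register PM2 with the init system and save the current process list:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pm2 startup   # prints a command to register PM2 with the init system; run it once
pm2 save      # remembers the current process list for the next boot
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;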



&lt;p&gt;Now our app is deployed :D&lt;br&gt;
I hope this article is useful!&lt;/p&gt;

&lt;p&gt;Have a Nice Day ;) &lt;/p&gt;

</description>
      <category>javascript</category>
      <category>react</category>
      <category>pm2</category>
      <category>ubuntu</category>
    </item>
  </channel>
</rss>
