
Mohamed Ahmed

How I automated Windows Server storage savings (~30%) using NTFS Hard-links

As an Infrastructure Engineer, one of the most repetitive and annoying alerts I get is: "Disk Space Full" on Windows File Servers.

Usually, the solutions are:

  1. Buy more SAN/NAS storage (Expensive 💸)
  2. Ask users to delete their duplicate files and backups (A nightmare 🤦‍♂️)
  3. Use Windows' native Data Deduplication (resource-heavy and not supported on every volume type).

I wanted a lightweight, highly controlled solution that runs without messing up the users' workflow. So, I built CloudShrink.

⚙️ How it works under the hood:

Instead of just deleting files, CloudShrink acts as an automated deduplication engine:

  • Scanning: It scans the target directory and hashes files using SHA-256 to find exact byte-for-byte duplicates.
  • The Magic (NTFS Hard-links): It keeps the original file, deletes the duplicates, and instantly replaces them with native Windows NTFS hard-links pointing to the original file.
  • The Result: Storage is reclaimed immediately, but for the user, the files are still exactly where they left them. Zero broken paths, zero data loss.
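The scan-hash-link flow above can be sketched in a few dozen lines of Python. To be clear, this is my own minimal illustration, not CloudShrink's actual code; the function names (`sha256_of`, `dedupe_with_hardlinks`) and the temp-file rename trick are assumptions:

```python
import hashlib
import os
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Hash a file in 1 MiB chunks so large files don't exhaust memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def dedupe_with_hardlinks(root: Path, dry_run: bool = True) -> int:
    """Replace exact duplicates under `root` with hard links to the first
    copy seen. Returns the number of bytes reclaimed (or that would be)."""
    by_hash: dict[str, Path] = {}
    reclaimed = 0
    for path in sorted(p for p in root.rglob("*") if p.is_file()):
        digest = sha256_of(path)
        original = by_hash.setdefault(digest, path)
        if original is path:
            continue  # first copy seen: keep it as the link target
        if os.path.samefile(original, path):
            continue  # already hard-linked to the original
        reclaimed += path.stat().st_size
        if not dry_run:
            # Create the link under a temp name, then rename over the
            # duplicate, so the path never disappears from the user's view.
            tmp = path.with_suffix(path.suffix + ".cs_tmp")
            os.link(original, tmp)
            os.replace(tmp, path)
    return reclaimed
```

Note that `os.link` maps to `CreateHardLink` on Windows, so the links are plain NTFS hard links with no extra dependency; the `os.replace` step keeps the user-visible path present for the whole swap.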

🛡️ Safety First (Simulation Mode)

Because messing with enterprise file servers is risky, I built a Simulation Mode. You can run a dry-test first, and the tool will generate a PDF audit report showing exactly which files are duplicated and how much space you would save, without actually modifying a single byte.
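A simulation pass can reuse the exact same scan and simply report instead of linking. Here's a minimal sketch of what that audit output could look like; I'm emitting CSV for brevity where the real tool generates a PDF, and `write_audit_report` is a hypothetical helper, not CloudShrink's API:

```python
import csv
from typing import TextIO

def write_audit_report(duplicates: dict[str, list[str]], out: TextIO) -> None:
    """Write a duplicate-file audit: one row per duplicate that would be
    replaced by a hard link. `duplicates` maps a SHA-256 digest to the
    list of paths sharing that content; the first path is the kept original."""
    writer = csv.writer(out)
    writer.writerow(["sha256", "duplicate_path"])
    for digest, paths in duplicates.items():
        for p in paths[1:]:  # paths[0] stays untouched as the original
            writer.writerow([digest, p])
```

The key design point is that the report is generated from the same hash index the live run would use, so what you see in simulation is exactly what a real run would do.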

I’ve wrapped the engine into a landing page to make it easier for IT teams to request a demo and see it in action.

🔗 You can check out the simulation demo and the project here: https://cloudshrink.vercel.app/

I’d love to get some feedback from other SysAdmins and DevOps folks here. Have you run into any weird edge-cases using NTFS hard-links in large environments? Let me know!
