<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Mohamed Ahmed</title>
    <description>The latest articles on DEV Community by Mohamed Ahmed (@mohamed_ahmed_dfa64a44f4b).</description>
    <link>https://dev.to/mohamed_ahmed_dfa64a44f4b</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3818569%2Ff08863f8-78a7-46b4-a069-105ac795aff6.jpg</url>
      <title>DEV Community: Mohamed Ahmed</title>
      <link>https://dev.to/mohamed_ahmed_dfa64a44f4b</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/mohamed_ahmed_dfa64a44f4b"/>
    <language>en</language>
    <item>
      <title>How I automated Windows Server storage savings (~30%) using NTFS Hard-links</title>
      <dc:creator>Mohamed Ahmed</dc:creator>
      <pubDate>Wed, 11 Mar 2026 13:52:44 +0000</pubDate>
      <link>https://dev.to/mohamed_ahmed_dfa64a44f4b/how-i-automated-windows-server-storage-savings-30-using-ntfs-hard-links-1j6e</link>
      <guid>https://dev.to/mohamed_ahmed_dfa64a44f4b/how-i-automated-windows-server-storage-savings-30-using-ntfs-hard-links-1j6e</guid>
      <description>&lt;p&gt;As an Infrastructure Engineer, one of the most repetitive and annoying alerts I get is: &lt;strong&gt;"Disk Space Full"&lt;/strong&gt; on Windows File Servers. &lt;/p&gt;

&lt;p&gt;Usually, the solutions are:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Buy more SAN/NAS storage (Expensive 💸)&lt;/li&gt;
&lt;li&gt;Ask users to delete their duplicate files and backups (A nightmare 🤦‍♂️)&lt;/li&gt;
&lt;li&gt;Turn on Windows' native Data Deduplication (heavy, and not suitable for every volume).&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;I wanted a lightweight, highly controlled solution that runs without messing up the users' workflow. So, I built &lt;strong&gt;CloudShrink&lt;/strong&gt;.&lt;/p&gt;

&lt;h3&gt;⚙️ How it works under the hood&lt;/h3&gt;

&lt;p&gt;Instead of just deleting files, CloudShrink acts as an automated deduplication engine (there's a minimal sketch of the flow right after this list):&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Scanning:&lt;/strong&gt; It scans the target directory and hashes files using &lt;strong&gt;SHA-256&lt;/strong&gt; to find &lt;em&gt;exact&lt;/em&gt; byte-for-byte duplicates.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;The Magic (NTFS Hard-links):&lt;/strong&gt; It keeps one copy, deletes each duplicate, and instantly re-creates it as a native Windows NTFS hard-link: an additional directory entry that references the same data on disk.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;The Result:&lt;/strong&gt; Storage is reclaimed immediately, but for the user, the files are still exactly where they left them. Zero broken paths, zero data loss.&lt;/li&gt;
&lt;/ul&gt;
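
&lt;p&gt;Here's that minimal sketch in Python. CloudShrink's actual code isn't public, so the function names and the &lt;code&gt;dry_run&lt;/code&gt; flag are illustrative; the key primitive is &lt;code&gt;os.link&lt;/code&gt;, which creates a native NTFS hard-link on Windows:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Illustrative sketch only: names and structure are assumptions, not CloudShrink's API
import hashlib
import os
from pathlib import Path

CHUNK = 1024 * 1024  # hash in 1 MiB chunks to keep memory flat

def sha256_of(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(CHUNK), b""):
            h.update(chunk)
    return h.hexdigest()

def dedup(root, dry_run=True):
    """Replace byte-identical duplicates under root with NTFS hard-links."""
    first_seen = {}   # SHA-256 digest: path of the first (kept) copy
    reclaimable = 0   # bytes that are (or would be) freed
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        digest = sha256_of(path)
        original = first_seen.get(digest)
        if original is None:
            first_seen[digest] = path
        elif not os.path.samefile(original, path):  # already hard-linked? skip
            reclaimable += path.stat().st_size
            if not dry_run:
                path.unlink()            # remove the duplicate...
                os.link(original, path)  # ...and re-create it as a hard-link
    return reclaimable
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;A production version would also pre-filter candidates by file size before hashing and skip open or locked files; the sketch keeps only the core idea.&lt;/p&gt;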

&lt;h3&gt;🛡️ Safety First (Simulation Mode)&lt;/h3&gt;

&lt;p&gt;Because messing with enterprise file servers is risky, I built a &lt;strong&gt;Simulation Mode&lt;/strong&gt;. You can run a dry run first, and the tool will generate a PDF audit report showing exactly which files are duplicated and how much space you &lt;em&gt;would&lt;/em&gt; save, without actually modifying a single byte.&lt;/p&gt;
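
&lt;p&gt;In terms of the sketch above, a simulation pass is just the default &lt;code&gt;dry_run=True&lt;/code&gt; call; the plain-text summary here stands in for the real PDF report, and the share path is hypothetical:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Simulation: measure potential savings without touching any files
potential = dedup(r"D:\Shares\Finance", dry_run=True)  # hypothetical share path
print(f"Would reclaim {potential / 1024 ** 3:.2f} GiB")

# Only after reviewing the report:
# dedup(r"D:\Shares\Finance", dry_run=False)
&lt;/code&gt;&lt;/pre&gt;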

&lt;p&gt;I’ve wrapped the engine into a landing page to make it easier for IT teams to request a demo and see it in action.&lt;/p&gt;

&lt;p&gt;🔗 &lt;strong&gt;You can check out the simulation demo and the project here:&lt;/strong&gt; &lt;a href="https://cloudshrink.vercel.app/" rel="noopener noreferrer"&gt;https://cloudshrink.vercel.app/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I’d love to get some feedback from other SysAdmins and DevOps folks here. Have you run into any weird edge cases with NTFS hard-links in large environments? Let me know!&lt;/p&gt;

</description>
      <category>automation</category>
      <category>architecture</category>
      <category>productivity</category>
      <category>software</category>
    </item>
  </channel>
</rss>
