Mahmud Seidu Babatunde
When “Private” Files Leak: How I Fixed a Silent Azure Storage Misconfiguration

Most storage failures aren’t caused by hackers; they’re caused by small configuration decisions engineers overlook.

A few years ago, a company accidentally exposed internal documents to the public.

Not because of a breach.
Not because of an attack.

But because of a simple misconfiguration.

Files that were meant to be private became publicly accessible through a URL.

No authentication. No restriction.

Just access.

That’s the reality of cloud systems.

Security failures are rarely loud. They are quiet, and often invisible until it’s too late.

This isn’t just a lab exercise; it’s the same type of design decision engineers make in real systems, where data must remain secure, available, and controlled.

In this article, I’ll walk through how I built a secure, highly available Azure Storage system for private company documents, and more importantly:

why each configuration matters.

The Problem Engineers Actually Solve

When dealing with internal company data, engineers are not just thinking:

“Where do I store files?”

They are thinking:

  • What happens if a region fails?
  • How do I prevent public exposure?
  • How do I share files securely and temporarily?
  • How do I reduce storage costs over time?
  • How do I ensure backups exist automatically?

This is what turns storage into architecture.

Key Concepts You Must Understand First

Storage Account

A storage account is the foundation of Azure storage. All files (blobs) live inside it.

High Availability

Systems must continue running even when failures occur. In cloud systems, failure is expected.

Geo-Redundant Storage (GRS)

GRS replicates your data to another region. If one region fails, your data still exists elsewhere.

Private Access

Private access ensures:

  • no anonymous access
  • no public exposure
  • only authorized users can access data

Thinking Like a Cloud Architect

Before building anything, I asked:

  1. How do I survive regional failure?
  2. How do I enforce strict privacy?
  3. How do I allow temporary access securely?
  4. How do I automate cost and backup?

Everything below answers those questions.

Step 1 — Create the Storage Account

In the portal, search for and select Storage accounts.

Azure portal showing Storage accounts search

Select + Create.

Azure create storage account button

Select the resource group created in the previous lab.

Azure resource group selection

Set the storage account name to one based on private, and ensure it is globally unique — storage account names allow only 3–24 lowercase letters and numbers.

Azure storage account naming

Select Review, then Create.

Azure review and create page

Wait for deployment and select Go to resource.

Azure deployment complete screen
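The same account can also be provisioned from the Azure CLI. A minimal sketch — the account name (privatedocs2024), resource group (storage-lab-rg), and region here are example values, not the ones from my lab, so replace them with your own:

```shell
# Create a StorageV2 account with geo-redundant replication and
# anonymous blob access disabled at the account level.
# "privatedocs2024" and "storage-lab-rg" are example names.
az storage account create \
  --name privatedocs2024 \
  --resource-group storage-lab-rg \
  --location eastus \
  --kind StorageV2 \
  --sku Standard_GRS \
  --allow-blob-public-access false
```

Disabling `--allow-blob-public-access` at the account level means no container inside it can ever be made anonymous, even by mistake.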

Step 2 — Configure High Availability

This storage must remain available even during a regional outage.

In the storage account, in the Data management section, select the Redundancy blade.

Azure redundancy settings page

Ensure Geo-redundant storage (GRS) is selected.

Azure GRS option selected

Save your changes.

Azure save redundancy settings

Refresh the page.

Azure refreshed redundancy view

Review the primary and secondary location information.

Azure primary and secondary region info
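If the account was created with a different SKU, the redundancy can also be changed from the CLI. A sketch, assuming the example names from earlier (privatedocs2024, storage-lab-rg):

```shell
# Switch the account's replication to geo-redundant storage (GRS)
az storage account update \
  --name privatedocs2024 \
  --resource-group storage-lab-rg \
  --sku Standard_GRS

# Confirm the primary and secondary locations
az storage account show \
  --name privatedocs2024 \
  --resource-group storage-lab-rg \
  --query "{primary:primaryLocation, secondary:secondaryLocation}"
```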

Trade-off: Why GRS Instead of RA-GRS?

RA-GRS allows read access from the secondary region.

However:

  • this is internal storage
  • read failover is not required
  • cost optimization matters

GRS provides redundancy without unnecessary cost.

Good engineering is about choosing the right tool — not the biggest one.

Step 3 — Create a Private Container

In the storage account, in the Data storage section, select the Containers blade.

Azure containers blade

Select + Container.

Azure create container button

Ensure the name is private.

Azure container naming

Ensure the access level is Private (no anonymous access), then select Create.

Azure private access setting
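The equivalent CLI call — again assuming the example account name — creates the container with anonymous access explicitly off and authenticates with your Azure AD identity:

```shell
# Create the "private" container with no anonymous access.
# --auth-mode login uses your signed-in Azure AD identity.
az storage container create \
  --account-name privatedocs2024 \
  --name private \
  --public-access off \
  --auth-mode login
```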

Step 4 — Upload and Verify Access Restriction

Upload a file and test access.

Expected result:

❌ File should NOT open
❌ Browser should return an error

Azure container created

This confirms that anonymous users cannot reach your files through the direct blob URL.
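The upload-and-verify step can be sketched from the CLI as well. The blob name internal-doc.pdf is an example; a private container should answer an anonymous request with an error status (typically 404) rather than the file:

```shell
# Upload a test file using your Azure AD identity
az storage blob upload \
  --account-name privatedocs2024 \
  --container-name private \
  --name internal-doc.pdf \
  --file ./internal-doc.pdf \
  --auth-mode login

# An anonymous request should return an error status, not 200
curl -s -o /dev/null -w "%{http_code}\n" \
  "https://privatedocs2024.blob.core.windows.net/private/internal-doc.pdf"
```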

Step 5 — Configure Secure Temporary Access (SAS)

Select your uploaded file and navigate to Generate SAS.

Azure generate SAS tab

Set permissions to Read only.

Azure SAS permissions

Set expiry to 24 hours.

Azure SAS expiry setting

Generate SAS URL.

Azure generate SAS token

Test access.

Azure SAS file access

SAS allows:

  • temporary access
  • controlled permissions
  • secure sharing
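The same read-only, 24-hour SAS can be generated from the CLI. A sketch using the example names from earlier — note the expiry timestamp here uses GNU date syntax, and the command signs with the storage account key by default:

```shell
# Generate a read-only SAS URL valid for 24 hours (GNU date syntax)
EXPIRY=$(date -u -d "+24 hours" '+%Y-%m-%dT%H:%MZ')

az storage blob generate-sas \
  --account-name privatedocs2024 \
  --container-name private \
  --name internal-doc.pdf \
  --permissions r \
  --expiry "$EXPIRY" \
  --https-only \
  --full-uri \
  --output tsv
```

Anyone holding the printed URL can read that one blob until the expiry passes; after that, the link simply stops working.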

Step 6 — Configure Lifecycle Management (Cost Optimization)

Return to the storage account.

Notice the default access tier is Hot.

Azure default access tier hot

Navigate to Lifecycle management.

Azure lifecycle management blade

Select Add rule.

Azure add lifecycle rule

Set rule name to movetocool.

Azure rule naming

Apply to all blobs and select Next.

Azure rule scope

Set the condition: if base blobs were last modified more than 30 days ago, move them to the Cool tier.

Azure lifecycle rule configured
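The portal rule corresponds to a management-policy JSON document, which can also be applied from the CLI. A sketch with the example account names, writing the policy to a local policy.json first:

```shell
# movetocool: move block blobs to the Cool tier 30 days after last modification
cat > policy.json <<'EOF'
{
  "rules": [
    {
      "name": "movetocool",
      "enabled": true,
      "type": "Lifecycle",
      "definition": {
        "filters": { "blobTypes": [ "blockBlob" ] },
        "actions": {
          "baseBlob": {
            "tierToCool": { "daysAfterModificationGreaterThan": 30 }
          }
        }
      }
    }
  ]
}
EOF

az storage account management-policy create \
  --account-name privatedocs2024 \
  --resource-group storage-lab-rg \
  --policy @policy.json
```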

Step 7 — Configure Backup with Object Replication

Create a backup container.

Azure backup container creation

Navigate to Object replication and create a replication rule.

Azure replication rule creation

Set the destination storage account.

Azure replication destination

Map the source and destination containers.

Azure replication mapping
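The replication setup can be sketched from the CLI too. Note that object replication requires blob versioning and the change feed to be enabled on both the source and destination accounts; backupdocs2024 is a hypothetical destination account name:

```shell
# Object replication requires versioning and change feed on both accounts
az storage account blob-service-properties update \
  --account-name privatedocs2024 \
  --resource-group storage-lab-rg \
  --enable-versioning true \
  --enable-change-feed true

# Replicate "private" into a "backup" container on a second account
# ("backupdocs2024" is an example destination account name)
az storage account or-policy create \
  --resource-group storage-lab-rg \
  --account-name backupdocs2024 \
  --source-account privatedocs2024 \
  --destination-account backupdocs2024 \
  --source-container private \
  --destination-container backup
```

Once the rule is active, new and changed blobs in the private container are copied asynchronously to the backup container without any manual step.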

Lessons From This Implementation

• Security must be intentional
• Private access should always be the default
• Temporary access must be controlled
• Cost optimization should be automated
• Backup should never be manual

Final Thoughts

Cloud systems rarely fail because of missing features.

They fail because of poor configuration decisions.

A storage system can either:

  • expose sensitive data
  • lose critical files
  • or fail silently

Or it can be:

  • secure
  • resilient
  • intentional

The difference is not the tool.

It is the engineer!
