A few months ago, I opened an online store to check the specifications of a device I wanted to buy.
The page loaded.
The navigation bar appeared.
The product description was visible.
But the product image never showed up.
Instead, there was a small broken-image icon sitting where the picture should have been.
If you’ve ever experienced that moment, you know exactly what happens next.
You refresh the page.
Nothing changes.
And suddenly you begin to wonder:
- Is the website broken?
- Is the product unavailable?
- Is the platform unreliable?
What most users never realize is that a broken image on a website usually isn’t about the website itself.
It’s almost always about where the file is stored and how the storage system is designed.
Behind every image, downloadable document, and product asset is a storage system responsible for delivering those files instantly.
If that storage system is poorly configured, small things begin to fail:
- images disappear
- downloads stop working
- documentation links break
- websites feel unreliable
Modern cloud platforms solve this using distributed storage infrastructure.
In Microsoft Azure, one of the core services used for this purpose is Azure Blob Storage.
But simply creating a storage account isn’t enough.
Engineers must think about deeper questions:
- What happens if an entire Azure region fails?
- How can users access files without authentication?
- What if someone accidentally deletes important files?
- How do we keep track of document updates?
To explore these questions, I built a high-availability storage architecture for a public website using Azure Storage.
The goal was simple:
Create a system that can:
- survive regional outages
- serve public website assets
- allow anonymous file access
- recover deleted files
- maintain file version history
Let’s walk through how that system was designed.
The Problem Engineers Face
Imagine running an e-commerce website where thousands of customers view product images every day.
Everything works perfectly until one day an Azure region experiences an outage.
Suddenly:
- product images stop loading
- downloadable files fail
- customer experience breaks
The website itself may still be online.
But the storage system serving those files is unavailable.
This is why cloud engineers design storage systems not just to store files, but to ensure they remain available, recoverable, and resilient.
Azure Storage provides several mechanisms to solve these challenges.
Before implementing the solution, it helps to understand a few important concepts.
Key Concepts Behind Azure Storage
Azure Storage Account
An Azure Storage Account is the top-level container that holds all storage services.
Inside a storage account you can store:
- blobs (files, images, documents)
- file shares
- tables
- queues
For public websites, Blob Storage is typically used to store static assets like images and downloadable files.
High Availability
High availability means designing systems that continue operating even when failures occur.
Failures in cloud infrastructure are expected.
Servers fail.
Networks fail.
Entire regions can fail.
High availability ensures systems remain accessible despite those failures.
Geo-Redundant Storage
Azure provides multiple redundancy options.
One of the most resilient is Read-Access Geo-Redundant Storage (RA-GRS).
This configuration:
- replicates data to another Azure region
- stores multiple copies of data
- allows read access from the secondary region
If the primary region fails, the secondary region can still serve requests.
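In practice, RA-GRS exposes the secondary region as a separate read-only endpoint: the account's hostname with a `-secondary` suffix. As a rough sketch (the account name `publicwebsite123` and the blob path are placeholders, not values from this walkthrough):

```shell
# Primary (read/write) endpoint:
curl -I https://publicwebsite123.blob.core.windows.net/public/image.png

# Secondary (read-only) endpoint, which RA-GRS keeps readable
# even while the primary region is down:
curl -I https://publicwebsite123-secondary.blob.core.windows.net/public/image.png
```

An application that falls back to the secondary endpoint on read failures can keep serving assets during a primary-region outage.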
Thinking Like a Cloud Architect
When designing storage for a public website, engineers typically ask four important questions:
- What happens if an entire region fails?
- Can users access files without authentication?
- What if someone accidentally deletes files?
- How can we track document updates over time?
Azure Storage provides solutions for each of these challenges:
- Geo-redundancy protects against regional outages
- Anonymous blob access allows public content delivery
- Soft delete enables file recovery
- Blob versioning keeps historical file versions
The following implementation combines these capabilities to build a resilient storage backend.
Create a Storage Account With High Availability
Create a storage account to support the public website
In the Azure portal, search for and select Storage accounts.
Select + Create.
For Resource group, select Create new, give your resource group a name, and select OK.
Set the storage account name to publicwebsite and add a unique identifier.
Take the default settings for the remaining configuration.
Select Review + Create, then Create.
Wait for deployment and select Go to resource.
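The same portal steps can be sketched with the Azure CLI. The resource group name, region, and storage account name below are placeholder assumptions; storage account names must be globally unique, so append your own identifier:

```shell
# Create a resource group to hold the storage account.
az group create --name rg-publicwebsite --location eastus

# Create a general-purpose v2 storage account with default settings.
az storage account create \
  --name publicwebsite123 \
  --resource-group rg-publicwebsite \
  --location eastus \
  --kind StorageV2
```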
Configure High Availability
This storage must remain accessible if a regional outage occurs.
Navigate to Data management → Redundancy.
Select Read-access Geo-redundant storage (RA-GRS).
Review the primary and secondary region information.
This ensures your website assets remain accessible even if the primary region experiences downtime.
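The redundancy change can also be applied from the CLI by updating the account's SKU (names below are the placeholders from earlier, not fixed values):

```shell
# Switch an existing account to read-access geo-redundant storage.
az storage account update \
  --name publicwebsite123 \
  --resource-group rg-publicwebsite \
  --sku Standard_RAGRS
```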
Allow Anonymous Access for Public Files
Public website content should be accessible without requiring users to log in.
Navigate to Settings → Configuration.
Enable Allow blob anonymous access.
Save the configuration.
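A CLI equivalent for this account-level setting, using the same placeholder names, might look like:

```shell
# Allow containers in this account to opt in to anonymous blob access.
az storage account update \
  --name publicwebsite123 \
  --resource-group rg-publicwebsite \
  --allow-blob-public-access true
```

Note this only permits anonymous access at the account level; each container still needs its own access level set, which the next steps cover.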
Create a Container for Website Files
Navigate to Data storage → Containers.
Select + Container.
Name the container public and select Create.
Configure Anonymous Read Access
Select your container.
Change the access level to:
Blob (anonymous read access for blobs only).
Select OK.
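Both steps, creating the container and setting its access level, collapse into one CLI command (placeholder account name as before):

```shell
# Create the container with anonymous read access for blobs only.
az storage container create \
  --account-name publicwebsite123 \
  --name public \
  --public-access blob \
  --auth-mode login
```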
Upload and Test Files
Select Upload.
Choose a file.
Upload the file.
Refresh the container to confirm the file appears.
Copy the file URL and test it in a browser.
Example URL (note the container segment matches the container name, public):
https://publicwebsiteproject.blob.core.windows.net/public/image.png
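The upload and the anonymous-access test can likewise be done from the CLI. The file name and account name are illustrative placeholders:

```shell
# Upload a file to the public container.
az storage blob upload \
  --account-name publicwebsite123 \
  --container-name public \
  --name image.png \
  --file ./image.png \
  --auth-mode login

# Anonymous read should now succeed with no credentials at all:
curl -I https://publicwebsite123.blob.core.windows.net/public/image.png
```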
Configure Blob Soft Delete
Navigate to the Overview page.
Locate the Blob service section.
Select Blob soft delete.
Enable soft delete.
Set retention to 21 days.
Save the changes.
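The same retention policy can be set via the CLI (same placeholder names as earlier):

```shell
# Enable blob soft delete with a 21-day retention window.
az storage account blob-service-properties update \
  --account-name publicwebsite123 \
  --resource-group rg-publicwebsite \
  --enable-delete-retention true \
  --delete-retention-days 21
```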
Restore Deleted Files
Delete a file.
Confirm deletion.
Enable Show deleted blobs.
Restore using Undelete.
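A rough CLI equivalent of this recovery flow, using the placeholder names from earlier:

```shell
# List blobs including soft-deleted ones (the "d" include flag).
az storage blob list \
  --account-name publicwebsite123 \
  --container-name public \
  --include d \
  --auth-mode login \
  --output table

# Restore a soft-deleted blob within the retention window.
az storage blob undelete \
  --account-name publicwebsite123 \
  --container-name public \
  --name image.png \
  --auth-mode login
```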
Enable Blob Versioning
Navigate to Blob service → Versioning.
Enable versioning.
Save the configuration.
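Versioning is another blob-service property, so the CLI form mirrors the soft-delete command (placeholder names again):

```shell
# Enable blob versioning so every overwrite keeps the prior version.
az storage account blob-service-properties update \
  --account-name publicwebsite123 \
  --resource-group rg-publicwebsite \
  --enable-versioning true
```

With versioning on, overwriting image.png preserves the previous copy as a retrievable version rather than destroying it.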
Lessons From This Implementation
Several key lessons stood out while building this storage architecture.
- High availability must be intentional.
- Public access should be carefully controlled.
- Data protection features like soft delete and versioning are essential.
- Testing configurations ensures your infrastructure behaves as expected.
These small architectural decisions are what transform basic cloud setups into production-ready systems.
Final Thoughts
Creating cloud resources is easy.
Designing them responsibly is what defines engineering maturity.
In this implementation we built a storage architecture capable of:
- serving public website assets
- remaining available during regional outages
- allowing secure anonymous access
- protecting files from accidental deletion
- maintaining historical file versions
Behind every reliable digital experience is infrastructure configured with intention.
And mastering these fundamentals is how engineers move from using cloud tools to designing resilient cloud systems.




































