Preface
Working with Azure's blob storage SDK directly can be messy. You're juggling connection strings, handling cryptic errors, and writing the same boilerplate code over and over. After building several Go applications that needed file storage, I got tired of this repetitive dance.
So I built a clean abstraction layer that turns Azure storage operations into simple, readable functions. No more wrestling with SDK quirks or debugging obscure error messages. Just clean code that works.
This isn't another tutorial that copies Microsoft's docs. It's a real-world implementation that I actually use in production, complete with proper error handling, SAS URL generation, and comprehensive testing.
Retrieving Your Azure Credentials
Before we dive into code, you'll need your Azure storage credentials. Head to your Azure portal and navigate to your storage account. Look for "Security + networking" in the left sidebar and click on "Access keys".
Here you'll find two keys (key1 and key2) along with their connection strings. Grab either key and its corresponding connection string - that's all you need. The connection string contains everything: your account name, access key, and endpoints bundled into one neat package.
Copy the connection string and keep it safe. We'll use this to authenticate our custom package with Azure's storage services.
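One simple way to keep it out of source control is to read it from an environment variable at startup. Here's a minimal sketch of that idea (the variable name AZURE_STORAGE_CONNECTION_STRING is just a convention I'm assuming here, not something the SDK requires):

package main

import (
	"log"
	"os"
)

func main() {
	// Pull the connection string from the environment instead of committing it to source control.
	connectionString := os.Getenv("AZURE_STORAGE_CONNECTION_STRING")
	if connectionString == "" {
		log.Fatal("AZURE_STORAGE_CONNECTION_STRING is not set")
	}

	log.Printf("loaded connection string (length %d)", len(connectionString))
}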
Setting Up the Azure SDK
First, install the Azure SDK for Go:
go get github.com/Azure/azure-sdk-for-go/sdk/storage/azblob
Now let's build our abstraction layer. The core idea is wrapping Azure's client with our own struct that provides cleaner methods:
package azure

import (
	"context"
	"errors"
	"fmt"

	"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob"

	"rje-be-golang/config"
)

// Common errors returned by the package so callers can match on them with errors.Is.
var (
	ErrInvalidConnectionString = errors.New("invalid connection string")
	ErrContainerNotFound       = errors.New("container not found")
	ErrBlobNotFound            = errors.New("blob not found")
	ErrInvalidInput            = errors.New("invalid input parameters")
)

// AzureStorageConfig mirrors the shape of config.AzureStorageConfig from the
// project's config package; it's shown here so you can see what the client expects.
type AzureStorageConfig struct {
	Container        string
	AccessKey        string
	ConnectionString string
	AccountName      string
}

// Client represents an Azure Storage client
type Client struct {
	client             *azblob.Client
	azureStorageConfig config.AzureStorageConfig
}

// Config holds configuration for Azure Storage client
type Config struct {
	ConnectionString string
}

// NewClient creates a new Azure Storage client
func NewClient(cfg Config, azureStorageConfig config.AzureStorageConfig) (*Client, error) {
	if cfg.ConnectionString == "" {
		return nil, ErrInvalidConnectionString
	}

	client, err := azblob.NewClientFromConnectionString(cfg.ConnectionString, nil)
	if err != nil {
		return nil, fmt.Errorf("creating azure storage client: %w", err)
	}

	return &Client{
		client:             client,
		azureStorageConfig: azureStorageConfig,
	}, nil
}
Core Operations
Here are the main functions that make working with Azure storage simple:
// CreateContainer creates a new container. It returns an error if the
// container already exists, so check ContainerExists first if that matters.
func (c *Client) CreateContainer(ctx context.Context, containerName string) error {
	if containerName == "" {
		return ErrInvalidInput
	}

	_, err := c.client.CreateContainer(ctx, containerName, nil)
	if err != nil {
		return fmt.Errorf("creating container %s: %w", containerName, err)
	}

	return nil
}

// ContainerExists checks if a container exists
func (c *Client) ContainerExists(ctx context.Context, containerName string) (bool, error) {
	if containerName == "" {
		return false, ErrInvalidInput
	}

	containers, err := c.ListContainers(ctx)
	if err != nil {
		return false, err
	}

	for _, name := range containers {
		if name == containerName {
			return true, nil
		}
	}

	return false, nil
}

// UploadBuffer uploads a byte buffer to blob storage
func (c *Client) UploadBuffer(ctx context.Context, containerName, blobPath string, buffer []byte) error {
	if containerName == "" || blobPath == "" || buffer == nil {
		return ErrInvalidInput
	}

	_, err := c.client.UploadBuffer(ctx, containerName, blobPath, buffer, nil)
	if err != nil {
		return fmt.Errorf("uploading blob %s: %w", blobPath, err)
	}

	return nil
}
These methods wrap the Azure SDK calls with proper validation and error handling. Notice how we validate inputs upfront and provide meaningful error messages instead of letting Azure's cryptic errors bubble up.
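The test later in this post also calls ListContainers, GetFileContent, DeleteFile, and DeleteContainer, which aren't shown above. Here's a minimal sketch of how they can look on the same Client. Treat the exact method shapes as my assumption, though the underlying SDK calls (NewListContainersPager, DownloadStream, DeleteBlob, DeleteContainer) and the bloberror helper are standard azblob APIs. You'd also add "io" and "github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/bloberror" to the import block.

// ListContainers returns the names of all containers in the storage account.
func (c *Client) ListContainers(ctx context.Context) ([]string, error) {
	var names []string

	pager := c.client.NewListContainersPager(nil)
	for pager.More() {
		page, err := pager.NextPage(ctx)
		if err != nil {
			return nil, fmt.Errorf("listing containers: %w", err)
		}
		for _, item := range page.ContainerItems {
			if item.Name != nil {
				names = append(names, *item.Name)
			}
		}
	}

	return names, nil
}

// GetFileContent downloads a blob and returns its content as a byte slice.
func (c *Client) GetFileContent(ctx context.Context, containerName, blobPath string) ([]byte, error) {
	if containerName == "" || blobPath == "" {
		return nil, ErrInvalidInput
	}

	resp, err := c.client.DownloadStream(ctx, containerName, blobPath, nil)
	if err != nil {
		// Map the SDK's not-found error onto our sentinel so callers can use errors.Is.
		if bloberror.HasCode(err, bloberror.BlobNotFound) {
			return nil, ErrBlobNotFound
		}
		return nil, fmt.Errorf("downloading blob %s: %w", blobPath, err)
	}
	defer resp.Body.Close()

	content, err := io.ReadAll(resp.Body)
	if err != nil {
		return nil, fmt.Errorf("reading blob %s: %w", blobPath, err)
	}

	return content, nil
}

// DeleteFile deletes a blob from a container.
func (c *Client) DeleteFile(ctx context.Context, containerName, blobPath string) error {
	if containerName == "" || blobPath == "" {
		return ErrInvalidInput
	}

	if _, err := c.client.DeleteBlob(ctx, containerName, blobPath, nil); err != nil {
		return fmt.Errorf("deleting blob %s: %w", blobPath, err)
	}

	return nil
}

// DeleteContainer deletes a container and everything in it.
func (c *Client) DeleteContainer(ctx context.Context, containerName string) error {
	if containerName == "" {
		return ErrInvalidInput
	}

	if _, err := c.client.DeleteContainer(ctx, containerName, nil); err != nil {
		return fmt.Errorf("deleting container %s: %w", containerName, err)
	}

	return nil
}

The preface also mentions SAS URL generation. Here's a sketch of how that can hang off the same client using the SDK's sas package and a shared-key credential. It assumes the default public-cloud blob endpoint and needs "time" and "github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/sas" in the imports; adapt it to however you store the account name and key.

// GenerateSASURL creates a read-only SAS URL for a blob that expires after the given duration.
func (c *Client) GenerateSASURL(containerName, blobPath string, validFor time.Duration) (string, error) {
	if containerName == "" || blobPath == "" {
		return "", ErrInvalidInput
	}

	cred, err := azblob.NewSharedKeyCredential(c.azureStorageConfig.AccountName, c.azureStorageConfig.AccessKey)
	if err != nil {
		return "", fmt.Errorf("creating shared key credential: %w", err)
	}

	perms := sas.BlobPermissions{Read: true}
	values := sas.BlobSignatureValues{
		Protocol:      sas.ProtocolHTTPS,
		ExpiryTime:    time.Now().UTC().Add(validFor),
		Permissions:   perms.String(),
		ContainerName: containerName,
		BlobName:      blobPath,
	}

	queryParams, err := values.SignWithSharedKey(cred)
	if err != nil {
		return "", fmt.Errorf("signing SAS values: %w", err)
	}

	// Assumes the default public-cloud endpoint; adjust if your account uses a different suffix.
	url := fmt.Sprintf("https://%s.blob.core.windows.net/%s/%s?%s",
		c.azureStorageConfig.AccountName, containerName, blobPath, queryParams.Encode())

	return url, nil
}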
Testing the Package
Here's an integration test that exercises the package end to end. It runs against a real storage account, so replace the placeholder credentials with your own. The import path for the azure package depends on your module layout, so adjust it to match.
package azure_test

import (
	"context"
	"testing"

	"rje-be-golang/azure" // assumed path; adjust to wherever the azure package lives in your module
	"rje-be-golang/config"
)

func TestAzureStorage(t *testing.T) {
	connectionString := "DefaultEndpointsProtocol=https;AccountName=youraccount;AccountKey=your-access-key;EndpointSuffix=core.windows.net"

	// Create Azure storage config
	azureStorageConfig := config.AzureStorageConfig{
		Container:        "mycontainer",
		AccessKey:        "your-access-key",
		ConnectionString: connectionString,
		AccountName:      "youraccount",
	}

	// Create custom Azure client
	azureConfig := azure.Config{
		ConnectionString: connectionString,
	}

	client, err := azure.NewClient(azureConfig, azureStorageConfig)
	if err != nil {
		t.Fatalf("Failed to create client: %v", err)
	}

	ctx := context.TODO()

	// Test container creation - check if exists first
	exists, err := client.ContainerExists(ctx, "mycontainer")
	if err != nil {
		t.Fatalf("Failed to check container existence: %v", err)
	}

	if !exists {
		err = client.CreateContainer(ctx, "mycontainer")
		if err != nil {
			t.Fatalf("Failed to create container: %v", err)
		}
		t.Log("Container created successfully")
	} else {
		t.Log("Container already exists, skipping creation")
	}

	// Test file upload
	data := []byte("Hello Azure!")
	err = client.UploadBuffer(ctx, "mycontainer", "folder1/subfolder/myfile.txt", data)
	if err != nil {
		t.Fatalf("Failed to upload file: %v", err)
	}

	// Test file existence and content retrieval
	content, err := client.GetFileContent(ctx, "mycontainer", "folder1/subfolder/myfile.txt")
	if err != nil {
		t.Fatalf("Failed to get file content: %v", err)
	}

	if string(content) != "Hello Azure!" {
		t.Fatalf("Expected 'Hello Azure!', got '%s'", string(content))
	}

	// Cleanup
	client.DeleteFile(ctx, "mycontainer", "folder1/subfolder/myfile.txt")
	client.DeleteContainer(ctx, "mycontainer")
}
This test covers the complete lifecycle: container management, file upload, content verification, and cleanup. It demonstrates how much cleaner our abstraction is compared to working directly with the Azure SDK.
Wrapping Up
Building this Azure storage abstraction has saved me countless hours of debugging and code duplication. The package handles error checking, provides consistent interfaces, and makes Azure storage operations readable and maintainable.
You can extend this further by adding retry logic, bulk operations, or streaming uploads for large files. The foundation is solid, and adding new features becomes straightforward when you have clean abstractions in place.
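For the retry piece, the SDK already ships with a configurable retry policy, so it can be as small as passing client options when the client is constructed. Here's a sketch under that assumption; the helper name and option values are arbitrary, but azblob.ClientOptions, azcore.ClientOptions, and policy.RetryOptions are the standard SDK types.

package azure

import (
	"time"

	"github.com/Azure/azure-sdk-for-go/sdk/azcore"
	"github.com/Azure/azure-sdk-for-go/sdk/azcore/policy"
	"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob"
)

// newClientWithRetries builds an azblob client with a custom retry policy,
// which NewClient could use instead of passing nil options.
func newClientWithRetries(connectionString string) (*azblob.Client, error) {
	opts := &azblob.ClientOptions{
		ClientOptions: azcore.ClientOptions{
			Retry: policy.RetryOptions{
				MaxRetries:    5,                // retry transient failures up to five times
				RetryDelay:    2 * time.Second,  // initial backoff between attempts
				MaxRetryDelay: 30 * time.Second, // cap the backoff
			},
		},
	}

	return azblob.NewClientFromConnectionString(connectionString, opts)
}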