A 4-node highly available cluster with distributed storage, software-defined networking, and automatic failover. About 30 minutes to deploy. No Terraform. No Ansible playbooks. No 47-page architecture doc.
That's Canonical's MicroCloud: lightweight, repeatable cluster deployments built for edge computing and distributed workloads. Here's the full walkthrough.
TL;DR: Install 4 snaps, run `microcloud init` on one node, `microcloud join` on the others, answer the storage/networking wizard, and you have a production-ready HA cluster. Full walkthrough below.
1. What Hardware Do You Actually Need?
Before touching the command line, you need the right hardware for your target environment. MicroCloud is flexible enough to support both lightweight test setups and production clusters.
Test/Development Environments
MicroCloud is lightweight enough that a test setup can start with a single-member deployment. You need as little as 8 GB of RAM per machine, and local storage is often sufficient.
Highly Available Production Environments
For production, redundancy is key. A highly available setup requires at least four physical nodes. Each machine should ideally feature 32 GB of RAM and a 10 Gb (10 GbE) network interface to handle throughput.
Storage Allocation
Within each production node, you should allocate three separate NVMe disks: one for the operating system, one for local storage, and one for distributed storage (via Ceph).
Why four nodes? Separating disk types and maintaining a four-node count provides critical redundancy. If one node goes offline, the extra member ensures the cluster maintains a quorum, preventing a "split-brain" scenario where remaining nodes lose synchronization. Your applications stay reachable and highly available.
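Before deployment, it's worth confirming that each node actually exposes the disks and memory you plan to dedicate. A quick sanity check (device names such as `nvme1n1` will vary per machine):

```shell
# List block devices with size, type, and any existing filesystem signatures.
# Disks destined for local or Ceph storage should show no FSTYPE and no partitions.
lsblk --output NAME,SIZE,TYPE,FSTYPE,MOUNTPOINTS

# Confirm total memory meets the 32 GB production guideline.
free --giga
```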
2. Install Everything in One Command
The foundation of your MicroCloud cluster is a clean, standardized operating system. Every server you intend to use must be provisioned with an Ubuntu Long-Term Support (LTS) release.
Once your servers are online, install the core services across every node. Canonical packages these components as self-contained, sandboxed "snaps":
```shell
sudo snap install microcloud lxd microceph microovn
```
Installing these services as snaps keeps your host operating system clean. All dependencies are bundled directly within the snap container, so you completely avoid package version conflicts.
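After installation, you can confirm all four components landed on each node before moving on (versions and channels in the output will differ per release):

```shell
# Each of the four snaps should appear with a tracked channel.
snap list microcloud lxd microceph microovn
```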
3. The Cluster Handshake (Zero Manual SSL)
With the software installed, it's time to establish the cluster using an automated handshake process.
Step 1: Initialize the Primary Node
Choose one machine to act as your initiator. SSH into this node and run:
```shell
sudo microcloud init
```
When the wizard asks if you want to set up more than one cluster member, select "yes".
Step 2: Select Internal IPs
A table of available network interfaces will appear. Choose the specific IP address you want to use for MicroCloud's internal cluster communication.
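If you want to know the candidate addresses before the wizard presents its table, a plain interface listing on each node is enough:

```shell
# Brief per-interface view: name, state, and assigned addresses.
ip -br address
```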
Step 3: Generate the Passphrase
The initiator terminal will generate a secure passphrase and enter a waiting state. Leave this session running.
Step 4: Join the Secondary Nodes
Open new SSH sessions into your remaining nodes and run:
```shell
sudo microcloud join
```
Select the corresponding internal IP address on each node, enter the passphrase generated by the primary node, and verify that the unique fingerprint matches.
This automates trust distribution between your nodes. No manual SSL certificate management. No private key distribution. Return to your initiator terminal, select all the secondary systems that reached out, and accept them into the cluster.
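Once all members are accepted, it's worth verifying that every node reports as online. These status commands exist in current MicroCloud and LXD releases, but check them against your installed versions:

```shell
# MicroCloud's own view of the cluster members.
sudo microcloud cluster list

# LXD's view of the same cluster; every node should report ONLINE.
sudo lxc cluster list
```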
4. Storage: Local Speed + Distributed Safety
MicroCloud lets you pair the high-speed performance of local hardware with the resilience of a distributed cluster. The setup wizard guides you through configuring both.
Local Storage
When prompted to set up local storage pools, select exactly one designated disk from each machine.
Warning: Any disk you select for storage pools must be completely unpartitioned and free of any existing file systems. If you select a pre-formatted disk, the initialization will fail, and you'll need to manually wipe the drives and restart the process.
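If a candidate disk has leftover partitions or filesystem signatures, you can clear it before re-running the wizard. This is destructive, so triple-check the device path first (the `/dev/nvme1n1` path below is an example):

```shell
# DESTRUCTIVE: removes all filesystem and partition-table signatures
# from the named disk so the wizard sees it as unpartitioned.
sudo wipefs --all /dev/nvme1n1
```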
Distributed Storage
Next, the wizard moves to distributed storage, handled by MicroCeph. You must select at least three unpartitioned disks across the cluster, as Ceph requires a minimum of three to properly replicate data for high availability. You will also be given options to wipe existing data, set up a CephFS distributed file system, or enable disk-level encryption.
This dual-storage strategy lets you run intensive edge workloads on fast local caching while ensuring your primary databases remain safely replicated across the distributed cluster.
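After initialization you can check the health of the distributed layer. The MicroCeph snap exposes the standard Ceph CLI under a snap-prefixed name (assuming a default MicroCeph install):

```shell
# Overall cluster health; HEALTH_OK means replication is working.
sudo microceph.ceph status

# Per-OSD view: expect one OSD per disk you enlisted for distributed storage.
sudo microceph.ceph osd tree
```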
5. Networking Without the Networking Pain
With compute and storage locked in, the wizard moves to network configuration, handled by MicroOVN. This software-defined network allows your virtual machines to communicate across the cluster.
External Uplink
The menu will display your physical network interfaces. Select one interface per machine to act as the external uplink. This uplink network must support both broadcast and multicast traffic for distributed networking to function correctly.
Gateways and IP Ranges
You will need to provide the IPv4 (and optionally IPv6) gateway for the uplink network using CIDR notation (e.g., 192.0.2.1/24). Then define a specific block of IPv4 addresses on your network reserved exclusively for your LXD instances.
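As a concrete example using the documentation-reserved `192.0.2.0/24` network (substitute your real addressing; the prompt wording varies between MicroCloud versions, so the values rather than the labels are the point), the answers might look like:

```
IPv4 gateway (CIDR):       192.0.2.1/24
First reserved IPv4:       192.0.2.100
Last reserved IPv4:        192.0.2.254
```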
Tip: MicroCloud uses mDNS (multicast DNS) to automatically discover the initiator node during setup. Some restrictive cloud environments block this traffic. If your secondary nodes can't find the initiator, manually input the initiator's static IP address to bypass the discovery phase and force the handshake.
6. Hit Enter, Get a Cloud
Once you answer the final prompts, MicroCloud takes over. It begins the automated bootstrap process, configuring and launching all requested services across every member of the cluster.
This turns what used to be a complex infrastructure deployment into a simple, repeatable process. Monitor the terminal output; when you see the final message stating "MicroCloud is ready," your multi-node cluster is live, synchronized, and operational.
You can manage everything via the command line, or enable the Canonical LXD graphical user interface for centralized visual control over your instances, networks, and storage pools.
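To reach the web UI, LXD needs to listen on an HTTPS address. Recent LXD snaps ship the UI enabled; on older releases it sits behind a snap option (port 8443 is the conventional choice, not a requirement):

```shell
# Expose the LXD API and UI on all addresses, port 8443.
sudo lxc config set core.https_address :8443

# Only needed on older LXD snaps where the UI is opt-in.
sudo snap set lxd ui.enable=true
sudo systemctl reload snap.lxd.daemon
```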
Your cluster is live. From here you can launch containers, spin up VMs, or start deploying workloads to the edge. All on your own hardware, under your own control, at a fraction of what you'd pay a cloud provider.
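From any member you can launch both containers and VMs, and MicroCloud places them across nodes automatically. A minimal sketch (instance names are examples; `remote` is the pool name MicroCloud uses by default for the distributed Ceph storage, but confirm it on your cluster):

```shell
# A system container on the fast local storage pool.
sudo lxc launch ubuntu:24.04 web01

# A virtual machine backed by the replicated Ceph pool, so it can be
# recovered on another node if its host goes down.
sudo lxc launch ubuntu:24.04 db01 --vm --storage remote
```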
Anyone running MicroCloud in production? What's your node count and what are you running on it?