The mental model you start with determines every architecture decision that follows. Most teams are starting with the wrong one.
Edge computing used to mean "some devices send telemetry to the cloud."
That era is over.
This is a re-post of Bruno Baloi's blog post "Part 1: The Edge Isn't a Place - It's an Operating Reality" on Synadia.com.
Today's edge is a full operational domain where the physical world meets software systems: machines, vehicles, gateways, sensors, factories, field deployments. And once you move compute and messaging into that world, the rules change fast. Connections drop. Environments get hostile. Data gets generated faster than it can be forwarded. And the assumptions baked into a decade of cloud-native architecture patterns start failing in ways that are hard to diagnose because they fail quietly.
The first and most important shift is not a technical one but a conceptual one.
The edge is not a far-away part of your system. It's a different operating dimension entirely.
Why the geography framing gets you into trouble
When engineers hear "edge computing," the mental image is usually spatial: devices on the left, cloud on the right, data flowing between them. The edge is just the far end of the pipeline.
That framing seems harmless until you start making architecture decisions based on it. If edge is just far-away infrastructure, you design for distance — low latency, efficient serialization, maybe some compression. You optimize the happy path.
What you don't design for is disconnection as a first-class operating condition. Or physical exposure as a security assumption. Or the possibility that the "far end" of your system is running on hardware installed by a third-party contractor in an industrial cabinet that nobody has touched in eighteen months.
Those aren't edge cases at the edge. They're normal operating conditions. And the gap between "optimized for distance" and "designed for that reality" is where most edge architecture problems live.
The four constraints that don't go away
If you're building edge-to-core systems, you're going to run into the same four problems regardless of industry, scale, or stack. Synadia's Living on the Edge white paper names them clearly, and they're worth exploring:
Connectivity. Edge links are intermittent by nature — not by failure. Devices roam in and out of coverage. Field gateways lose upstream access during maintenance windows. Vehicles move through dead zones. The question isn't whether your edge nodes will disconnect; it's whether your architecture treats that as an exception to handle or a condition to design for.
Security. Edge devices are physically exposed in ways that data center hardware never is. They're accessible to anyone with physical proximity: maintenance crews, contractors, hostile actors with a USB drive and ten minutes to spare. Credentials get copied. Firmware gets tampered with. Unlike a compromised cloud instance that stays logically contained, a compromised edge device has a direct path toward your core systems if you haven't designed the boundary carefully.
Distribution. Edge environments generate data at rates and granularities that cores can't absorb naively. A manufacturing floor streaming sensor data from hundreds of machines isn't a throughput problem — it's a routing and filtering problem. The right data needs to reach the right destination at the right rate, which means the system has to be opinionated about what crosses the boundary and at what volume.
Observability. You need a real-time view across devices and the infrastructure connecting them to core — not just health checks, but end-to-end event traceability. Without it, you're making operational decisions based on incomplete signals, and at the edge, incomplete signals tend to mean delayed incident detection and incorrect root cause analysis.
None of these four constraints are solvable by optimizing the happy path. They require deliberate design choices that change the shape of the architecture.
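The distribution constraint, in particular, usually comes down to reducing and routing data before it leaves the edge rather than shipping every raw sample upstream. A minimal sketch of the filtering side in plain Python — the function name, window size, and aggregate fields are illustrative, not a prescribed scheme; a real system would also key aggregates by machine and subject:

```python
from statistics import mean

def summarize(samples, window=10):
    """Collapse raw sensor samples into per-window aggregates so only
    the reduced stream crosses the edge/core boundary."""
    out = []
    for i in range(0, len(samples), window):
        chunk = samples[i:i + window]
        out.append({
            "count": len(chunk),   # how many raw samples this summarizes
            "mean": mean(chunk),
            "max": max(chunk),
        })
    return out
```

The point isn't the specific aggregates — it's that the edge is opinionated about volume before anything crosses the boundary, instead of forwarding raw streams and filtering at core.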
The reframe that unlocks the right patterns
Once you stop thinking of edge as geography and start thinking of it as a separate operational realm, the right patterns become obvious — or at least, the wrong patterns become obviously wrong.
The core insight, articulated in the Living on the Edge architecture guide, is this: treat edge and core as distinct realms connected by deliberately controlled paths. Not a single distributed system. Not an extended network. Two separate operating environments with an explicit, managed boundary between them.
That separation shows up in three practical ways:
Asynchronous communication instead of synchronous request-reply. HTTP works when the upstream is available and fast. At the edge, you can't assume either. Asynchronous, event-driven communication means edge systems continue operating regardless of upstream availability — they produce events locally and let the transport layer handle delivery when conditions allow.
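The difference can be sketched in a few lines. This is a hedged illustration in plain Python, with a hypothetical `EdgeApp` and an in-process outbox queue standing in for a real client library and transport layer:

```python
import queue

class EdgeApp:
    """Produces events locally; never blocks on upstream availability."""

    def __init__(self):
        # A separate forwarder drains this whenever a link exists.
        self.outbox = queue.Queue()

    def on_sensor_reading(self, reading):
        # Synchronous style would be: resp = http.post(core_url, reading)
        # -- the app stalls or errors the moment the uplink drops.
        # Event-driven style: record the fact locally and move on.
        self.outbox.put({"subject": "edge.site1.telemetry", "data": reading})
```

The application's control flow never depends on the state of the uplink; delivery becomes the transport layer's problem, which is exactly where you want it at the edge.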
Store-and-forward instead of retry logic. When a link drops, edge systems shouldn't be hammering a dead connection. They should be writing to a local durable store and resuming forwarding when connectivity restores — at a controlled rate that doesn't overwhelm core consumers. (The second post in this series covers why this distinction matters more than most architects realize.)
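A minimal sketch of that pattern, assuming a SQLite-backed local outbox — the class name, schema, and rate parameter are illustrative, and a production system (or JetStream, which provides this as a primitive) would add acknowledgments and deduplication:

```python
import json
import sqlite3
import time

class StoreAndForward:
    """Durable local outbox: events survive restarts and are forwarded
    at a controlled rate once connectivity returns."""

    def __init__(self, path=":memory:", max_per_sec=100):
        self.db = sqlite3.connect(path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS outbox "
            "(id INTEGER PRIMARY KEY AUTOINCREMENT, payload TEXT)"
        )
        self.interval = 1.0 / max_per_sec

    def record(self, event):
        # Called on every event, connected or not: durability comes first.
        self.db.execute("INSERT INTO outbox (payload) VALUES (?)",
                        (json.dumps(event),))
        self.db.commit()

    def pending(self):
        return self.db.execute("SELECT COUNT(*) FROM outbox").fetchone()[0]

    def drain(self, send, link_up):
        # Forward the backlog oldest-first, pacing it so a reconnect
        # after a long outage doesn't flood core consumers.
        rows = self.db.execute(
            "SELECT id, payload FROM outbox ORDER BY id").fetchall()
        for row_id, payload in rows:
            if not link_up():
                return  # link dropped again; resume on the next drain
            send(json.loads(payload))
            self.db.execute("DELETE FROM outbox WHERE id = ?", (row_id,))
            self.db.commit()
            time.sleep(self.interval)
```

Note what retry logic lacks here: the backlog is durable across process restarts, ordering is preserved, and the forwarding rate is a policy decision rather than a side effect of timeout tuning.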
Security realm constraints instead of shared credentials and open subjects. The boundary between edge and core should be explicit about what's permitted to cross it. Not "encrypt everything and trust the network" — that's perimeter thinking applied to a perimeter-free environment. The constraint lives in the topology, not just the transport.
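As one concrete way constraints can live in the topology: NATS server configuration can scope a user to specific subjects, so an edge gateway is only able to publish into its own namespace and subscribe to commands addressed to it. The account, user, and subject names below are hypothetical; the structure follows standard NATS server config syntax:

```
accounts {
  EDGE {
    users [
      {
        user: edge-gw-01
        permissions {
          # hypothetical per-site namespace for one gateway
          publish   { allow: ["edge.site1.telemetry.>"] }
          subscribe { allow: ["edge.site1.cmd.>"] }
        }
      }
    ]
  }
}
```

With scoping like this, a compromised gateway can't publish into another site's subjects or listen on core-internal traffic — the boundary holds even if the credential leaks.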
This is where eventing becomes the backbone
If those three patterns share a common thread, it's that they all require an eventing layer — not just messaging, but a platform that can handle asynchronous delivery, local durability, stream replication, and subject-level routing as composable primitives rather than as separate infrastructure.
Edge-to-core isn't a "connect things to the cloud" problem. It's a problem of how signals, commands, and durable streams move safely across unreliable, adversarial terrain — in both directions, at scale, with full traceability.
NATS and the Synadia Platform are purpose-built for exactly this architecture: leaf node topologies that treat edge clusters as first-class entities, JetStream for local durability and controlled forwarding, decentralized security for tightly scoped credentials, and end-to-end observability across the full mesh.
Questions worth asking before your next architecture review
Do you have a written definition of what "the edge" means in your system? If the answer is "the devices on the other side of the network," the framing is still geographic. Try: "a separate operational realm with different connectivity, security, and observability characteristics than our core."
Does your architecture document cover the disconnected case explicitly? If the connectivity section assumes the link is up, it's documenting the happy path. The disconnected case is where edge architectures succeed or fail.
Are your edge and core security models the same? If your edge nodes use the same credentials, the same access scope, and the same network trust assumptions as your core services — you haven't built a boundary. You've built a flat network with devices in inconvenient locations.
Do you know what your edge nodes are doing right now? Not in aggregate. Individually. If the answer is "we'd have to look at logs," your observability model was designed for a data center, not an operational edge.
Getting these right doesn't require exotic technology. It requires accepting that the edge is a different operational dimension — and designing for it from the start, not retrofitting resilience onto a system that assumed it wouldn't need any.
This post is the first in a series exploring architecture patterns for resilient edge-to-core systems, based on Synadia's white paper Living on the Edge: Eventing for a New Dimension.
Next up: why "just retry" is the wrong mental model for intermittent connectivity — and what happens when you find out the hard way.
Tags: Edge Computing · Distributed Systems · IoT · Software Architecture · Streaming · Microservices