
vaibhav bedi
Microsoft Azure vs OCI Networking: A Deep Dive

So you're evaluating cloud providers and you've gotten past the usual suspects. Azure's everywhere, obviously, but Oracle Cloud Infrastructure keeps popping up in conversations, especially when people talk about networking performance and cost. I spent the last few months working with both platforms pretty heavily, and honestly, the networking models are different enough that it's worth digging into the details.

The Mental Models Are Different

This is the thing that hit me first. Azure and OCI approach networking from fundamentally different philosophies, and once you understand that, everything else makes more sense.

Azure feels like it evolved. Because it did. You've got Virtual Networks (VNets), but then you've also got Classic VNets (deprecated but still haunting documentation), then Service Endpoints, then Private Link, then VNet peering, then Virtual WAN. Each feature was added to solve a problem, and while they all work, you're dealing with layers of abstraction that sometimes feel like archaeological strata.

OCI feels like someone sat down and said "what if we designed this from scratch in 2016, knowing everything we know now?" The result is cleaner but less forgiving if you don't understand the fundamentals. There's a Virtual Cloud Network (VCN), and that's pretty much it. Everything else is routing rules, security lists, and network security groups. It's more Unix-philosophy: do one thing well.

VNets vs VCNs: The Foundation

Azure VNets give you a /16 by default, though you can go smaller or bigger. You carve them up into subnets, and subnets are where you actually attach resources. Subnets can span availability zones, which is both convenient and slightly terrifying from a failure domain perspective.

OCI VCNs also start with a CIDR block (up to /16), but subnets work differently. A subnet can be pinned to a specific availability domain (OCI's term for an AZ) or be regional and span all of them - regional is now the recommended default - and you explicitly choose whether it's public or private when you create it. This forces you to think about your architecture upfront, which I've learned to appreciate even when it's annoying.

The addressing flexibility in Azure is better if you're doing complex hybrid scenarios. Azure lets you modify address spaces on existing VNets, add address ranges, even swap them around. OCI is stricter - you define your CIDR blocks upfront, and while you can add CIDR blocks to an existing VCN, modifying or removing ranges that are already in use is restricted.
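Either way, you end up carving a big block into subnets and checking for overlaps before you commit. A quick sketch with Python's standard `ipaddress` module - the 10.x ranges here are just illustrative:

```python
import ipaddress

# Carve a /16 VNet/VCN address space into /24 subnets.
vnet = ipaddress.ip_network("10.0.0.0/16")
subnets = list(vnet.subnets(new_prefix=24))

print(len(subnets))   # 256 possible /24 subnets
print(subnets[0])     # 10.0.0.0/24
print(subnets[1])     # 10.0.1.0/24

# Check that a proposed range doesn't overlap peered address space --
# the kind of validation worth doing before committing a CIDR in OCI,
# where changing it later is harder than in Azure.
peered = ipaddress.ip_network("10.1.0.0/16")
print(vnet.overlaps(peered))  # False: safe to peer
```

Running overlap checks like this in CI against your Terraform variables is cheap insurance, whichever cloud you're on.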

The Peering Story

Azure VNet Peering is straightforward: you peer two VNets, traffic flows between them at Azure backbone speeds, and you pay for data transfer. You can peer globally, which is legitimately useful. The gotcha is that peering is non-transitive by default, so if VNet A peers with VNet B, and VNet B peers with VNet C, A and C can't talk. You need to set up a hub-and-spoke with NVAs or use Virtual WAN to get around this.

OCI uses Local Peering Gateways (LPG) for VCNs in the same region and Dynamic Routing Gateways (DRG) for cross-region. The DRG is actually pretty slick - it acts as a regional router that can connect multiple VCNs, on-premises networks via FastConnect, and even VCNs in other regions. Transitivity is built-in if you route through a DRG, which saves a lot of headache.

One thing that surprised me: OCI's inter-region traffic between VCNs is free if you use their backbone. Azure charges you for cross-region VNet peering. When you're moving serious data between regions, this adds up.

Internet Connectivity and NAT

Azure gives you a few options. You can assign public IPs directly to resources, use a NAT Gateway for outbound from private subnets, or run traffic through an NVA. The NAT Gateway is fully managed and priced per hour plus data processing. It's fine, works as expected.

OCI has NAT Gateways too, but also Internet Gateways. The difference matters: an Internet Gateway is for resources with public IPs that need inbound and outbound access. A NAT Gateway is for private resources that only need outbound. You attach these to your VCN's route tables. Coming from AWS, this felt familiar. Coming from Azure, it felt like extra steps, but it gives you more granular control.

One weird OCI quirk: you can assign a public IP to a resource in a public subnet, but it won't actually be reachable unless the subnet's route table sends traffic through an Internet Gateway. (Private subnets won't let you assign a public IP at all.) This has bitten me exactly once, and that was enough.
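The failure mode is just route-table lookup: no matching route, no packets. A minimal longest-prefix-match sketch, with an illustrative route table:

```python
import ipaddress

# A public IP on an instance does nothing unless a route actually
# sends 0.0.0.0/0 (or the destination prefix) somewhere useful.
route_table = [
    ("10.0.0.0/16", "local"),            # intra-VCN traffic
    ("0.0.0.0/0", "internet_gateway"),   # remove this and egress breaks
]

def next_hop(dest_ip, routes):
    dest = ipaddress.ip_address(dest_ip)
    matches = [(ipaddress.ip_network(cidr), target)
               for cidr, target in routes
               if dest in ipaddress.ip_network(cidr)]
    if not matches:
        return None  # no route: packets are dropped
    # Longest prefix wins when multiple routes match.
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(next_hop("10.0.3.7", route_table))       # local
print(next_hop("93.184.216.34", route_table))  # internet_gateway
# Without the default route, internet-bound traffic goes nowhere:
print(next_hop("93.184.216.34", route_table[:1]))  # None
```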

Security: NSGs and All That

Azure Network Security Groups attach to subnets or individual NICs. Rules are priority-based (lower number = higher priority), and you get default rules you can't delete. The portal UI for managing complex NSG rules is... not great. I end up using ARM templates or Terraform for anything non-trivial.

OCI has both Security Lists and Network Security Groups. Security Lists are the old way, applied at the subnet level. NSGs are newer, more flexible, and work more like AWS security groups - they're stateful, you attach them at the VNIC level, and they're generally the preferred approach now. The documentation pushes you toward NSGs, and you should listen.

OCI's default security posture is more locked down. Azure tends to be permissive by default (especially with PaaS services), and you lock things down. OCI makes you explicitly allow traffic. Neither approach is wrong, but you need to know which world you're in.
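The Azure priority model is worth internalizing, because "lower number wins, first match decides" is the whole game. A sketch with made-up rule numbers (the undeletable defaults really do sit at priority 65000 and up):

```python
# Azure NSG semantics, roughly: evaluate rules in priority order
# (lowest number first), the first matching rule decides, and the
# default rules sit at the bottom where you can't delete them.

rules = [
    {"priority": 100,   "action": "allow", "port": 443},
    {"priority": 200,   "action": "deny",  "port": 443},   # never reached
    {"priority": 65500, "action": "deny",  "port": None},  # default deny-all
]

def evaluate(port, rules):
    for rule in sorted(rules, key=lambda r: r["priority"]):
        if rule["port"] is None or rule["port"] == port:
            return rule["action"]
    return "deny"

print(evaluate(443, rules))  # allow -- priority 100 matches first
print(evaluate(22, rules))   # deny -- falls through to the default rule
```

OCI NSG rules work more like AWS security groups: a flat set of stateful allow rules with no priority ordering to reason about.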

Load Balancing

Azure's got Application Gateway (L7, WAF-capable), Load Balancer (L4, regional or global with cross-region), and Front Door (global L7 with CDN). Front Door is genuinely great for global applications, but the pricing can shock you if you're not careful.

OCI has Load Balancers (L4/L7 in the same service, which is cleaner conceptually) and Network Load Balancers for ultra-low-latency L4. The performance on OCI's Network Load Balancers is wild - single-digit microsecond latency in some scenarios. If you're doing high-frequency trading or real-time gaming, this matters. For most of us, it's overkill.

Configuration-wise, Azure's load balancers have more knobs to turn. OCI's are simpler but less flexible. Pick your poison.

Private Connectivity to Other Services

Azure's story here has evolved into Private Link, which is honestly elegant once you understand it. You create a Private Endpoint in your VNet, it gets a private IP, and you can connect to Azure PaaS services or your own services over the Azure backbone. No public internet, no service endpoints weirdness, just private IPs. The DNS side can be tricky, though. You need private DNS zones, and linking them correctly to your VNets is a common source of frustration.

OCI uses Service Gateways for private access to Oracle Services (Object Storage, etc.) without traversing the internet. It's more limited in scope than Private Link but works well for what it covers. For your own services, you're generally using private IPs within your VCN and relying on the network fabric.

Hybrid Connectivity

Azure ExpressRoute is mature, widely available, and works with basically every major carrier. You get private peering for your VNets, Microsoft peering for M365/Dynamics, and options for Global Reach to connect your on-prem locations through Azure's backbone. The pricing is per port plus data transfer, and it gets expensive fast in higher tiers.

OCI FastConnect is similar conceptually but simpler in practice. You connect to OCI, you get access to your VCNs via DRG, done. The pricing is generally lower than ExpressRoute, especially for higher bandwidths. OCI also has some interesting partnerships with Azure for direct interconnection between the two clouds, which is useful if you're running Oracle databases in OCI but everything else in Azure.

The Azure-OCI Interconnect deserves its own mention. Microsoft and Oracle partnered to create dedicated connections between their clouds in certain regions. If you're running Oracle databases in OCI and need to connect them to Azure-hosted apps, this is way better than going over the internet. Latency is single-digit milliseconds in supported regions.

DNS and Service Discovery

Azure DNS is solid. You can host public zones, create private zones for internal resolution, and link private zones to VNets. The integration with Private Link means DNS just works for private endpoints, once you've set up the zones.

OCI uses a resolver model. Each VCN has a DNS resolver, and you can configure custom resolvers for hybrid scenarios. It works, but it feels less polished than Azure's approach. For complex hybrid DNS scenarios, you'll probably end up running your own DNS infrastructure in both clouds.

Observability

Azure Network Watcher gives you topology views, packet capture, connection troubleshooting, NSG flow logs, and Traffic Analytics. Flow logs go to Log Analytics, and you can query them with KQL. It's comprehensive but can be overwhelming. The cost of storing flow logs in Log Analytics also sneaks up on you.

OCI's VCN Flow Logs are simpler. You enable them per subnet, they dump to Object Storage or Logging service, and you parse them yourself. There's less built-in analysis, but the raw logs are cheaper to store. If you're comfortable with log analysis tools, this is fine. If you want dashboards out of the box, Azure's further along.
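"Parse them yourself" usually means something like a top-talkers pass. A minimal sketch that assumes the raw records have already been parsed into dicts - the field names here are illustrative, not OCI's exact flow-log schema:

```python
from collections import Counter

# Hypothetical pre-parsed flow records: src, dst, and byte count.
records = [
    {"src": "10.0.1.5", "dst": "10.0.2.9", "bytes": 1200},
    {"src": "10.0.1.5", "dst": "10.0.2.9", "bytes": 800},
    {"src": "10.0.3.1", "dst": "10.0.2.9", "bytes": 300},
]

# Aggregate bytes per (source, destination) pair.
traffic = Counter()
for r in records:
    traffic[(r["src"], r["dst"])] += r["bytes"]

for (src, dst), total in traffic.most_common():
    print(f"{src} -> {dst}: {total} bytes")
# 10.0.1.5 -> 10.0.2.9: 2000 bytes
# 10.0.3.1 -> 10.0.2.9: 300 bytes
```

On Azure you'd get the equivalent from a KQL query over the flow logs in Log Analytics; on OCI you're running this kind of thing yourself against Object Storage dumps.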

Performance and Cost

This is where things get spicy. OCI's network backbone is genuinely fast. They built it recently with modern hardware, and it shows. For workloads where network latency and throughput matter - databases, especially - OCI often performs better than Azure in benchmarks.

Azure's network is more geographically distributed, though. If you need presence in 60+ regions, Azure's got you covered. OCI is growing fast but still has fewer regions.

On cost, OCI is usually cheaper for raw compute and egress. Azure's egress charges are brutal - roughly $0.087/GB for the first 10TB in most regions, after a small free monthly allowance. OCI gives you the first 10TB per month free, then charges $0.0085/GB. That's not a typo. For data-intensive workloads, this difference is massive.
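A back-of-the-envelope comparison using those quoted rates - list prices vary by region and change over time, so treat the numbers as illustrative, not a quote:

```python
# Approximate monthly free egress allowance (GB) and first-tier rates.
FREE_GB = {"azure": 100, "oci": 10_000}
RATE_PER_GB = {"azure": 0.087, "oci": 0.0085}

def egress_cost(provider, gb):
    # Bill only the traffic above the free allowance, first tier only.
    billable = max(0, gb - FREE_GB[provider])
    return billable * RATE_PER_GB[provider]

gb = 50_000  # 50 TB out in a month
print(f"Azure: ${egress_cost('azure', gb):,.2f}")
print(f"OCI:   ${egress_cost('oci', gb):,.2f}")
```

At 50 TB/month the gap is roughly an order of magnitude, which is why the egress line item tends to dominate these comparisons.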

The Verdict?

There isn't one, not really. Azure makes sense if you're already in the Microsoft ecosystem, need global coverage, or want mature PaaS services. The networking is complex but powerful, and you can do basically anything if you're willing to learn the 47 different ways to connect things.

OCI makes sense if you're cost-conscious, running Oracle databases, or need predictable high performance. The networking is simpler but more opinionated. You'll spend less time fighting abstractions and more time understanding IP routing, which might be a plus or minus depending on your perspective.

For what it's worth, I've stopped thinking about this as an either/or question. A lot of companies are running both, using Azure for general workloads and OCI for Oracle databases or cost-sensitive batch processing. The Azure-OCI interconnect makes this surprisingly practical.

The real advice? Spin up free tiers in both, build a simple multi-tier app, and see which model clicks for you. Networking is one of those things where hands-on experience beats any article, including this one.
