Kunal

Originally published at kunalganglani.com

Oracle's AI Earnings Surprise: Why a Revenue Miss Sent the Stock Higher [2024 Analysis]

Oracle posted $14.3 billion in Q4 FY2024 revenue. That's 3% year-over-year growth. For most enterprise software companies, that number would trigger a selloff. Instead, Oracle's stock surged. The reason has nothing to do with what Oracle earned last quarter and everything to do with what Oracle's AI earnings signal about the next three years.

I've been watching Oracle's cloud infrastructure play closely because it's one of the most interesting bets in enterprise tech right now. A company that most engineers associate with expensive database licenses is suddenly the overflow capacity for OpenAI. That's not a small thing.

Here's what actually happened, why the market loved it, and what it means if you're building on or competing with cloud infrastructure.

Why Did Oracle Stock Go Up After Missing Earnings?

The headline numbers were mediocre. Cloud services and license support revenues rose 9% to $10.2 billion. Total quarterly revenue hit $14.3 billion, up just 3%. By most analyst estimates, Oracle missed on revenue expectations.

But investors weren't looking at the rearview mirror. They were looking at Remaining Performance Obligations (RPO), which surged 44% to $98 billion. RPO represents contracted future revenue. It's the backlog. And $98 billion is a staggering number for a company generating roughly $53 billion in annual revenue.
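The scale of that backlog is easier to see with the arithmetic laid out. This is a rough sketch using only the figures from the quarter; actual RPO recognition timing varies by contract and isn't captured here.

```python
# Back-of-the-envelope backlog math from the reported figures:
# $98B RPO vs. roughly $53B in annual revenue, RPO up 44% year over year.

rpo_billion = 98.0             # contracted future revenue (backlog)
annual_revenue_billion = 53.0  # approximate trailing annual revenue

# Backlog coverage: how many years of current revenue are already under contract.
coverage_years = rpo_billion / annual_revenue_billion

# Implied year-ago RPO, given the reported 44% growth.
prior_rpo_billion = rpo_billion / 1.44

print(f"Backlog coverage: {coverage_years:.2f}x annual revenue")   # ~1.85x
print(f"Implied year-ago RPO: ${prior_rpo_billion:.1f}B")          # ~$68.1B
```

Nearly two full years of revenue already under contract, and roughly $30 billion of new backlog added in a year. That's the number the market traded on.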

Safra Catz, CEO at Oracle, said on the earnings call that the company signed "the largest sales contracts in our history, led by huge demand for training large language models." That's not vague forward guidance. That's ink-on-paper commitments from some of the biggest names in AI.

Here's the thing nobody's saying about Oracle's earnings: the revenue miss is almost irrelevant. When your contracted backlog grows 44% year over year, the current quarter's revenue is a lagging indicator. The market got that, even if the headlines didn't.

Microsoft Is Renting Oracle's Cloud for OpenAI. Yes, Really.

This is the deal that made me do a double take. Microsoft, the company that has invested over $13 billion in OpenAI and built Azure into one of the world's largest cloud platforms, is extending Azure into Oracle Cloud Infrastructure to provide additional capacity for OpenAI.

Read that again. Microsoft doesn't have enough cloud capacity of its own to run OpenAI's workloads.

OpenAI CEO Sam Altman confirmed the arrangement, stating that "OCI will extend Azure's platform and enable OpenAI to continue to scale." That's diplomatic language for "we need more GPUs than Microsoft can currently provide."

Larry Ellison, Chairman and CTO at Oracle, described the infrastructure on the earnings call: a massive datacenter, roughly half dedicated to Microsoft, packed with liquid-cooled Nvidia chips designed specifically for large-scale AI model training. Not inferencing. Training. The heavy stuff.

Having worked on systems that depend on cloud infrastructure decisions, I know that a company like Microsoft doesn't outsource compute to a competitor unless the alternative is telling OpenAI to slow down. That's the real signal here. AI training demand has outstripped even Microsoft's supply, and Oracle had the capacity and the architecture to fill the gap.

For years, Oracle Cloud was an afterthought in conversations dominated by AWS, Azure, and Google Cloud. That's changing fast. Not because of marketing. Because Microsoft is literally writing checks to Oracle for GPU time. If you want proof that OCI is technically credible, I don't know what stronger endorsement exists than your biggest competitor becoming your customer. And these infrastructure decisions have real geopolitical and technical consequences that go beyond any single deal.

Oracle's Google Cloud Partnership Changes the Multi-Cloud Game

The Microsoft deal grabbed headlines, but the Google Cloud partnership might matter more long-term.

Oracle announced that Google's Cross-Cloud Interconnect service is now available in 11 OCI regions, and Oracle Database@Google Cloud is coming later in 2024. This means enterprises can run Oracle's database directly on Google's platform with a native, low-latency interconnect between the two clouds.

The 11-region interconnect matters for anyone architecting multi-cloud systems. It's not a marketing partnership. It's plumbing. Real network infrastructure connecting two clouds at the kind of latency that makes cross-cloud workloads viable for production use.

I've built enough distributed systems to know that interconnect quality between clouds is the thing that actually determines whether multi-cloud works or stays a PowerPoint fantasy. Latency, bandwidth, reliability at the network layer. That's what matters. Oracle and Google building this out across 11 regions tells me they're not messing around.
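To make the latency argument concrete, here's a toy budget check for a cross-cloud call path. All of the numbers are hypothetical placeholders, not measured OCI-to-Google figures; the point is how quickly chatty round trips eat a latency budget.

```python
# Toy model: does a request that crosses clouds still fit its latency budget?
# RTT and budget values below are illustrative, not measured numbers.

def added_latency_ms(cross_cloud_rtt_ms: float, round_trips: int) -> float:
    """Network latency a request accumulates from cross-cloud hops."""
    return cross_cloud_rtt_ms * round_trips

def fits_budget(budget_ms: float, base_ms: float,
                cross_cloud_rtt_ms: float, round_trips: int) -> bool:
    """True if base service time plus cross-cloud overhead stays within budget."""
    return base_ms + added_latency_ms(cross_cloud_rtt_ms, round_trips) <= budget_ms

# Example: 100 ms p95 budget, 40 ms of service time, 3 cross-cloud round trips.
print(fits_budget(100, 40, 2, 3))   # low-RTT native interconnect: fits
print(fits_budget(100, 40, 25, 3))  # generic cross-region path: blows the budget
```

Same application, same budget; the only variable is the interconnect. That's why a native, low-latency link between two clouds is the difference between a production architecture and a slide deck.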

This also fixes a problem that's been annoying enterprises for years. Companies running Oracle databases (and there are millions of them) have historically been locked into Oracle's infrastructure or forced into expensive, painful migrations. Putting Oracle Database natively on Google Cloud gives those enterprises a real multi-cloud option without the migration tax.

For Google, it's a play to attract Oracle-heavy enterprises that might otherwise default to Azure or AWS. For Oracle, it's distribution. OCI gets in front of Google Cloud's customer base without Oracle having to win those customers one by one.

The $98 Billion Question: Can Oracle Convert RPO to Revenue?

Here's where I get skeptical.

A $98 billion RPO backlog is impressive, but RPO is a promise, not revenue. The real test is execution. Can they build datacenters fast enough? Can they procure enough Nvidia GPUs when every hyperscaler and sovereign AI fund on the planet is competing for the same chips? Can OCI's operational reliability hold up under the kind of scale that Microsoft and OpenAI demand?

Oracle's cloud infrastructure has improved dramatically in the past few years, but it's still a fraction of AWS's or Azure's footprint. The company is essentially betting that AI training demand will grow faster than the hyperscalers can build, leaving room for Oracle to capture overflow capacity.

That bet might be right for the next 12 to 18 months. AI training workloads are genuinely supply-constrained right now. But this is a window, not a permanent advantage. Microsoft, Google, and Amazon are all investing tens of billions in datacenter expansion. When their supply catches up to demand, does Oracle still have a compelling offering?
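The lag between bookings and revenue is worth making explicit. Here's a minimal sketch of why a huge RPO number shows up slowly on the income statement, assuming (purely for illustration) that a contract's value is recognized evenly over its term; the contract size and term below are made up.

```python
# Toy backlog-to-revenue model: bookings are recognized evenly over the
# contract term, so revenue lags signings by design. Numbers are illustrative.

def revenue_schedule(bookings_by_quarter, term_quarters):
    """Spread each quarter's bookings evenly over `term_quarters` quarters."""
    revenue = [0.0] * (len(bookings_by_quarter) + term_quarters)
    for q, booked in enumerate(bookings_by_quarter):
        per_quarter = booked / term_quarters
        for k in range(term_quarters):
            revenue[q + k] += per_quarter
    return revenue

# A hypothetical $12B AI-training contract signed in Q1 with a 4-year
# (16-quarter) term adds only $0.75B of revenue per quarter.
sched = revenue_schedule([12.0], 16)
print(sched[0])  # 0.75
```

Under this model, a single quarter's revenue tells you almost nothing about the size of the deals being signed, which is exactly why the market looked past the 3% growth number.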

I think the answer depends on whether Oracle can build durable customer relationships during this window or whether it stays the "break glass in case of GPU shortage" option. The Google partnership suggests Oracle is thinking about durability. The database integration creates stickiness that outlasts the current supply crunch.

If you're interested in how AI infrastructure decisions ripple through the developer ecosystem, I wrote about why AI latency matters more than benchmarks for application architecture. It connects directly to why cloud infrastructure choices are more important than most teams realize.

What This Means for Engineers Building on Cloud Infrastructure

If you're making infrastructure decisions right now, Oracle's earnings tell you three things.

First, multi-cloud is no longer a buzzword. Microsoft is running OpenAI workloads on Oracle's cloud. Oracle's database runs natively on Google Cloud. The walls between providers are coming down. Plan for it.

Second, AI training infrastructure is supply-constrained, and that constraint is creating partnerships that would have been unthinkable two years ago. Your cloud provider's GPU availability roadmap matters as much as their pricing. If you're not asking about it, start.

Third, Oracle is a legitimate player in AI infrastructure now. I'm not saying migrate to OCI tomorrow. But if you're evaluating cloud providers for GPU-heavy workloads and OCI isn't in the conversation, you're leaving options on the table.

The most interesting thing about Oracle's quarter isn't the revenue. It's the fact that the company's biggest competitors are now its biggest customers.

I've shipped enough features on cloud infrastructure to know that the provider landscape shifts faster than most engineering teams' evaluation cycles. What was true about OCI two years ago isn't true today. Oracle still has a narrow window to prove this isn't a temporary overflow play but a lasting infrastructure business. The partnerships with Microsoft and Google suggest they're playing for keeps. The $98 billion backlog gives them runway.

But runway isn't a moat. The next four quarters will tell us whether Oracle's AI bet converts into a permanent seat at the hyperscaler table or whether it was the right product at the right time for exactly one cycle. If you're making multi-cloud bets, understanding how AI agent architectures are driving new infrastructure demands is worth your time. The compute demands are only going up.

Oracle hasn't won anything yet. But for the first time in a decade, they're in the conversation. And sometimes, that's how empires start.

