Daniel Nwaneri

Why Edge Computing Forced Me to Write Better Code (And Why That's the Future)

Two articles hit my feed this week that got me thinking: Simon Willison's piece on delivering proven code, and a DEV post about losing our software manners. Both are right about developers getting lazy. But they're missing something critical.

The real constraint isn't ethics or professionalism. It's your bank account.

The Reality Check

I run FPL Hub, a Fantasy Premier League platform serving 2,000+ users with 500K+ API calls daily. When I started, I built it like most developers do—grab whatever frameworks feel good, pull in dependencies freely, ship it and scale later.

Then I got the first Cloudflare Workers bill.

Every millisecond of CPU time costs money. Every wasted loop, every unnecessary dependency, every bloated response—I paid for it. Not my company's AWS account with unlimited budget. My actual money.

That's when I learned what no bootcamp or tutorial ever taught me: waste has a price tag.

The Missing Piece

Simon talks about proving your code works. Daniel talks about the discipline we lost when hardware got cheap. Both appeal to being a good engineer.

But here's what changed my behavior: I couldn't afford to be wasteful.

Edge computing platforms like Cloudflare Workers, AWS Lambda, and Vercel Functions reintroduced constraints through economics:

  • Pay per request
  • Pay per millisecond
  • Pay per GB transferred
  • Hard CPU limits
  • Tight memory constraints

Suddenly, that 50MB dependency for a simple utility function isn't just "bad practice"—it's costing you money on every cold start.
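To make that concrete, here's the kind of swap this forces: instead of importing an entire utility library for one function, you write the few lines you actually use. This is an illustrative sketch with made-up FPL-style data, not code from FPL Hub:

```javascript
// Instead of pulling in a whole utility library for one helper,
// replace the only function actually used with a few lines.
function groupBy(items, keyFn) {
  const groups = {};
  for (const item of items) {
    const key = keyFn(item);
    (groups[key] ??= []).push(item);
  }
  return groups;
}

// Hypothetical player data, grouped by team
const byTeam = groupBy(
  [
    { team: "ARS", pts: 8 },
    { team: "LIV", pts: 6 },
    { team: "ARS", pts: 2 },
  ],
  (p) => p.team
);
```

Ten lines of code you own and can tree-shake to zero, versus megabytes parsed on every cold start.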

What 500K Daily Calls Taught Me

Before edge computing:

  • Pulled in entire libraries for single functions
  • Didn't worry about response sizes
  • "It works" was good enough
  • Tests were optional (until production broke)

After Cloudflare started billing me:

  • Every dependency justified
  • Response payloads measured in KB
  • Performance profiling became mandatory
  • Tests became insurance against expensive bugs
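"Response payloads measured in KB" mostly means one habit: return only the fields the client renders instead of forwarding an upstream API's full object. A minimal sketch (field names are illustrative, not the real FPL API shape):

```javascript
// Forward only what the client renders, not the upstream object.
function slimPlayer(raw) {
  return { id: raw.id, name: raw.name, points: raw.points };
}

// Pretend upstream response, padded with history the UI never shows
const full = {
  id: 427,
  name: "Saka",
  points: 42,
  history: new Array(500).fill({ round: 1, minutes: 90 }), // dead weight
};

const before = JSON.stringify(full).length;
const after = JSON.stringify(slimPlayer(full)).length;
// `after` is a small fraction of `before` - bytes you don't pay to transfer
```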

The 99.9% uptime I maintain isn't because I'm naturally disciplined. It's because downtime and inefficiency directly hit my margins.

The Broader Context

Stack Overflow's 2025 survey shows 92% of developers now use AI coding tools. That's great for velocity. But most of these tools generate code optimized for "working," not for "efficient."

I've seen it on Upwork as a freelancer. Developers dump AI-generated Next.js apps that consume 2GB RAM to show a form. It "works." But run that on a platform with real constraints and you'll burn through your budget in days.

Meanwhile, startups in Nigeria (where I'm based) and across emerging markets can't afford AWS bills that assume infinite resources. We need software that's efficient by default, not by exception.

Edge Computing as the New Constraint

This isn't about going back to 1995 and programming like we have 4MB of RAM. It's about modern platforms reintroducing the economic forcing function that makes efficiency matter:

Cloudflare Workers:

  • 128MB memory limit
  • 50ms CPU time (free tier)
  • Pay per request after free tier
  • Forces you to optimize or go broke

AWS Lambda:

  • Pay per GB-second
  • Cold start costs
  • Every inefficiency visible in your bill

Vercel Edge Functions:

  • Similar constraints
  • Performance is directly tied to cost

These platforms make waste expensive again. Not in abstract "bad practice" terms, but in actual dollars leaving your account.

What This Means For You

If you're building with AI tools, ask yourself:

  1. Can this code run profitably at scale?

    • Not "does it work?"
    • Not even "is it maintainable?"
    • But: "Can I afford to run this?"
  2. What's the cost per request?

    • If you don't know, you're probably overspending
    • Edge platforms make this transparent
  3. Is this dependency worth its weight?

    • That 50MB library costs you on every cold start
    • Tree-shaking isn't enough—start with less
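Answering question 2 is one line of arithmetic. The numbers below are illustrative, not official pricing; the point is knowing your own ratio:

```javascript
// Back-of-envelope cost per request: monthly bill spread over
// ~30 days of traffic. Illustrative numbers, not official pricing.
function costPerRequest(monthlyBillUsd, requestsPerDay) {
  return monthlyBillUsd / (requestsPerDay * 30);
}

// e.g. a $20/month bill at 500K requests/day
const cpr = costPerRequest(20, 500_000); // roughly a ten-thousandth of a cent
```

If you can't fill in those two arguments for your own service, that's the transparency gap the article is talking about.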

The Future Is Constrained (Again)

The developers winning in 2025 aren't the ones generating the most code. They're the ones building systems that are:

  • Efficient enough to run profitably
  • Lean enough to deploy globally
  • Fast enough to compete

Edge computing isn't just a deployment target. It's a forcing function that brings back the discipline we lost when "just add more RAM" became the default answer.

My VPN-Workers project (targeting Nigerian users where bandwidth is expensive) would be impossible with bloated traditional architecture. But on Cloudflare Workers, where every byte and millisecond is accounted for, it's not just possible—it's profitable.

The Bottom Line

We didn't lose our software manners because we're lazy. We lost them because nothing punished waste.

Edge computing brings back consequences. Not through shame or ethics, but through your P&L statement.

And honestly? That's the constraint that actually changes behavior.


What's your experience with serverless costs? Are economic constraints making you a better developer, or just adding stress? Drop your thoughts below.

Top comments (12)

leob • Edited

Interesting, great piece - I just wonder how you go about optimizing, because your bill does not tell you exactly where that inefficiency comes from - all it would specify are network bandwidth usage, RAM usage, CPU usage ...

Do you try to optimize as much as you can "up front", before going live? Or do you see that your bill is "high" (but what's 'high' or 'low'?), and then you try to figure out what could be causing that?

In other words, I wonder what the recommended approach/methodology here would be ...

Daniel Nwaneri

Good question—I learned this the hard way.

Upfront (before deploy):

  • Profile locally with wrangler tail to catch obvious slowness
  • Question every dependency. That 200KB library for one util function? Nope.
  • Benchmark critical paths: "Can I get this under 20ms?"
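A rough sketch of that benchmarking step, runnable in plain Node (the route logic here is a made-up stand-in, not FPL Hub code):

```javascript
// Average latency of a hot path over N runs, using the global
// performance timer available in modern Node.
async function benchmark(fn, runs = 100) {
  const start = performance.now();
  for (let i = 0; i < runs; i++) await fn();
  return (performance.now() - start) / runs; // avg ms per call
}

// Stand-in for the critical path under test
async function fplScore() {
  return [1, 2, 3].reduce((a, b) => a + b, 0);
}

benchmark(fplScore, 50).then((ms) => {
  if (ms > 20) console.warn(`hot path averaging ${ms.toFixed(2)}ms - over budget`);
});
```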

In production:
Cloudflare's dashboard shows which endpoints are expensive. When something spikes, I check logs, profile that specific route, fix the obvious stuff (unnecessary API calls, bloated responses), deploy, monitor.

What's "high"?
For context: FPL Hub does 500K requests/day for ~$15-20/month. Target is under $0.00004 per request. If I see $50+, something's broken.

Real trick: Start lean. Way easier to add dependencies when you need them than rip them out when your bill's already high.

I now think in "cost per feature"—every new function gets a rough estimate of CPU time it'll add.

leob

Thanks, excellent advice! I wonder if Cloudflare and similar platforms are more "developer friendly" (easier to use), especially their dashboards etc, than AWS - I just find AWS (its tooling, docs, admin UI, dashboards - basically everything) daunting, but of course that's also because it's just a huge platform ...

Daniel Nwaneri

Oh 100%. AWS is a nightmare if you're trying to move fast.

Cloudflare Workers: wrangler init, wrangler deploy, and you're live. Dashboard shows requests, CPU time, errors. Done. Docs are actually readable.

AWS Lambda requires understanding IAM, VPCs, API Gateway, CloudWatch... just to deploy a function. Then the bill is hieroglyphics.

I think edge platforms learned from AWS's mistakes. They optimized for "developer wants to ship code" instead of "enterprise architect designing distributed systems."

For FPL Hub, I'm one person—can't spend days configuring AWS infrastructure. Cloudflare was the only realistic choice.

leob

Haha okay, what you say about AWS (specifically Lambda) resonates heavily with me - I'm involved in a project where we use AWS Lambda (and indeed CloudWatch), but we're using a tool/platform which simplifies things, and I've tried to stay away from VPC, avoiding all that complexity like the plague whenever possible - but, the complexity is evident nonetheless ...

Project isn't live anyway (far from it) although that's mostly because of NON-technical reasons ;-) but it does make me question the decision to use AWS, especially because no cloud but just a VPS would probably also get the job done ;-)

Yeah AWS is a beast, it's horribly complex, I think only people who are certified in it can "love" it, or tech startups/companies that are very "technical", enterprise architects as you say - mere mortals, independent developers, no not really ;-)

Daniel Nwaneri

Yeah, using a tool to abstract AWS is basically admitting "this is too complex" 😆

The VPS vs serverless debate is real tho. For your case: if it's not even live yet and you're questioning AWS, honestly just spin up a $5 Digital Ocean droplet and ship it. You can always migrate later if you actually need Lambda's scale.

I only went serverless because FPL Hub hit scale problems early. If I was at 1K requests/day? VPS all the way. Simpler, predictable costs, SSH in and fix things.

The dirty secret: most projects claiming they "need" serverless don't. They need to ship and get users first. Premature scaling is real.

Cloudflare Workers worked for me because I was already at scale and needed it. But if you're not live? Don't torture yourself with AWS complexity for hypothetical traffic.

Ship on a VPS, prove the product works, scale later if you actually need to.

leob • Edited

Wow this hits home lol - you're just confirming what I was intuitively already thinking ...

"For your case: if it's not even live yet and you're questioning AWS, honestly just spin up a $5 Digital Ocean droplet and ship it. You can always migrate later if you actually need Lambda's scale."

Yeah maybe we should, and the rest of what you're saying in your comment is also spot on - I'm honestly baulking at AWS's ridiculous complexity, even though the tool we use sweeps it pretty much under the rug ...

And for now there's nothing that indicates we even remotely need this kind of "scale" - Lambda, for us, right now, is more or less just hype/YAGNI/premature optimization ...

The good thing is that our app should be easy to "port" to a VPS, there's no AWS vendor lock-in yet - and if ever we need it, it should be easy to "port it back" to AWS/Lambda ...

Yeah the insane amount of complexity of the AWS platform is pretty much putting me off, mainly because I'm just not interested in becoming an "AWS expert" ;-)

Daniel Nwaneri

"I'm just not interested in becoming an 'AWS expert'"
This is the real cost nobody talks about. Learning AWS isn't a weekend - it's a career specialization.

If you're building a product, every hour learning IAM policies is an hour NOT building features or talking to users. That's the actual waste.

Since you have no vendor lock-in yet, honestly just ship on the VPS. Get real users, real feedback, real revenue. If you blow up and actually need Lambda's scale - great problem to have, migrate then. You already know what to do. Pull the trigger lol.

leob

Nailed it ... !

Nadeem Zia

good information

Rajat Arora

Even if we're not building for edge computing, writing efficient software always has its merits.

Not everyone can afford a MacBook with oodles of RAM. So if your Next.js app consumes 2GB of it for rendering a form, maybe the form crashes your user's laptop. Hey, you just lost a customer!

If someone is trying to load your 100 MB SPA on a shitty mobile connection, they get frustrated after a minute and stop trying. That's another customer lost!

Always write efficient software, people!

Daniel Nwaneri

Absolutely! You nailed the accessibility angle I should've emphasized more.

Edge computing forced me to care about efficiency, but you're right—it matters everywhere. The Nigerian users I'm targeting with VPN-Workers often deal with:

  • Expensive mobile data (pay-per-MB)
  • Throttled connections
  • Older Android devices with 2-4GB RAM

A bloated SPA doesn't just frustrate them—it literally prices them out.

The irony is that edge platforms' economic constraints (pay per millisecond) align perfectly with user constraints (limited bandwidth, older hardware). Building for one automatically serves the other.

We got spoiled thinking everyone has fiber internet and M3 MacBooks. They don't.