Amazon recently made headlines among developers: they published an article outlining why they ditched their serverless AWS services in favor of a monolithic infrastructure. The headline claim that the switch would reduce their cloud bill by 90% propelled the article into a viral wave, sparking heavy discussion about the future of serverless infrastructure setups.
This move comes as a surprise to some; after all, Amazon's own AWS has been a strong proponent and first mover in serverless since the introduction of Lambda functions in 2014.
At Codesphere, we have been pointing out the limitations of serverless for a while now. Instead, we believe in making the cloud, including complex container landscapes, as easy and fast to set up as serverless. Innovation in this field requires deep access, so we built low-level Kubernetes container services from scratch. The benefits are immense, but more on that later.
In theory, serverless functions let you scale effortlessly. Serverless apps still run on servers in the background, but developers do not need to concern themselves with capacity planning, configuration, or maintenance of those servers. When an app uses no resources, the cloud provider allocates no cluster resources and charges nothing: users only pay for what they actually use.
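To make the model concrete, here is a minimal sketch of what a serverless function looks like in practice: an AWS Lambda-style handler in Python. The event fields and the greeting logic are purely illustrative; the point is that you write only this function, and the provider handles provisioning, scaling, and per-invocation billing.

```python
import json

def handler(event, context=None):
    """Minimal AWS Lambda-style handler (illustrative).

    The provider invokes this function once per request; there is no
    server to provision, and billing accrues only per invocation.
    """
    # "event" carries the request payload; "context" holds runtime info.
    name = (event or {}).get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

Deploying this behind an HTTP endpoint is a few clicks or a short config file, which is exactly the appeal described above.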
Sounds too good to be true? For many workloads it largely holds up, but one thing at a time. The popularity of this approach has hit all-time highs in recent years. There is a whole industry of startups whose USP is essentially to make AWS serverless resources easier to use; one popular example is Vercel. The allure of running and scaling any JavaScript application within seconds, without any DevOps effort, appeals to small teams, individual hobby developers, and corporates alike. Complete with comfortable real-time collaboration features, web development suddenly seems a lot easier than it used to be.
For many use cases that promise actually holds: a lot of hassle can be avoided without sacrificing scalability. However, as the Amazon Prime example shows, this can come at quite a cost, up to the point where it is no longer feasible to maintain. In their blog article, the Amazon Prime Video team explained that their initial infrastructure design for the video/audio monitoring service consisted of many small, distributed, serverless functions (mainly AWS Step Functions and Lambda functions) that could scale individually. They concede the main reason for this decision was the fast setup compared to alternatives. As it turned out, scaling was not as smooth as expected: they hit a hard scaling limit at 5% of the expected load. On top of that, because the orchestrator charges per state transition, the orchestration and data movement between the distributed components made the entire system extremely expensive.
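The cost dynamic is easy to see with back-of-the-envelope arithmetic. Below is a hedged sketch of a per-state-transition billing model; the rate and the traffic numbers are assumptions for illustration, not current AWS pricing, but the shape of the problem is real: spend scales with traffic *times* the number of distributed steps each request touches.

```python
# Illustrative cost model for a Step Functions-style orchestrator that
# bills per state transition. The rate below is an ASSUMED example
# figure, not a quote of actual AWS pricing.
PRICE_PER_TRANSITION = 0.025 / 1000  # dollars per state transition (assumed)

def orchestration_cost(requests_per_day, transitions_per_request, days=30):
    """Monthly orchestration spend.

    Cost grows linearly in BOTH traffic and the number of distributed
    steps per request, so splitting one workflow into ten serverless
    components multiplies this line item by ten.
    """
    transitions = requests_per_day * transitions_per_request * days
    return transitions * PRICE_PER_TRANSITION

# At 1M requests/day, ten orchestrated steps per request costs 10x what
# a single step would; collapsing the steps into one process removes
# the per-transition charge entirely.
```

This is why a design that looks cheap at prototype scale can become the dominant cost driver at production scale.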
Their article outlines how switching to a "monolithic" infrastructure, with everything running in a single process on Amazon EC2, reduced their cloud spend by 90%. You might find this transparency surprising, especially because that's 90% off their spend on AWS services: Amazon saves money by spending less on Amazon services (I'm sure the irony is not lost on anyone here 😀).
There has been some debate about whether the new infrastructure should actually be called monolithic, but since there is plenty of content on that elsewhere, let's move on to a more general question: does every project that needs to work at scale have to go back to month-long infrastructure planning, hiring expensive DevOps engineers to configure and maintain containers, and worrying about provisioning enough capacity up front?
Not necessarily. We argue there is a third option. Just as the Vercels of this world made AWS serverless functions much easier to use by standardizing configuration and embedding everything needed for efficient development and operations into a user-friendly UI, there is an opportunity to do the same for the container infrastructure approach. What might sound like a distant fantasy to some, or like the limiting, claustrophobic experience of Heroku-style containers to others, is already available for many use cases.
Codesphere provides a modern and flexible infrastructure (built on bare metal instead of reselling cluttered AWS instances) that deploys running, usable servers in seconds with zero configuration, embedded in an environment optimized to improve your team's workflow, all while maintaining full scalability. To be fair, some features are still missing for running very complex landscapes with full global auto-scaling from the UI, but we are close: most use cases can already be handled, and the rest will follow later this year. The best part? We are relaunching our free tier after we had to shut it down due to abuse last year. The planned launch is mid-to-late June; if we sparked your interest, sign up to our waitlist for early access today!
In time, the entire software development process will move into the cloud, just as Figma moved the design process into the cloud. Making the cloud development experience as good as local development is hard, but if it can be achieved, the opportunity for improvement is open-ended: imagine cutting your build times by 10x or more by allocating flexible cloud resources where they are needed most, more powerful code completion, no more context switching during reviews, instant deployments, and much more. Codesphere already achieves much of that for reviews, through near-instant preview deployments and a connected cloud IDE, but of course we are just getting started.