
Philipp Renoth

Consider Rust for cloud computing.

Costs at the big public cloud providers went down every year until today, but it looks like 2023 could be the year of so-called cloud repatriation. To be honest, I did not invent that term, but my gut feeling has always been that there are huge downsides ahead, and that your company is better off being prepared for the day operational costs skyrocket.

Forbidden fruits in the cloud

Companies told success stories about how they "lift and shift" everything to the cloud: no datacenter hassle, no hardware orders, just a few bucks a day, and of course high availability and infinite resources.

The psychology of subscription-based payments plays well with on-demand cloud resource provisioning and the short-term thinking of companies. Getting a foot in the door of cloud computing is neither hard nor expensive, but that honeypot makes it easy for companies to forget to continuously track costs against objectives, and they may face a homemade pay-day in the near future.

While the gates of cloud heaven stand open, the providers did not rest: they optimized heavily to reduce costs and gave overall cloud prices a decreasing trend for customers. This may come to an end now that crises are piling up, in particular the aftermath of the pandemic, such as global semiconductor shortages, and the global energy crisis, which is further aggravated by war.

The long-forgotten term "resource efficiency"

The computer you're sitting in front of might have been one of the fastest machines ten years ago and may be scrap in another ten, yet today you use it for very simple development tasks where spinning up a local environment allocates half your memory and keeps the CPU fans going all the time. In my free time I enjoy some game development, and it's completely insane what a computer can do at 60 frames per second, while the same machine struggles with a few poorly implemented CRUD applications.

Such invisible but ubiquitous slowdowns distort any sense of performance and make us think that a hundred concurrent users is a serious load for a server. It isn't, and it hasn't been at any point in the last decade, but we got used to technologies with lots of friction and started fixing symptoms instead of the root cause, because it was cheaper.

One of the big milestones in web development was Node.js, which proved that a single-threaded script is enough to serve hundreds of concurrent users, with the ease of JavaScript and without even a compile step. A bit later, Go showed the same for a natively compiled language without sacrificing developer experience.

Modern technologies understand that IO should not be thread-bound. Still, only a small minority of servers really use all their resources efficiently. Almost all of them run on a virtual machine (VM) with garbage collection (GC), which means memory and performance overhead. Thanks to optimizations, performance can approach native speed, but memory consumption is far from that of raw native data. Now back to the cloud: servers are not utilizing their resources efficiently. It's daily business for companies to simply pay for more resources, and the topic of "paying for more than you use" didn't carry much weight in the past.
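
As a minimal sketch of what non-thread-bound IO looks like in Rust (assuming the tokio crate as the async runtime; the echo logic is invented for illustration), a handful of worker threads can serve thousands of concurrent connections, because each connection is a cheap task rather than an OS thread:

```rust
use tokio::io::{AsyncReadExt, AsyncWriteExt};
use tokio::net::TcpListener;

#[tokio::main]
async fn main() -> std::io::Result<()> {
    let listener = TcpListener::bind("127.0.0.1:8080").await?;
    loop {
        let (mut socket, _) = listener.accept().await?;
        // Each connection becomes a lightweight task; the runtime
        // multiplexes thousands of them over a few OS threads.
        tokio::spawn(async move {
            let mut buf = [0u8; 1024];
            // Echo one read back to the client, then drop the socket.
            if let Ok(n) = socket.read(&mut buf).await {
                let _ = socket.write_all(&buf[..n]).await;
            }
        });
    }
}
```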

The mindset of excessively throwing resources around because they are cheap should end this year. The golden age of resource-friendly technologies has begun, and it's the best time to rethink past choices.

Technology stacks at companies

Comparing software technologies across all aspects is nearly impossible. There are many parameters, starting with the knowledge and experience of the team(s), continuing with operational costs, and ending with future risks, for example whether hiring new employees may become a problem. For good reasons, companies tend not to reinvent the wheel every year by starting a technology-stack shift.

Another thing is that even if operational costs decrease, software still has to be developed, and development is very costly. Putting it all together, your company may optimize costs by choosing the technology that gets you up and running fastest: one that is good for prototyping and that the team is used to.

If you ask experienced developers, much of that discussion is like a religious question, and of course the majority of developers stick with the same technology, because they were hired for exactly that position and so they all pull together.

From a company's perspective, an experienced team that works well together is an important asset, but in the long run dev teams should also consider new paths. It's very important that CTOs do their job well and evaluate new technologies. Computer science is still a young field; there is so much to learn from the past, and more and more will show up in the future.

Consider Rust

Finally, as a cloud technology, I'd like to list some of Rust's unbeatable advantages and put some of its disadvantages into perspective:

Pro

  1. Rust is small, fast, and secure: Most technologies we have in the cloud lack at least one of those characteristics. "Secure" is mentioned because C/C++ is also small and fast, but with those native technologies it's not possible to write secure apps by default, e.g. memory-safe ones. Rust has it all: running 15MB Alpine Linux images (or even `FROM scratch`) as containers that take about 10MB of memory idle and up to 50MB at peak while utilizing all available computation power are normal metrics for services written in Rust. Other technologies may take roughly 10-100x more memory, which of course has a bad impact on running costs. Any "serverless" or "scale to zero" idea also plays well with Rust, as service startup is blazing fast.
  2. Rust uses explicit control flow for error handling as well as concurrency: You can still make a mess by opting in, i.e. writing code that simply ignores the error case, but code reviews will flag such fishy code. There are no hidden control-flow channels like thrown exceptions, and that's for a good reason (see the first sketch after this list). Asynchronous code must also satisfy all thread-safety constraints, and the so-called "async runtimes" distinguish between computation-heavy, thread-based tasks and simple asynchronously glued code. So you can consider Rust code to be very expressive all the time, which is good for maintenance.
  3. Software architecture by language design: Reference counting and garbage collection are also a curse. Combined with concurrency or parallelism, most languages cannot express how components are allowed to be used in a given multi-threaded context. Rust is very strict about ownership and usage, e.g. immutable vs. mutable access, as well as use from a different thread (see the second sketch after this list). That seems like a huge constraint, but overall it's a positive game-changer for software design and architecture. The way you have to manage resources feels very natural once you get used to it, and then unrestricted, shared components start to look like fishy design.
  4. Rust has modern tooling: All major platforms, and many more, are supported. Rust projects are built according to conventions, the standard build tool has been cargo for many years, and the community loves it.
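
To make the second point concrete, here is a minimal, hedged sketch of how errors are plain values propagated explicitly with the `?` operator (the `ConfigError` type and `read_port` function are invented for illustration):

```rust
use std::fs;
use std::num::ParseIntError;

// Hypothetical error type: each failure mode is an explicit variant.
#[derive(Debug)]
enum ConfigError {
    Io(std::io::Error),
    Parse(ParseIntError),
}

// `?` propagates errors up the call chain -- no hidden exceptions.
fn read_port(path: &str) -> Result<u16, ConfigError> {
    let text = fs::read_to_string(path).map_err(ConfigError::Io)?;
    let port = text.trim().parse::<u16>().map_err(ConfigError::Parse)?;
    Ok(port)
}

fn main() {
    // The compiler forces the caller to acknowledge the failure case.
    match read_port("port.txt") {
        Ok(port) => println!("listening on {port}"),
        Err(e) => eprintln!("bad config: {e:?}"),
    }
}
```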
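
And for the third point, a small sketch of how shared mutable state must be spelled out in the types: `Arc` for shared ownership, `Mutex` for synchronized mutation. Leaving either out is a compile error instead of a data race at runtime.

```rust
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    // Shared, mutable state is explicit: Arc gives shared ownership,
    // Mutex gives synchronized mutation. A plain `&mut` shared across
    // threads would not compile.
    let counter = Arc::new(Mutex::new(0u32));

    let handles: Vec<_> = (0..4)
        .map(|_| {
            let counter = Arc::clone(&counter);
            thread::spawn(move || {
                *counter.lock().unwrap() += 1;
            })
        })
        .collect();

    for handle in handles {
        handle.join().unwrap();
    }
    println!("count = {}", *counter.lock().unwrap());
}
```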

Contra

  1. Rust has a steep learning curve: Just no! You may have seen code full of lifetimes and generics, but writing Rust from userland you're not supposed to write such code, and you probably won't need explicit lifetimes at all (see the sketch after this list). Coming from a garbage-collected (GC) language, you may argue that some things are no longer possible in Rust. That's true, but simply don't try to solve them the way you would in a GC language. The solution with your Rust framework is much easier, faster, and less error-prone, I promise.
  2. Rust builds take long: First of all, this is known to the community and people are working on it, but it's the current price to pay for other benefits: borrow checking, for example, is a coherent part of Rust, and of no other systems language. Second, incremental builds are typically fast enough (1-5s) for development cycles and can be optimized further for large-scale projects, e.g. by using unoptimized dynamic linking for debug builds and optimized static linking for production.
  3. Rust's ecosystem is still immature: I cannot deny this, but the overall code quality is quite encouraging: compiler guarantees exclude whole categories of CVEs, since overflow and memory corruption are not possible by default, and because the language is so expressive and strict, it's harder to introduce bugs. One of the main problems is that we still need more companies using Rust from userland, and that's probably where you can jump in, by evaluating and suggesting Rust as a new technology.
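
To back up the first point, a hedged sketch of what everyday "userland" Rust tends to look like: owned data and plain methods, with no explicit lifetimes or generics in sight (the `User` type is invented for illustration).

```rust
// Everyday application code: owned `String`s instead of borrowed
// references, so no lifetime annotations are needed anywhere.
#[derive(Debug, Clone)]
struct User {
    name: String,
    email: String,
}

impl User {
    fn new(name: &str, email: &str) -> Self {
        Self {
            name: name.to_string(),
            email: email.to_string(),
        }
    }

    fn display_name(&self) -> String {
        format!("{} <{}>", self.name, self.email)
    }
}

fn main() {
    let user = User::new("Ada", "ada@example.com");
    println!("{}", user.display_name());
}
```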

Conclusion

Once companies read other companies' success stories and Rust developers are no longer rare, onboarding others will become a piece of cake. Of course, the first showcases were in the embedded world and in C/C++ projects calling for an alternative, like the Linux kernel or Android. Now we should take stock and understand that the power of systems languages can still be utilized for almost everything; in the meantime, we have gotten smarter about handling memory-related issues, which were the show-stopper for systems languages in all areas where we had better alternatives.

What do you think about the future?
