I think it's too early to say much about this.
Maybe the serverless people are right: serverless tech helps people build better systems without extra DevOps folks, and these small companies will overtake the bigger ones, who have to pay for and manage their giant IT teams.
Maybe containers and VMs will win because serverless was a false promise and you need those giant, expensive teams to manage your systems after all.
I have a feeling that the things you are talking about, orchestration and scaling, will move into the cloud, and people who build their own stuff in-house will burn money and time, losing in the long run to the people who have more of both left because they didn't.
But that's just my opinion, and it could be wrong.
You know, I run a pair of servers for one company I also do web development for. We moved away from managed servers because it was just too expensive.
I might spend an hour a month on average tinkering with these servers. Both of them have to be PCI compliant, so there is a decent amount of work to do, and I have to keep up with security.
This has been my responsibility for over 7 years.
I imagine that when you write an app to work "serverless", you have to design it around a serverless architecture. It may be worse than that: you may have to target Amazon's or Microsoft's architecture specifically.
In either case, you are potentially backing yourself into a corner when you develop a new app.
Yes, someone has to be the test hamster for these new technologies... I just can't see a good reason to tell any of my clients that we should go this way.
I don't think the serverless fad will last long as currently incarnated on AWS. For many tasks it is just too slow to have a Lambda function that may need a whole environment spun up first (a cold start). If the function gets enough concurrent traffic, cold starts are guaranteed; if it gets too much, invocations may simply fail because you exceeded a provider-determined quota.
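To make the cold-start point concrete, here is a minimal sketch of the usual pattern: expensive setup (SDK clients, DB connections) is cached at module scope, so the first request in a fresh environment pays for it and later "warm" requests don't. The 200 ms init cost is an illustrative stand-in, not a measured Lambda number:

```python
import time

_client = None  # cached across warm invocations in the same environment


def _get_client():
    """Simulate expensive one-time init (e.g. SDK client, DB connection)."""
    global _client
    if _client is None:
        time.sleep(0.2)  # stand-in for cold-start init cost (assumed 200 ms)
        _client = object()
    return _client


def handler(event, context=None):
    """Per-invocation work is cheap; the cold start dominates the first call."""
    start = time.time()
    _get_client()
    return {"latency_ms": round((time.time() - start) * 1000)}
```

Call the handler twice in a row and the first call eats the full init cost while the second returns almost immediately; on real Lambda, every scale-out to a new concurrent environment pays that first-call price again.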
In comparison, I can and have created servers that could stand up to 1000 hits/sec per process, every one of them responding faster than a Lambda function, even a warm one.
On top of this, there is quite a bit of new tooling, and the deployment story ends up dictating coding practices and modularization within the software development environment, which is usually considered a bad idea. Many of the frameworks are really geared to one language and don't do well with others. For instance, the Serverless Framework is written in Node and doesn't do a great job with Python-based Lambda functions that have dependencies on other projects local to an organization. It has no concept of, or support for, a Python virtualenv.
And how the heck do I reason about a pure serverless backend where each component can add anywhere from 50 ms of overhead latency to many seconds on any random invocation that hits it? I don't know how to do this.
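One way to see why this is hard is a quick simulation. All the numbers below (2% cold-start probability, 50 ms warm overhead, 2 s cold overhead, five components per request) are illustrative assumptions, but they show how per-component cold starts compound into a fat latency tail:

```python
import random


def component_latency(p_cold=0.02, warm_ms=50, cold_ms=2000):
    """One serverless component: usually ~50 ms of overhead, but
    occasionally a multi-second cold start (assumed numbers)."""
    return cold_ms if random.random() < p_cold else warm_ms


def request_latency(n_components=5):
    """A request that passes through n components in sequence."""
    return sum(component_latency() for _ in range(n_components))


random.seed(42)
samples = sorted(request_latency() for _ in range(10_000))
p50 = samples[len(samples) // 2]
p99 = samples[int(len(samples) * 0.99)]
# With 5 components at 2% cold-start odds each, roughly 10% of requests
# hit at least one cold start, so the p99 blows up even though the
# median looks perfectly fine.
```

The median request sees only the warm overhead, while the 99th percentile is an order of magnitude worse, and nothing in your own code changed between the two cases. That gap is exactly what makes capacity planning and SLOs so hard to reason about here.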
Seems like you're talking about FaaS and not serverless in general.
I know Lambda is seen as the very incarnation of serverless, but it's actually just a small part of it. Many services have serverless properties (S3, API Gateway, DynamoDB, AppSync, Cognito, etc.), and those usually don't have the latency problems you mention.
What I am talking about is the serverless hype that says everything in the app stack should be serverless. That is clearly highly problematic. For a fuller read on this I recommend theregister.co.uk/2018/12/19/serve...
which points to a good study from UC Berkeley at www2.eecs.berkeley.edu/Pubs/TechRp....
I am not at all against these individual services. What I am against is yet another set of baseless, solve-everything, silver-bullet claims. I have seen organizations go so overboard on "no servers of our own" that they produced far more complicated and inflexible stacks, requiring an understanding of a dozen different technologies just for the plumbing, while not doing much in house. This on a project where most of the work would have been fairly trivial to do in a more conventional manner, at a lower cost in money, time, aggravation, and lost opportunity. They are not alone.
It's just a question of risk distribution.
If you think it's harder to learn these technologies and easier and more cost-efficient to manage your own infra, you could be right. I know many people who think that way.