Most of the points here sound like the result of poor communication and organization, and all of them could surface without Serverless as well. Teams that build things without agreeing on shared standards are bound to build services that don't interact very nicely with each other, regardless of whether they are running in a Lambda, a container or a VM.
Depending directly on other teams' functions?
Stop doing that, and expose each individual service as an API. Then it does not matter whether there are Lambdas, containers or even VMs behind the API. Care must be taken not to break the APIs, and that needs to be done regardless of what kind of compute you are using.
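To make that concrete, here is a minimal TypeScript sketch of what a consumer might look like when it only knows the other team's API contract; the service URL and payload shape are made up for illustration, not taken from any real setup:

```typescript
// Hypothetical client for another team's "orders" service.
// The consumer only knows the API contract (URL + JSON shape),
// not whether a Lambda, a Fargate task or an EC2 instance serves it.
interface Order {
  id: string;
  status: "PENDING" | "SHIPPED";
}

const ORDERS_API = "https://orders.internal.example.com"; // assumed endpoint

export async function getOrder(orderId: string): Promise<Order> {
  const res = await fetch(`${ORDERS_API}/orders/${orderId}`);
  if (!res.ok) {
    throw new Error(`orders API returned ${res.status}`);
  }
  return (await res.json()) as Order;
}
```

The owning team can then swap Lambdas for Fargate (or anything else) without the consumer noticing, as long as the API contract holds.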
10 different ways to authorize
This also boils down to communication between teams. If all services are exposed as APIs, it would be preferable to use a common auth mechanism for them. You could even centralize that function with one team and let them own an authorizer that other teams can use in their API Gateways. For services that do not run behind API Gateway, expose the authorizer functionality as an API that can be called from inside containers, in API middleware or similar.
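As a rough sketch of what that shared authorizer could look like, assuming a token-based custom authorizer and a placeholder validateToken helper (a real one would verify a JWT against your identity provider):

```typescript
import type {
  APIGatewayTokenAuthorizerEvent,
  APIGatewayAuthorizerResult,
} from "aws-lambda";

// Placeholder token check; a real implementation would verify a JWT
// against the central identity provider.
async function validateToken(token: string): Promise<string | null> {
  return token === "allow-me" ? "user-123" : null;
}

// One Lambda authorizer, owned by a central team, that other teams
// attach to their own API Gateways.
export const handler = async (
  event: APIGatewayTokenAuthorizerEvent
): Promise<APIGatewayAuthorizerResult> => {
  const principalId = await validateToken(event.authorizationToken ?? "");
  return {
    principalId: principalId ?? "anonymous",
    policyDocument: {
      Version: "2012-10-17",
      Statement: [
        {
          Action: "execute-api:Invoke",
          Effect: principalId ? "Allow" : "Deny",
          Resource: event.methodArn,
        },
      ],
    },
  };
};
```

The same validateToken logic can also be exposed behind a small HTTP endpoint for the services that don't sit behind API Gateway.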
Account chaos
This is more of an organizational issue. I wouldn't want to log in to a console where 20 teams have 4 EC2 instances each either. Use AWS Organizations and automate the creation of accounts, preferably one account per service (or some similar scope).
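For the automation part, a minimal sketch with the AWS SDK's Organizations client could look like this; the account naming and email convention are assumptions, not a prescription:

```typescript
import {
  OrganizationsClient,
  CreateAccountCommand,
} from "@aws-sdk/client-organizations";

// Must run with credentials from the Organizations management account.
const client = new OrganizationsClient({ region: "us-east-1" });

// Hypothetical per-service account creation.
export async function createServiceAccount(serviceName: string) {
  const result = await client.send(
    new CreateAccountCommand({
      AccountName: `svc-${serviceName}`,
      Email: `aws+${serviceName}@example.com`, // assumed mailbox convention
    })
  );
  // Account creation is asynchronous; poll DescribeCreateAccountStatus
  // before treating the new account as ready.
  return result.CreateAccountStatus?.Id;
}
```

In practice you would probably wrap this in your IaC or a landing-zone tool rather than call it by hand, but the point is that account creation is scriptable.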
DNS migration
Same goes for ALB, NLB and other services.
leading engineers to spend most of their time with YAML configuration
Kubernetes says hello.
Hey there, Elias, appreciate the feedback! There is no doubt that team communication is critical, serverless or otherwise. My larger point was that focusing on business objectives (the ones that move the business forward) rather than technical ones should be the priority of every productive team.
Additionally, I probably could have been clearer that it doesn't matter whether a developer is hitting an API or a Lambda directly. The testing difficulty of serverless means many developers and teams do their first testing in their dev and test environments. This means my functions/APIs may suddenly start failing in dev because another team is making a change to their API, which is now broken for whatever reason. Again, this is because it is very hard, if not impossible, to test anything locally.
As I mentioned, appreciate the feedback!
Sounds like your dev teams have never heard of blue-green release patterns.
It sounds like your organization REALLY needs to consult with cloud native practitioners. Your organization will continue to fail at the cloud until it does, and until it changes how it communicates and enforces best practices.
Hello there! Great answer! :)
I totally agree that testing distributed systems, where different teams are responsible for different services, is very hard. And I also agree that all the serverless offerings still need a bit of work before you can mimic them perfectly locally. However, I feel that this problem would still be there even if the compute layer were running on something else.
How we solved it at my previous place was that every team had a development environment, where they experimented and things were not expected to always work. Other services were mocked where needed. We then had staging and production deploys of every service, where all services were expected to be stable, with integration suites testing and verifying that the actual business use cases (often spanning multiple services) were working as expected.
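For a feel of what those integration suites looked like, here is a stripped-down sketch using Node's built-in test runner; the staging URLs and the order/fulfilment flow are invented examples of a cross-service business use case:

```typescript
import { test } from "node:test";
import assert from "node:assert/strict";

// Hypothetical staging endpoints for two services owned by different teams.
const ORDERS_URL = "https://orders.staging.example.com";
const FULFILMENT_URL = "https://fulfilment.staging.example.com";

test("placing an order makes it visible to fulfilment", async () => {
  const placed = await fetch(`${ORDERS_URL}/orders`, {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify({ sku: "ABC-1", quantity: 1 }),
  });
  assert.equal(placed.status, 201);
  const { id } = (await placed.json()) as { id: string };

  const shipment = await fetch(`${FULFILMENT_URL}/shipments?orderId=${id}`);
  assert.equal(shipment.status, 200);
});
```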
Since all contracts between teams were defined by APIs, it didn't really matter what was running underneath. Some teams used API Gateway + Lambdas extensively, others used ALB + Fargate, and some teams used NLB + EC2.