Fargate gives us the best of both worlds (serverless as well as containers) and doesn't restrict us the way Lambda might. Lambda works well when invoked asynchronously (event driven), say an object is uploaded to S3 and a Lambda is triggered as a result. API Gateway integrated with Lambda for the request/response model can be problematic for synchronous traffic like web APIs where milliseconds matter, especially for a runtime like .NET where cold starts are costly.
Lambda handles only one request per execution environment, so if the Lambda service receives 10 concurrent requests it will spin up 10 instances running our code (the ASP.NET entry point), and some of those instances might not be warmed up or ready. That's where Fargate shines for synchronous, request/response web traffic: it is still serverless, but all 10 requests can be served by the same long-running container, so there is no per-request overhead of launching an execution environment impacting performance.
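The difference in concurrency models can be sketched in a few lines of Python; the handler and pool size here are illustrative stand-ins, not AWS APIs:

```python
from concurrent.futures import ThreadPoolExecutor

def handle_request(req_id):
    # stand-in for our web framework's request handler
    return f"handled {req_id}"

# Fargate-style: one long-lived container process serves all 10
# concurrent requests on a shared thread pool, so there is no
# per-request environment launch.
with ThreadPoolExecutor(max_workers=10) as pool:
    results = list(pool.map(handle_request, range(10)))

print(results)
```

Under Lambda's model, each of those 10 concurrent requests could instead land on a fresh execution environment, paying the cold-start cost before our handler even runs.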
We do need to keep in mind that there is a learning curve for containers; ECS is perhaps less of a learning curve than EKS, but operationally EKS needs a team that knows Kubernetes well. Even when we run Kubernetes in EKS we are still talking about authentication with IAM, AWS's ELB and VPC, and perhaps integration with S3 and DynamoDB, and at that point we are already committed to AWS. Hybrid cloud makes sense when we run different applications on different cloud providers, not the same application kept portable enough that at any given moment we can pick it up and drop it into Azure, OpenShift on-prem, or GCP. That level of portability only works if we build the application to the lowest common denominator of all the cloud providers and take no advantage of the good things each provider offers. In other words, if we just use block storage and HTTP APIs and run our own databases on VMs we are fine, but then we are not taking advantage of the cloud-native features any provider would give us. That's typically not the case.
Another consideration is that Kubernetes releases on a roughly quarterly cadence and only supports n-2, i.e. the two minor versions behind the current one. Larger companies often don't adapt to change that quickly, and we need to keep this in mind as there will be breaking changes while Kubernetes gains more maturity.
Another consideration, especially for bigger workloads (and enterprises that are frugal with IPs, like FA), is IP availability and the networking security model.
Lambda can reuse an ENI for a given combination of security group + subnet, so if we are looking to adhere to the principle of least privilege, hundreds of Lambdas could end up sharing one security group, which makes that security group grow more permissive. The same applies to ECS in Fargate mode, which uses one IP address per task: a workload needing 200 IPs per AZ equates to 400 IPs per environment across two AZs.
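The IP math can be made explicit; the two-AZ count is an assumption inferred from the 200-to-400 figures above:

```python
# Back-of-the-envelope IP consumption for Fargate's one-IP-per-task model.
tasks_per_az = 200   # peak concurrent tasks per AZ (figure from the text)
az_count = 2         # assumption: two AZs per environment
ips_per_task = 1     # awsvpc networking on Fargate gives each task its own ENI/IP

ips_per_environment = tasks_per_az * az_count * ips_per_task
print(ips_per_environment)  # -> 400
```

Scaling to more AZs, or to tasks with multiple ENIs, multiplies this figure accordingly, which is why IP-frugal enterprises care.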
EKS, and ECS in EC2 mode, are more streamlined in this regard: depending on the networking model (say Calico, Kube-router, Romana, or Weave Net for EKS), the pods run with overlay IPs that get NAT'ed back to a smaller set of IPs on our network, and we rely on network policies that match pod labels to dictate inter-pod communication, as opposed to security group(s).
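As a sketch of that label-based model, a NetworkPolicy like the following allows only the web pods to reach the API pods on one port, with no security group involved; the labels `app: web` and `app: api` and port 8080 are hypothetical names for illustration:

```yaml
# Illustrative only: inter-pod access governed by pod labels,
# enforced by a network plugin such as Calico.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-web-to-api
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: api          # policy applies to pods labelled app=api
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: web  # only pods labelled app=web may connect
      ports:
        - protocol: TCP
          port: 8080
```

Because the selector is a pod label rather than an IP or security group, the policy keeps working no matter how many overlay IPs the pods churn through.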