
Why Serverless

Yos Riady on February 15, 2018

Web applications are traditionally deployed on web servers running on physical machines. As a software developer, you needed to be aware of t... [Read Full]
 

There are pros and cons.

One of the growing challenges with serverless computing is monitoring and debugging all those functions. Logging context is now scattered across even more components than in a microservice architecture. Many of the logs are very vendor-specific and are not like looking at standard NGINX or HaProxy logs.

But we are still very excited about growth in this space. Interesting that you wrote a book on this topic. Congrats.

 

Yes, absolutely. Monitoring and debugging remain a challenge in serverless because, for one, you can't SSH into a running Lambda. Fortunately, the serverless landscape is maturing: services such as AWS X-Ray, Iopipe, Dashbird, and a few others have emerged to help solve the visibility problem by letting you see inside your Lambda functions.

As serverless continues to grow and mature, I expect we'll see more solutions in this space.
Thank you for reading!

 

Great read!

I think that leveraging external services and APIs is a KEY THING here - it enables you to focus just on your business logic, and FaaS goes further by freeing you from managing the server your code runs on.

Regarding monitoring - at Epsagon (epsagon.com), we are focusing on automatic end-to-end monitoring of the ENTIRE architecture, rather than of a single Lambda - which we found is the main challenge in serverless today. Feel free to contact us and try out our beta.

 

One of the growing challenges with serverless computing is monitoring and debugging all those functions.

Yes, true, but things are improving. Check out serverless.com/blog/serverless-mon...

Logging context is now scattered across even more components than in a microservice architecture.

Check out AWS X-Ray. It's improving.

Many of the logs are very vendor-specific and are not like looking at standard NGINX or HaProxy logs.

You can redirect the logs to Splunk or an ELK stack from any of the leading FaaS providers.
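On AWS, for example, log forwarding is typically done with a CloudWatch Logs subscription that invokes a function with a base64-encoded, gzip-compressed JSON payload. A minimal sketch of the decoding step (the function name is arbitrary, and the `print` stands in for a real Splunk/Elasticsearch shipper):

```python
import base64
import gzip
import json

def handler(event, context):
    # Decode the standard CloudWatch Logs subscription payload:
    # event["awslogs"]["data"] is base64-encoded, gzip-compressed JSON.
    payload = base64.b64decode(event["awslogs"]["data"])
    records = json.loads(gzip.decompress(payload))
    messages = [e["message"] for e in records["logEvents"]]
    for msg in messages:
        # In a real forwarder you would ship msg to a Splunk HTTP
        # endpoint or an Elasticsearch bulk API here, not print it.
        print(msg)
    return messages
```

The same pattern works for any FaaS provider that exposes logs as an event stream; only the payload envelope differs.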

 

This is a good point, but I feel it's a solvable problem. Surely there must be a way to unify the logs of several microservices somewhere? I feel this needs to be provided by the platform where your code runs, like a periodic upload of logs to a central data warehouse.

 

lol. I'll do my own plug... Since most serverless functions and microservices are APIs, capturing all the data at the API level from these different sources and analyzing it together can solve this problem. Check out the company that I started, Moesif (moesif.com).

 

Great read, but if I can add one important point: when it comes to serverless, you are responsible for your own security, not the platform provider. So make sure to have it covered. This is an important read: puresec.io/blog/serverless-top-10-... It covers the top 10 security issues in serverless architectures.

 

Serverless seems great to me. But it can be very expensive.

 

It really depends on what the traffic pattern of your application looks like. For most web applications, traffic can be unpredictable. Instead of paying for idle compute time during low traffic periods (where you waste money on unused compute resources), you can simply pay-what-you-use with serverless: the infrastructure only exists when there is an incoming request.
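A back-of-the-envelope calculation makes this concrete. The prices below are illustrative assumptions (not current provider quotes), but the shape of the comparison holds:

```python
# Pay-per-use function vs. an always-on server, per month.
# Both prices are assumed for illustration, not real quotes.
REQUEST_PRICE = 0.20 / 1_000_000   # assumed $ per invocation
GB_SECOND_PRICE = 0.0000166667     # assumed $ per GB-second of compute

def lambda_monthly_cost(requests, avg_duration_s, memory_gb):
    """Monthly cost of a function billed per request and per GB-second."""
    compute = requests * avg_duration_s * memory_gb * GB_SECOND_PRICE
    return requests * REQUEST_PRICE + compute

# 1M requests/month at 200 ms and 512 MB comes out under $2,
# versus a small always-on VM at, say, $30/month sitting mostly idle.
print(round(lambda_monthly_cost(1_000_000, 0.2, 0.5), 2))
```

The crossover comes at sustained high traffic: once the function is effectively busy around the clock, a flat-rate server becomes the cheaper option.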

 

The best analogy is ZipCar vs. buying a car.

Owning or leasing a car is cheaper than ZipCar if you compare the hourly rates.

But if you only use a car occasionally, then owning one (or even leasing one) gets more expensive, since it sits around doing nothing a lot of the time.

 