If you are working with a startup, you know that it is a race against time and money.
I have been evaluating a lot of serverless technologies for our clients who are looking to build applications with serverless components. Now that there is a new project ahead of us, we are wondering whether it is a good fit for our requirements.
Since it's a simple app with huge potential to scale, should we go for microservices or continue with a monolith? The application will just perform user authentication, registration, push notifications, and standard CRUD operations; however, it might have a cron job running in the background that sends updates to the app.
Here are some of the questions we are looking for answers to:
1. In my experience, service-based architecture adds a lot of complexity and communication overhead. With a team of 100 engineers, we could divide the project into small pieces, each owned by a manageably sized team. Since the initial project team is small, I am not sure microservices will be a good fit. However, we are looking for good scalability options. What is your experience?
2. Serverless seems to be the new kid on the block, and its use cases still fit into a fairly narrow band. I am also not sure about serverless performance, or how it works in a development environment for rapid testing and prototyping. What about cold/warm start times and concurrency?
3. As Drew Firment (@drewfirment) noted in a tweet on 2 Jan 2018 (bit.ly/2CHepKC), "increasing the size of allocated memory not only increases performance (and costs), but also the consistency and predictability of your functions." But most importantly, we envision a faster time to market and look forward to improving our MVP by circling back and ironing out the imperfections and operational issues. How good is serverless for this?
Looking forward to the views of the DEV community. Let me know what you think about this case. Also, if there is anything else I should be considering, please mention it in the comments.
Latest comments (19)
If you're building an MVP as per #3, then you don't have the issues cited in #1. What I like about serverless for a startup is that you offload the part-time operational jobs that would otherwise distract your product devs from product work, and hand that work to AWS.
Also, if you have a monolith currently and don't have scalability problems, then you could keep it and build features to acquire users so you actually develop a scalability problem. If you're in a race against time and money and you have investors, your investors don't care about your awesome scalable engineering. They care that you have users.
I wouldn't necessarily say one approach is better than the other - it really depends on your team, needs, requirements, budget, etc. However, there are a few key differences which may help inform your decision:
You don't manage the infrastructure
Pros: no servers to patch or update; servers won't crash; most FaaS will essentially give you auto-scaling for free
Cons: you don't have much control over runtimes and languages which can be used; custom binaries or dependencies can be difficult to implement or require kludgy workarounds
Local development usually requires emulators, or using cloud services
Pros: using live cloud services gives you dev/prod parity; emulators exist as well
Cons: using live cloud services requires internet connectivity and can get messy to manage with a large team; emulators aren't great (IMHO)
You must design your system with release management in mind
Not necessarily any pros or cons here; it's just a very different approach when designing serverless/microservice architectures versus monoliths. Serverless frameworks can add an additional layer of abstraction and/or complexity for deployments.
FWIW, I'm a fan of Firebase, and based on the vague description of your project :-p it sounds like it might get you what you need (Auth, CRUD, Functions/Cron, Notifications)
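To make that concrete, here is a minimal sketch of the cron-plus-notifications piece on Firebase, assuming Cloud Functions (Node.js) and FCM; the "updates" topic name and the schedule are hypothetical, not something from the original post:

```javascript
// Minimal sketch: a scheduled Cloud Function that pushes an update
// notification to clients subscribed to a hypothetical "updates" topic.
const functions = require('firebase-functions');
const admin = require('firebase-admin');

admin.initializeApp();

exports.sendUpdates = functions.pubsub
  .schedule('every 60 minutes') // the "cron" part of the requirements
  .onRun(async () => {
    await admin.messaging().send({
      topic: 'updates',
      notification: {
        title: 'New updates available',
        body: 'Open the app to see what changed.',
      },
    });
    return null;
  });
```

Auth and CRUD would sit alongside this as Firebase Authentication plus Firestore (or HTTPS functions), so the whole feature list stays inside one platform.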
For just-comments.com/ I only use serverless tech from AWS, and I must admit that it's not perfect.
The first thing I noticed is that there is overhead for every operation because many systems are involved (CloudFront -> API Gateway -> (VPC) -> Lambda -> DynamoDB). Cold starts in particular make it almost unsuitable for a consumer-facing API, so I had to implement a CloudWatch event that warms up my important functions.
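For anyone curious what that warm-up can look like, here is a minimal sketch, assuming a CloudWatch Events/EventBridge rule that invokes the function on a schedule with a payload like { "warmup": true }; the field name is hypothetical, not taken from the comment above:

```javascript
// Minimal warm-up sketch: short-circuit scheduled pings so they keep the
// container hot without running any real business logic.
exports.handler = async (event) => {
  if (event && event.warmup) {
    return { statusCode: 200, body: 'warmed' };
  }

  // ...normal request handling (e.g. behind API Gateway) goes here...
  return { statusCode: 200, body: JSON.stringify({ ok: true }) };
};
```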
The second thing (at least with Node.js) is that it is quite hard to predict whether your code will work after deployment, so you will most likely keep a separate test environment in the cloud to be able to test properly. To summarize: there is a different set of problems you will be solving if you use FaaS. Sometimes it feels like something that was easy to accomplish with normal servers is now a hard task.
Nevertheless, I chose serverless because it matches the pricing model for my project, which is also based on the number of invocations the users perform. I also believe the availability, stability, and scalability of the system are much higher than if I did it myself; providing the same level on my own would require more development effort and constant monitoring.
I wouldn't build a CRUD app on serverless.
Reasons:
You have to learn to write code for a completely new environment.
Serverless needs a lot of maintenance compared to a plain CRUD app; versioning and constant updates are time-consuming.
AWS Lambda may be cheap, but API Gateway is not.
If you're thinking about Google Cloud Functions, they are still in beta.
Other options would be IBM OpenWhisk or Azure Functions.
These are my opinions about serverless apps.
You can still use it if you like, but I wouldn't do it for a simple CRUD app.
It is the same runtime on the FaaS provider. The environment has restrictions, but it is the same runtime, so there is nothing new to learn.
I respectfully disagree. It's a matter of perspective, I guess: maintaining a lot of functions that each do one and only one thing, versus one huge monolithic blob that does a lot. If you are worried about maintenance, please give the Serverless Framework (serverless.com/framework/) a try. It will not disappoint you.
Compared to maintaining a fleet of infrastructure? And the DevOps team that has to keep an eye on it to keep it up, patched, and running?
I think you could make serverless work for what you need, but it would depend on how comfortable everyone is with it. Microservices sound like overkill; you can always carve services out of the monolith as you go.
With a serverless architecture you should be able to tie into a queue/event system to handle the cron and other asynchronous events. Depending on the intended growth of the application, you might want to look into a provider's authentication mechanisms, such as Amazon Cognito or the identity provider that is part of Google's Firebase. Those can get expensive after a certain point, though.
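As a rough illustration of that queue/event idea, here is a minimal sketch assuming an SQS queue wired up as a Lambda event source, where each message body is a JSON update to push out to the app; the message shape is hypothetical, not from this comment:

```javascript
// Minimal sketch: a Lambda handler draining update messages from SQS.
exports.handler = async (event) => {
  for (const record of event.Records) {
    const update = JSON.parse(record.body);

    // Hand the update off to whatever push mechanism the app uses
    // (FCM, SNS, etc.); logging stands in for that call here.
    console.log('Sending update to clients:', update);
  }

  return { processed: event.Records.length };
};
```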
Recently I have started using the Serverless Framework to work with AWS Lambda and Google Cloud Functions. It makes deployments fairly easy and lets you get going quickly.
The simplest thing with serverless (FaaS, etc.) is the pay-per-use model: you pay per function call, and so do your users.
Other models (servers, containers) are better suited to monthly subscriptions, since you also pay for them when they aren't being used.
The best case, of course, is when they pay you per month, you pay per use, and they don't use it (often), haha.
I am trying to implement precisely this pay-per-use model in my current project, and I think it is the most compelling reason why FaaS is better suited to this model.
I think with any technology, the more pieces, the more complexity you have. In a way, serverless is a form of microservice.
Personally, I love the idea of FaaS/serverless. But it can add so many points of failure that you end up creating more work than necessary for an MVP.
In general, find the right tool for the job. Some parts will be functions on Webtask (a personal favorite of mine), Azure, or AWS; some will be an app in Ruby.
True, serverless technology can add complexity to the application because of its microservice-style architecture.
The granularity of each microservice plays a huge role. Numerous small functions are difficult to manage, and I can't write a function whose execution time exceeds 5 minutes (on AWS Lambda).
A little more on webtask.io: if you have JavaScript developers, you can very quickly try out some functions there for MVPs.
It is really easy to use, and the free tier is quite functional for experimentation.
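For a sense of how small these can be, here is a minimal sketch of a webtask function using the standard webtask.io programming model; the greeting logic is purely illustrative:

```javascript
// Minimal webtask.io sketch: the exported function becomes an HTTP endpoint
// and receives a context object plus a Node-style callback.
module.exports = function (context, cb) {
  const name = (context.query && context.query.name) || 'world';
  cb(null, { message: 'Hello, ' + name + '!' });
};
```

Deploying it is roughly a one-liner with the Webtask CLI (something like wt create hello.js), which is what makes it handy for quick experiments.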
Webtask looks really cool! (I wish I could have come to that conclusion a bit faster. Their landing page is vague)
Incredibly low-key. Fun to use and setup is fast.
If I were building this app, I'd go with a Ruby on Rails app to get all of this out of the box, and I'd host it on Heroku. Cache (in memory and at the CDN layer) for scaling concerns. Plus, maintenance will be simple because there are so many capable devs and so much documentation. (I like Rails, but there are similar equivalents in other languages.)
Serverless is really cool for a few reasons, but I'm struggling to see it being the right fit from what you described. Your comments about it being a decent idea for 100 engineers but not for 10 seem to agree.
Thanks for the suggestions Ben!!
For the MVP, I agree that a RoR-plus-Heroku architecture will work well.
Also, AWS has expanded Lambda's language support to Node.js, Python, Java, C#, and Go.
I agree with Ben.
I would probably do parts of an app as a set of serverless/Lambda functions, for example photo processing or video transcoding, but the app itself, in this case, would be overkill to build in a totally serverless mode.
If you decide to go the Rails/Heroku route, I suggest the following resources, which have been very helpful to me:
Thanks for the links, some of these posts are amazing! They came at the right time as I'm refactoring a huge Rails app with high response times.
Yeah. We have a few Lambda functions sprinkled in which we can mock in test and/or dev. We also use some SaaS services with about the same pattern. It definitely pays to have good habits around wrapping services and then working with them like any part of your app.
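For what it's worth, the wrapping pattern can be as small as a single module; here is a minimal sketch assuming Node.js with the aws-sdk v2 Lambda client, where the function name "image-resizer" is hypothetical:

```javascript
// Minimal sketch: the only place in the codebase that knows this call is a
// Lambda invocation. Tests can swap this module for a stub with canned data.
const AWS = require('aws-sdk');

const lambda = new AWS.Lambda();

async function resizeImage(payload) {
  const result = await lambda
    .invoke({
      FunctionName: 'image-resizer',
      Payload: JSON.stringify(payload),
    })
    .promise();
  return JSON.parse(result.Payload);
}

module.exports = { resizeImage };
```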
You are immediately taking on monitoring and possibly deployment complexity with these services, but if done right I think it's a helpful approach.