
Arpit Mohan

Originally published at insnippets.com

The good & bad of serverless and how to improve Mongo performance

TL;DR notes from articles I read today.

The good and bad of serverless

The good

  • It’s truly scalable & saves you from the pains of managing servers manually.
  • Cost-wise, serverless applications are a notch above Virtual Private Servers: you only pay for what you actually use.
  • Developers on your team don’t have to deal with the technicalities of setting up scaling policies or configuring load balancers, VPCs, server provisioning, etc.

The bad

  • Cold starts when a function has been idle. To mitigate them, ping your functions periodically so they stay warm (see the keep-warm sketch after this list), or route all API calls through a single function so that cold starts only happen once.
  • The need for applications to be truly stateless. You must design your application to be ready to serve a request from a cold, dead state.
  • Not ideal for long-running jobs. Re-examine whether the time limit hinders your ability to process all the data or try using Lambda recursively.
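
A minimal sketch of the keep-warm idea, assuming AWS Lambda on the Node.js runtime and a scheduled rule (e.g. EventBridge) that pings the function with a custom payload; the payload shape and field names here are illustrative, not from the original post:

```typescript
// Keep-warm check inside an AWS Lambda handler (Node.js runtime).
// A scheduled rule invokes the function every few minutes with a payload
// like { "source": "keep-warm" } -- that payload shape is our own convention.
export const handler = async (event: { source?: string; path?: string }) => {
  if (event.source === "keep-warm") {
    // This invocation exists only to keep the execution environment warm.
    return { statusCode: 200, body: "warm" };
  }

  // Normal request handling goes here.
  return {
    statusCode: 200,
    body: JSON.stringify({ message: `handled ${event.path ?? "request"}` }),
  };
};
```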

Full post here, 9 mins read


Improving Mongo performance by managing indexes

  • You can query large collections efficiently by defining an index and ensuring it is built in the background.
  • To define an efficient index, you can also build on top of a previously defined index by creating a compound index. When you do, work out which property in your query has the highest cardinality (is the most unique) and place it first, so it narrows the search space as early as possible.
  • To ensure your database uses your indexes efficiently, make sure they fit in the RAM available on your database server as part of Mongo’s working set. Check this by comparing db.stats().indexSize against the RAM allocated to the server.
  • To keep index sizes small, examine how the indexes on a given collection are actually used and remove the unused ones, check compound indexes for redundancy, make indexes sparser by adding a partialFilterExpression constraint that limits which documents get indexed, and minimize the number of fields in compound indexes (see the driver sketch after this list).
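
A minimal sketch of the points above using the official MongoDB Node.js driver; the database, collection, and field names (shop, orders, userId, status, createdAt) are illustrative, not from the post:

```typescript
import { MongoClient } from "mongodb";

async function main() {
  const client = await MongoClient.connect("mongodb://localhost:27017");
  const db = client.db("shop");
  const orders = db.collection("orders");

  // Compound index: put the highest-cardinality (most selective) field first
  // so it narrows the search space as early as possible.
  await orders.createIndex({ userId: 1, createdAt: -1 });

  // Partial index: only documents matching the filter are indexed,
  // which keeps the index small.
  await orders.createIndex(
    { status: 1 },
    { partialFilterExpression: { status: "active" } }
  );

  // Compare total index size against the RAM available to the server
  // (the working set should fit in memory).
  const stats = await db.command({ dbStats: 1 });
  console.log("indexSize (bytes):", stats.indexSize);

  await client.close();
}

main().catch(console.error);
```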

Full post here, 9 mins read


Why API responses should be signed

  • As the recipient of any data, you want to establish authenticity: to know who originally published it and to be sure it has not been tampered with. This can be achieved by adding signatures that let recipients validate messages.
  • One option is to keep the signature and the message separate, served by different API calls. This keeps things simple for the server, and the client only makes the second call when it actually wants the signature, but storing signatures separately can get complicated.
  • The second option is to include the signature with the message, which requires encoding the message first. The response is then no longer human-readable and must be decoded before it can be interpreted.
  • A third option is to sign only the critical parts of the response rather than all the metadata. This is the easiest to implement and simple to parse for both humans and computers, but sometimes the metadata itself is important information that needs verifying.
  • In all the above options, the API provider must securely manage cryptographic keys, which is expensive and complicated, and the API can be compromised if a hacker gets hold of the keys.
  • To solve the problem more effectively, you could check out JOSE. It is a suite of specifications that includes JSON Web Tokens (JWT), which are already used widely across the internet, mostly to sign OAuth logins (a small signing sketch follows below).
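
As a rough illustration of that last point, here is a minimal sketch of signing and verifying an API response as a JWT with the jose npm package; the key handling, claims, and issuer URL are assumptions made for the example, not details from the post:

```typescript
import { SignJWT, jwtVerify, generateKeyPair } from "jose";

async function main() {
  // In practice the private key stays on the API server and only the
  // public key is published to clients.
  const { publicKey, privateKey } = await generateKeyPair("ES256");

  // Producer side: sign the response payload.
  const signedResponse = await new SignJWT({ orderId: 123, total: 42.5 })
    .setProtectedHeader({ alg: "ES256" })
    .setIssuedAt()
    .setIssuer("https://api.example.com")
    .sign(privateKey);

  // Consumer side: verify the signature and recover the payload.
  const { payload } = await jwtVerify(signedResponse, publicKey, {
    issuer: "https://api.example.com",
  });
  console.log(payload); // { orderId: 123, total: 42.5, iat: ..., iss: ... }
}

main().catch(console.error);
```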


Full post here, 5 mins read


Get these notes directly in your inbox every weekday by signing up for my newsletter, in.snippets().
