talham7391

Anyone having authorization problems?

The past 3 companies I've worked for have all had issues with their authorization solution. Too expensive to work with, too slow, etc...

I now have to fix authorization yet again at my current company.

Is this a thing at most medium-to-large companies? I'm curious to learn about other people's experiences.

Top comments (3)

Randall • Edited

Yes. This is my first web-heavy job, but I have already seen and been involved in a number of serious issues involving auth:

With firebase auth, we had serious issues with malicious actors:

  1. While firebase auth does rate limit login attempts on each individual account, it does not rate limit you if you try to log into many different accounts. You can try to log into a list of 100,000 accounts from one IP in a short period of time and they won't stop you. We ended up with massive credential stuffing attacks going on and there wasn't a lot we could do about it on firebase auth.
  2. Related to the above: firebase auth DOES have a global rate limit for your application. If I remember right, it's 180,000 login attempts per minute. That's a lot, but the credential stuffing attacks got so bad that we were hitting it, and that made our auth system unavailable for legitimate users. Essentially, firebase auth became an avenue for DDoS attacks against us.
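
To make the gap concrete, here's a minimal sketch (not how firebase works internally, just an illustration) of the per-IP throttle you end up wanting in front of a login endpoint. The window, limit, and in-memory Map are placeholders; production would want a shared store such as Redis behind this:

```typescript
// Illustrative per-IP login throttle, counted across ALL accounts.
// The numbers and the in-memory Map are placeholders, not real limits.

const WINDOW_MS = 10 * 60 * 1000;  // 10-minute sliding window
const MAX_ATTEMPTS_PER_IP = 50;    // across all accounts, not per account

const attemptsByIp = new Map<string, number[]>();

function allowLoginAttempt(ip: string, now = Date.now()): boolean {
  // Keep only attempts from this IP that are still inside the window.
  const recent = (attemptsByIp.get(ip) ?? []).filter(t => now - t < WINDOW_MS);
  if (recent.length >= MAX_ATTEMPTS_PER_IP) {
    attemptsByIp.set(ip, recent);
    return false; // block: too many attempts from this IP, whatever the account
  }
  recent.push(now);
  attemptsByIp.set(ip, recent);
  return true; // allow: pass the credentials through to the auth provider
}

// Before calling the provider:
//   if (!allowLoginAttempt(clientIp)) return respond429();
```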

We ended up switching to Auth0, which did at least stop our auth systems from becoming unavailable, but:

  1. They are extremely expensive
  2. Their reCAPTCHA implementation is broken and easy to bypass
  3. Their proposed solution for stopping credential stuffing attacks was "make all of your users enable 2FA". Not necessarily the worst solution, but I wish they'd just fix their reCAPTCHA implementation. Ultimately we ended up implementing a new-IP verification flow ourselves.
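
Roughly, and with every name here being a hypothetical stand-in for our real storage and mailer code, the flow looked like this:

```typescript
// Hypothetical sketch of a new-IP verification flow; the in-memory Maps stand
// in for real storage, and the email send is stubbed out. Assume the password
// has already been verified by the time this runs.

import { randomUUID } from "node:crypto";

const knownIpsByUser = new Map<string, Set<string>>(); // userId -> IPs seen before
const pendingCodes = new Map<string, string>();        // "userId:ip" -> one-time code

function completeLogin(userId: string, ip: string): "ok" | "verify_email" {
  const known = knownIpsByUser.get(userId) ?? new Set<string>();
  if (known.has(ip)) return "ok"; // familiar IP: issue the session as usual

  const code = randomUUID();      // one-time code to email to the account owner
  pendingCodes.set(`${userId}:${ip}`, code);
  // sendVerificationEmail(userId, code);  <- stubbed out in this sketch
  return "verify_email";          // hold the login until the code comes back
}

function confirmNewIp(userId: string, ip: string, code: string): boolean {
  if (pendingCodes.get(`${userId}:${ip}`) !== code) return false;
  pendingCodes.delete(`${userId}:${ip}`);
  const known = knownIpsByUser.get(userId) ?? new Set<string>();
  known.add(ip);
  knownIpsByUser.set(userId, known); // remember this IP for future logins
  return true;                       // safe to issue the session now
}
```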

Jeremy Friesen

One of my former coworkers, whom I deeply admire, has been programming since the early 80s. And a common sentiment he's seen in application development is: "We'll add proper authorization later."

And I suspect this is because exploring an emerging domain (e.g. the application you're building) is already complicated; now overlay on top of that a mindset/model that also describes authorization. The exercise of authorization is, by its nature, complicated.

Who uses the application? What all can you do? What are the logical groupings of the actions folks can take? What are the logical groupings of the objects under action? What are the logical groupings of folks? And what about composing larger groups from sub-groups?

I took my coworker's advice when I started building a new application, and focused on the interfaces.

Fundamentally we have the following questions:

  • Can the given user take the given action on the given record?
  • Who all can take the given action on the given record?
  • Given the user and the action, what all records can they act upon?
  • For the given user and record, what all actions can they take?
  • For the given record what all actions are available?

And I ended up writing a module interface that would handle those questions. (There might be a few more permutations on the user, action, record querying.)
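
Sketched in TypeScript rather than the original Rails/Ruby, and with illustrative names throughout, the interface had roughly this shape:

```typescript
// One method per question above; none of these names are from the actual app.

interface User { id: string }
type Action = string;                            // e.g. "read", "edit", "destroy"
interface Resource { id: string; type: string }  // the "record" being acted on

interface AuthorizationLayer {
  // Can the given user take the given action on the given record?
  permitted(user: User, action: Action, record: Resource): boolean;
  // Who all can take the given action on the given record?
  usersFor(action: Action, record: Resource): User[];
  // Given the user and the action, what all records can they act upon?
  recordsFor(user: User, action: Action): Resource[];
  // For the given user and record, what all actions can they take?
  actionsFor(user: User, record: Resource): Action[];
  // For the given record, what all actions are available?
  availableActions(record: Resource): Action[];
}
```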

The end result was that my Rails controllers became easy to test; I had a clear seam for stubbing authorization (via dependency injection), and a consistent place to bombard the implementation details of the "policy" layer with tests.
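
To give a feel for that seam: with the interface above injected into a controller, a test can swap in a permissive stub and never touch real policy data (again, illustrative):

```typescript
// A stub satisfying the AuthorizationLayer interface sketched above;
// inject this in tests so controllers never hit the real policy layer.
const allowEverything: AuthorizationLayer = {
  permitted: () => true,
  usersFor: () => [],
  recordsFor: () => [],
  actionsFor: () => [],
  availableActions: () => [],
};
```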

Ultimately I moved all of this into the database and remain quite happy with the extensibility of that application's permission system.

Nik F P • Edited

Just a thought from a part-time / hobbyist dev, but I have been looking into using Cloudflare's KV store as a distributed session cache. Take it with a grain of salt. My thinking is this:

  • Cloudflare has a global presence, so latency values would be consistent worldwide
  • Requests would proxy through Cloudflare and a worker would inspect the request.
  • If it's a sign in or sign up request, do the auth work right in the worker, then forward the request with a signed auth header to the actual endpoint, as well as executing the write operation to KV.
  • All other requests hit the worker, which validates the session token from a cookie, using it as the key into KV; the value can be anything you want, including user-specific info, usage statistics, whatever. Again, add a signed auth header and forward to the endpoint (see the sketch after this list).
  • The worker interacts with the KV store. A client's first request is likely to route the same way as its subsequent requests, and KV writes to the executing data center and then propagates globally with a ~60-second eventual consistency pattern. So follow-up requests hitting the cache should be quick: they will most likely land in the same data center, where the session info is already present. BUT the session is still accessible globally within those 60 seconds, in the event that traffic routes differently or the client is on a mobile connection jumping across access points.
  • My logic is that I want users to have a good experience from everywhere, and this pattern would allow a consistent experience globally without having to provision caches in multiple data centers. KV has a global read latency of around 10-20 ms, and then any latency to forward to your servers and come back again. I'm less concerned about write latency since this is usually sign up and sign in operations and I'm OK with those taking a bit longer than in-session fetch requests.
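
Here is a rough sketch of that worker. It assumes a KV namespace bound as SESSIONS via wrangler.toml and the @cloudflare/workers-types definitions; the cookie name and forwarded header are illustrative, and the actual signing (e.g. an HMAC via crypto.subtle) is left as a stub:

```typescript
// Sketch of a Cloudflare Worker that gates requests on a KV-backed session.

export interface Env {
  SESSIONS: KVNamespace; // KV binding configured in wrangler.toml
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    // Pull the session token out of the Cookie header.
    const cookies = request.headers.get("Cookie") ?? "";
    const token = cookies.match(/session=([^;]+)/)?.[1];

    // Fast KV read; repeat requests should hit the same data center's cache.
    const session = token ? await env.SESSIONS.get(token) : null;
    if (!session) return new Response("Unauthorized", { status: 401 });

    // Attach a (to-be-signed) auth header and forward to the real endpoint.
    const headers = new Headers(request.headers);
    headers.set("X-Auth-Session", session); // illustrative header name
    return fetch(new Request(request, { headers }));
  },
};
```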

There are other global cache services available as well; Upstash comes to mind. Not sure how your traffic patterns and volume would interplay with this, but it's a (possibly naive) option that might fall within your budget and SLOs.