DEV Community

Discussion on: Cryptographically protecting your SPA

Phil Ashby • Edited

Good write-up of a real world security issue - thank you!

I think it's worth saying that Burp Suite cannot silently intercept TLS-secured web traffic (i.e. anything using HTTPS): a default browser will raise a security alert unless the user has installed a special certificate. This means that in the real world, users on default browsers are very unlikely to see any problems with your original app.

As the attacker was able to learn about your API (which they will always have the ability to do using their own tools), they could probe it to find the actual weaknesses. This is something your own in-house security testing can do in CI, of course - testing both the 'happy path' and all permutations and boundary conditions for parameters (these can be generated by tooling, as the pen-tester does - no need to work them all out manually!). Plus, if you haven't fuzzed your public APIs, you should ;-)
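To make the "generated by tooling" point concrete, here is a minimal sketch of boundary-condition generation for a single parameter. The `validate` function and the `userId` rules are hypothetical stand-ins for whatever request validation a real endpoint performs; real fuzzers go much further, but the principle is the same:

```javascript
// Hypothetical endpoint rule: 'userId' must be an integer in [1, 10000].
function validate(params) {
  return Number.isInteger(params.userId) && params.userId >= 1 && params.userId <= 10000;
}

// Tooling-style case generation: boundaries and type confusions for one field,
// none of which should ever be accepted.
const invalidInputs = [0, -1, 10001, 1.5, NaN, null, undefined, 'abc'];

const wronglyAccepted = invalidInputs.filter((v) => validate({ userId: v }));

console.log(wronglyAccepted); // expect [] - every invalid input rejected
```

In CI, each wrongly accepted value becomes a failing test; the happy-path cases (`1`, `10000`) are asserted separately.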

I'm interested to know why you thought it so important to prevent the display of 'admin' controls in the UI through response tampering? The resources and logic for them are already present on the user's system, and thus discoverable by interested / malicious parties even if they cannot be activated. The server side will no longer honour invalid requests if they are issued, and unless the user has modified their browser (as above), they will not be subject to any MITM tampering that could display the controls. It seems you may have spent a lot of effort extrapolating new risk from a pen test report that didn't mention UI issues?

Kirill Birger

You pretty much took the words out of my mouth. It seems like all that was really necessary was to fix the APIs that were improperly secured.

Matheus Adorni Dardenne

The problem is that "fix the APIs that were improperly secured" doesn't mean much. Sure, we fixed that endpoint and a couple of others after that, but we can't operate in damage-control mode. We don't know which vulnerabilities we haven't found yet, which is why we called in the ethical hackers in the first place.

They're the experts, and they pointed out that this was a common attack vector and a critical issue that needed to be fixed; I am just the developer who was tasked with fixing it. They said that being able to easily explore and modify the UI leads to security breaches in minutes, because it is very easy to overlook use-cases that "should" never happen.

Automated fuzzing does seem like a good thing to implement and continuously improve upon, but this issue was critical. Now that it is solved, we can implement fuzzing without fear of an attacker breaking our application in minutes.

Matheus Adorni Dardenne

Thank you for taking the time to read it and for leaving a very informative response.

I'm not sure how the hacker set up everything on his side, but he did mention configuring the certificate on his tool.

I'll bring up "fuzzing" to the rest of the team on our next sprint planning. Thanks!

When the team debated the report, we came to the conclusion that the exposure of the UI controls could turn the whole application into a playground for a malicious agent to quickly and easily find ways to wreak havoc. It gave visual and interactive cues about how the application works, without having to look at a single line of code.

This is why the attacker managed to break things in a matter of minutes. After that implementation, he fiddled with the system for a few days and came up with nothing new.

But I think the major reason is that we didn't want to worry about what could go wrong if the user could change what the API is saying to the application. As you said, we extrapolated potential risks out of fear of the unknown.

Phil Ashby

Ah ok - raising the bar above the trivial to discover threshold 😁

Kirill Birger

Still, unless your application is doing something that's on the level of national security, it seems like a cost benefit analysis should show that obfuscating the UI in order to mitigate discovery is just not worth it.

In my opinion, the time would be better spent on even more thorough investigation of the backend to make sure that it does not matter what an attacker could do on your front end.

Matheus Adorni Dardenne

The application is used to calculate a yearly bonus paid to company employees based on their performance, so there is motivation for a potential attacker to mess around trying to gain a personal advantage.

Also, the information available for admins in the system is very sensitive. We can't risk users figuring out ways of seeing things they shouldn't.

We analyzed the impact this had on performance and concluded it had none, if that is what you mean by cost-benefit.

About "thorough investigation of the backend", yes, but this is "CI&CD" stuff, constant iteration and improvement, we don't know yet what we don't know, and we can't risk it.

For example, one of the points in the report that I didn't mention in the article is that the attacker managed to mess around with our filter feature and figured out a way to override the backend's standard filters that limit visibility of the data by access level. He used a fake admin access in the browser and managed to see some restricted data because of his ability to change the request in ways we never designed the application to handle.
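The usual fix for that class of bug is to re-derive the visibility constraint server-side from the authenticated session, so client-supplied filters can only ever narrow the already-authorized set. A minimal sketch (the `accessLevel` field and shapes are assumptions for illustration, not our actual schema):

```javascript
// Whatever filters the client sends, the access-level restriction is
// re-applied server-side from the session, never taken from the request.
function applyFilters(rows, clientFilters, session) {
  // Mandatory constraint, sourced from the authenticated session
  const visible = rows.filter((r) => r.accessLevel <= session.accessLevel);
  // Client filters only narrow the already-authorized set
  return visible.filter((r) =>
    Object.entries(clientFilters).every(([key, value]) => r[key] === value)
  );
}

const rows = [
  { id: 1, dept: 'sales', accessLevel: 1 },
  { id: 2, dept: 'sales', accessLevel: 3 }, // admin-only row
];
const session = { userId: 42, accessLevel: 1 };

// Even a tampered request asking for admin-level rows gets nothing extra:
console.log(applyFilters(rows, { dept: 'sales' }, session)); // only id: 1
```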

It's always "obvious" after a hacker explains how he broke in, but you can't be sure that a creative and motivated attacker won't find these bugs and break your app faster than you can find and patch them. This uncertainty made us conclude that we should play it safe and block this attack vector first and fast, and then investigate the API. It's not either-or.

Kirill Birger

No, I meant a cost benefit analysis of the amount of time it would take to address this issue on the front end compared to just hardening your backend.

I am also referring to the maintenance cost of supporting the added complexity on the front end.

My philosophy on this is that a motivated attacker will always find a way to extract info from your front end, so it's a lost cause.

I also echo the other comments about how the attack vector mentioned here is probably not a realistic one to exploit on a VICTIM'S machine.

Matheus Adorni Dardenne

Well... it took 2 days to address this on the front-end, mostly because I had never done it before. I could probably implement it in 15 minutes now, with the repository I created to "store" this knowledge. Recently I found the "jose" library, which would've saved me even more time.

Securing an API is not a one-off "task"; it is a constant, never-ending process. "Hardening" the backend takes years, and it is not enough on its own, since all it takes is one gap in the armor.

About the maintenance cost of the added complexity: we have a single function that handles all HTTP requests, and we added the verification step to that function. It doesn't impact anything else, really. The whole application is working as expected, as if nothing changed. This is not a breaking change and caused no shockwaves.

And I understand your philosophy; however, it wouldn't work in our case. The application deals with money and very sensitive information. That's plenty of motivation for even a regular company employee to become a potential attacker. We can't afford to make it easy. The attacker will have to be VERY motivated, because even specialists failed to break in after this was implemented.

This doesn't mean they can't find another way, but as they said, it is "sufficiently secured for now", and this calmed down the people with the money.

Yes, because there is no "user victim". The "victim" in this case would be the company - an employee trying to escalate his access to affect his bonus, for example.