DEV Community

Discussion on: Cryptographically protecting your SPA

 
Mikko Rantalainen

"You are never going to be able to trust the client and nothing you could add will change this."

I totally agree. The point is that you don't trust the client; instead, you check whether the command the client sent is allowed under the credentials of the session that submitted it.

If the attacker has taken control of the client system after the session has been initialized, there's nothing you can do about that. Adding public key encryption on top will not help.

However, a client system controlled by the user who has logged in with correct credentials is not a problem as long as you don't trust any logic executed on the client. And if you don't trust any logic on the client, you don't need to sign anything by the client.

The communication between the client and the server is protected by TLS, which gives secrecy and authenticity guarantees to the client (assuming no client certificates, as is typical). So the server provides the service, clients connect over a TLS connection and pass data identifying the session and the command. The trusted environment (the server) then verifies that the data is valid for the session (e.g. the session has not expired), and then verifies that the session is allowed to execute the requested command. None of this requires trusting any logic on the client.
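A minimal sketch of that server-side check. The session store, command names, and capability names here are all hypothetical, just to show the shape of the idea:

```typescript
// Sketch only: server-side session + command authorization.
// The session store and capability names are illustrative, not a real API.
type Session = { userId: string; expiresAt: number; capabilities: Set<string> };

const sessions = new Map<string, Session>(); // in-memory store for the sketch

function authorize(sessionId: string, command: string): boolean {
  const session = sessions.get(sessionId);
  if (!session) return false;                       // unknown session
  if (Date.now() > session.expiresAt) return false; // session has expired
  return session.capabilities.has(command);         // capability check
}

// Example: a session that may read reports but not grant bonuses.
sessions.set("abc123", {
  userId: "u42",
  expiresAt: Date.now() + 60_000,
  capabilities: new Set(["read_report"]),
});
```

Whatever the client-side code claims, the decision is made entirely from server-side state.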

Mikko Rantalainen

"I think that if our employees are having to develop their own browsers to break into our application, they deserve the extra bucks they'll hack into their bonuses lol"

You shouldn't design or implement "security" that depends on the lack of skills of your user base. Have you heard about GPT-4? That's only the start. And does the business really prefer to hire only unskilled people so that the "security" keeps working?

If you want to prevent employees from giving themselves extra bonuses, the only correct way to avoid the vulnerability is to compute the actual action ("give bonus X to person Y") only in a trusted environment, namely on the server. Then the only questions are who owns the current session and whether that session has the capabilities required to grant the bonus. No amount of client modification can bypass that check.

If you do something else, you have to be brutally honest and say that there's no security, only security by obscurity – like a key under the doormat: absolutely safe as long as nobody notices or guesses it. Make sure to communicate this to the decision makers, too. Sometimes that may be enough, but it shouldn't be confused with real security.

Public key encryption is designed for the use case where you want to send messages over an untrusted medium without handling connection-specific encryption keys. It cannot fix the problem of the message sender (the client logic) being untrusted. Signing or encrypting a message after it has been generated in an untrusted environment will not make the message trustworthy.
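A small illustration of that last point, using Node's built-in crypto module (the key pair and message are made up): a message crafted by malicious logic verifies perfectly once signed, so a valid signature says nothing about the legitimacy of the content.

```typescript
// Sketch: a signature proves who held the signing key, not that the
// message content is legitimate. Keys and message are illustrative.
import { generateKeyPairSync, sign, verify } from "node:crypto";

const { publicKey, privateKey } = generateKeyPairSync("ed25519");

// Suppose untrusted client logic produced this message...
const malicious = Buffer.from('{"action":"give_bonus","amount":999999}');

// ...and it gets signed after generation. The signature is still valid.
const sig = sign(null, malicious, privateKey);
const ok = verify(null, malicious, publicKey, sig); // verifies fine
```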

Matheus Adorni Dardenne

I am kind of confused by your reply.

The messages are not generated in an untrusted environment. They are generated and signed on our trusted server. The client side can't sign messages. I think you missed something in the article.

Also, this is not an either-or. Continuous improvement of back-end security is not something you ever stop doing. Neither will we. The first action we took was fixing the API and doing a sweep of the other endpoints.

However, as the professional pentesters pointed out, this IS a problem, a critical problem, and, as I can see from some of the replies to this article, a very ignored problem. People are way overconfident in their ability to perfectly secure their back end, just as I was "pretty sure" we had secured ours.

The majority of potential attackers will try to break something for a few hours or days, fail, and give up. This is protection (as opposed to security, I guess).

Imagine not putting a padlock on your locker because you know all locks can be picked by a sufficiently skilled lockpicker given enough time. What the padlock does is both raise the bar (a majority of people won't even try, and a majority of those who try will fail) and buy you time (if the lock takes 5 minutes to pick, you have 5 extra minutes to react to the thief). Time we are now using to implement measures such as fuzzing (recommended to me in another response to this article) that will improve the strength of the back end.

Mikko Rantalainen

Yeah, it seems I've misunderstood something if you create the signatures on the server. However, if the server creates the signature using its private key and the client verifies the data using the public key, how does this improve anything over simply sending the data over a TLS connection?

As I understood the article, it seemed like the client was signing the data with the public key and the server was verifying the result with its private key. That would be an unsafe protocol.

Matheus Adorni Dardenne

The hacker used a specialized tool to bypass the TLS connection (for himself only) and manipulate the responses from the server.

What we do is verify the server's signature (made with the private key) on the client (checked with the public key), and reject the data if it doesn't match.
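If I can sketch the described scheme with Node's crypto module (keys and payloads here are illustrative, not the real implementation):

```typescript
// Sketch of the described scheme: the server signs API responses with its
// private key; the client verifies with the public key before trusting them.
// Key pair and response bodies are made up for illustration.
import { generateKeyPairSync, sign, verify } from "node:crypto";

const { publicKey, privateKey } = generateKeyPairSync("ed25519");

// Server side: sign the response body.
const response = Buffer.from('{"isAdmin":false}');
const signature = sign(null, response, privateKey);

// Client side: verify before trusting the data.
const genuine = verify(null, response, publicKey, signature);

// A manipulated response no longer matches the signature.
const tampered = Buffer.from('{"isAdmin":true}');
const forged = verify(null, tampered, publicKey, signature);
```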

As others pointed out, this doesn't make it impossible to manipulate the data (some suggested things that... aren't possible, which made me take what they say with a grain of salt), but the pentesters concluded it is sufficiently secure for now. "For now" being the key phrase: they'll come back later this year, and I'll try to provide a follow-up on what went down.

Mikko Rantalainen

Why do you bother verifying the server-signed data on the client if the data comes through a TLS connection? An attacker who can modify the TLS connection can also change the computed result of that verification.

Do you have some reason to believe that the client software would be intact while the attacker can still MITM the TLS connection? I'm asking because, the way you describe the signature, this seems like the only attack your method would prevent. Every situation I can think of where the TLS connection is unsafe also allows modifying the client logic.

Matheus Adorni Dardenne

If he tries to change the response from the API, the verification will fail; he can't forge a signature for the modified data because he only has the public key.

There are other mechanisms in place, such as SRI and CSP to name two, that help mitigate the attacker's ability to modify the source files (they were there for different reasons, but they helped during the second round of attacks, where the hackers failed to break in after two days).
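For context, an SRI integrity value is just a base64 hash of the file contents, embedded in the script tag so a modified file fails the browser's check. A rough sketch of how the value is derived (the script contents here are made up):

```typescript
// Sketch: deriving a Subresource Integrity (SRI) value for a script.
// The script contents are illustrative.
import { createHash } from "node:crypto";

const scriptContents = 'console.log("app v1");';
const digest = createHash("sha384").update(scriptContents).digest("base64");
const integrity = `sha384-${digest}`;
// Used in HTML roughly as:
// <script src="/app.js" integrity="sha384-..." crossorigin="anonymous"></script>
```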

Mitigate being the keyword here; we are aware that they can puzzle their way into disabling those as well.

Mikko Rantalainen

Both SRI and CSP depend on TLS for their security, so if you don't trust TLS, you cannot trust SRI or CSP either. (Both are optional features enabled by data passed over TLS; if you think TLS is not safe, you cannot expect to successfully deliver the data that enables these features either.)

I have major trouble understanding the exact vulnerability class you're trying to combat here. Do you think TLS is safe or not?

And yes, CSP with the reporting feature turned on may help catch less skilled attackers as they probe the system. A skilled attacker will use tools with the CSP and SRI checks disabled, so they will never trigger. Alternatively, they may use a setup where CSP and SRI do trigger but never leak the result to a remote server.

It appears to me that you think you can trust the client (browser engine) but cannot trust TLS. That doesn't seem like a reasonable assumption. In every case where TLS can be bypassed, the server-supplied client logic can also be modified at will. For example, you can use Burp Suite to strip SRI and CSP from the headers and HTML just fine. You can also substitute your own JS code for the server-provided code. Even a good ad blocker such as uBlock Origin can do this.

Calling this setup mitigation instead of obfuscation seems incorrect to me. Typically, mitigation is about reducing the effects of a successful attack (e.g. sandboxing), while obfuscation is about making the attack harder without actually preventing it. This blog describes an obfuscation method, if I've understood it correctly.

Had the blog post been titled "Using public key encryption to obfuscate SPA client logic" or "Smoke and mirrors: a DRM implementation for your SPA", I would have no problem, because then the post wouldn't give a false impression of what's actually happening.

Matheus Adorni Dardenne • Edited

I hope you're able to see how your objections prove my point when they all start with "a skilled attacker". A skilled attacker can hack NASA.

You would understand the exact vulnerability if you read the article again with the renewed understanding from our exchange. The hackers said that the ability to effortlessly interact with the admin controls was what allowed them to find vulnerabilities in minutes rather than the several days it takes now.

They recommended that mitigating this was critically important.

Also, your definitions are... a bit off. An example of obfuscation would be changing the "isAdmin" property to something like "hadhau1863an", so that the attacker wouldn't know what it is from simply looking at it. The purpose of the attribute would be >obfuscated<, so implementing something like Fractal as a security measure would be obfuscation.

Putting a wall around your castle is not obfuscation. Yes, it doesn't make it impossible for sufficiently experienced climbers to get in if they have enough time to climb before we knock them down (the time it takes an attacker to get in is time we spend finding and patching vulnerable endpoints), but it does protect the castle against the majority of attackers.

This measure wasn't designed against professional hackers (even though it helped against them in discernible ways) but against curious fiddlers, who are the likely attackers, since company employees are the only ones with access to the application.

Mikko Rantalainen

I would argue that putting a wall around your castle is similar to obfuscation, because it assumes the attacker is moving on the ground. Whenever you're building secure software, you should start from the assumption that the attacker makes the best move, not the move that is easy to prevent. This is no different from, say, playing chess: if you make a move after which your opponent has 5 replies, 4 of which mean you win the game and one of which means you lose, you will not win the game 80% of the time; the opponent will simply play the winning reply.

And yes, I used the expression "a skilled attacker" to refer to any attacker who is not blinded by the obfuscation, a.k.a. smoke and mirrors. That seems like a pretty low bar to me, but I used the word "skilled" to leave out script kiddies.