DEV Community

Discussion on: Cryptographically protecting your SPA

Matheus Adorni Dardenne

As I wrote, both the application and the API are already protected with certificates. The hacker exported the certificate from his browser and imported it into his tool. The API believed that the requests coming from his tool were from his browser, and his browser believed that the responses coming from his tool were from the API. From there he could change anything he wanted, basically with find-and-replace. I suggest you take a look at Burp Suite, even though it is a paid tool.

Only using TLS/SSL is not enough to prevent manipulation of the data.

Darren Crossley

Only using TLS is exactly enough to prevent manipulation of the data - that's basically its whole purpose. :)

If you read carefully, either using the Burp browser or installing their CA into your existing browser is a requirement to make this kind of attack work: portswigger.net/burp/documentation... - at which point you've basically completely circumvented TLS and all its benefits.

You seem to be under the dangerous illusion that client-side code can't be tampered with - but this is simply not the case if you have a compromised (willingly or not) client.

Or to put it another way: if a user or attacker can intercept your API traffic and modify it, surely the same attack vector can be used to intercept your client-side code and modify it to remove any additional validation function you may add? Or the attacker can simply duplicate your client-side code and remove the function that way. It's also a mistake to assume access-control-allow-origin would prevent this kind of thing - access control is only designed to protect the browser, and it relies on the browser to implement it to the specification (and if the client is compromised or malicious, all bets are off). It can even be disabled on many browsers through a simple toggle or registry edit, in much the same way as a root CA can be installed. Again, as a basic rule: any client-side security feature can be disabled if the client is untrustworthy.

All this is to say: you should consider client-side code already compromised. Adding additional validation like this is a fairly trivial, non-standard security mechanism that duplicates the already sufficient security of TLS, and adds no real security beyond some easily bypassed obscurity.

Time and energy would be better spent on hardening your APIs, fuzzing, and code reviews. This is the painful fact, but this is where it counts - and finding the time and budget to do this over the long term is where most teams and companies mess up. Of course, quick wins and stupid mistakes like disabling mock/initialisation endpoints are always good to check, but it's a mistake to assume a client-side function will prevent an attacker from finding an unprotected API or a misconfigured server rule.

Adding server-side protection to control access to some browser code can be a good idea, but again it's a mistake to rely on this, as a determined hacker will simply attempt requests based on the logical structure of your API endpoints (and completely randomising your API behaviour isn't really viable for most sensible teams or products!). If you have a create-user route, even without any client-side code calling it, an attacker will likely guess its location and format, get an error message confirming they've found the right route, and then attempt to post any and all data to it in a format consistent with your application.

Spend your time protecting API endpoints, especially high-value ones like account creation and key transactions, because beyond the basic mistakes this is where your most critical vulnerability will be, short of some external factor.

Mikko Rantalainen

I agree 100%. Whenever you design any protocol, you should never trust the client for anything. If you want to pass some data through the client, you have to use e.g. HMAC-SHA256 and sign the data before it reaches the client, then check it after you receive it back from the client. If you need to prevent replay attacks, you have to include a nonce in the data covered by the HMAC signature and keep track of already-seen nonces.

If you need to pass data from multiple trusted parties (e.g. a trusted server operated by a 3rd party), you can use public key cryptography to reduce the number of keys, but that doesn't remove the requirement that the environment generating the message be trusted.

If you generate the message in the untrusted client and sign or encrypt it in that client, that client can generate any message it wants, because clients cannot be trusted.

The client code must assume that it can trust the server, and it does so by verifying that the TLS handshake fully completes and the domain name is the expected one. In the case of HTML5 this is implemented by the server distributing the source code (HTML+CSS+JavaScript) to the client using public-CA-signed certificates. A public-CA-signed certificate is not the only way to do this, but it's the path of least resistance given the client software already installed on client systems. Avoiding CA-signed certificates and using a self-signed certificate would actually improve security if you can pre-install the certificate as trusted on all client systems.

And the fact that the attacker can see that some kind of admin user interface exists doesn't matter, because all the data and commands needed to actually use that admin interface are checked by trusted code running in a trusted environment: the server.

The old saying goes that if the attacker has physical access to your server, it isn't your server anymore. The same applies to the client hardware, and that's why you never, ever trust the client.

Some people keep asking for DRM, and there are dishonest sellers selling DRM "solutions" that pretend to make the client trustworthy. That's only smoke and mirrors, and it depends on the owners of client devices believing that the DRM works. You can use TPM chips and other implementation tricks to make clients harder to manipulate, but you cannot fully prevent clients from being modified by the attacker.

Unfortunately, DRM cannot exist even in theory, because it basically requires Alice to be able to send a secret message to Bob without Eve being able to see or modify it. And a fully functioning DRM would require that Bob and Eve are the same person! That's impossible for very simple reasons, but DRM believers and sellers think otherwise.

Matheus Adorni Dardenne

"Only using TLS is exactly enough to prevent manipulation of the data [...] installing their CA into your existing browser is a requirement to make this kind of attack work"

Except that, as explained in this article, it is not. It prevents the data from being manipulated by third parties eavesdropping on the communication, but it does NOT prevent the end user himself from manipulating the data. I think you're failing to see that the potential attackers in this case are otherwise legitimate users. The application deals with employees' bonuses, so they have motivation to attack from the inside.

"at which point you've basically completely circumvented TLS"

Yes. Hopefully you can understand that your sentence literally means "TLS alone is not enough".

"other than some easily bypassed obscurity"

We hired professionals to "bypass it" and they said it was "sufficiently secured for now". And it's not like this was "obscurity", since we thoroughly explained the mechanism to them before their attempt.

"Time and energies would be better spent on hardening your apis"

Hardening is an iterative process of improvement that we never stopped and will never stop doing, but it is definitely not an either-or with closing other attack vectors. All it takes is one gap in the armor, so closing gaping holes like the one described in this article is extremely cost-effective. This was relatively quick to implement, and it sufficiently closed this critical attack vector for now.

Darren Crossley

Thank you for the extended reply :)

"It prevents data from being manipulated by third parties eavesdropping on the communication but does NOT prevent the end user himself from manipulating the data"

True, but this is the fundamental nature of client-server systems. You are never going to be able to trust the client, and nothing you can add will change this. Nothing can prevent end users from manipulating the data or your frontend code - they own their client system and can never be trusted (as you have discovered, they can be the attacker). Any client-side security you may try to add to circumvent this fundamental fact can simply be disabled, because as a user I can do anything on my system, up to and including rolling my own compromised CA / browser / OS.

What makes you think adding an extra function to the source code you send to a compromised client will prevent that user from editing that exact same JS code to simply remove the function? The only solution to such a problem would be to secure every computer you wish to consume your API with a secret key that is 100% isolated from the users of the system and could be used to decrypt your signed code before it runs on their computer. You would also have to prevent this secret from ever being read, as well as the decrypted code from being extracted after decryption. This is largely considered a pointless and impossible pursuit even in cases where you have complete control of a system or a proprietary protocol, such as in a large corporation or for closed platforms like app stores, Blu-ray, etc. To attempt this using open standards and uncompiled, unsigned JS code is simply not possible.

The best you can do is go down the route of signing your JS, but that is basically what TLS already does, since it protects the integrity of your source code: "The principal motivations for HTTPS are authentication of the accessed website and protection of the privacy and integrity of the exchanged data while it is in transit. It protects against man-in-the-middle attacks..." en.wikipedia.org/wiki/HTTPS

All your professional has done is remove this security the entire internet relies upon and manipulate some API calls by getting in the middle of what would otherwise be a secure channel. There is no logical protection against this kind of attack, because he has compromised the client (which is basically why you can't class it as a MITM attack - you're not in the middle of a secure communication, you've replaced half of the system to make the whole thing insecure).

Matheus Adorni Dardenne

I don't see why "you are never going to be able to trust the client" should translate into "let the client-side application be easy to break, since it is impossible to make it impossible to break".

I think that if our employees are having to develop their own browsers to break into our application, they deserve the extra bucks they'll hack into their bonuses lol

The fact that the same professionals were unable to remove this security afterwards calmed down the people upstairs. Any attack of this sort is non-trivial at this point.

Mikko Rantalainen

"You are never going to be able to trust the client and nothing you could add will change this."

I totally agree. The point is that you don't trust the client; you check whether the command the client sent is allowed to be executed with the credentials of the session that submitted it.

If the attacker has taken control of the client system after the session has been initialized, there's nothing you can do about that. Adding public key encryption on top will not help.

However, a client system controlled by a user who has logged in with correct credentials is not a problem, as long as you don't trust any logic executed on the client. And if you don't trust any logic on the client, you don't need the client to sign anything.

The communication between the client and the server is protected by TLS, which gives secrecy and authenticity guarantees to the client (assuming no client certificates, as is typical). You provide the service from the server; clients connect over a TLS connection and pass data that identifies the session and the command. The trusted environment (the server) then verifies that the data is valid for the session (e.g. the session has not expired) and that the session is allowed to execute the requested command. None of this requires trusting any logic on the client.
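As a rough sketch of that flow (the session store, capability table, and all names here are invented for illustration; real code would use a database and proper session management):

```javascript
// Sketch of the server-side check: the server alone decides whether
// the session that submitted a command may execute it. The client's
// logic plays no part in the decision.
const sessions = new Map(); // sessionId -> { user, expires }

const CAPABILITIES = {
  alice: new Set(['grant_bonus']),
  bob: new Set(), // bob is logged in but may not grant bonuses
};

function authorize(sessionId, command, now = Date.now()) {
  const session = sessions.get(sessionId);
  if (!session || session.expires < now) return false; // unknown or expired session
  const caps = CAPABILITIES[session.user] || new Set();
  return caps.has(command); // capability check, entirely server-side
}
```

A tampered client can send any `command` it likes, but `authorize` still rejects anything the session's user isn't entitled to.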

Mikko Rantalainen

"I think that if our employees are having to develop their own browsers to break into our application, they deserve the extra bucks they'll hack into their bonuses lol"

You shouldn't design or implement "security" that depends on the lack of skills of the user base. Have you heard about GPT-4? That's only the start. And hopefully the business doesn't plan to hire only dumb people just so the "security" keeps working.

If you want to prevent employees from giving themselves extra bonuses, the only correct way to avoid the vulnerability is to compute the actual action ("give bonus X to person Y") in a trusted environment only, namely on the server. Then the only questions are who the current session owner is and whether that session has the required capabilities to grant the bonus. No amount of client modification can bypass that check.

If you do something else, you have to be brutally honest and say that there's no security, only security by obscurity - like a key under the doormat: absolutely safe as long as nobody notices or guesses it. And make sure to communicate this to the decision makers, too. Sometimes that may be enough, but it shouldn't be confused with real security.

Public key encryption is designed for the use case where you want to send messages over an untrusted medium and don't want to handle connection-specific encryption keys. It cannot fix the problem of the message sender (the client logic) being untrusted. And signing or encrypting a message after it has been generated in an untrusted environment will not make the message trustworthy.

Matheus Adorni Dardenne

I am kind of confused by your reply.

The messages are not generated in an untrusted environment. They are generated and signed on our trusted server. The client side can't sign messages. I think you missed something in the article.

Also, this is not an either-or. Continuous improvement of back-end security is not something you ever stop doing. Neither will we. The first action we took was fixing the API and doing a sweep of the other endpoints.

However, as the professional pentesters pointed out, this IS a problem, a critical one, and judging by some of the replies to this article, a very ignored one. People are way overconfident in their ability to perfectly secure their backend, just as I was "pretty sure" we had secured ours.

The majority of potential attackers will try to break something for a few hours or days, fail, and give up. This is protection (as opposed to security, I guess).

Imagine not putting a padlock on your locker because you know all locks can be picked by a sufficiently skillful lockpicker with enough time. What the padlock does is raise the bar (a majority of people won't even try, and a majority of those who try will fail) and buy you time (if the lock takes 5 minutes to pick, you have 5 extra minutes to react to the thief). Time we are now using to implement measures such as fuzzing (recommended to me in another response to this article) that will improve the strength of the back-end.

Mikko Rantalainen

Yeah, it seems I misunderstood something if you create the signatures on the server. However, if the server creates the signature with its private key and the client verifies the data with the public key, how does this improve anything over simply sending the data over the TLS connection?

As I understood the article, it seemed like the client was signing the data using the public key and the server was verifying the result with its private key. That would be an unsafe protocol.

Matheus Adorni Dardenne

The hacker used a specialized tool to bypass the TLS connection (for himself only) and manipulate the responses from the server.

What we do is verify the server's signature (made with the private key) on the client (with the public key), and reject the data if it doesn't match.

As others pointed out, this doesn't make it impossible to manipulate the data (some suggested things that... aren't possible, which made me take what they say with a grain of salt), but the pentesters concluded it is sufficiently secure for now. "For now" being the keyword: they'll come back later this year, and I'll try to provide some follow-up on what went down.

Mikko Rantalainen

Why do you bother verifying the server-signed data on the client if the data comes through a TLS connection? An attacker who can modify the TLS connection can also change the computed result of that verification.

Do you have some reason to believe that the client software would be intact while the attacker can MITM the TLS connection? I'm asking because, the way you describe the signature, this seems to be the only attack your method would prevent. All the situations I can think of also allow modifying the client logic if the TLS connection is not safe.

Matheus Adorni Dardenne

If he tries to change the response from the API, the verification will fail; he can't fake a signature for the modified data because he only has the public key.

There are other mechanisms in place, such as SRI and CSP to name two, to help mitigate the attacker's ability to modify the source files (they were there for different reasons, but they helped during the second round of attacks, where the hackers failed to break in after two days).

Mitigate being the keyword here; we are aware that they can puzzle their way into disabling those as well.
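For readers unfamiliar with those two mechanisms, they look roughly like this (paths and the hash value are placeholders, not the actual configuration):

```html
<!-- CSP is delivered as a response header, e.g.:
     Content-Security-Policy: default-src 'self'; script-src 'self'; object-src 'none'
     It restricts where scripts and other resources may be loaded from. -->

<!-- SRI: the browser refuses to run a script whose contents no longer
     match the hash in the integrity attribute. -->
<script src="/app.js"
        integrity="sha384-PLACEHOLDER"
        crossorigin="anonymous"></script>
```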

Mikko Rantalainen

Both SRI and CSP depend on TLS for their security, so if you don't trust TLS, you cannot trust SRI or CSP either. (This is because both SRI and CSP are optional features enabled by data passed over TLS. If you think TLS is not safe, you cannot expect to successfully pass the data that enables these features either.)

I have major trouble understanding the exact vulnerability class you're trying to combat here. Do you think TLS is safe or not?

And yes, CSP with the reporting feature turned on may help catch less skilled attackers while they probe the system. A skilled attacker will use tools with CSP and SRI checks disabled, so they will never trigger. Alternatively, they may use a setup where CSP and SRI do trigger but never leak the result to the remote server.

It appears to me that you're thinking you can trust the client (the browser engine) but not TLS. That doesn't seem like a reasonable assumption. In every case where TLS can be bypassed, the server-supplied client logic can also be modified at will. For example, you can use Burp Suite to remove SRI and CSP from the headers and HTML just fine. You can also replace the server-provided JS code with your own. Even a good adblocker such as uBlock Origin can do this.

Calling this setup mitigation instead of obfuscation seems incorrect to me. Typically mitigation is about reducing the effects of a successful attack (e.g. sandboxing), while obfuscation is about making the attack harder without actually preventing it. This blog describes an obfuscation method, if I've understood it correctly.

Had the blog post been titled "Using public key encryption to obfuscate SPA client logic" or "Smoke and mirrors: DRM implementation for your SPA" I would have no problem, because then the post wouldn't give a false impression of what's actually happening.

Matheus Adorni Dardenne

I hope you're able to see how your objections prove my point when they all start with "a skilled attacker". A skilled attacker can hack NASA.

You would understand the exact vulnerability if you read the article again with the renewed understanding from our exchanges. The hackers said that the ability to effortlessly interact with admin controls was what allowed them to find vulnerabilities in minutes instead of the several days it takes now.

They recommended that mitigating this was critically important.

Also, your definitions are... a bit off. An example of obfuscation would be changing the "isAdmin" property to something like "hadhau1863an", so that the attacker wouldn't know what it is from simply looking at it. The purpose of the attribute would be >obfuscated<, so implementing something like Fractal as a security measure would be obfuscation.

Putting a wall around your castle is not obfuscation. Yes, it doesn't make it impossible for sufficiently experienced climbers to get in if they have enough time to climb before we knock them down (the time it takes the attacker to get in is time we use to find and patch vulnerable endpoints), but it does protect the castle against the majority of attackers.

This measure wasn't designed against professional hackers (even though it helped against them in discernible ways) but against curious fiddlers, who are the likely attackers, since company employees are the only ones with access to the application.

Mikko Rantalainen

I would argue that putting a wall around your castle is similar to obfuscation, because it assumes the attacker moves on the ground. Whenever you build secure software, you should start with the assumption that the attacker makes the best move, not the move that is easy to prevent. This is no different from playing chess: if you make a move and your opponent has 5 replies, of which 4 mean you win the game and one means you lose, you will not win with 80% probability - you will lose, because the opponent will pick the best move.

And yes, I used the expression "a skilled attacker" to refer to any attacker who is not blinded by the obfuscation, a.k.a. smoke and mirrors. That seems like a pretty low bar to me, but I used the word "skilled" to leave out script kiddies.

Mikko Rantalainen

How does public key encryption help when the message/command is generated by the client? Remember that all clients are untrusted by definition, because the attacker controls the hardware. Clients have all the data and keys you send them, and may or may not follow any logic you shipped to them.

You cannot generate trusted data in an untrusted environment, so it doesn't matter if you then encrypt or sign that client-generated, now-untrusted data.

Matheus Adorni Dardenne

I think you got it backwards.

The message is generated and signed in the API.

I know they have access to any key we send them; that's why we only give them the public key. They can't sign messages with the public key, so they can't fake the data.

Mikko Rantalainen

If the API (the trusted server) signs the data, why do you need a signature at all? Wouldn't TLS already provide all the authenticity you need? The client can verify the connection (TLS + domain name) to the trusted server, and anything it receives over the TLS-protected connection is trusted.

Matheus Adorni Dardenne

I explain in the article that the attacker is able to bypass TLS by installing his certificate in his tool.

Mikko Rantalainen

Yes, and that only affects that specific client. And as the client is always untrusted anyway, that doesn't change what the server can or should do.

If you run a service that sends HTML+CSS+JS to the client to implement the interface, you should think of that as the default implementation of the client; an end user who has not installed a TLS bypass can then trust that he or she is actually running the default client implementation. The TLS connection is a guarantee to the end user that he or she is running the original data and software provided by the server.

A TLS connection cannot prevent the client from running a non-standard implementation (that is, executing some logic other than the default implementation provided by the server). And public key encryption running on client hardware cannot prevent that either! That's the whole point. The only way you could pretend to prevent the client from running non-default logic is some kind of DRM implementation, which cannot exist even in theory, because it would be something like a perpetual motion machine.

You can pretend to have a working DRM implementation similar to pretending you have a perpetual motion machine. If that's what you want to do, fine. But never ever think that it's a real thing or real security.

Matheus Adorni Dardenne

"Yes, and that only affects that specific client"

It doesn't have to affect other clients. I understand what you're saying, but it really doesn't apply to what the article is about. I think you're missing the point made by the pentesters: they marked this ability to easily manipulate responses as critical and recommended preventing it, because it was the only reason they were able to break in in the first place.

You also seem to be mistaking "security" for "protection" (and "protection" is what is claimed in the article). You don't put a padlock on your locker for "security", since any sufficiently skillful lockpicker with enough time will be able to break in. You put it there for "protection". The majority of potential attackers won't even try to pick the lock, the majority of those who try will fail, and even then, the time it takes the lockpicker to open it can be enough for you to catch the thief in the act.

So silly objections like "but this doesn't do anything, because the attacker can roll his own CA, create his own browser, run it on his own operating system, on hardware he hand-made in his garage" are not proper objections to the solution implemented.

If you simply leave your locker without a padlock, people will open it and take your stuff. Big surprise.

Mikko Rantalainen

The reason people use e.g. pin tumbler padlocks is either ignorance or cost. For software, implementing the correct thing (checking capabilities/permissions on the server) requires about the same effort as doing it incorrectly (running trusted logic in an untrusted environment, i.e. the client).

My point is that with the effort spent on "protection" you could have implemented real security instead. Sure, if you already had the incorrect implementation, it takes more work to fix the whole thing.

This "protection" makes an attack a bit more complex, but it cannot prevent it, unlike real security, which requires the correct implementation.

(And yes, in the case of digital security, you could argue that the attacker can brute-force e.g. AES-128 encryption, but physicists would then argue that the total energy needed would exceed the total energy output of the Sun over its whole lifetime. That's a much better level of security than the best mechanical lock you can get. If you want a high-quality mechanical lock, the best options I'm aware of are the "Abloy Protec" and the "Kromer Protector" safe lock. Of those, an unmodified Abloy Protec has actually been picked in real life, but that is really, really hard. I know of three people in the whole world who can pick an Abloy Protec.)

Matheus Adorni Dardenne

"will make attack a bit more complex but it cannot prevent it"

Then it serves its purpose. I don't buy the argument that "the effort spent on it would've been more useful elsewhere", because the effort to implement this was minuscule compared to the hundreds of hours already spent implementing security measures on the API, and the hundreds (or maybe thousands) more it will take to make it technically impenetrable.