Credits to https://blog.1password.com/what-is-public-key-cryptography/ for the cool image.
TL;DR:
Check this repository for a simple example in NextJS of how to achieve this. Reading the article is recommended, though, for context on why this is useful. Don't forget to give a star to the repository.
Disclaimer
Despite having worked as a software engineer for the past decade, I am not a cryptographer and am not a cybersec specialist. I'm sharing this from the perspective of a developer who was tasked with fixing a bug. I recommend doing your own research on the subject, and always inviting ethical hackers to pentest your applications. Always rely on experts when it comes to security.
Introduction
Recently, the application I've been working on for a little more than a year went through a "pentest" (penetration test, where hired ethical hackers try to invade your application and report your weaknesses so you can fix them; a very useful tactic for cybersecurity). It was the first time this system was put through such a procedure.
The System
The system consists of a front-end SPA built with ReactJS and a back-end API built with Node.JS. As a software engineer with some 10 years of experience under my belt, I designed both to be resistant to the usual culprits:
- SQL Injection;
- XSS;
- CSRF;
- DoS;
- MITM attacks;
I won't focus on those, but I recommend you extensively research any of the above terms you're not familiar with. I was confident, but I was in for a wild ride.
The Report
All of these security measures were praised in the final report. However, one attack was able to get through: a particular form of man-in-the-middle attack that allowed the hacker to escalate his access level.
The application itself is protected with SSL certificates on both ends, so the data was reasonably secure while in transit. However, the hacker used a specialized tool called Burp Suite to set up a proxy on his machine using the certificate from his browser. This proxy routes the network requests to and from the tool and makes each end believe the traffic is legitimately coming from the other. This allowed him to modify any data he wanted.
The Attack
He could effectively fake what the API was sending back to the browser, or fake what the browser was sending to the API. So it isn't exactly a... man... in the middle. It wasn't a third party stealing or changing the information, but it was still a new layer in between that allowed an attacker to do things the application probably isn't expecting him to be able to do, and this can break things.
I had never seen such an attack before. I didn't even think this was possible. My fault, really, as the hacker said this is a very common attack vector against SPAs, which must rely on information passing through the network to determine what the user can see and do (such as showing a button that only an admin should see, for example).
From there, all the hacker had to do was figure out what was what in the responses to make the browser believe he was an admin (for example, changing an "isAdmin" property from "false" to "true"). Now he could see some things he wasn't supposed to see, such as restricted pages and buttons. However, since the back-end validates whether the person requesting administrative data or performing administrative actions is an admin, there wasn't much he could do with this power... we thought... that was until he found a weak spot.
It was a form that allowed us to quickly create new test users. It was a feature no normal user was ever supposed to see, and one that was supposed to be removed after development, so we never bothered protecting it; and since the body of the request was specifically creating a "normal user", we never stopped to think about the security implications. It was never removed; we simply forgot about it.
Then the hacker used the proxy to modify the body of the request, and managed to create a new user with true admin power. He logged in with this new user and the system was in his hands.
I know, it was a bunch of stupid mistakes, but are all your endpoints protected? Are you SURE? Because I was "pretty sure". Pretty sure is not enough. Go double-check them now.
The Debate - Damage Control
Obviously, the first thing we did was delete his admin account and properly gate the endpoint he used to create the user, requiring admin access and preventing it from accepting the parameters that would give a new user admin access. It turns out we still needed that form for some tests and didn't want to delete it just yet. We also did a sweep of other endpoints related to development productivity to confirm they were all gated behind admin access, and fixed those that weren't.
The Debate - SSR?
The cat was out of the bag. We needed a solution. We still had to prevent attackers from seeing pages and buttons they weren't supposed to see. Moving the whole React app to a NextJS instance was considered, so we could rely on SSR to process the ACL. Basically, we would determine on the server side which components the user should be able to see; that information would never be sent through the network, so it couldn't be faked. This is likely the best approach to solving this, and it will be done in the near future, but it would be very time-consuming (and isn't always viable), and we needed a solution fast.
The Debate - What would the solution even look like?
So, we needed a way to verify that the message sent by the API was not tampered with. Obviously we needed some form of cryptography. Someone suggested HMAC, but the message couldn't simply be authenticated using a secret shared by both sides: since the hacker had access to the source code in his browser, he could easily find the secret and use it to sign any tampered response, so something like HMAC (and pretty much any form of symmetric cryptography) was out of the question. I needed a way to sign a message on one side, with the other side being able to verify that the signature is valid, without that other side being able to sign a message.
The Debate - The solution
Then we realized: this sounds a lot like a public-private key pair, like the ones we use for SSH! We would have a private key that stays in the environment of the API, which we use to sign the response, and a public key that is compiled into the front end to verify the signature. This is called asymmetric cryptography. BINGO! We would need to implement something like RSA keys to sign and verify the messages. How difficult could it be? Turns out… very difficult. At least if you, like me back then, have no idea where to even start.
The implementation - Creating the keys
After hours of trial and error, using several different commands (such as using ssh-keygen and then exporting the public key to the PEM format), I managed to find the commands that create the keys properly. I'm not a cryptographer and can't explain in detail why the other commands I tried were failing later in the process of importing the keys, but from my research I could conclude that there are several different "levels" of keys, and the ones used for SSH are not the same "level" as the ones created by the working command.
These are the ones that worked.
For the private key:
openssl genrsa -out private-key-name.pem 3072
For the public key:
openssl rsa -in private-key-name.pem -pubout -out public-key-name.pem
You can change the number of bits in the first command; it is the size of the RSA key (the modulus, a gigantic number produced from two large primes), but keep in mind that you will have to change some other things later.
As a rule of thumb, more bits = more security but less speed.
The implementation - The Back-end
Implementing this on the back-end was very straightforward. NodeJS has a core module named crypto that can be used to sign a message in a few lines of code.
I wrote a simple response wrapper to do this. It expects an input that looks something like this:
{ b: 1, c: 3, a: 2 }
And its output will look something like this:
{
content: { b: 1, c: 3, a: 2 },
signature: "aBc123dEf456"
}
But I immediately ran into problems, which I'll quickly go through, as well as briefly explain how I solved them.
- When you stringify JavaScript objects into JSON, they don't always keep their "shape" letter for letter. The content remains the same, but sometimes properties appear in a different order. This is expected behavior for JSON and is documented in its definition (object members are unordered), but if we are going to use the string as a message to be signed, it MUST be identical, letter for letter. I found this function, which can be passed as the second argument to JSON.stringify to achieve exactly what we need; it orders the properties alphabetically, so we can count on them always being stringified in the same order. This is what the function looks like.
export const deterministicReplacer = (_, v) => {
return typeof v !== 'object' || v === null || Array.isArray(v) ? v : Object.fromEntries(Object.entries(v).sort(([ka], [kb]) => {
return ka < kb ? -1 : ka > kb ? 1 : 0
}))
}
const message = JSON.stringify({ b: 2, c: 1, a: 3 }, deterministicReplacer)
// Will always output a predictable {"a":3,"b":2,"c":1}
- Just to avoid dealing with quotes and brackets, which were causing headaches by sometimes being "escaped" in certain situations and producing different strings, I decided to encode the whole stringified JSON into base64. And this worked initially.
Buffer.from(message, 'ascii').toString('base64')
- Later I had problems because I was reading the encoding of the input string as ASCII; it turns out that if the message contains any character that takes more than 1 byte to encode (such as an emoji or a bullet point), the process would produce a bad signature that the front-end was unable to verify. The solution was using UTF-8 instead of ASCII, but this required modifications to how things were being processed in the front end. More on this later.
Buffer.from(message, 'utf-8').toString('base64')
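To illustrate the problem (with a made-up message, just for demonstration), encoding the same string from ASCII and from UTF-8 produces different base64 output as soon as a multi-byte character is involved, so a signature computed over one will never verify against the other:
// Hypothetical message containing a multi-byte character (a bullet point)
const msg = '{"note":"priority • high"}'
// Reading the input as ASCII mangles the multi-byte character before encoding...
console.log(Buffer.from(msg, 'ascii').toString('base64'))
// ...while UTF-8 preserves it, so the two base64 strings differ
console.log(Buffer.from(msg, 'utf-8').toString('base64'))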
This is what the final working code for the back end part looks like:
import crypto from 'crypto'
import { deterministicReplacer } from '@/utils/helpers'
export const signContent = (content) => {
const privateKey = process.env.PRIVATE_KEY
if (!privateKey) {
throw new Error('The environmental variable PRIVATE_KEY must be set')
}
const signer = crypto.createSign('RSA-SHA256')
const message = JSON.stringify(content, deterministicReplacer)
const base64Msg = Buffer.from(message, 'utf-8').toString('base64')
signer.update(base64Msg)
const signature = signer.sign(privateKey, 'base64')
return signature
}
export const respondSignedContent = (res, code = 200, content = {}) => {
const signature = signContent(content)
res.status(code).send({ content, signature })
}
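For illustration, here is a minimal sketch of how a route could use this wrapper. The Express setup, route, import path, and user object are all hypothetical; they are not the actual application code.
// Hypothetical Express route using the signed-response wrapper above
import express from 'express'
import { respondSignedContent } from './signedResponse' // hypothetical module path

const app = express()

app.get('/api/user', (req, res) => {
  // In a real application this would come from the database/session
  const user = { name: 'John Doe', isAdmin: false }
  // Sends { content, signature }, with the signature produced by the private key
  respondSignedContent(res, 200, user)
})

app.listen(3000)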
The implementation - The front-end
The plan was simple:
- Receive the response with the content and the signature.
- Deterministically stringify the content (using the same deterministicReplacer function we used in the back-end).
- Encode it in base64 as a UTF-8 string, just like in the back-end.
- Import the public key.
- Use the public key to verify this message against the signature in the response.
- Reject the response if verification fails.
I searched around for crypto-like libraries for the front-end and tried some of them, but in the end came up empty-handed. It turns out this module is backed by native code and can't run in the browser, so I decided to use the native Web Crypto API, which seems to work well on modern browsers.
The code for steps 1-3 is quite long and uses a few nearly unreadable functions I found around the internet, then modified and combined to normalize the data into the format that is needed. To see it in full, I recommend going directly to the files rsa.ts and helpers.ts.
For steps 4-5, I studied the Web Crypto API docs and figured out that the function to import the public key expects the data in the form of an ArrayBuffer (among other formats; check the docs for reference). The keys naturally come with a header, a footer, and a body encoded in base64 (which is the actual content of the key). The body is ASCII, so we can just use the window.atob function. We need to strip the header and footer, and then decode the body to get to its binary data.
This is what it looks like in code.
function textToUi8Arr(text: string): Uint8Array {
let bufView = new Uint8Array(text.length)
for (let i = 0; i < text.length; i++) {
bufView[i] = text.charCodeAt(i)
}
return bufView
}
function base64StringToArrayBuffer(b64str: string): ArrayBufferLike {
const byteStr = window.atob(b64str)
return textToUi8Arr(byteStr).buffer
}
function convertPemToArrayBuffer(pem: string): ArrayBufferLike {
const lines = pem.split('\n')
let encoded = ''
for (let i = 0; i < lines.length; i++) {
if (lines[i].trim().length > 0 &&
lines[i].indexOf('-BEGIN RSA PUBLIC KEY-') < 0 &&
lines[i].indexOf('-BEGIN RSA PRIVATE KEY-') < 0 &&
lines[i].indexOf('-BEGIN PUBLIC KEY-') < 0 &&
lines[i].indexOf('-BEGIN PRIVATE KEY-') < 0 &&
lines[i].indexOf('-END RSA PUBLIC KEY-') < 0 &&
lines[i].indexOf('-END RSA PRIVATE KEY-') < 0 &&
lines[i].indexOf('-END PUBLIC KEY-') < 0 &&
lines[i].indexOf('-END PRIVATE KEY-') < 0
) {
encoded += lines[i].trim()
}
}
return base64StringToArrayBuffer(encoded)
}
The final code to import the key looks like this:
const PUBLIC_KEY = process.env.NEXT_PUBLIC_PUBLIC_KEY
const keyConfig = {
name: "RSASSA-PKCS1-v1_5",
hash: {
name: "SHA-256"
},
modulusLength: 3072, //The same number of bits used to create the key
extractable: false,
publicExponent: new Uint8Array([0x01, 0x00, 0x01])
}
async function importPublicKey(): Promise<CryptoKey | null> {
if (!PUBLIC_KEY) {
return null
}
const arrBufPublicKey = convertPemToArrayBuffer(PUBLIC_KEY)
const key = await crypto.subtle.importKey(
"spki", //has to be spki for importing public keys
arrBufPublicKey,
keyConfig,
false, //false because we aren't exporting the key, just using it
["verify"] //has to be "verify" because public keys can't "sign"
).catch((e) => {
console.log(e)
return null
})
return key
}
We can then use it to verify the content and signature of the response like so:
async function verifyIfIsValid(
pub: CryptoKey,
sig: ArrayBufferLike,
data: ArrayBufferLike
) {
return crypto.subtle.verify(keyConfig, pub, sig, data).catch((e) => {
console.log('error in verification', e)
return false
})
}
export const verifySignature = async (message: any, signature: string) => {
const publicKey = await importPublicKey()
if (!publicKey) {
return false //or throw an error
}
const msgArrBuf = stringifyAndBufferifyData(message)
const sigArrBuf = base64StringToArrayBuffer(signature)
const isValid = await verifyIfIsValid(publicKey, sigArrBuf, msgArrBuf)
return isValid
}
Check the files rsa.ts and helpers.ts linked above to see the implementation of stringifyAndBufferifyData.
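If you'd rather not dig through the repository, here is a minimal sketch of what stringifyAndBufferifyData needs to do; the actual implementation in rsa.ts/helpers.ts may differ in details. It mirrors the back-end: deterministic stringify, UTF-8 bytes encoded to base64, and finally the bytes of that base64 string (which is what the server signed) as an ArrayBuffer.
// Minimal sketch; the repository's implementation may differ in details
function stringifyAndBufferifyData(data: any): ArrayBufferLike {
  // 1. Deterministic stringify, same as the back-end
  const message = JSON.stringify(data, deterministicReplacer)
  // 2. UTF-8 bytes -> binary string -> base64, mirroring
  //    Buffer.from(message, 'utf-8').toString('base64') on the server
  const utf8Bytes = new TextEncoder().encode(message)
  let binary = ''
  for (let i = 0; i < utf8Bytes.length; i++) {
    binary += String.fromCharCode(utf8Bytes[i])
  }
  const base64Msg = window.btoa(binary)
  // 3. The server signed the bytes of the base64 string itself,
  //    so verification must run over those exact bytes
  return textToUi8Arr(base64Msg).buffer
}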
Finally, for step 6, just use the verifySignature function and either throw an error or do something else to reject the response.
const [user, setUser] = useState<User>()
const [isLoading, setIsLoading] = useState<boolean>(false)
const [isRejected, setIsRejected] = useState<boolean>(false)
useEffect(() => {
(async function () {
setIsLoading(true)
const res = await fetch('/api/user')
const data = await res.json()
const signatureVerified = await verifySignature(data.content, data.signature)
setIsLoading(false)
if (!signatureVerified) {
setIsRejected(true)
return
}
setUser(data.content)
})()
}, [])
This is obviously just an example. In our implementation we wrote this verification step into the "base request" function that handles all requests in the application, and it throws an error that displays a warning saying the response was rejected whenever verification fails.
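As an illustration only, a minimal sketch of such a base request wrapper might look like the following; the function name signedFetch and the thrown error are hypothetical, not our actual implementation.
// Hypothetical fetch wrapper that verifies every signed response before exposing it
export async function signedFetch<T>(input: RequestInfo, init?: RequestInit): Promise<T> {
  const res = await fetch(input, init)
  const data = await res.json()
  const isValid = await verifySignature(data.content, data.signature)
  if (!isValid) {
    // Reject the response; a global handler can catch this and display the warning
    throw new Error('Response signature verification failed')
  }
  return data.content as T
}

// Usage: const user = await signedFetch<User>('/api/user')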
And that's how you do it.
Notes on Performance
We thought this could heavily impact the performance of the API, but the difference in response times was imperceptible. The difference we measured was on average less than 10ms for our 3072-bit key (and a bit less than 20ms on average for a 4096-bit key). However, since the same message will always produce the same signature, a caching mechanism could easily be implemented to improve the performance of "hot" endpoints if this becomes a problem. In this configuration the signature will always be a 512-byte string, so expect the size of each response to increase by that much; however, the actual network traffic increase is lower due to network compression. In the example, the response for the {"name":"John Doe"} JSON ended up with 130 bytes. We decided it was an acceptable compromise.
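For reference, such a cache could be as simple as an in-memory map keyed by the deterministic message. This is a hypothetical sketch (we never actually needed it), reusing signContent and deterministicReplacer from above:
// Hypothetical in-memory signature cache for "hot" endpoints
const signatureCache = new Map()

export const signContentCached = (content) => {
  // Deterministic stringification guarantees equal content -> equal key -> equal signature
  const message = JSON.stringify(content, deterministicReplacer)
  if (signatureCache.has(message)) {
    return signatureCache.get(message)
  }
  const signature = signContent(content)
  signatureCache.set(message, signature)
  return signature
}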
The Result
The same ethical hacker was invited to try to attack the application again, and this time, he was unable to. The verification of the signature failed as soon as he tried to change something. He messed around with it for a couple of days and later reported he couldn't break this. The application was declared sufficiently secure... for now.
Conclusion
This works, but I'm not going to lie: not finding comprehensive material on how to do this for this purpose made me question whether this is even a good solution. I thought of sharing this mostly as a way to have it analyzed and/or criticized by wiser people than myself, but more importantly, as a way to warn other developers of this attack vector. I also wanted to help others implement a possible solution for this, since it took me a couple of days of trial and error until I was able to figure out how to make everything work together. I hope this saves you some time.
All of this has been condensed into a simplified approach in NextJS and is available in this repository.
Please leave a star on it if you find it helpful or useful.
Please feel completely free to criticize this. As I said, I am not a cryptographer or a cybersec specialist, and will appreciate any feedback.
Top comments (134)
Bypassing this as a hacker takes about 5 minutes. The 200 IQ hacker presses F12 then CTRL+SHIFT+F and searches for "verifySignature". Then all you need is to "return true" in the frontend javascript and all of this work serves no purpose, other than increasing the performance and complexity overhead of the entire API (RSA is costly, especially in JS). In the meantime, your API (where the data actually resides) hasn't had any improvement to security. I highly discourage people from implementing something like this.
You're not sure how source files in the browser work, are you? Also, please read the article; improvements to the API were made.
Apparently you're the one that does not understand. When multiple people are saying the same thing which is something you disagree with, maaaybe you should be the one reconsidering it.
I did exactly what Ariel mentioned. I will point you to something interesting to read: developer.chrome.com/blog/new-in-d... - maybe send this to your pentesters as well, but it sounds like we're doing their work.
I hope you can take feedback instead of being angry about someone pointing out your obvious mistakes, especially when you make them with such sarcastic arrogance.
First, obviously multiple people saying something wrong doesn't make it right, so merely having lots of people saying something doesn't automatically make it valuable feedback. I am interested in the content of the feedback, and I am reading all of it, answering questions, and pointing out flaws in the objections when applicable.
So far the majority of negative feedback came from people who proved in their objections that they didn't understand what the article says. A majority of people who did understand provided valuable feedback, such as splitting the admin bits into a different app, fuzzing the API, etc, and agreed with the rationale that led to this implementation.
When professional pentesters say a vulnerability is critical, you better listen. As I said in the article, leave security to the experts.
About your interesting read, thank you for pointing out the all familiar devtools. However, in case you haven't tried before, changing the readable React source code does not automatically compile into a new working file on the Browser. The browser is not webpack. You'd have to change the compiled version. Obviously you're URGING to reply "but the hackers can do that somehow". Yes, they probably can, but this is not trivial. The hired pentesters are much smarter than you or me, they've been doing this for ages. If they didn't break it in two days, it is sufficiently secured for now.
I don't see how this point matters to the discussion. Browser overrides will modify the source before it is executed. As mentioned in the other thread, I've done it using devtools and I can still bypass your protection effortlessly.
Make assumptions about your own intelligence.
You seem to fail to understand that the only thing that got you "secure for now" was securing the critical backend flaw, not the RSA obfuscation you've done here.
I somehow need to prove to you that I understood your article (even though it's the author's responsibility to make it clear), so let me summarize it and then point out why this is not what you think it is:
isAdmin: false - the flag that informs the client whether it should show "Admin controls" - could then be changed to isAdmin: true by an attacker using a man-in-the-middle tool. The attacker used Burp Suite for this.
There are 2 things we can take from this:
The point other people and I are making here is that the client is under the user's control. The user can still set the flag isAdmin to true right before the code executes, and that has been proved by using a simple code override in Chrome devtools. This does not mean it makes your application more or less secure - but it proves the effort you took to learn and implement response signatures might have been invested into something else. What effectively made your application secure was fixing the server flaw.
I don't know how I can be clearer.
So far dozens of people have understood the article very well and provided useful feedback. It is you and two other guys who are bashing your heads against a strawman. The article seems to be clear enough.
The critical vulnerability was the hacker's ability to manipulate the UI as if he was an admin, which allowed him to use a form to create regular users, combined with his ability to spoof the request, to create a user that was itself an admin. This new user had true admin power. Fixing the API was not what made it secure, fixing the API was merely damage control. With the admin controls, finding other vulnerabilities is almost intuitive.
This is what they marked as a critical issue. People are eager to overestimate their ability to protect endpoints against unforeseen scenarios.
"and that has been proved by using a simple code override in Chrome devtools"
By whom?
Fixing the API would have prevented the attack completely. I don't know how the pentesters brainwashed you into thinking it was the other way around, that protecting your Front-end is what actually fixed the security flaw.
I challenge you to host a similar system with the same API flaw but with the signature obfuscation in place and let me break in.
Because "fixing an endpoint" is not the same as "making the API unbreachable". It is even weird that you can't connect these two dots. The hackers would simply find another unexpected way in in minutes.
Clone the repo and do it.
Host it, make it "unreachable" using your method and I will post here whatever you made unreachable by thinking your Front-end is secure.
Make an admin route and I can screenshot it. I'm determined to prove it to you if you give me the means.
I cloned the repo, ran a build locally and it is easily bypassable. There are no dots to connect.
Clone the repo, the implementation is already there and working. It even comes with a sample pair of keys, so all you need to do is install the dependencies and run.
Then prove you bypassed it. You claimed to have posted a screenshot, but I have re-read all my notifications and there are a total of zero screenshots of you breaking in. The time it took you to lie about posting the screenshot was enough for you to take an actual screenshot.
You're not disabling the signature, kid (what you said you could trivially do).
You did not prevent the signature verification. You have to disable the verification and then modify the network response to accurately represent what we're discussing.
What you did simply wouldn't work on a function that deals with all requests, your hardcoded data would instantly break the application.
But that's my fault, I set the bar too low. LoL
It still proves my point, which you fail to see.
I see no evidence of what you claim in this screenshot. "John Doe" is the correct data. How does this prove the validation was bypassed?
But it was valuable. Try changing it to "false". If this works, it will probably show the error message.
Working or not (it probably won't, but could, anyway would be nice to know), I expect you learned that someone with technical knowledge responding with a mere attempt after three hours of intently messing around with it (your hurt ego is clearly a strong motivation) is comfortably outside the range of "trivial". Which ultimately proved my point: it is sufficiently secured against the profile of the potential attackers: employees with no tech skills but incentives to fiddle around.
The whole point is you don't need to change the server response. And even if you did, returning true from the validation function would work.
Again, this took me 5 minutes - it's your terribly inefficient attitude that made this take 3 hours to understand.
If you're assuming your users are not capable of attacking you, why even bother then? It appears to me you have wasted your time.
The whole point is that you do. As I explained, your other attempt would simply break everything else.
Just checking the times on the notifications from your messages, we can clock you at four hours (at least, since you've been interacting for several days at this point). That with full guidance, since I was here correcting every failed attempt you made, and disregarding the other measures in place. Thanks for taking the time to provide this very useful benchmark and proof of concept.
And I wasn't inefficient at all. I was constantly engaged in our conversation since ~6 in the morning, answering everything you said. If it took you four hours to do this with my constant guidance, then it does what it was designed to do: to protect the UI controls.
They have motivation to try. I'd say the only person wasting my time was you, but you also provided a valuable benchmark for me, so I thank you for that.
How can you be so presumptuous? I really should have let you stay in ignorance and denial but it goes against my principles.
It was a step by step process because you failed to extrapolate my ideas to the full solution. It's partially on me for not explaining them well enough.
I see. Your principles involve writing an article misrepresenting what this article claims trying to make fun of me for the crime of........ shuffles card..... asking for feedback.
You're obviously heavily invested in this. No one likes being disproven, especially with something they're proud of making. But please reconsider your attitude against someone that is trying to help.
You got humbled by technology and facts. I think my article served its purpose.
Your article proved this measure accomplishes what it was designed to do.
I'm even tired of repeating the phrase "with enough time and effort". And voilà. It took an ego-hurt engineer half a dozen hours to do something that could work, with guidance and disregarding the other measures in place. It is sufficiently secured against our employees.
Not if they see my article... don't tell them.
I am skeptical they could even if they did read. You made lots of jumps based on knowledge assumptions (things you don't know if other people know). That's probably the whole reason you naively said it was trivial, several hours before actually managing to do it.
As someone else pointed out, this is just security through obscurity at this point.
Putting a padlock on your locker is not obscurity just because a skilled attacker can pick it open given enough time.
As I responded to that person, obscurity would be changing the name of the "isAdmin" property to "dhASDuhVNAS132" to try to conceal what it does. So implementing something like Fractal as a security measure would be obscurity.
But OK. Thank you.
Point is, you already have a padlock. What you did was paint "TSA Certified" on it hoping nobody would attempt to pick it.
"Browser overrides will modify the source before it is executed"
And modifying the source won't compile a new working version. Devtools is not webpack. You'd have to change the compiled version. If you can't see the difference, maybe you're wasting both our times.
And you fail to understand that fixing the backend was merely damage control. With the admin UI, the hacker would quickly find some other unexpected way in. You clearly overestimate your ability to know what you don't know.
"Never discuss with an ignorant. They will get the discussion to their level and beat you with experience."
I'm definitely wasting my time trying to help you understand what is wrong with your thought process. I felt obligated to comment because it is articles like this that hurt security, as people will naively think this will protect them from anything, and it won't.
Ah, yes, one of those quotes you can turn around 180° and they still work perfectly. What will your next argument be? The one about playing chess with a pigeon? It is especially ironic, since you're the one leaving before providing evidence of your "trivial break-in". You probably tried and saw it doesn't work as you expected, right? It is likely that with enough time you can figure out a way, but this "enough time" is time I am securing the backend, so by the time you find a vulnerability, it could already have been patched.
And, finally, people will only be hurt by this article if they, as you, are unwilling to read. There is a huge disclaimer before the article starts, and I discuss my skepticism of the solution itself in the conclusion.
Good write-up of a real world security issue - thank you!
I think it's worth saying that BurpSuite cannot silently intercept TLS secured web traffic (ie: anything using https), a default browser will issue a security alert unless the user has installed a special certificate. This means that in the real world, users on default browsers are very unlikely to see any problems with your original app.
As the attacker was able to learn about your API (which they will always have the ability to do using their own tools) they could probe that to find the actual weaknesses. This is something your own in-house security testing can do in CI of course - testing both a 'happy path' and all permutations & boundary conditions for parameters (can be generated by tooling, as used by the pen-tester - no need to manually work all these out!), plus if you haven't fuzzed your public APIs, you should ;-)
I'm interested to know why you thought it so important to prevent the display of 'admin' controls in the UI through response tampering? The resources and logic for them is already present on the user's system and thus discoverable by interested / malicious parties even if they cannot be activated. The server side will no longer honour invalid requests if they are issued, and unless the user has modified their browser (as above), they will not be subject to any MITM tampering that could display the controls. It seems you may have spent lots of effort extrapolating new risk from the pen test report that didn't mention UI issues?
I thank you for your time reading it and leaving a very informative response.
I'm not sure how the hacker set up everything on his side, but he did mention configuring the certificate on his tool.
I'll bring up "fuzzing" to the rest of the team on our next sprint planning. Thanks!
When the team debated the report, we came to the conclusion that the exposure of the UI controls could turn the whole application into a playground for a malicious agent to quickly and easily find ways to wreak havoc. It gave visual and interactive cues about how the application works, without having to look at a single line of code.
This is why the attacker managed to break things in a matter of minutes. After that implementation, he fidgeted with the system for a few days and came up with nothing new.
But I think the major reason is that we didn't want to worry about what could go wrong if the user could change what the API is saying to the application. As you said, we extrapolated potential risks out of fear of the unknown.
Still, unless your application is doing something that's on the level of national security, it seems like a cost benefit analysis should show that obfuscating the UI in order to mitigate discovery is just not worth it.
In my opinion, the time would be better spent on even more thorough investigation of the backend to make sure that it does not matter what an attacker could do on your front end.
The application is used to calculate a yearly bonus paid to company employees based on their performance so there is motivation for a potential attacker to mess around trying to get a personal advantage.
Also, the information available for admins in the system is very sensitive. We can't risk users figuring out ways of seeing things they shouldn't.
We analyzed the impact this had on performance and we concluded it had no impact, if that is what you mean by cost-benefit.
About "thorough investigation of the backend", yes, but this is "CI&CD" stuff, constant iteration and improvement, we don't know yet what we don't know, and we can't risk it.
For example, one of the points in the report that I didn't mention in the article is that the attacker managed to mess around with our filter feature and figured out a way to override the back-end standard filters that limit visibility of the data by access level. He used a fake admin access in the browser and managed to see some restricted data because of his ability to change the request in ways we never designed the application to handle.
It's always "obvious" after a hacker explains how he broke in, but you know you can't be sure that a creative and motivated attacker won't find these bugs and break your app faster than you can find them and patch them. This uncertainty made us conclude that we should play it on the safe side and block this vector of attack first and fast, and then investigate the API. It's not either-or.
No, I meant a cost benefit analysis of the amount of time it would take to address this issue on the front end compared to just hardening your backend.
I am also referring to the maintenance cost of supporting the added complexity on the front end.
My philosophy on this is that a motivated attacker will always find a way to extract info from your front end, so it's a lost cause.
I also echo the other comments about how the attack vector mentioned here is probably not a realistic one to exploit on a VICTIM'S machine
Well... it took 2 days to address this on the front-end, mostly because I had never done it before. I could probably implement this in 15 minutes now, with the repository I created to "store" this knowledge. Recently I found the "jose js" library, which would've saved me even more time.
Securing an API is not a "task", it is a constant, never-ending process. "Hardening" the backend takes years and it is not enough alone, since all it takes is one gap in the armor.
About maintenance cost increase; we have a function that handles all HTTP requests, and added the verification step to that function. It doesn't impact anything else, really. The whole application is working as expected, as if nothing changed. This is not a breaking change and caused no shockwaves.
And I understand your philosophy; however, it wouldn't work in our case. The application deals with money and very sensitive information. That's plenty of motivation for even a regular company employee to become a potential attacker. We can't afford to allow it to be easy. The attacker will have to be VERY motivated, because even specialists failed to break in after this was implemented.
This doesn't mean they can't find another way, but as they said, it is "sufficiently secured for now", and this calmed down the people with the money.
Yes, because there is no "user victim". The "victim" in this case would be the company. An employee trying to escalate his access to affect his bonus, for example.
Ah ok - raising the bar above the trivial to discover threshold đ
You pretty much took the words out of my mouth. It seems like all that was really necessary was to fix the APIs that were improperly secured.
The problem is that "fix the APIs that were improperly secured" doesn't mean much. Sure, we fixed that endpoint and a couple of others after that, but we can't operate in damage-control mode. We don't know all the insecurities that we don't know about, and this is why we called the ethical hackers in the first place.
They're the experts and pointed out that this was a common vector of attack and a critical issue that needed to be fixed, I am just the developer who was tasked with fixing it. They said that being able to easily explore and modify the UI leads to security breaches in minutes, because it is very easy to overlook use-cases that "should" never happen.
Now automated "fuzzing" seems to be a good thing to implement and continuously improve upon, but the issue was critical, now it is solved, and we can implement fuzzing without fear of an attacker breaking our application in minutes.
I fail to understand why you couldn't simply use TLS. If your API has a CA-signed public certificate, the client only needs to verify the domain name of the connection after the TLS handshake is complete.
Any information from that connection will be sent by the API.
The whole setup about the RSA keys reminds me about the stuff used for SAML protocol and even SAML implementations ultimately trust the TLS instead of the keys, which they still also have to use for historical reasons to be compatible with historical mishaps in the protocol design.
As I wrote, both the application and the API are already protected with certificates. The hacker exported the certificate from his browser and imported it into his tool. The API believed that the requests coming from his tool were from his browser, and his browser believed the responses coming from his tool were from the API. And he could change anything he wanted basically using a find-replace. I suggest you take a look at the Burp Suite, even though it is a paid tool.
Only using TLS/SSL is not enough to prevent manipulation of the data.
Only using TLS is exactly enough to prevent manipulation of the data - that's basically its whole purpose. :)
If you read carefully, either using the Burp browser or installing their CA into your existing browser is a requirement to make this kind of attack work: portswigger.net/burp/documentation... - at which point you've basically completely circumvented TLS and all its benefits.
You seem to be under the dangerous illusion client side code can't be tampered with - but this is simply not the case if you have a compromised (willingly or not) client.
Or to put it another way if a user or attacker can intercept your api traffic and modify it, surely the same attack vector can be used to intercept your client side code and modify it to remove any additional validation function you may add? Or the attacker can simply duplicate your client side code and remove any function that way - It's also a mistake to assume access-control-allow-origin would prevent this kind of thing - access control is only designed to protect the browser and relies on the browser to implement this to the specification (and if the client is compromised / malicious all bets are off) - it can even simply be disabled on many browsers through a simple toggle or registry edit in much the same way as a root CA can be installed. Again: as a basic rule any client side security feature can be disabled if the client is untrustworthy.
All this is to say: you should consider client side code already compromised, and adding additional validation such as this is simply a pretty trivial non-standard security mechanism that duplicates the already sufficient security of TLS and provides no real additional security other than some easily bypassed obscurity.
Time and energies would be better spent on hardening your apis, fuzzing and code reviews. This is the painful fact but this is where it counts - and finding the time and budget to do this over the long term is where most teams and companies mess up. Of course quick wins and stupid mistakes like disabling mock / initialisation endpoints are always good to check but it's a mistake to assume a client side function will prevent an attacker from finding an unprotected api or a misconfigured server rule.
Adding server side protection to protect access to some browser code can be a good idea, but again it's a mistake to rely on this, as a determined hacker will simply attempt requests based on the logical structure of your apis endpoints (and completely randomising your api behaviour isn't really viable for most sensible teams or products!). If you have a create user route, even without any client side code calling it an attacker will likely guess it's location and format it will then likely get an error message to confirm it's found the right route and then attempt to post any and all data to it in a format consistent with your application.
Spend your time protecting api endpoints, especially the high value ones like creating accounts and key transactions as beyond the basic mistakes this will be where your most critical vulnerability is outside of some external factor.
I agree 100%. Whenever you design any protocols, you should never trust client for anything. If you want to pass some data through the client, you have to use e.g. HMAC-SHA256 and sign the data before it reaches the client and check the data after you receive it from the client. If you need to prevent replay attack, you have to include a nonce to the data covered by the HMAC signature and you have to keep track of already seen nonces.
If you need to pass data from multiple trusted parties (e.g. a trusted server operated by a 3rd party) you can use public key encryption to reduce the number of keys, but that doesn't reduce the requirement of having the environment generating the message trusted.
If you generate the message in the untrusted client and sign or encrypt it in that client, that client can generate any message it wants, because clients cannot be trusted.
The client code must assume that it can trust the server and it does it by verifying that the TLS connection is fully complete and the domain name is the expected one. In case of HTML5 this is implemented with server distributing the source code (HTML+CSS+JavaScript) to the client using public CA signed certificates. The public CA signed certificate is not the only way to do this but it's the path of least resistance given the existing client software already installed on the client system. Avoiding CA signed certificates and using self-signed certificate would improve security if you can pre-install the certificate as trusted on all client systems.
And the fact that attacker can see that some kind of admin user interfaces do exist doesn't matter because all the data and commands to actually use those admin interfaces is checked by trusted code running in trusted environment, the server.
The old saying says that if the attacker has physical access to your server, it isn't your server anymore. The same applies to the client hardware and that's why you never ever trust the client.
Some people keep asking for DRM and there are dishonest sellers selling you DRM "solutions" which pretend to make the client trustworthy. That's only smoke and mirrors and it depends on owner of client devices believing that DRM exist. You can use TPM chips and other implementation tricks to make clients harder to manipulate but you cannot fully prevent clients from being modified by the attacker.
Unfortunately, DRM cannot exist even in theory because it basically requires Alice being able to send a secret message to Bob without Eve being able to see or modify the message. And a fully functioning DRM would require that Bob and Eve are the same person! That's impossible for very simple reasons but DRM believers and sellers think otherwise.
"Only using TLS is exactly enough to prevent manipulation of the data [...] "installing their CA into your existing browser is a requirement to make this kind of attack work"
Only that, as explained in this article, it is not. It prevents data from being manipulated by third parties eavesdropping on the communication but does NOT prevent the end user himself from manipulating the data. I think you're failing to see that the potential attackers in this case are otherwise legitimate users. The application deals with the employees' bonuses, so they have motivation to attack from the inside.
"at which point you've basically completely circumvented TLS"
Yes. Hopefully you can understand that your sentence literally means "TLS alone is not enough".
"other than some easily bypassed obscurity"
We hired professionals to "bypass it" and they said it was "sufficiently secured for now". And it's not like this was "obscurity", since we thoroughly explained the mechanism to them before their attempt.
"Time and energies would be better spent on hardening your apis"
Hardening is an iterative process of improvement that we never stopped and will never stop doing, but it is definitely not an "either-or" with closing other vectors of attack. All it takes is one gap in the armor, so closing gaping holes like the one described in this article is extremely cost effective. This was relatively quick to implement, and sufficiently closed this critical vector of attack for now.
Thank you for the extended reply :)
"It prevent's data being manipulated by third-parties eavesdropping the communication but does NOT prevent the end-user himself to manipulate the data"
True, but this is the fundamental nature of client-server systems. You are never going to be able to trust the client and nothing you could add will change this. Nothing can prevent the end-user himself from manipulating the data or your frontend code - they are the owner of their client system and can never be trusted (as you have discovered they can be the attacker). Any client side security you may try to add to circumvent this fundamental fact can simply be disabled because as a user, I can do anything on my system up to the limits of rolling my own compromised CA / browser / OS.
What makes you think adding an extra function to the source code you send to a compromised client will prevent that user from editing that exact same JS code to simply remove such a function? The only solution to such a problem would be to secure every computer you wish to consume your API with a secret key that is 100% isolated from the users of the system and could be used to decrypt your signed code before it was run on their computer. You would also have to prevent this secret from ever being read, as well as the decrypted code from being extracted after decryption. This is largely considered a pointless and impossible pursuit to undertake even in the cases where you have complete control of a system or a proprietary protocol, such as in a large corporation or for closed platforms like app stores, Blu-ray etc. To attempt this using open standards and uncompiled, unsigned JS code is simply not possible.
The best you can do is go down the route of signing your JS, but this is basically what TLS already does, as it protects the integrity of your source code. - "The principal motivations for HTTPS are authentication of the accessed website and protection of the privacy and integrity of the exchanged data while it is in transit. It protects against man-in-the-middle attacks"..." en.wikipedia.org/wiki/HTTPS
All your professional has done is remove this security the entire internet relies upon and manipulate some API calls by getting in the middle of what would otherwise be a secure channel. There is no logical protection against this kind of attack because he has compromised the client (which is basically why they can't class it as a MITMA - you're not in the middle of a secure communication; you've replaced half of the system to make the whole thing insecure).
I don't see a reason why "you are never going to be able to trust the client" should be translated to "let the client-side application be easy to break since it is impossible to make it impossible to break".
I think that if our employees are having to develop their own browsers to break into our application, they deserve the extra bucks they'll hack into their bonuses lol
The fact that the same professionals were unable to remove this security afterwards calmed down the people upstairs. Any attack of this sort is non-trivial at this point.
"You are never going to be able to trust the client and nothing you could add will change this."
I totally agree. The point is that you don't trust the client but you check if the command that the client did send is allowed to be executed by the credentials used for the session that submitted the command.
If the attacker has taken control of the client system after the session has been initialized, there's nothing you can do about that. Adding public key encryption on top will not help.
However, a client system controlled by the user who has logged in with correct credentials is not a problem as long as you don't trust any logic executed on the client. And if you don't trust any logic on the client, you don't need to sign anything by the client.
The communication between the client and the server is protected by TLS, which gives secrecy and authenticity guarantees for the client (assuming no client certificates, as is typical). As a result, you provide service from the server to clients; clients connect using a TLS connection and then pass data that is used to identify the session and the command. Then the trusted environment (the server) verifies whether the data is valid for the session (e.g. the session has not expired) and whether that session is allowed to execute the requested command. None of this requires any trust in any logic on the client.
"I think that if our employees are having to develop their own browsers to break into our application, they deserve the extra bucks they'll hack into their bonuses lol"
You shouldn't design or implement "security" which depends on the lack of skills of the user base. Have you heard about GPT-4? That's only a start. And hopefully the business prefers to hire only dumb people, to allow the "security" to work.
If you want to prevent the employees from giving themselves extra bonuses, the only correct way to avoid the security vulnerability is to compute the actual action ("give bonus X to person Y") in a trusted environment only, namely on server. Then the only question is who is the current session owner and does that session have required capabilities to grant the bonus. No amount of client modifications can bypass that check.
If you do something else, you have to be brutally honest and say that there's no security, only security by obscurity - as in a key under the doormat, absolutely safe as long as nobody notices or guesses it. And make sure to communicate this to the decision makers, too. Sometimes that may be enough, but it shouldn't be mixed up with real security.
Public key encryption is designed for use case where you want to send messages over untrusted medium and do not want to handle connection specific encryption keys. It cannot fix the problem where the message sender (client logic) is untrusted. And signing or encrypting the message after it has been generated in untrusted environment will not make the message trustworthy.
I am kind of confused by your reply.
The messages are not generated in an untrusted environment. They are generated and signed on our trusted server. The client side can't sign messages. I think you missed something in the article.
Also, this is not an either-or. Continuous improvement of back-end security is not something you stop doing. Ever. Neither will we stop. The first action we took was fixing the API and doing a sweep on other endpoints.
However, as pointed out by the professional pentesters, this IS a problem, a critical problem, and as I can see from some of the replies to this article, a very ignored problem. People are way overconfident in their ability to perfectly secure their backend; as I was "pretty sure" we secured ours.
The majority of potential attackers will try to break something for a few hours or days, fail, and give up. This is protection (as opposed to security, I guess).
Imagine not putting a padlock on your locker because you know all locks can be picked by a sufficiently skillful lockpicker with sufficient time. What the padlock does is both raising the bar (a majority of people won't even try, and a majority of those who try will fail) and giving you time (if the lock takes 5 minutes to pick, you now have 5 extra minutes to react to the thief). Time we are using now to implement measures such as fuzzing (recommended to me in another response in this article) that will improve the strength of the back-end.
Yeah, it seems like I've misunderstood something if you create the signatures on the server. However, if the server creates the signature using private key and the client is verifying the data using the public key, how does this improve anything over simply sending the data over TLS connection?
As I understood the article, it seemed like the client was signing the data using the public key and the server was verifying the results using its private key. That would be an unsafe protocol.
The hacker used a specialized tool to bypass TLS connection (for himself only) and manipulate the responses from the server.
What we do is verifying the signature from the server (made with the private key) on the client (verified with the public key), and reject the data if it doesn't match.
As others pointed out, this doesn't make it impossible to manipulate the data (some suggesting things that...... aren't possible, which made me take what they say with a grain of salt), but the pentesters concluded it is sufficiently secure for now. For now being the keyword, they'll come back later this year, and I'll try to provide some follow up on what went down.
Why do you bother verifying the server signed data on the client if the data come through TLS connection? The attacker that can modify the TLS connection can also change the computed results of that verification.
Do you have some reason to believe that the client software would be intact but the attacker can MITM the TLS connection? I'm asking this because the way you describe the signature seems like this is the only attack that your method would prevent. All the situations I can think of allow modifying the client logic if TLS connection is not safe either.
If he tries to change the response from the API, the verification will fail, he can't fake a signature for the modified data because he only has the public key.
There are other mechanisms in place, such as SRI and CSP to name two, to help mitigate the attacker's ability to modify the source files (they were there for different reasons, but they helped during the second round of attacks where the hackers failed to break in after two days).
Mitigate being the keyword here, we are aware that they can puzzle their way into disabling those as well.
Both SRI and CSP depend on TLS for their security so if you don't trust TLS, you cannot trust SRI or CSP either. (This is because both SRI and CSP are optional features which are enabled with the data passed over TLS. If you think TLS is not safe, you cannot expect to be able to successfully pass the data to enable these features either.)
I have major trouble understanding the exact vulnerability class you're trying to combat here. Do you think TLS is safe or not?
And yes, CSP with the reporting feature turned on may help catch less skilled attackers while they try to attack the system. A skilled attacker will use tools that have CSP and SRI checks disabled so they will never trigger. As an alternative, they may be using setup where CSP and SRI do trigger but never leak that result to remote server.
It appears to me like you're thinking that you can trust the client (browser engine) but you cannot trust TLS. It doesn't seem like a reasonable assumption to make. For all cases where TLS can be bypassed the server submitted client logic can also be modified at will. For example, you can use the Burp Suite to also remove SRI and CSP from the headers and HTML just fine. You can also replace your custom JS code in place of the server provided code. Even a good adblocker such as uBlock Origin can do this.
Calling this setup mitigation instead of obfuscation seems incorrect to me. Typically mitigation would be about reducing the effects of a successful attack (e.g. sandboxing) and obfuscation is about making the attack harder without actually preventing it. This blog describes an obfuscation method, if I've understood it correctly.
Had the blog post been titled "Using public key encryption to obfuscate SPA client logic" or "Smoke and mirrors: DRM implementation for your SPA" I would have no problem because then the post wouldn't give false impression what's actually happening.
I hope you're able to see how your objections prove my point when they all start with "a skilled attacker". A skilled attacker can hack NASA.
You would understand the exact vulnerability if you read the article again with the renewed understanding from our exchange. The hackers said that the ability to effortlessly interact with admin controls was what allowed them to find vulnerabilities in minutes instead of the several days it takes now.
They recommended that mitigating this was critically important.
Also, your definitions are... a bit off. An example of obfuscation would be changing the "isAdmin" property to something like "hadhau1863an", so that the attacker wouldn't know what it is just from looking at it. The purpose of the attribute would be >obfuscated<, so implementing something like Fractal as a security measure would be obfuscation.
Putting a wall around your castle is not obfuscation. Yes, it doesn't make it impossible for sufficiently experienced climbers to get in, if they have enough time to climb before we knock them down (the time it takes the attacker to get in is time we can spend finding and patching vulnerable endpoints), but it does protect the castle against the majority of attackers.
This measure wasn't designed against professional hackers (even though it helped against them in discernible ways) but against curious fiddlers, who are the likely attackers, since company employees are the only ones with access to the application.
I would argue that putting a wall around your castle is similar to obfuscation, because it assumes that the attacker is moving on the ground. Whenever you're building secure software, you should start with the assumption that the attacker makes the best move, not the move that is easy to prevent. This is no different from, e.g., playing chess: if you make a move and your opponent can make 5 moves, of which 4 mean that you win the game and one means that you lose, you will not win the game with 80% probability.
And yes, I used the expression "a skilled attacker" to refer to any attacker that is not blinded by the obfuscation, a.k.a. smoke and mirrors. It seems like a pretty low bar to me, but I used the word "skilled" to leave out script kiddies.
How does public key encryption help when the message/command is generated by client? Remember that all clients are untrusted by definition because the attacker controls the hardware. Clients have all the data and keys you send to them and may or may not follow any logic you submitted to the client.
You cannot generate trusted data in an untrusted environment, so it doesn't matter if you then encrypt or sign that client-generated, now-untrusted data.
I think you got it backwards.
The message is generated and signed in the API.
I know they have access to any key we send them; that's why we only give them the public key. They can't sign messages with the public key, so they can't fake the data.
If the API (trusted server) signs the data, why do you need a signature at all? Wouldn't TLS already provide all the authenticity you need? The client can verify the connection (TLS + domain name) to the trusted server and anything it receives from the TLS protected connection is trusted.
I explain in the article that the attacker is able to bypass TLS by installing his certificate on his tool.
Yes, and that only affects that specific client. And as the client is always untrusted anyway, that doesn't change what the server can or should do.
If you run a service that sends HTML+CSS+JS to the client to implement the interface, you should think of that as the default implementation of the client, and an end user who has not installed a TLS bypass can trust that he or she is actually running that default client implementation. The TLS connection is a guarantee to the end user that he or she is running the original data and software provided by the server.
TLS connection cannot prevent the client from running a non-standard implementation (that is, executing some logic other than the default implementation provided by the server). And using public key encryption running on client hardware cannot prevent that either! That's the whole point. The only way you could pretend to prevent the client from running non-default logic is some kind of DRM implementation, which cannot exist even in theory because it would be a similar thing to a perpetual motion machine.
You can pretend to have a working DRM implementation similar to pretending you have a perpetual motion machine. If that's what you want to do, fine. But never ever think that it's a real thing or real security.
"Yes, and that only affects that specific client"
It doesn't have to affect other clients. I understand what you're saying, but it really doesn't apply to what the article is about. I think you're missing the point made by the pentesters: they marked this ability to easily manipulate responses as critical and recommended preventing it, because it was the only reason they were able to break in in the first place.
You also seem to be mistaking "security" for "protection" (and "protection" is what is claimed in the article). You don't put a padlock on your locker for "security", since any sufficiently skillful lockpicker with sufficient time will be able to break in. You put it there for "protection". The majority of potential attackers won't even try to pick the lock, the majority of those who try will fail, and even so, the time it takes for the lockpicker to pick it open can be enough for you to catch the thief in the act.
So silly objections like "but this doesn't do anything, because the attacker can roll his own CA, create his own browser, run it on his own operating system, running on hardware he hand-made in his garage" are not proper objections to the solution implemented.
If you simply leave your locker without a padlock, people will open it and take your stuff. Big surprise.
The reason people use e.g. pin tumbler padlocks is either ignorance or cost. For software, implementing the correct stuff (that is, checking capabilities/permissions on server) requires about the same effort as doing it incorrectly (running trusted logic in untrusted environment, e.g. client).
My point is that with the effort spent on "protection" you could have implemented real security instead. If you already had the incorrect implementation, sure, it requires more work to fix the whole implementation.
This "protection" will make attack a bit more complex but it cannot prevent it, unlike real security which requires doing the correct implementation.
(And yes, in the case of digital security, you could argue that the attacker can then brute-force e.g. AES-128 encryption, but physicists would then argue that the total energy needed would exceed the total energy of the Sun over its whole lifetime. That's a much better level of security than the best mechanical lock you can get. And if you want a high quality mechanical lock, the best options I'm aware of are "Abloy Protec" and the "Kromer Protector" safe lock. Of those, an unmodified Abloy Protec has actually been picked in real life, but that's really, really hard. I know of three people in the whole world who can pick an Abloy Protec.)
"will make attack a bit more complex but it cannot prevent it"
Then it serves its purpose. I don't buy the argument that "the effort spent on it would've been more useful elsewhere", because the effort to implement this was minuscule compared to the hundreds of hours already spent on implementing security measures on the API, and the hundreds (or maybe thousands) more that it will take to make it technically impenetrable.
I read the article, the comments, and even the simple repo, and I still don't understand the point of all this.
First, not related to the security problem but to the implementation of this "fix": you basically did some form of JWT, so why didn't you just use the JWT protocol in the first place, which you said you already have for authorization? Your server can send a signed JWT token (the payload of which can be whatever your server needs; it's not restricted to auth use cases only, like in this case JSON.stringify(responseData)). And your client can just decode/verify it. If the current user-hacker tries to change this JWT token or its payload, it will fail. These are 2 lines of code, one on the server and one on the client, using the right libs, which apparently you already use for the authentication part.
Second, it's best to describe what your app is doing, but what I figured is that it's something like:
If this is the case and you (or your bosses) think that you've "secured" it with what you've done, then obviously there's no need for anyone to convince you otherwise. If this is not the situation, then just explain what at all you are trying to protect, and people will willingly be happy to provide guidance and help.
I need to verify the signature on the client, and JWT verifies it on the server (at least, that is how I learned it). This doesn't help in this case, because the hacker can intercept any attempt to contact the server to validate the signature and fake the response saying it passed.
I came across the "jose" JS repository recently, and it seems there is something "like" what I did there, but I haven't made the time to get to know it yet.
I can't disclose details about the application. But it is like a 360-evaluation tool, and people's final score is related to their bonus. If, by messing around, they find a way to modify their scores, this could impact their bonus.
The hackers reported this as a critical issue because of the profile of the potential attackers: employees with low tech skills and good incentives to mess around. Looking back, maybe I should have made this clearer in the article. I expected people to just "get it", but I guess I shouldn't have. Lesson learned.
Many people have provided helpful guidance, and I gathered a lot of useful information to discuss with the team. We're fuzzing the API to battle-test our validations, for example.
The JWT's payload can be verified anywhere; successfully decoding it is actually the verification. If the payload is tampered with, then decoding/parsing it will fail. It is most likely what you already do with the auth JWT: you receive from the server a JWT with, let's say, payload claims like "user:xxx", "admin:false", "prop:value", so the client verifies it by successfully decoding it and sees "Aha, the payload says user:xxx, prop:value, ..." and so on. If someone, no matter who, a man-in-the-middle or the same user, tampers with it and tries to put "user:yyy", "admin:true", then the decoding will just not be possible. Read about it properly on jwt.io/ ; I'm not a native English speaker.
Thanks, I'll read up on it, but as I understand it, decoding a JWT is simply parsing its content as base64; it would still need the secret to validate it, and that's why it happens on the backend... Perhaps I'm missing something, so I'll look into it. It is possible that JWT accomplishes what I needed, but we simply didn't know it at the time.
Thank you very much.
There are two main types of JWT, and within those there's a selection of cryptographic ciphers you can use.
You can sign a JWT with an RSA private key on your backend and verify it using a public key on your frontend, or on any API endpoint.
That type is a JWS, and as you mentioned, this version is just base64-encoded data, but with exactly the sort of cryptographic signature you're after.
The other type is a JWE, and in this form the entire payload is not only signed but encrypted, so you cannot see the payload in flight.
Again, this can be decoded and verified on both the front and backend.
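For reference, a minimal sketch of that JWS flow with the jose library; the RS256 algorithm, PEM-encoded key inputs, and function names here are illustrative assumptions rather than anything taken from the article's repo:

```js
import { SignJWT, jwtVerify, importPKCS8, importSPKI } from 'jose';

const alg = 'RS256';

// Backend: wrap the response data in a JWT signed with the RSA private key.
async function signResponse(responseData, privateKeyPem) {
  const privateKey = await importPKCS8(privateKeyPem, alg);
  return new SignJWT({ data: responseData })
    .setProtectedHeader({ alg })
    .setIssuedAt()
    .sign(privateKey);
}

// Frontend: verify with the public key only; any tampering makes jwtVerify throw.
async function readResponse(token, publicKeyPem) {
  const publicKey = await importSPKI(publicKeyPem, alg);
  const { payload } = await jwtVerify(token, publicKey);
  return payload.data;
}
```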
Cool. JWS seems to work like what I did. Could've saved me some time, but I still enjoyed building this as I learned a lot.
With JWE I suppose the front-end would need to have the secret, so it wouldn't really help. But I guess it can be good for server-to-server communication?
Thanks for the info.
Both JWS and JWE can work either with PSK or public private keys.
It depends on the crypto chosen.
Using RSA or Elliptic Curve would work with public/private keys, just as your solution did. With these, the front end would only need the public key to (decode JWEs &) verify the JWT.
Nothing about JWTs is limited to backend, it's just as applicable to frontend.
If admin elements were embedded in the front-end, the API "inception" to reveal them wouldn't matter; a hacker could just look in the HTML to find the form, or simply use Chrome DevTools to customize the API response with "isAdmin=true" to reveal your form. Your main issue lies in your backend.
A good rule of thumb is never trust the front end because it can be anything. It can even be the Postman instance I just started up.
Now, when you went on to the RSA, you completely lost me. It's a lot of work for little benefit, work I see as not worth it. A hacker can still send malformed requests; it just takes a little more effort, and you're right back at step 1.
Secure your backend!
It wouldn't be so simple in the case of a React app; the elements are not simply hidden in the HTML. But yes, with infinite time an attacker can figure out anything. They just don't have infinite time.
The hacker cannot manipulate responses because he can't re-sign them; the public key is the only key he has.
This is not either-or. Secure both. You shouldn't make it easy to break just because you can't make it impossible to break.
I don't mean to be rude, but I can't understand what you're trying to say.
The RSA signing code is in the front-end, right? That means a hacker can malform and create their own API requests, or inject a payload to modify the response, since they have the signing code. So it's not a matter of them having "infinite time"; it can be done in a matter of 5 minutes. That's what I'm trying to say.
For the reasons stated above is why I say: secure your backend. You say it's not one or the other; I don't have to use your web application. Like I said, I can spin up an HTTP client, extract your RSA code, and you're right back at step 1, but there's only your one backend.
You get what I'm saying? Your RSA is useless.
"I donât mean to be rude, but I canât understand what youâre trying to say"
Neither am I, but why bother replying in such an affirmative manner if you didn't even understand? That's not only rude, it's pedantic. Read the article before engaging, please.
"The RSA signing code is in the front-end right"
No. Read the article, please. The front-end VERIFIES the signature. The signing code is in the BACK END. The front-end only has the PUBLIC key.
"extract your RSA code and youâre right back at step 1"
You can VERIFY messages, you CAN'T SIGN them, which means you CAN'T CHANGE them.
"You get what Iâm saying?"
Do you?
No, I didn't mean I didn't understand your article. I understand your article; that's why I was replying affirmatively. I didn't understand your initial reply, which seemed like abstract ideas; that's what I was saying I didn't understand. I asked for clarification, then asked you to see my side by saying "you get what I'm saying", but you took it in an entirely different direction.
My last points:
Cheers
"No I didnât mean I didnât understand your article"
But you didn't understand it; you claimed twice that I was signing messages on the front-end, which in the article itself I explain is a bad idea.
About your points:
Yes, that is why securing the API is important. This is not what the article is about. The article is about the attackers faking the responses from the API.
I have never seen this being done, but I won't say it can't be done; it probably can. But so what? The application will immediately stop working as soon as you try to change the response.
You're not the first to make this claim, and I'm not saying it can't be done; it probably can, given enough time. But how? The professional pentesters couldn't break it, and they had two full days to try, plus full knowledge of how the solution was implemented. You can't simply change the source files in your browser's devtools and have the new code be executed (you can change them, but the changes won't be reflected in the code that is actually running; test it). That's not how any of this works.
If it can be done, it is not as trivial as you're probably thinking. Which brings us to the report's conclusion: "sufficiently secured for now".
Inserting modified code into a web application is very easy to implement using almost any proxy software. For example, we can take the same Burp Suite, intercept the js file response and replace it with our modified version.
Application stops working? It's my browser, my client. Once my client downloads your application I can do whatever I want no matter what you think. If I visit your application from my browser, it will not stop working because I won't allow it.
Anyone could change the API response to anything they want, no matter what encryption or whatever fancy thing your API is sending back, because I CONTROL THE CLIENT, not you. I can change your API response to whatever I want.
Yes, you can change source files to whatever you want; I don't know why you think you can't. Where is that idea coming from? I just did it right now for dev.to, just because I can, as I would do with your site.
Again, I'm not trying to be rude, but you seem to have gaps in your knowledge of the browser based on your other responses, and you seem to put too much faith into this backend API signing function and underestimate how much control users really have. I'm trying to tell you it's trivial BECAUSE IT IS.
I want you to have a secure application at the end of the day; that's why I'm saying focus your energy where it needs to be, NOT ON THE CLIENT WHERE I HAVE FULL CONTROL and you can't do anything to stop me...
Unless... you have a secure backend.
A report of "sufficiently secured for now" is more like a false sense of security.
This was one of the things the hackers tried. This was, if not prevented, at least mitigated by SRI, CSP, and other measures that were already in place.
I am sure with enough time and effort they could eventually overcome the security layers. Eventually. In any case, the client is sufficiently secured for now.
Yeah... you haven't read the article. Nor my responses, for that matter.
We greatly limited the damage you think you could cause with your "full control". Sure, you can try to change something, but then it won't work. Enjoy your "full control" over a non-working application.
Enjoy the fake sense of security which is easily defeated by a right-click and "Inspect element"! Trust me, you haven't read my responses or anyone else's, otherwise you would understand the flaw by now. It's been pointed out like 3 times by previous commenters.
To each their own, Cheers!
I am almost tempted to give you access to the development environment of the application just to watch you fail. Sadly, it would break company rules.
You haven't read the article, you haven't read the responses, but you're 100% confident you could break this doing something you don't even know you can't do (at least not in any way remotely as trivial as you're suggesting), probably because you haven't tried.
Likewise to you, my friend; just remember you haven't properly refuted any claims that I've made, nor any that anyone else has made. You just keep repeating the same thing thinking it covers all your bases, and it doesn't; your change is next to useless. But I'm not the user (gladly), so I'll leave it at that.
I would love to get the dev environment, please do! At Google I've seen all sorts of security protocols, even broke a few myself, and seeing the details of your "front-end security" is laughable. That's why I'm warning you. But hey.
Cheers, I won't be responding after this.
This kind of sounds like security by obscurity.
Not at all. It's security by "you can't change how the application is supposed to work".
What's stopping me from making my own modified version of the client? Client side applications are not supposed to be "protected" or anything, since anyone can theoretically modify them and change the client-side behavior. If there's any secrets in the client then you're doomed already. But if everything is secured via protected API routes then there's nothing to worry about.
The post makes it sound like you're trying to protect against client-side modification through tools like Burp Suite. But that's the wrong way to look at the problem, since anything client side should basically be considered compromised. Your goal is not to block tools like Burp Suite. All that tool does is allow you to play with the requests that are made. There are many other tools out there to do things like that.
So basically it's the client side that does the signature verification, so I could simply copy your app's code using the dev tools in the browser, and then make a modified version that removes the client-side signature check. And if the client sends signed messages to the server, then I can just find the key (it has to be given to the client at some point) and make my own API requests using a custom script that adds the signature. Yes it's way more difficult, but that's the definition of security through obscurity.
In some situations it might make sense to implement something like this, say for a client-side game or something where you want to make cheating more difficult, but the moral of the story is that the client should never be trusted, and your server API is what handles security.
It is not the client sending signed messages to the API; it is the API that sends signed messages to the client. The only key the client has access to is the public key, which is not enough to sign messages. About client-signed messages: I wrote in the article that this is exactly the reason I objected to something like HMAC. It would take no effort to find the keys, and then it would definitely be security through obscurity. Protecting the API is done by validating access levels and data. The signature verification is there to prevent modification of the responses from the API to the client.
I had this exact concern, and tested extensively whether it was possible to somehow remove the line (or add a "return true") in the function in the browser's "Sources" panel, but it never worked. How would you run this modified version? You can place breakpoints, and they'll be hit, but modified code won't run, AFAIK. You couldn't simply copy the code, modify it, and execute it from localhost or another domain and have it work, because the Access-Control-Allow-Origin headers are tight. Unless you're aware of another tool that can do that?
The ethical hacker himself was unable to disable it, and he had a couple of days to mess around with it, and we explained to him how the mechanism worked, so it wasn't like he was operating from "obscurity". I'm sure that with enough time, resources, and dedication a motivated attacker can figure the system out from what is publicly available in the client, but with enough time, resources, and dedication an attacker can hack into anything, so... what's your point?
Also, this solution doesn't only block "Burp Suite"; it blocks this vector of attack as a whole. Any local proxy will fail.
You're assuming that CORS will protect your API from rogue clients. It will not. CORS protects your users from rogue clients making requests on behalf of the legitimate client. You can run a browser in an unsecured context and bypass CORS. You can install a browser extension and bypass CORS. You can call the API directly from an HTTP client and fake the origin. Please read up on what CORS tries to protect you from, because it seems you have a misconception about it.
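To make that last point concrete, a small sketch of the kind of direct request the commenter means; the URL, payload, and header values are made up purely for illustration:

```js
// CORS is enforced by browsers on behalf of their users; a plain HTTP client
// never runs those checks, and the Origin header is just text it can set freely.
const response = await fetch('https://api.example.com/users', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    Origin: 'https://app.example.com', // spoofed; the server cannot tell the difference
  },
  body: JSON.stringify({ name: 'test user', isAdmin: true }),
});

// Whatever happens next is decided only by the server-side authorization checks.
console.log(response.status);
```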
What you implemented here is still just an extra step, but no more secure than just testing if your code is running under your domain, for example. The same way I can get your code to fake the domain I can get it to fake that the signatures are correct. You just made it slightly more difficult to access admin routes.
The real thing that gave you the "secure enough" assessment was fixing the API to not allow a rogue client to create admin accounts. The whole RSA "workaround" just made it slightly harder for an attacker to instruct your client to do what the attacker wants.
I hope you can see how even the examples you brought are evidence of how the barrier to discovery was greatly elevated from this simple action alone.
Your objection is boiling down to "but with infinite time, patience and motivation an attacker will figure it out from what is publicly available", which leads to a dangerous "better to let the client be easy to break since it is impossible to make it impossible to break".
When you say it "just made it slightly harder", I wonder if maybe you're aware of something that the professional pentesters weren't? Would you mind cloning the repository and briefly explaining how you'd break it in practice? Because some of your suggestions won't work.
I hope you can take feedback; it looks like us attacking your ideas hurt your ego, as you were heavily invested in something you found really cool to learn and implement.
It does not seem like the repo you sent is representative. Host your production client somewhere and let us play with it.
People giving bad feedback doesn't hurt.
It is representative. The code is about the same.
So it is as bad as I imagined. I literally did this in 5 minutes.
If you did, how? You obviously didn't use devtools, as I'm explaining in another comment.
Care to explain what you are trying to convey with the attached image?
Just as a test to see if the changes made to the source files are applied. They're not. The browser is not webpack. You cannot change the readable source files and have the changes create a new compiled version. This is not how the devtools work.
In the image I changed the function in the source files the way you're suggesting you can, to purposefully fail the verification. If this worked, the application on the left wouldn't show any data after a page refresh. It is still working, because these changes were not reflected.
Then it does not seem you used the tool correctly. My screenshot above was taken from devtools.
Again, the browser does not need to be Webpack for this to work - you are misinterpreting how this works.
I posted the screenshot showing the correct use.
I'll go through my notifications again, but I've seen no screenshot from you. Just texts.
The browser would have to be webpack in order for you to modify readable code and have it compile into a new working version. To do something remotely similar to what you want, you would have to modify the compiled version, which is not trivial and requires an understanding of how React works under the hood. Any attempt at calling this trivial is... well, there are no polite ways of saying it, so I won't.
Modifying the compiled version is precisely what you need to do. But you don't need to understand how react works under the hood to do that.
It seems to me you're modifying the code translated from source maps, and that won't work. As pointed out, this is a misunderstanding of how the overrides tool works.
Just because you don't know how to do it doesn't mean it is not trivial, by the way.
"It seems to me you're modifying the code translated from source maps and that won't work"
Yes. I'm saying that it won't work for some 10 comments by now.
"As pointed, this is a misunderstanding of how the overrides tool works"
Yes, it IS a misunderstanding: of the people suggesting this is as something that trivially works as they naĂŻvely thought. It doesn't. You have to change the compiled code, which is not trivial.
Thanks for the write-up, but are you implying that everyone building an API + SPA should go and add this extra encryption layer on top of HTTPS/SSL?
I feel we're then sort of duplicating things, since this is what SSL/HTTPS was meant for ... if that isn't sufficient, and we really need this kind of "extra" thing on top, then would this not already have been made more or less a "standard" recommendation in this type of architecture?
Besides, well, if you know how to use Chrome DevTools then you can already "manipulate" a lot of what's being HTTP-posted to the server - you can (with some effort, but it's not really difficult) bypass most of the "checks" done by the frontend.
That's why (as others have said) you can simply never trust the client - all of the business logic, validations, authorization checks, and so on, need to be enforced server side - and if you do that, then in most cases this extra "layer" doesn't add much value, in my book.
But anyway it's interesting, and you got me thinking, not about adding this exact solution, but about "what if" scenarios (client device being hacked) and how to mitigate risks.
I agree with everything you said, but we came to a different conclusion about the value added by this layer.
It is like putting a padlock on your locker. It won't stop highly skillful and motivated attackers for long, but it is definitely not useless, because the vast majority of people won't try, and the majority of people who try will fail, and it will still take time for even specialized attackers to get through. And this time is valuable, since we're constantly improving the security of the back-end. This time could be the difference between a vulnerability being found and being patched.
Yes sure, absolutely - as with almost everything in software development, "it depends" - I can certainly imagine that there are scenarios or use cases where this is a very useful technique ... dismissing an idea too hastily is one of the most common mistakes (and something we're almost all guilty of, including myself).
Very interesting article, thank you for sharing it!
I have only a few questions.
I mean that both the frontend and backend applications are accessible only through the HTTPS protocol. They're on different domains, and each has its own certificate.
I had not heard of it before; however, I just looked up what it is, and I'm not sure it would solve the problem. The hacker has access to the certificate his browser would trust, and he somehow imported it into his tool. He is not sending a fake certificate; he is sending a trusted certificate (as far as I understood his explanation).
I think the hacker would need to "compromise" the user's browser in some way; for example, the hacker could install a fake CA root certificate in the user's browser, otherwise he would not be able to tamper with the request/response.
SSL pinning does just that; in fact, even if the hacker is able to compromise the user's browser, given that the server's SSL certificate is pinned inside your application, the response can't be tampered with without your application noticing it.
Think of this attack as a malicious user trying to break things to his advantage (the tool is used by the company to calculate a yearly bonus paid to each employee based on their performance, so there is motivation to try). In this case, the user's browser is the hacker's browser.
In a sense it is not a "man in the middle", because it is not a third-party, it's the user himself trying to mess around.
At the risk of gathering more attention (!), now that we know more about the context and threat model here (i.e. the legitimate users are the likely attackers), are there other risk-mitigating controls that you have, or could have, to reduce the risk to the business? Things that come to mind (in no particular order):
These are awesome suggestions, thank you very much.
The API has exponential throttling for the same IP or same user (it helped us check the DoS box). We log requests that are answered with 403 (Forbidden). I'll talk to devops to see if they can set up some sort of alert on it. It will definitely be helpful.
Some actions are auditable and revertible. Not all, though; we can definitely improve that.
Your third suggestion is excellent. We've been planning on integrating the app with the company's support platform, and having grants be handled by tickets flowing through a series of approvals. Gotta carefully secure that communication, though.
The last point is something we already do. Developers have no admin access in production.
I spent a few hours reading the article and all the comments.
In part, I liked the article; it brings up a good discussion, and it let me stop and see what an implementation of a JWT library would look like (I agree with the person who said that; you basically implemented a JWT), and I found the way you did it interesting.
However, I see a bit of myself in the author, not the "me of today" but the "me of the past". I was also a developer like that, very sure of myself, who thought that what I did was what was good, period, who didn't accept criticism, etc... But life taught me that it isn't quite like that, and that other people have a lot to contribute to my growth; all I had to do was let myself listen and drink from the knowledge of others.
In short, there is no single right way to do something, but there are many right ways to reach the same result, and it was by observing each approach, each experience, and each piece of advice that I am who I am today.
I'm not the one who is going to tell you that what you did has no value, or that it doesn't amount to much when the real solution is in the back-end (and it is), if you don't want to be convinced of that. What I can really tell you is: listen to (in this case, read) people; whether they are more experienced than you or not, they will always bring you some light and relevant questions. Even my students who knew the least about development have taught me something, so absorb it.
Recognizing that you made a "not-so-good" decision is not a defeat, but a lesson. Next time, you will already know which path "not to follow", and you can grow your knowledge base from there.