Credits to https://blog.1password.com/what-is-public-key-cryptography/ for the cool image.
TL;DR:
Check this repository for a simple example in NextJS of how to achieve this. Reading the article is recommended, though, for context on why this is useful. Don't forget to give a star to the repository 😁.
Disclaimer
Despite having worked as a software engineer for the past decade, I am not a cryptographer or a cybersecurity specialist. I’m sharing this from the perspective of a developer who was tasked with fixing a bug. I recommend doing your own research on the subject, and always inviting ethical hackers to pentest your applications. Always rely on experts when it comes to security.
Introduction
Recently, the application I’ve been working on for a little more than a year went through a “pentest” (Penetration Test, where hired ethical hackers try to break into your application and report its weaknesses so you can fix them; a very useful tactic for cybersecurity). It was the first time this system was put through such a procedure.
The System
The system is composed of a front-end SPA built with ReactJS and a back-end API built with Node.JS. As a software engineer with some 10 years of experience under my belt, I designed both to be resistant to the usual culprits:
- SQL Injection;
- XSS;
- CSRF;
- DoS;
- MITM attacks;
I won’t focus on those, but I recommend extensively researching any of the above terms you’re not familiar with. I was confident, but I was in for a wild ride.
The Report
All of these security measures were praised in the final report. However, there was one attack that managed to get through: a particular form of man-in-the-middle attack that allowed the hacker to escalate his access level.
The application itself is protected using SSL certificates on both ends, so the data was pretty secure while in transit. However, the hacker used a specialized tool called Burp Suite to set up a proxy on his machine, with the tool’s certificate installed in his browser. This proxy routes the network requests to and from the tool and makes both ends believe the traffic is legitimately coming from each other. This allowed him to modify any data he wanted.
The Attack
He could effectively fake what the API was sending back to the browser, or fake what the browser was sending to the API. So it isn't exactly a... man... in the middle. It wasn't a third party stealing or changing the information, but it was still a new layer in between that allowed an attacker to do things the application probably isn't expecting him to be able to do, and this can break things.
I had never seen such an attack before. I didn't even think it was possible. My fault, really, as the hacker said this is a very common attack vector against SPAs, which must rely on information passing through the network to determine what the user can see and do (such as showing a button that only an admin should see, for example).
From there, all the hacker had to do was figure out what-is-what in the responses to make the browser believe he was an admin (for example, changing an "isAdmin" property from "false" to "true"). Now he could see some things he wasn’t supposed to see, such as restricted pages and buttons. However, since the back-end validates whether the person requesting administrative data or performing administrative actions is an admin, there wasn’t much he could do with this power... we thought... until he found a weak spot.
It was a form that allowed us to quickly create new test users. It was a feature no normal user was supposed to ever see, and one that was supposed to be removed after development, so we never bothered protecting it; and since the body of the request was specifically creating a "normal user", we never stopped to think about the security implications. It was never removed; we forgot about it.
Then the hacker used the proxy to modify the body of the request, and managed to create a new user with true admin power. He logged in with this new user and the system was in his hands.
I know, it was a bunch of stupid mistakes, but are all your endpoints protected? Are you SURE? Because I was “pretty sure”. Pretty sure is not enough. Go double-check them now.
The Debate - Damage Control
Obviously, the first thing we did was delete his admin account and properly gate the endpoint he used to create the user, requiring admin access and preventing it from accepting the parameters that would give a new user admin access. It turns out we still needed that form for some tests and didn't want to delete it just yet. We also did a sweep of other endpoints related to development productivity to confirm they were all gated behind admin access, and fixed those that weren't.
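Purely as an illustration of the kind of gating described above (this is not our actual code, and the route, field names, and the createUser helper are made up), an admin check in an Express-style API could look something like this:
const requireAdmin = (req, res, next) => {
  // Assumes an authentication layer has already populated req.user
  if (!req.user || !req.user.isAdmin) {
    return res.status(403).send({ error: 'Forbidden' })
  }
  next()
}
// The test-user endpoint is now gated and ignores any role-related fields in the body
app.post('/api/test-users', requireAdmin, async (req, res) => {
  const { name, email } = req.body // role fields are deliberately not read from the body
  await createUser({ name, email, isAdmin: false }) // createUser is a hypothetical helper
  res.status(201).send({ ok: true })
})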
The Debate - SSR?
The cat was out of the bag. We needed a solution. We still had to prevent attackers from seeing pages and buttons they weren't supposed to see. Moving the whole React app to a NextJS instance was considered, so we could rely on SSR to process the ACL. Basically, we would check which components the user should be able to see on the server side; this information would not be sent through the network, so it couldn’t be faked. This is likely the best approach to solving this, and it will be done in the near future, but it would be very time-consuming (and isn't always viable), and we needed a solution fast.
The Debate - What would the solution even look like?
So, we needed a way to verify that the message sent by the API was not tampered with. Obviously we needed some form of cryptography. Someone suggested HMAC, but the message couldn’t simply be signed using a secret shared by both sides: since the hacker had access to the source code in his browser, he could easily find the secret and use it to sign any tampered response. So HMAC (and pretty much any form of symmetric cryptography) was out of the question. I needed a way to sign a message on one side, with the other side being able to verify that the signature is valid without being able to sign a message itself.
The Debate - The solution
Then we realized: this sounds a lot like a public-private key pair, like the ones we use for SSH! We would have a private key that stays in the API's environment, which we use to sign the response, and a public key compiled into the front end to verify the signature. This is called asymmetric cryptography. BINGO! We would need to implement something like RSA keys to sign and verify the messages. How difficult could it be? Turns out… very difficult. At least if you, like me back then, have no idea where to even start.
The implementation - Creating the keys
After hours of trial and error with several different commands (such as using ssh-keygen and then exporting the public key to the PEM format), I managed to find the commands that create the keys properly. I’m not a cryptographer and can’t explain in detail why the other commands I tried were failing later, in the process of importing the keys, but from my research I could conclude that there are several different formats of keys (OpenSSH, PKCS#1, PKCS#8, SPKI…), and the ones used for SSH are not in the same format as the ones created by the working commands.
These are the ones that worked.
For the private key:
openssl genrsa -out private-key-name.pem 3072
For the public key:
openssl rsa -in private-key-name.pem -pubout -out public-key-name.pem
You can change the number of bits in the first command; it sets the size of the RSA modulus, the product of the two gigantic prime numbers used by the algorithm. Keep in mind that you will have to change some other things later. As a rule of thumb, more bits = more security, but less speed.
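If you want a quick sanity check on the generated files (not required, just standard openssl inspection), you can run:
openssl rsa -in private-key-name.pem -check -noout
openssl rsa -pubin -in public-key-name.pem -text -noout
The public key file should start with -----BEGIN PUBLIC KEY-----, which is the SPKI encoding; this matters later, because the Web Crypto API import on the front end expects the "spki" format.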
The implementation - The Back-end
Implementing this on the back-end was very straightforward. NodeJS has a core module named crypto that can be used to sign a message in a few lines of code.
I wrote a simple response wrapper to do this. It expects an input that looks something like this:
{ b: 1, c: 3, a: 2 }
And its output will look something like this:
{
content: { b: 1, c: 3, a: 2 },
signature: "aBc123dEf456"
}
But I immediately ran into problems, which I’ll quickly go through, as well as briefly explain how I solved them.
- When you stringify JavaScript objects into JSON, they don’t always keep their “shape” letter for letter. The content remains the same, but sometimes properties appear in a different order. This is expected behavior for JSON and is documented in its definition, but if we are going to use the string as a message to be signed, it MUST be identical, letter for letter. I found this function, which can be passed as the second argument to JSON.stringify to achieve exactly what we need; it orders the properties alphabetically, so we can count on them always being stringified in the same order. This is what the function looks like.
export const deterministicReplacer = (_, v) => {
return typeof v !== 'object' || v === null || Array.isArray(v) ? v : Object.fromEntries(Object.entries(v).sort(([ka], [kb]) => {
return ka < kb ? -1 : ka > kb ? 1 : 0
}))
}
const message = JSON.stringify({ b: 2, c: 1, a: 3 }, deterministicReplacer)
// Will always output a predictable {"a":3,"b":2,"c":1}
- Just to avoid dealing with quotes and brackets, which were causing headaches by sometimes being “escaped” differently in some situations and producing different strings, I decided to encode the whole stringified JSON into base64. And this worked initially.
Buffer.from(message, 'ascii').toString('base64')
- Later I had problems because I was reading the input string as ASCII. It turns out that if the message contains any character that takes more than one byte to encode (such as an emoji or a bullet point), that process produces a bad signature that the front-end is unable to verify. The solution was to use UTF-8 instead of ASCII, but this required modifications to how things were processed in the front end. More on this later.
Buffer.from(message, 'utf-8').toString('base64')
This is what the final working code for the back end part looks like:
import crypto from 'crypto'
import { deterministicReplacer } from '@/utils/helpers'
export const signContent = (content) => {
const privateKey = process.env.PRIVATE_KEY
if (!privateKey) {
throw new Error('The environmental variable PRIVATE_KEY must be set')
}
const signer = crypto.createSign('RSA-SHA256')
const message = JSON.stringify(content, deterministicReplacer)
const base64Msg = Buffer.from(message, 'utf-8').toString('base64')
signer.update(base64Msg)
const signature = signer.sign(privateKey, 'base64')
return signature
}
export const respondSignedContent = (res, code = 200, content = {}) => {
const signature = signContent(content)
res.status(code).send({ content, signature })
}
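For context, this is roughly how the wrapper could be used in a route handler. It is an illustrative, Express-style sketch; the route and the getUserForRequest helper are made up, not part of the real application:
app.get('/api/user', async (req, res) => {
  // Hypothetical helper that resolves the authenticated user for this request
  const user = await getUserForRequest(req)
  // Every response goes out with its signature attached
  respondSignedContent(res, 200, { name: user.name, isAdmin: user.isAdmin })
})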
The implementation - The front-end
The plan was simple:
- Receive the response with the content and the signature.
- Deterministically stringify the content (using the same deterministicReplacer function we used in the back-end).
- Encode it in base64 as a UTF-8 string, just like in the back-end.
- Import the public key.
- Use the public key to verify this message against the signature in the response.
- Reject the response if verification fails.
I searched around for crypto-like libraries for the front-end and tried some of them, but in the end came up empty-handed. It turns out Node's crypto module is built on native code and can’t run in the browser, so I decided to use the native Web Crypto API, which works well in modern browsers.
The code for steps 1-3 is quite long and uses a few nearly unreadable functions I found around the internet, then modified and combined to normalize the data into the format that is needed. To see it in full, I recommend going directly to the files rsa.ts and helpers.ts.
For steps 4-5, I studied the Web Crypto API docs and figured out that the function to import the public key expects the data in the form of an ArrayBuffer (among other formats; check the docs for reference). The keys naturally come with a header, a footer, and a body encoded in base64 (the actual content of the key). Since base64 is plain ASCII, we can just use the window.atob function. We need to strip the header and footer, and then decode the body to get to its binary data.
This is what it looks like in code.
function textToUi8Arr(text: string): Uint8Array {
let bufView = new Uint8Array(text.length)
for (let i = 0; i < text.length; i++) {
bufView[i] = text.charCodeAt(i)
}
return bufView
}
function base64StringToArrayBuffer(b64str: string): ArrayBufferLike {
const byteStr = window.atob(b64str)
return textToUi8Arr(byteStr).buffer
}
function convertPemToArrayBuffer(pem: string): ArrayBufferLike {
const lines = pem.split('\n')
let encoded = ''
for (let i = 0; i < lines.length; i++) {
if (lines[i].trim().length > 0 &&
lines[i].indexOf('-BEGIN RSA PUBLIC KEY-') < 0 &&
lines[i].indexOf('-BEGIN RSA PRIVATE KEY-') < 0 &&
lines[i].indexOf('-BEGIN PUBLIC KEY-') < 0 &&
lines[i].indexOf('-BEGIN PRIVATE KEY-') < 0 &&
lines[i].indexOf('-END RSA PUBLIC KEY-') < 0 &&
lines[i].indexOf('-END RSA PRIVATE KEY-') < 0 &&
lines[i].indexOf('-END PUBLIC KEY-') < 0 &&
lines[i].indexOf('-END PRIVATE KEY-') < 0
) {
encoded += lines[i].trim()
}
}
return base64StringToArrayBuffer(encoded)
}
The final code to import the key looks like this:
const PUBLIC_KEY = process.env.NEXT_PUBLIC_PUBLIC_KEY
const keyConfig = {
name: "RSASSA-PKCS1-v1_5",
hash: {
name: "SHA-256"
},
modulusLength: 3072, //The same number of bits used to create the key
extractable: false,
publicExponent: new Uint8Array([0x01, 0x00, 0x01])
}
async function importPublicKey(): Promise<CryptoKey | null> {
if (!PUBLIC_KEY) {
return null
}
const arrBufPublicKey = convertPemToArrayBuffer(PUBLIC_KEY)
const key = await crypto.subtle.importKey(
"spki", //has to be spki for importing public keys
arrBufPublicKey,
keyConfig,
false, //false because we aren't exporting the key, just using it
["verify"] //has to be "verify" because public keys can't "sign"
).catch((e) => {
console.log(e)
return null
})
return key
}
We can then use it to verify the content and signature of the response like so:
async function verifyIfIsValid(
pub: CryptoKey,
sig: ArrayBufferLike,
data: ArrayBufferLike
) {
return crypto.subtle.verify(keyConfig, pub, sig, data).catch((e) => {
console.log('error in verification', e)
return false
})
}
export const verifySignature = async (message: any, signature: string) => {
const publicKey = await importPublicKey()
if (!publicKey) {
return false //or throw an error
}
const msgArrBuf = stringifyAndBufferifyData(message)
const sigArrBuf = base64StringToArrayBuffer(signature)
const isValid = await verifyIfIsValid(publicKey, sigArrBuf, msgArrBuf)
return isValid
}
Check the files rsa.ts and helpers.ts linked above to see the implementation of stringifyAndBufferifyData.
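For readers who don't want to open the repository, here is a minimal sketch of what stringifyAndBufferifyData could look like, assuming it simply mirrors the back-end steps (deterministic stringify, UTF-8 encode, base64 encode, then turn that base64 string into an ArrayBuffer); the real implementation is in the linked files:
function stringifyAndBufferifyData(data: any): ArrayBufferLike {
  // 1. Deterministic stringify, same replacer as the back-end
  const message = JSON.stringify(data, deterministicReplacer)
  // 2. Encode the string as UTF-8 bytes (mirrors Buffer.from(message, 'utf-8'))
  const utf8Bytes = new TextEncoder().encode(message)
  // 3. Base64-encode those bytes (mirrors .toString('base64') on the back-end)
  let binary = ''
  utf8Bytes.forEach((b) => { binary += String.fromCharCode(b) })
  const base64Msg = window.btoa(binary)
  // 4. The back-end signed the base64 string itself, so its bytes are what we verify against
  return textToUi8Arr(base64Msg).buffer
}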
Finally, for step 6, just use the verifySignature function and either throw an error or do something else to reject the response.
const [user, setUser] = useState<User>()
const [isLoading, setIsLoading] = useState<boolean>(false)
const [isRejected, setIsRejected] = useState<boolean>(false)
useEffect(() => {
(async function () {
setIsLoading(true)
const res = await fetch('/api/user')
const data = await res.json()
const signatureVerified = await verifySignature(data.content, data.signature)
setIsLoading(false)
if (!signatureVerified) {
setIsRejected(true)
return
}
setUser(data.content)
})()
}, [])
This is obviously just an example. In our implementation, we wrote this verification step into the “base request” that handles all requests in the application, and we throw an error that displays a warning saying the response was rejected whenever the verification fails.
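As an illustration of that “base request” idea (signedFetch and SignatureError are hypothetical names, not our real code), it could look something like this:
class SignatureError extends Error {}

export async function signedFetch<T>(input: RequestInfo, init?: RequestInit): Promise<T> {
  const res = await fetch(input, init)
  const data = await res.json()
  const isValid = await verifySignature(data.content, data.signature)
  if (!isValid) {
    // Reject the response: the content does not match the signature
    throw new SignatureError('Response was rejected: signature verification failed')
  }
  return data.content as T
}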
And that’s how you do it. 😊
Notes on Performance
We thought this could heavily impact the performance of the API, but the difference in response times was imperceptible: on average less than 10ms for our 3072-bit key (and a bit less than 20ms for a 4096-bit key). And since the same message always produces the same signature, a caching mechanism could easily be implemented to improve performance on “hot” endpoints if this ever becomes a problem. With this configuration the signature is always a 512-character base64 string, so expect each response to grow by roughly that much; the actual increase in network traffic is lower thanks to compression. In the example, the response for the {"name":"John Doe"} JSON ended up with 130 bytes. We decided it was an acceptable compromise.
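As a sketch of that caching idea (not something we actually implemented), since identical content always produces the same signature, the signer can be memoized on the deterministic message string:
const signatureCache = new Map()

export const signContentCached = (content) => {
  const message = JSON.stringify(content, deterministicReplacer)
  const cached = signatureCache.get(message)
  if (cached) return cached
  const signature = signContent(content) // reuses the signer shown earlier
  // In a real setup you would want to bound this cache (LRU, TTL, etc.)
  signatureCache.set(message, signature)
  return signature
}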
The Result
The same ethical hacker was invited to try to attack the application again, and this time, he was unable to. The verification of the signature failed as soon as he tried to change something. He messed around with it for a couple of days and later reported he couldn’t break this. The application was declared sufficiently secure... for now.
Conclusion
This works, but I'm not going to lie: not finding comprehensive material on how to do this for this purpose made me question whether this is even a good solution. I thought of sharing this mostly as a way to have it analyzed and/or criticized by people wiser than myself, but more importantly, as a way to warn other developers about this attack vector. I also wanted to help others implement a possible solution, since it took me a couple of days of trial and error until I was able to figure out how to make everything work together. I hope this saves you some time.
All of this has been condensed into a simplified approach in NextJS and is available in this repository.
Please leave a star on it if you find it helpful or useful.
Please feel completely free to criticize this. As I said, I am not a cryptographer or a cybersec specialist, and will appreciate any feedback.
Latest comments (134)
I spent a few hours reading the article and all the comments.
I partly liked the article; it raises a good discussion, and I got to see what an implementation of a JWT library would look like (I agree with the person who said that; you basically implemented a JWT), and I found the way you did it interesting.
However, I see a bit of myself in the author, not the "me of today" but the "me of the past". I was also a developer like that, very sure of himself, who thought that what I did was what was good and that was that, who didn't accept criticism, etc... But I learned from life that it isn't quite like that, and that other people have a lot to contribute to my growth; I just had to let myself listen and drink from others' knowledge.
In short, there is no single right way to do something, but there are several right ways to reach the same result, and it was by observing each approach, each experience and each piece of advice that I am who I am today.
I'm not going to be the one to tell you that what you did has no value or doesn't amount to much when the real solution is on the back-end (and it is), if you don't want to be convinced of that. What I can really tell you is: listen to (in this case, read) people; whether they are more experienced than you or not, they will always bring you some light and relevant questions. Even my most novice development students have taught me something, so take it in.
Recognizing that you made a "not so good" decision is not a defeat, but a lesson learned. Next time you will already know which path not to take, and you can grow your knowledge base from there.
I don't; this is what my implementation does, but you're saying it is useless when it is exactly what JWT does. I just didn't know then that it could be verified with a public key, so I built my own.
Then my implementation does EXACTLY what JWT does. Yet you're bashing it as if it's useless.
Then your code simply won't work at all. You should know that the names of the functions and variables in the compiled code won't be the same as in the source code. Simply targeting a "setState" function won't accomplish what you believe it will.
You failed to see that this is exactly what JWT does. I just didn't know JWT could be used like that before, and ended up creating my own implementation of "JWT". I will probably migrate the whole thing to JWT, but everything that I learned during the process was still valuable; including that JWT is not 100% secure, for the same reasons presented against my implementation in this discussion.
It is silly and naïve to believe users can't fiddle around just because they don't know how to use hacking tools. They could modify the responses in devtools, for example. Trying to do that will break the application due to this measure.
I came here looking for feedback and criticism, and valuable feedback and criticism was provided. Just not by you. It seems you can't take feedback about your ability to provide feedback.
Your pull request ultimately would not disable the signature verification, but not only that, it would probably not do anything at all, since you're changing the React source code and not the compiled version the browser actually reads. As I said several other times, the browser is not webpack and it won't compile a new version for you. You would have to go deeper.
You completely failed to understand that the potential attackers are legitimate users with no tech skills but incentives to fiddle around. Any hacker who could bypass the signature could also dig into the source code to find the endpoints and then continue from Postman or another such tool. That is NOT who we are protecting against. It is silly to say things like "security should be on the backend" as an objection to this, because not only does it completely miss the point, it also supposes (obviously ignoring the article, where I explain the hundreds of hours that were already invested into securing the app) that security in the backend is being ignored. Do NOT overestimate your ability to make something safe. "Stupid mistakes" like the ones described in the article are present everywhere.
MFA makes no difference in this context; it is already enforced for all users. The SPA does check the token on the server, but the communication can be intercepted and changed. You can't see how spoofing the responses is related because, as your PR suggests, you did not understand what the problem is.
I should not update the post because, if you read it, you'll see there is both a disclaimer and a conclusion about that. You simply didn't read it.
Finally, as I have learned in other comments, what I "hacked together", as you call it, is a simplified version of JWT, which is industry standard, so at this point I can't understand your position. We hired experts and we trust them to say that this is a critical issue; as you said, this is not your job, but it is theirs.
Can it be done with a public key, or do I need to send the secret to the front end?
Yes, a header, the content, and the signature, but you don't need to validate the signature to decode the content; you just need to parse it as base64. That is what got me confused.
Have fun reading: dev.to/victorwm/how-i-trivially-by...
Read the article, comments and even the simple repo, and still don't understand the point of all this.
First, not related to the security problem but to the implementation of this "fix": so you basically did some form of JWT, so why didn't you just use the JWT protocol in the first place, which you said you already have for authorization? Your server can send a signed JWT token (the payload of which can be whatever your server needs; it's not restricted to auth use cases only, so in this case JSON.stringify(responseData)). And your client can just decode/verify it. If the current user-hacker tries to change this JWT token or its payload, it will fail. These are 2 lines of code, one in the server and one in the client, using the right libs, which apparently you already use for the authentication part.
Second, it would be best to describe what your app is doing, but from what I figured it's something like:
If this is the case and you (or your bosses) think that you've "secured" it with what you've done, then obviously there's no need for anyone to convince you otherwise. If this is not the situation, then just explain what you are actually trying to protect, and people will be happy to provide guidance and help.
I need to verify the signature on the client, and JWT verifies it on the server (at least, that is how I learned it). This doesn't help in this case, because the hacker can intercept any attempt to contact the server to validate the signature and fake the response saying it passed.
I came across the "jose js" repository recently and it seems there is something "like" what I did there, but I haven't been able to make time to get to know it yet.
I can't disclose details about the application. But it is like a 360-evaluation tool, and people's final score is related to their bonus. If, by messing around, they find a way to modify their scores, this could impact their bonus.
The hackers reported this as a critical issue because of the profile of the potential attackers: employees with low tech skills and good incentives to mess around. Looking back, maybe I should have made it clearer in the article. I expected people to just "get it", but I guess I shouldn't have. Lesson learned.
Many people have provided helpful guidance, and I gathered a lot of useful information to discuss with the team. We're fuzzing the API to battle test our validations, for example.
The JWT's payload can be verified anywhere; successfully decoding it is actually the verification. If the payload is tampered with, then decoding/parsing it will fail. It is most likely what you already do with the auth JWT: you receive from the server a JWT with, let's say, payload claims like "user:xxx", "admin:false", "prop:value", so the client verifies it by successfully decoding it and sees "Aha, the payload says user:xxx, prop:value, ..." and so on. If someone, no matter who, a man-in-the-middle or the same user, tampers with it and tries to put "user:yyy", "admin:true", then the decoding will just not be possible. Read about it more properly on jwt.io/ ; I'm not a native English speaker.
Thanks, I'll read it, but as I understand it, decoding a JWT is simply parsing its content as base64; it would still need the secret to validate it, so that's why it happens on the backend... perhaps I'm missing something, so I'll look into it. It is possible that JWT accomplishes what I needed, but we simply didn't know it at the time.
Thank you very much.
There are two main types of JWT, and within those there's a selection of cryptographic ciphers you can use.
You can sign a JWT with an RSA private key on your backend and verify it using a public key on your frontend, or any on any API endpoint.
That type is JWS, and as you mentioned, this version is just base64-encoded data, but with exactly the sort of cryptographic signature you're after.
The other type is a JWE, and in this form the entire payload is not only signed but encrypted, so you cannot see the payload in flight.
Again, this can be decoded and verified on both the front and backend.
Cool. JWS seems to work like what I did. Could've saved me some time, but I still enjoyed building this as I learned a lot.
With JWE I suppose the front end would need to have the secret, so it wouldn't really help. But I guess it can be good for server-to-server communication?
Thanks for the info.
Both JWS and JWE can work either with a PSK or with public/private keys.
It depends on the crypto chosen.
Using RSA or elliptic curve crypto would work with public/private keys, just as your solution did. With these, the front end would only need the public key to (decode JWEs &) verify the JWT.
Nothing about JWTs is limited to backend, it's just as applicable to frontend.
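For example, a minimal sketch with the jose library mentioned earlier (assuming an RS256 key pair in PKCS#8/SPKI PEM; the payload and PUBLIC_KEY_PEM are illustrative) could look like:
import { SignJWT, jwtVerify, importPKCS8, importSPKI } from 'jose'

// Server side: sign the response payload as a JWS (RS256), assuming a PKCS#8 PEM private key
const privateKey = await importPKCS8(process.env.PRIVATE_KEY, 'RS256')
const token = await new SignJWT({ content: { name: 'John Doe', isAdmin: false } })
  .setProtectedHeader({ alg: 'RS256' })
  .sign(privateKey)

// Client side: verify with the public key; a tampered payload fails verification
const publicKey = await importSPKI(PUBLIC_KEY_PEM, 'RS256')
const { payload } = await jwtVerify(token, publicKey)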
These are awesome suggestions, thank you very much.
The API has exponential throttling for the same IP or same user (it helped us check the DoS box). We log requests answered with 403 (Forbidden). I'll talk to devops to see if they can set some sort of alert on it. It will definitely be helpful.
Some actions are auditable and revertible. Not all of them, though; we can definitely improve that.
Your third suggestion is excellent. We've been planning on integrating the app with the company's support platform, and having grants be handled by tickets flowing through a series of approvals. Gotta carefully secure that communication, though.
The last point is something we already do. Developers have no admin access in production.