I recently found a tweet linked in a Discord server I frequent:
> **M💀rtalhys** (@mortalhys): Really, @ppentestlabs? You're storing passwords in clear text? Asked for a password reminder and got sent the password that I've stored in my password safe. @PWTooStrong

> **Practical Pentest Labs** (@ppentestlabs), 11:02 AM, 06 Dec 2019: We don't allow users to pick passwords so that we don't store any of your sensitive information. Instead, passwords are randomly generated by the system and they need to be stored in plaintext so that we can send you the reminder in case you forgot it. We hope this makes sense.
I initially read it as very similar to other cases of plaintext password storage I had seen before: the application stores passwords in plaintext, an automated system emails the plaintext password back to the user for account recovery, and the developers get publicly humiliated for not putting enough effort into account security.
However, this situation is a bit different, because the passwords are generated by the application rather than supplied by the user. To be clear, this doesn't mean the application is "in the right" (they have definitely done several things wrong here), but it does provide some interesting opportunities to think about threat models in account security.
To establish a baseline to talk about this, let's first describe what an "ideal" setup for account security looks like.
The obvious #1 rule when designing or coding a login system is "Never store passwords in plaintext", or put another way, "Always hash your passwords". If your application's storage is compromised, hashes are of little use to attackers, while plaintext passwords are immediately usable. Salting is an extra step that should also be taken so that two users with the same password don't end up with the same hash, which also defeats precomputed (rainbow-table) attacks. Finally, the hashing algorithm should be an accepted password-hashing standard (like bcrypt), and the application should use an externally tested library to perform the hash rather than a home-rolled implementation.
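To make the hash-and-salt idea concrete, here is a minimal sketch using PBKDF2 from Python's standard library. (The recommendation above of bcrypt via a vetted library still stands; PBKDF2 is used here only because it ships with Python, and the function names are my own.)

```python
import hashlib
import hmac
import os

def hash_password(password: str, salt: bytes = None) -> tuple:
    """Return (salt, hash). A fresh random salt per user means two users
    with the same password still end up with different hashes."""
    if salt is None:
        salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest

def verify_password(password: str, salt: bytes, expected: bytes) -> bool:
    # Recompute the hash with the stored salt and compare in constant time.
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return hmac.compare_digest(digest, expected)
```

The application only ever stores `(salt, digest)` pairs; at no point does it need the plaintext password after login, which is exactly why "send you the reminder" is impossible under this scheme (password *reset*, not *recovery*, is the correct feature).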
If you are not familiar with the reasoning behind this (or the definitions of hashing/salting), this video is a great introduction:
Normally, instances of plaintext password storage have similar threat models:
An attacker must have at least read-access to the system storing account information.
The attacker can then see passwords in plaintext, allowing them to know the actual given password of any standard user.
The attacker can then authenticate to the application as any user, and do whatever operations that user can normally do (e.g. see account information, perform actions as that user, etc).
If privileged accounts (e.g. administrators) are stored/authenticated against in the same way, then the attacker can authenticate to the application as the privileged account and perform privileged operations (e.g. see system information, make wide-ranging changes to the application, set up further backdoors, etc).
An attacker can try the same password for a user on other services, to authenticate as that user there. The attacker doesn't even have to know that the user reuses passwords - they can take a "spray and pray" approach (commonly called credential stuffing) and see whether they can get into any accounts on other services, especially if they are targeting a specific user.
If the attacker is able to authenticate as the user to an email provider (via the previous point), then they can potentially gain access to any application that uses that email account for account recovery. Thus, if the user reuses the same password for app A (the original target app) and their email, but not for app B (another app), the attacker could still get into app B by abusing account recovery.
These effects are all at least partially mitigated by users not reusing passwords, and using multi-factor authentication whenever available.
So, what's different with Practical Pentest Labs? The original tweet and subsequent explanations indicate several factors that PPL believes mitigate the vulnerability:
Users' passwords are randomly generated upon account creation, instead of being supplied by users.
No payment information is stored, as they use Paypal for payment.
The password is only used to authenticate to taking labs in the application.
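For reference, generating a password server-side is trivial with Python's `secrets` module. This is a hypothetical sketch (PPL's actual generation code is unknown), and note that nothing about generating the password randomly prevents hashing it afterwards:

```python
import string
import secrets

# 62-character alphabet: a-z, A-Z, 0-9
ALPHABET = string.ascii_letters + string.digits

def generate_password(length: int = 16) -> str:
    # secrets.choice is cryptographically secure, unlike random.choice.
    # At length 16 this gives roughly 95 bits of entropy.
    return "".join(secrets.choice(ALPHABET) for _ in range(length))
```

A password like this is effectively unguessable - which makes it all the stranger to then store it in plaintext, where its strength buys nothing against a database compromise.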
I want to focus on the first item above, as it's by far the most interesting. Number two, in the grand scheme of things, isn't very important. It's obviously good that they're not storing payment information when users can potentially authenticate as other users, but the threat model above shows there are bigger issues than "I can read someone else's payment information" - and it certainly doesn't make plaintext storage okay when doing it securely via hashes and salts is so easy. Number three is barely a mitigation at all, and I'm not sure why it was offered as one - of course the password is used to authenticate so that you can do what you're supposed to do in the app.
The first mitigating factor above brings with it a very important factor for the threat model: Users cannot pick their passwords; therefore they cannot pick the same password that they use for another service.
This effectively eliminates several points from the threat model above:
The attacker cannot authenticate to another application that the user uses the same password on (since they won't use the same password on it).
The attacker cannot abuse account recovery in other applications by authenticating to a user's email account (since they won't use the same password between PPL and their email).
This effectively means that any "damage" caused by this vulnerability is restricted in scope to the target application (PPL). I took a look at the PPL website and made an account, and mitigating factor 3 above isn't exactly 100% accurate - there's also a forum on the site. So let's look at all the actions one could take from another user's account:
Log into the dashboard as the other user (By itself this is inconsequential).
Authenticate to the PPL VPN as the other user (using the VPN configuration file available from the dashboard).
Access free labs as the other user (Though this would be free to access in your own account, so this is inconsequential).
Access paid labs as the other user, if the other user has paid for them.
Start the free trial for paid labs as the other user, if the other user has not paid for them (Same caveat as the free labs).
Post to the forums as the other user (Including potentially getting the other user banned if you post the right things to the forums).
I should also note that there is an account called "admin" that appears to be treated as a regular account on the forums. If its credentials are stored in the same way, then an attacker could at least post to the forums as the administrator, and potentially also perform administrative forum operations (like setting a thread to "sticky") if those options become available after authenticating as "admin".
Thus, it looks like the three worst possible threats from this model are:
Perform forum operations as "admin" (If the account is indeed treated the same way as other accounts, but given more privileges in the application).
Get another user banned, potentially after they have spent $43 on a VIP membership.
Access the $43 labs without paying $43 (Which doesn't negatively impact the target user, but does negatively impact PPL).
The given reason for the design choice in question is "so that we don't store any of your sensitive information". They elaborate later:
> **Practical Pentest Labs** (@ppentestlabs), 13:59, 06 Dec 2019: We would rather store a system generated string in our database than allow you to input yours and store it in an encrypted format. If we encrypted the password that would mean we do have your password, it's just encrypted and there is a chance even 0.01% for it to be decrypted.
First off, I want to fully refute the idea of a "0.01% chance for it to be decrypted". Proper hashing algorithms combined with good password requirements put crack times on the order of tens to hundreds of years (or far longer), even with multi-GPU cracking rigs. And if you're properly salting hashes, that's the time needed to get into one particular account - you can't test a candidate password against every row in the table at once, because each row's hash was generated with a different salt. If this were anyone other than a company in the infosec domain, I might give them a pass, but using "you could theoretically crack bcrypt given enough time" as an argument against hashing passwords is blatantly dishonest, and one of the few aspects of this case that genuinely makes me mad.
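As a back-of-envelope check on those crack times (the numbers here are my own rough assumptions, not figures from PPL or the tweets): take a 12-character password drawn from a 62-character alphabet, and a deliberately generous attacker rate of 100,000 bcrypt guesses per second across a multi-GPU rig (real bcrypt rates are usually far lower, since bcrypt is designed to resist GPU acceleration):

```python
# Rough estimate of the expected time to brute-force ONE salted bcrypt hash.
# Assumed numbers (not from the article): 62-char alphabet, 12-char password,
# 100,000 bcrypt hashes/second.
keyspace = 62 ** 12                        # ~3.2e21 candidate passwords
guesses_per_second = 100_000
expected_seconds = keyspace / 2 / guesses_per_second  # expect a hit at half the keyspace
expected_years = expected_seconds / (60 * 60 * 24 * 365)
print(f"{expected_years:.2e} years")       # on the order of hundreds of millions of years
```

And because of per-user salts, that cost is paid per account, not once for the whole database.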
Secondly, I want to talk about the idea of a user's chosen password as "sensitive information". You could make a case for this along the lines of users reusing passwords, but at that point it's fully within the user's control whether they do that or not, which makes this seem like a really odd argument.
Honestly, I don't see any merit in the reasons they supplied. (To be perfectly honest, the actual reason was likely either laziness, or devs who didn't know much about account security trying to get a paid service online ASAP.) The impact is definitely lower than if they stored user-supplied passwords in plaintext, but they could simply handle accounts correctly and none of this would be an issue in the first place. These are the options:
Hash/salt user-supplied passwords - No practical chance of getting a user's supplied password from the hash, no way to authenticate as the other user even in the target app.
Store random passwords in plaintext - Can authenticate as the other user in the target app only.
Store user-supplied passwords in plaintext - Can authenticate as the other user in the target app and any apps for which they reuse their password.
When PPL says "Okay, well we're doing number two, which is better than number three", they're completely ignoring the perfectly good number one, which avoids the problem entirely. And for an infosec training service, there is no excuse for getting this wrong.