DEV Community

Omer Hamerman


Hacking your application may be easier than you think

I noticed suspicious behavior in the weekly email from my coffee shop's subscription: it offered a dedicated link to edit my preferences directly. I was able to bypass both the cookie and the authentication token (no tricks required) and reach an account details panel where I could change the password, the account email, and more. Essentially, the shop was exposed to severe authentication and authorization issues, leading to an IDOR (insecure direct object reference) that exposed PII (personally identifiable information). On top of that, neither CORS nor CSRF mitigations were in place, allowing me to craft a malicious link leading to a one-click account takeover.

Disclaimer: I am not suggesting anyone do any testing without clear consent. I noticed a vulnerability as a happy customer of one of my favorite shops and was intrigued by how easily my account could be manipulated. From there, curiosity took me further to find the bugs. I performed as few "offensive" actions as possible and reported everything in detail to the shop to help them mitigate the risks, offering my help.

How it all started

I have a subscription to one of the best coffee shops in London. I'm sent a bag of freshly roasted beans once a week, along with an email suggesting I change my bean preferences or delivery schedule.
I have used the link many times; when I wanted to skip a delivery or move it forward, it was quick and easy access to my account. No login barriers or interference: a slick, easy customer experience.
For some reason, last week, after updating my preference to an earlier shipment because I was out of coffee, I noticed something weird: accidentally removing one character of the link's token did not matter. I could still view my preferences, make changes, and update my account.

```
# Original request leading to the panel
<user-hash>/?token=1111111

# Using a different token (any token, or none at all for that matter)
<user-hash>/?token=qqqqqqq
```

A bit of background: the shop is a web application built on a very well-known Ruby-based e-commerce platform (I'm avoiding full disclosure, assuming these misconfigurations/bugs are in the wild in many other shops). The website itself is not the slickest I've seen, but it gets the job done: basic user management and a convenient, customizable way of changing my delivery preferences.

Back to the story: once I removed the token, not only could I reach my account's delivery preferences, I could reach them from an anonymous browser too, meaning cookie verification was not in place either. So I stripped the GET request of its token and cookie and was able to grab the information from anywhere I wanted. This was vulnerability number one: lack of authentication on this specific endpoint.
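To make the gap concrete, here is a minimal sketch (not the shop's actual code; names and values are hypothetical) of the check the endpoint was missing: the token from the link must be compared against the token stored for that user before any account data is returned.

```python
import hmac
from typing import Optional

# Hypothetical stand-in for a database lookup of per-user link tokens.
VALID_TOKENS = {"user-hash-abc": "1111111"}

def is_authorized(user_hash: str, token: Optional[str]) -> bool:
    """Reject the request unless the presented token matches the stored one."""
    expected = VALID_TOKENS.get(user_hash)
    if expected is None or token is None:
        return False
    # Constant-time comparison avoids leaking token bytes through timing.
    return hmac.compare_digest(expected, token)
```

With a check like this in place, swapping `1111111` for `qqqqqqq`, or dropping the token entirely, would return an error instead of the preferences panel.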

It's important to say that the bug is not visible to every customer. Changes to non-recurring accounts (without automated weekly recurring orders) and to unsubscribed users go through the authentication process. It was this endpoint in the link, set up to give subscribed paying customers with recurring orders quick access to changes. The endpoint looked something like this:

```
<unique-hash>/?token=<token>
```

At this point I tried shifting my thinking from a worried customer to an attacker - what would I do if I had malicious intentions here?

I found an edit link on the preferences menu that led me to a form where I could change my details: name, home address, phone number, and sure enough, my email address(!). The reason I'm emphasizing the email address is that this is the field that usually leads to account takeovers. Personal information should never be leaked, but a leak isn't always considered a security bug, whereas an unprotected email change can potentially destroy an account. Email addresses are the de facto account identifiers of web applications: if you control an account's email address, you own the account, because the address can be used to reset the password and take full control.

An important note about email address protection, or more precisely, email address update protection: changing an account's email address should be protected by a password. Yes, even if the user is already authenticated and logged in, exactly for the reason described above. APIs are constantly changing, and most of them have exposed or will expose something at some point. It's common, good practice to place another layer of protection on password and email changes. Once these are protected, the user is asked to confirm the change from their personal mailbox to ensure they are "real". Whether for password or email changes, we assume that control over both the email address and the password is a good indicator of a user's authenticity.
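This rule can be sketched in a few lines. The function and field names below are hypothetical, but the shape is the standard one: re-verify the password before staging the change, and only apply the new address after the user confirms it from their mailbox.

```python
import hashlib
import hmac
import os

def hash_password(password: str, salt: bytes) -> bytes:
    """Standard salted PBKDF2 password hash."""
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

def change_email(user: dict, new_email: str, password: str) -> bool:
    """Stage an email change only after re-checking the current password."""
    candidate = hash_password(password, user["password_salt"])
    if not hmac.compare_digest(candidate, user["password_hash"]):
        return False  # wrong password: nothing changes
    user["pending_email"] = new_email  # applied only after mailbox confirmation
    return True

# Usage with an illustrative account:
salt = os.urandom(16)
user = {
    "email": "me@example.com",
    "password_salt": salt,
    "password_hash": hash_password("correct horse", salt),
}
```

Had the shop's edit form been gated this way, the link alone would not have been enough to swap the address.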

Continuing my "journey", I made a request to change my email address, adding +1 to the username so that I wouldn't lose control of the account (plus-addressing, like user+1@example.com, is a common way to vary an address or run multiple accounts from the same mailbox). Sure enough, the request went through and my account was updated. This means that anyone with this "coffee preference update" link could take over my account details and buy anything they want with my credit card.

So how would I exploit this? I tried to illustrate the risk to the shop owner with a POC. To mount a CSRF attack, I'd have to create some kind of HTML form and send it to customers so that their cookie would be attached to the request. BUT not only did the form have no CSRF token in place, I recalled that the token was not being validated at all.
There were also no CORS headers in place. This means that anyone, from anywhere, can send a POST request to the shop's API using the recurring/customer/edit link to make changes to an account, given the hash. This is massive, because CSRF is usually exploited through dangerous GET requests; to exploit a state-changing POST, one would normally have to chain the attack with some form of stored XSS (a code injection that, in this case, might present a malicious link inside the application) for a user to view or click.
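The missing defense here is the synchronizer-token pattern. As a minimal sketch (hypothetical names, not the shop's stack): the server derives a per-session CSRF token, embeds it in a hidden form field, and rejects any state-changing POST that does not echo it back. A cross-site page cannot read the victim's token, so a forged form fails the check.

```python
import hashlib
import hmac
import secrets

SECRET_KEY = secrets.token_bytes(32)  # server-side secret, per deployment

def issue_csrf_token(session_id: str) -> str:
    """Token bound to the session; rendered into the form as a hidden field."""
    return hmac.new(SECRET_KEY, session_id.encode(), hashlib.sha256).hexdigest()

def verify_csrf_token(session_id: str, submitted: str) -> bool:
    """Called on every state-changing POST before any account update runs."""
    return hmac.compare_digest(issue_csrf_token(session_id), submitted)
```

Note that a CSRF token only helps once the endpoint actually requires an authenticated session; here, with no cookie check at all, it would have been the second layer, not the first.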

Check out my post on CSRF attacks and mitigation

Since the cookie played no role here, none of that mattered. The point is that even if it had, cookie validation alone is not a bulletproof solution; it is the baseline on which the other security layers should be implemented.


I'm trying to educate myself on writing better vulnerability reports, so I cleanly disclosed everything I could to the shop owners. I assumed the technical details would be forwarded, so it was important to stress the impact and the risks the vulnerability poses. This brings me to the "why it matters" part.


Why it matters!

The internet is massive, and it grows exponentially. With it grow users, applications, and yes, bugs. As developers, knowing common risks and how to test for them helps us as a community build better applications that support happier (and safer) customers. As in this case, no one intended any harm, but on the path to perfecting the customer experience, developers and product owners should keep risks in mind, together with the impact of their exploitation.

As a smart woman once said:

Hackers are the internet's immune system.

While I can't call myself one, there are lots of high-quality bug bounty platforms out there with highly skilled researchers. I'd like to encourage teams with the resources and means to consider opening a program, privately or through a platform, and to leverage (and compensate) professional skills to spot security holes and fix them.

I do hope this post sparks a light in other minds to protect their customers. I also hope it can be a call to action for users to keep an open eye on how they communicate with service providers they consider trustworthy. Lastly, I'd call on anyone who thinks they've noticed something to go down the rabbit hole and report it. Worst case, nothing happens. On the other hand, you may have prevented information leaks or embezzlement and, by extension, saved lives.

Testing yourself for vulnerabilities

Developers often over-trust security pipelines, QA systems, and their own imported libraries. While these are all functional layers of the release process, a developer should know how to think like an attacker and run basic tests against their own creation.

The thought process is something along the lines of:

  • What's the security mechanism preventing a malicious actor from making a certain request?
  • If I were a hacker (or a curious engineer with too much free time), how would I bypass it?
  • Can I simply not use a cookie? Can I remove the authentication header? Can I manipulate the cookie? Can I change my role just by manipulating the request's endpoint / headers / body / parameters?
  • Are there other creative-yet-obvious ways someone could use the system in a way that was not intended?
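These questions translate naturally into negative tests. Below is a sketch around a hypothetical account-edit guard; the names and status codes are illustrative, not taken from any real framework:

```python
from typing import Optional

# Hypothetical session store: cookie value -> session data.
SESSIONS = {"cookie-abc": {"user": "omer", "role": "customer"}}

def handle_edit_request(cookie: Optional[str], target_user: str) -> int:
    """Return an HTTP-style status for a request to edit target_user's account."""
    session = SESSIONS.get(cookie) if cookie else None
    if session is None:
        return 401  # missing or manipulated cookie: never reach the data
    if session["user"] != target_user and session["role"] != "admin":
        return 403  # authenticated, but not authorized for this account
    return 200

# Negative tests: every bypass from the checklist above should be rejected.
assert handle_edit_request(None, "omer") == 401          # no cookie at all
assert handle_edit_request("cookie-xyz", "omer") == 401  # forged cookie
assert handle_edit_request("cookie-abc", "alice") == 403 # someone else's account
assert handle_edit_request("cookie-abc", "omer") == 200  # the legitimate path
```

The coffee shop's endpoint would have failed the first two assertions, which is exactly the kind of test worth writing before shipping a "convenient" link.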

This is a partial list of the questions any developer should ask, especially when building and protecting user accounts. Added a new authentication library? Test it. Changed the cookie encryption? Added roles to your application's management system? Test the changes and try to manipulate the system; it's not only fun but healthy. Take it a step further and dedicate half a day each sprint to self-security "poking". That's one way I've sharpened my skills, and once I even found a way our in-house system, serving clients, could be compromised. But that's a different story for a different post.

Thank you for reading!
