shemkar

One Extra JSON Key: How a Harmless Profile Endpoint Became an ATO Candidate

The harmless profile endpoint that taught me how real bugs work

Early in my bug bounty journey, I found a bug that looked simple from the outside, but it changed the way I think about web security.

At that time, I didn’t know how to write strong reports.
I knew how to test manually.
I knew how to follow weird backend behavior.
But I didn’t fully understand that finding a bug is only half of the work.

The other half is proving impact clearly.

The bug started from a normal profile update endpoint.

Nothing fancy.
No admin panel.
No scanner.
No magic.

Just this kind of request:

Test target https://example.com

PATCH /api/users/me HTTP/2
Host: example.com
Content-Type: application/json
Cookie: session=<redacted>

{
  "time_zone": "Africa/Casablanca"
}

At first, this endpoint looked harmless.

It was supposed to update normal user-controlled profile fields like timezone, language, or basic settings.

But I asked myself one simple question:

What happens if the backend receives fields that the frontend never sends?

So I added sensitive parameters manually:

PATCH /api/users/me HTTP/2
Host: example.com
Content-Type: application/json
Cookie: session=<redacted>

{
  "time_zone": "Africa/Casablanca",
  "email": "attacker@example.com",
  "is_instructor": true,
  "role": "admin"
}

A secure backend should never allow a normal user to control fields like:

{
  "email": "attacker@example.com",
  "is_instructor": true,
  "role": "admin"
}

The frontend is not security.

Even if the UI never sends role, is_instructor, or sensitive account fields, an attacker can still add them manually in Burp Suite, curl, or any HTTP client.

The real security decision must happen on the backend.

In my case, the server accepted the request.

HTTP/2 200 OK
Content-Type: application/json

Back then, I was excited because I saw 200 OK.

But later I learned one of the most important lessons in bug bounty:

A 200 OK is not impact.
Persistence is impact.
A protected action unlocked by that persistence is impact.

So the real question is not:

“Did the server return 200?”

The real questions are:

  • Did the email actually change?
  • Was password confirmation bypassed?
  • Was email verification bypassed?
  • Did the role change only in the UI, or did it affect authorization?
  • Could the attacker use password reset after changing the email?
  • Could the user access instructor/admin-only functionality after modifying role-related fields?

That is where a harmless profile endpoint can become dangerous.

This pattern is called Mass Assignment.

It happens when the backend maps incoming JSON fields directly into a user object or database model without a strict allowlist.

The developer expects something like:

{
  "time_zone": "Africa/Casablanca"
}

But the backend also accepts:

{
  "time_zone": "Africa/Casablanca",
  "email": "attacker@example.com",
  "is_instructor": true,
  "role": "admin"
}

The problem is not that the attacker can send extra fields.

Anyone can send extra fields.

The problem is when the backend trusts them.
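The vulnerable pattern can be sketched in a few lines. This is a hypothetical illustration of the bug class, not the original target's code; the `User` model and `update_profile` handler are invented names.

```python
# Minimal sketch of mass assignment. Hypothetical model and handler --
# the point is the loop that copies EVERY incoming JSON key onto the model.

class User:
    def __init__(self):
        self.time_zone = "UTC"
        self.email = "victim@example.com"
        self.role = "user"
        self.is_instructor = False

def update_profile(user, payload):
    # Vulnerable: no allowlist, so "role" and "email" ride along
    # with the innocent "time_zone" field.
    for key, value in payload.items():
        setattr(user, key, value)

user = User()
update_profile(user, {
    "time_zone": "Africa/Casablanca",
    "email": "attacker@example.com",
    "role": "admin",
})
print(user.role)   # prints "admin" -- the backend trusted the extra keys
```

Whether the real backend uses `setattr`, an ORM's `update(**params)`, or a deserializer that binds JSON straight to a model, the effect is the same: extra keys become extra writes.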

For the Account Takeover angle, the proof should be:

1. Attacker has temporary access to an authenticated session.
2. Attacker changes the account email through the API.
3. No password confirmation is required.
4. No old email verification is required.
5. Attacker logs out.
6. Attacker uses "Forgot Password".
7. Password reset link goes to the attacker-controlled email.
8. Victim loses control of the account.

For the Privilege Escalation angle, the proof must be stronger than just saying "role":"admin" was accepted.

The important proof is:

GET /api/instructor/dashboard HTTP/2
Host: example.com
Cookie: session=<regular_user_session>

Before the attack:

HTTP/2 403 Forbidden

Then send:

PATCH /api/users/me HTTP/2
Host: example.com
Content-Type: application/json
Cookie: session=<regular_user_session>

{
  "is_instructor": true,
  "role": "admin"
}

After the attack:

GET /api/instructor/dashboard HTTP/2
Host: example.com
Cookie: session=<same_regular_user_session>

If the result becomes:

HTTP/2 200 OK

That is real impact.

Not the field.
Not the response.
The action unlocked by the field.

That was the mistake I made in the beginning.

I focused too much on the parameter modification, and not enough on the final proof chain.

I thought:

“The role changed. The email changed. This is obviously serious.”

But triagers and developers do not work with emotion.

They need a clean chain:

Before → blocked
Exploit → accepted
After → protected action unlocked

This bug pushed me to improve.

It taught me that business logic bugs are not about sending random payloads.

They are about understanding trust.

Who controls this field?
Who should control this field?
What changes if this value is accepted?
What can I do after the backend persists it?

That mindset changed how I hunt.

Now, when I see an endpoint like:

PATCH /api/users/me

I don’t only test the fields visible in the UI.

I test the trust boundary.

I ask:

  • Does the backend use an allowlist?
  • Does it ignore unexpected fields?
  • Can I modify email without re-authentication?
  • Can I modify role-like fields?
  • Can I modify verification flags?
  • Can I modify account type, plan, permissions, or security settings?
  • Does the value persist server-side?
  • Does it unlock a real action?

For developers, the fix is not “hide the field from the frontend”.

The fix must be server-side.

A secure backend should:

  • Use a strict allowlist of editable fields.
  • Reject or ignore sensitive fields like role, is_admin, is_instructor, email_verified, permissions, plan, balance, account_type.
  • Require password confirmation or re-authentication for sensitive account changes.
  • Require email verification before replacing the primary email.
  • Never trust frontend-controlled authorization fields.
  • Log sensitive account modifications.
  • Add rate limits and security challenges for repeated email changes.

A secure response should look like this:

PATCH /api/users/me HTTP/2
Host: example.com
Content-Type: application/json

{
  "time_zone": "Africa/Casablanca",
  "role": "admin"
}

Expected secure behavior:

HTTP/2 400 Bad Request

Or:

{
  "error": "role is not an editable field"
}

This bug did not pay me.

But it paid me in experience.

It taught me that a boring endpoint can hide:

  • Mass Assignment
  • Missing re-authentication
  • Email change bypass
  • Account takeover chains
  • Weak authorization logic
  • Privilege escalation
  • Dangerous backend trust assumptions

It also taught me that a weak report can kill a strong bug.

Today, I try to prove impact better.

Not just:

{
  "role": "admin"
}

But:

What changed?
Where did it persist?
What security control was bypassed?
What protected action became possible?
How can the developer reproduce it in 2 minutes?

That is the difference between guessing and understanding.

No target names.
No private data.
No drama.

Just one lesson:

Sometimes the most dangerous bugs are not hidden in complex exploits.
Sometimes they are hidden in one extra JSON key.
