Real World Analogy
Example A
Let’s consider this scenario:
- A bank account has $100.
- Two threads try to withdraw money at the same time.
- Thread 1 checks the balance (sees $100) and withdraws $45.
- Before Thread 1 updates the balance, Thread 2 also checks the balance (incorrectly sees $100) and withdraws $35.
We cannot be 100% certain which thread will get to update the remaining balance first; however, let’s assume that it is Thread 1. Thread 1 will set the remaining balance to $55. Afterwards, Thread 2 might set the remaining balance to $65 if not appropriately handled. (Thread 2 calculated that $65 should remain in the account after the withdrawal because the balance was $100 when Thread 2 checked it.) In other words, the user made two withdrawals, but the account balance was deducted only for the second one because Thread 2 said so!
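The lost-update scenario above can be sketched in a few lines of Python. This is a minimal illustration, not production code: the variable and function names are made up for the example, and a short `time.sleep` artificially widens the window between the check and the update so the race is easy to observe.

```python
import threading
import time

balance = 100  # shared account balance

def withdraw(amount):
    global balance
    if balance >= amount:           # time-of-check: read the balance
        current = balance           # snapshot of the balance, may go stale
        time.sleep(0.001)           # window where the other thread can run
        balance = current - amount  # time-of-use: write based on the snapshot

t1 = threading.Thread(target=withdraw, args=(45,))
t2 = threading.Thread(target=withdraw, args=(35,))
t1.start(); t2.start()
t1.join(); t2.join()

# The correct final balance is 100 - 45 - 35 = 20, but when both threads
# read $100 before either writes back, one update overwrites the other
# and the balance ends at 55 or 65 instead.
print(balance)
```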
Example B
Let’s consider another scenario:
- A bank account has $75.
- Two threads try to withdraw money at the same time.
- Thread 1 checks the balance (sees $75) and withdraws $50.
- Before Thread 1 updates the balance, Thread 2 checks the balance (incorrectly sees $75) and withdraws $50.
Thread 2 will proceed with the withdrawal, although such a transaction should have been declined.
Examples A and B demonstrate a Time-of-Check to Time-of-Use (TOCTOU) vulnerability.
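Example B can be sketched the same way. Again this is an illustrative toy, with a deliberate delay between the check and the update; in a real application the window would be the normal processing time of the request.

```python
import threading
import time

balance = 75  # shared account balance

def withdraw(amount):
    global balance
    if balance >= amount:  # time-of-check: the transaction looks valid
        time.sleep(0.001)  # window between the check and the update
        balance -= amount  # time-of-use: may now overdraw the account

t1 = threading.Thread(target=withdraw, args=(50,))
t2 = threading.Thread(target=withdraw, args=(50,))
t1.start(); t2.start()
t1.join(); t2.join()

# Frequently ends at -25: both $50 withdrawals pass the balance check
# before either one is applied, so neither is declined.
print(balance)
```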
Example Code
Consider the following Python code, in which two threads simulate task completion in 10% increments.
```python
import threading
import time

def increase_by_10():
    for i in range(1, 11):
        print(f"Thread {threading.current_thread().name}: {i}0% complete")
        time.sleep(0.01)  # simulate work so the threads interleave visibly

# Create two threads
thread1 = threading.Thread(target=increase_by_10, name="Thread-1")
thread2 = threading.Thread(target=increase_by_10, name="Thread-2")

# Start the threads
thread1.start()
thread2.start()

# Wait for both threads to finish
thread1.join()
thread2.join()

print("Both threads have finished completely.")
```
Running this program multiple times will lead to different results. In one attempt, Thread-2 may reach 100% first; in another, Thread-2 may reach 100% second. We have no control over the output. If the security of our application relies on one thread finishing before the other, then we need mechanisms in place to ensure proper ordering.
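One way to guarantee ordering, as a minimal sketch, is to wait for the first thread to finish before starting the second. The `progress_log` list is added here only so the ordering can be verified; it is not part of the original example.

```python
import threading

progress_log = []  # records each progress line in the order produced

def increase_by_10():
    for i in range(1, 11):
        line = f"Thread {threading.current_thread().name}: {i}0% complete"
        progress_log.append(line)
        print(line)

thread1 = threading.Thread(target=increase_by_10, name="Thread-1")
thread2 = threading.Thread(target=increase_by_10, name="Thread-2")
thread1.start()
thread1.join()   # block until Thread-1 reaches 100%
thread2.start()  # only now may Thread-2 begin
thread2.join()
print("Both threads have finished completely.")
```

With this structure, Thread-1 always reaches 100% before Thread-2 prints anything, on every run.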
Causes of Race Condition
- Parallel Execution: Web servers handle multiple requests simultaneously. Without proper synchronization, shared resources may be accessed or modified incorrectly, leading to race conditions.
- Database Operations: Concurrent read-modify-write operations can cause data inconsistencies. Proper locking mechanisms and transaction isolation help prevent conflicts.
- Third-Party Libraries and Services: External APIs and libraries may not be designed for concurrent access, leading to unpredictable behavior when multiple requests interact with them simultaneously.
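The database point above can be sketched with an in-memory SQLite table. The table and column names are invented for the example. The first pattern reads the balance into application code and writes back a computed value, leaving a window between the SELECT and the UPDATE; the second pushes the check into a single atomic UPDATE so the database decides.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES (1, 100)")

# Vulnerable pattern: check in application code, then write a computed value.
# Between the SELECT and the UPDATE, another request could change the balance.
row = conn.execute("SELECT balance FROM accounts WHERE id = 1").fetchone()
if row[0] >= 45:
    conn.execute("UPDATE accounts SET balance = ? WHERE id = 1", (row[0] - 45,))

# Safer pattern: let the database check and modify in one atomic statement.
conn.execute(
    "UPDATE accounts SET balance = balance - ? WHERE id = 1 AND balance >= ?",
    (35, 35),
)
conn.commit()
print(conn.execute("SELECT balance FROM accounts WHERE id = 1").fetchone()[0])
```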
Web Application Architecture
Client-Server Model
- Client: Initiates requests for services (e.g., a web browser requesting a web page).
- Server: Responds to client requests, processing them and sending back the required resource.
- The client-server model operates over a network, where requests and responses are exchanged.
Typical Web Application Architecture
A multi-tier architecture separates application logic into different layers:
- Presentation Tier: The client-side interface, typically a web browser rendering HTML, CSS, and JavaScript.
- Application Tier: Handles business logic, processes client requests, and interacts with the data tier. Implemented using server-side languages like PHP or Node.js.
- Data Tier: Manages data storage and retrieval, using databases like MySQL and PostgreSQL.
Program States
To explain program states, consider an example: validating coupon codes and applying discounts.
The flowchart above shows a rough sketch of the code flow within the application.
This flow yields two program states:
- Coupon not applied
- Coupon applied
However, real scenarios are rarely this simple. Below is a more realistic scenario with more states.
Importance in Race Conditions
- A time window exists between initiating an action (e.g., applying a coupon) and marking it as completed.
- During this window, no controls may prevent repeated actions, allowing multiple applications of the same coupon.
- This concept applies to money transfers as well:
- The system checks balance and limits before confirming the transaction.
- Even if brief, the processing time is not zero, leaving room for race conditions.
- Attackers can exploit these delays to execute multiple actions before the system updates the state, leading to unintended financial or logical inconsistencies.
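The coupon window described above can be simulated with threads standing in for concurrent requests. All names here (`coupon_used`, `discounts_applied`) are hypothetical, and the `time.sleep` stands in for the non-zero processing time between the check and the state update.

```python
import threading
import time

coupon_used = False    # hypothetical per-account coupon state
discounts_applied = 0  # how many times the discount was granted

def apply_coupon():
    global coupon_used, discounts_applied
    if not coupon_used:            # time-of-check: coupon looks unused
        time.sleep(0.01)           # simulated processing delay (the window)
        discounts_applied += 1     # time-of-use: grant the discount
        coupon_used = True         # state is updated only after the delay

# Five "requests" arrive almost simultaneously.
threads = [threading.Thread(target=apply_coupon) for _ in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Often greater than 1: several requests pass the check before any
# of them marks the coupon as used.
print(discounts_applied)
```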
However, even if the web application is vulnerable, we still have one challenge to overcome: timing. Even in vulnerable applications, this “window of opportunity” is relatively short; therefore, exploiting it necessitates that our requests reach the server simultaneously. In practice, we aim to get our repeated requests to reach the server only milliseconds apart.
Exploiting Race Conditions in a Credit Transfer Web Application
Analyzing HTTP Requests with Burp Suite
- Use Burp Suite Proxy to intercept and examine HTTP requests.
- Identify POST requests for credit transfers, noting parameters like the recipient’s number and amount.
- Observe the system’s responses to valid and invalid transfer attempts.
Using Burp Suite Repeater
- Send intercepted POST requests to Repeater for further testing.
- Duplicate the request multiple times (e.g., 20 times).
- Use group sending options to test race condition exploitation.
Exploiting Race Conditions
Send Requests in Sequence
- Single Connection: Sends all requests over one connection before closing.
- Separate Connections: Opens a new TCP connection per request.
- Some requests succeeded, while later ones were rejected.
- Successful requests took ~3 seconds, while rejected ones were processed in milliseconds.
Send Requests in Parallel
- All requests were sent simultaneously within 0.5 milliseconds.
- All 21 requests were successful, leading to multiple unauthorized credit transfers.
- More TCP packets were used due to synchronization mechanisms.
HTTP Synchronization Techniques
- HTTP/2+: Sends multiple requests within a single TCP packet.
- HTTP/1 (Last-Byte Sync): Holds back the last byte of each request until all requests have been sent, so the server begins processing them simultaneously. Repeater uses these techniques when sending requests in parallel, which is why more TCP packets appear than when sending in sequence.
Wireshark for Sending Messages in Sequence over Separate Connections
Wireshark for Sending Messages in Parallel
By leveraging race conditions, attackers can execute multiple unauthorized transactions before the system updates the account balance.
Detection
Detecting race conditions can be difficult for business owners since minor abuses, such as redeeming a gift card multiple times, may go unnoticed. To effectively identify these vulnerabilities:
- Regularly analyze logs for suspicious patterns.
- Utilize penetration testers and bug bounty programs to discover vulnerabilities.
- Understand system constraints, such as "use once," "vote once," or "limit to balance."
- Use tools like Burp Suite Repeater to test for exploitable time windows.
Mitigation
1. Synchronization Mechanisms
- Implement locks to ensure only one process/thread accesses a shared resource at a time.
2. Atomic Operations
- Use indivisible execution units that prevent race conditions by ensuring operations complete without interruption.
3. Database Transactions
- Utilize transactional integrity to ensure all database modifications succeed or fail as a group, preventing inconsistencies.
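As a minimal sketch of the synchronization approach, a `threading.Lock` can turn the vulnerable check-then-update from the earlier withdrawal examples into a single critical section. The names and amounts are illustrative.

```python
import threading
import time

balance = 100
lock = threading.Lock()  # guards every access to the shared balance

def withdraw(amount):
    global balance
    with lock:             # check and update are now one critical section
        if balance >= amount:
            time.sleep(0.001)  # even with a delay, no other thread can interleave
            balance -= amount

threads = [
    threading.Thread(target=withdraw, args=(45,)),
    threading.Thread(target=withdraw, args=(35,)),
]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(balance)  # always 20: both withdrawals are applied exactly once
```

Because the lock makes the check and the update inseparable, the outcome is deterministic regardless of thread scheduling.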
By implementing these mitigation techniques, businesses can reduce the risk of race conditions and secure their systems against exploitation.