Raj Madisetti

Fraudulent Resource Consumption Attacks and a Gatekeeper Solution

Hello cyber enthusiasts and professionals,

Today, I will present the persistent threat of Fraudulent Resource Consumption (FRC) attacks and propose a Gatekeeper solution.

Problem

Fraudulent Resource Consumption (FRC) attacks are a stealthy yet prevalent threat to Cloud Service Providers (CSPs), exploiting unattended vulnerabilities to deplete CSP resources. These attacks take advantage of the pay-per-use pricing model that most CSPs, such as Amazon Web Services and Microsoft Azure, employ. In an FRC attack, an attacker covertly gains access to an unsuspecting Cloud user's account and sets up automated fraudulent resource requests (typically via a botnet) to siphon network resources for personal gain or malicious intent. The damage to a CSP depends on its utility pricing model and on the attacker's skill level and motivation.

FRC attacks are a critical issue for Cloud users and Cloud Service Providers alike. By dominating bandwidth and storage, they can significantly slow or even shut down Cloud servers, critically disrupting organizational operations. Heavily impacted servers can lead to serious financial losses, along with legal implications when contracts with private businesses are involved. Oftentimes, these attacks also serve as distractions, luring attention away from more serious security threats such as data theft and network infiltration. We should therefore be diligent in implementing a quick, methodical defense against Fraudulent Resource Consumption attacks.

[Figure: Gatekeeper diagram]

Solution

This blog introduces a solution to the FRC problem: a Gatekeeper that filters user requests to a Cloud service. The Gatekeeper acts as an additional layer of authentication, sanitizing each Cloud request to verify its source and priority. If a user cannot be verified, their requests are assigned the lowest possible priority and severely rate-limited so as not to incur significant FRC costs. In essence, normal, verified traffic passes through the Gatekeeper efficiently, while questionable Cloud requests are clamped down on. This should, in theory, eliminate the threat of FRC attacks in our Cloud model.
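The filtering idea above can be sketched in a few lines of Python. This is a minimal illustration, not the POC itself: the `VERIFIED_USERS` registry, the `admit` method, and the 10-requests-per-minute cap for unverified users are all hypothetical choices made for the example.

```python
import time
from collections import defaultdict

# Hypothetical registry of verified users; in practice this would be
# backed by the cloud provider's identity/authentication service.
VERIFIED_USERS = {"alice", "bob"}

# Assumed cap on requests per minute for unverified users.
UNVERIFIED_RPM_LIMIT = 10

class Gatekeeper:
    """Verified users pass through at high priority; unverified users
    are demoted to the lowest priority and rate-limited."""

    def __init__(self, now=time.monotonic):
        self._now = now
        self._windows = defaultdict(list)  # user -> recent request timestamps

    def admit(self, user):
        """Return (priority, allowed) for one incoming request."""
        if user in VERIFIED_USERS:
            return ("high", True)  # verified traffic passes through
        # Unverified: keep only timestamps from the last 60 seconds.
        window = self._windows[user]
        cutoff = self._now() - 60
        window[:] = [t for t in window if t > cutoff]
        if len(window) >= UNVERIFIED_RPM_LIMIT:
            return ("low", False)  # throttled, so FRC costs stay bounded
        window.append(self._now())
        return ("low", True)
```

A sliding 60-second window is used here so that an unverified user's burst of fraudulent requests is capped regardless of when in the minute it arrives.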

Experimental Approach

Normal request traffic, along with a simulated FRC attack, was sent to an endpoint with and without a Gatekeeper mechanism. The simulation involved ten trials of sending ten minutes of normal traffic at 200 requests per minute, five minutes of elevated traffic at 300 requests per minute, and another ten minutes of normal traffic to invoke Lambda functions, both with and without the Gatekeeper. The resulting graphs were analyzed in Amazon Web Services (AWS); both showed noticeable peaks during the middle five minutes. The effectiveness of the Gatekeeper was measured by the reduction in the average requests per minute (RPM) of that peak, with a clearly defined start and end time. The experiment is considered successful if the Gatekeeper reduces the average RPM of the peak by at least 70% across the ten trials.
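The traffic schedule from the trials can be sketched as a small driver loop. This is an illustrative sketch only: `send_request` is a placeholder for the actual HTTP call to the Lambda endpoint, and the injectable `sleep` parameter exists purely to keep the sketch testable.

```python
import time

# Traffic schedule from the experiment: (duration_minutes, requests_per_minute)
SCHEDULE = [
    (10, 200),  # normal traffic
    (5, 300),   # elevated (simulated FRC) traffic
    (10, 200),  # normal traffic again
]

def run_trial(send_request, sleep=time.sleep):
    """Drive one trial of the schedule, calling send_request once per
    request and pacing calls to hit the target requests per minute."""
    total = 0
    for minutes, rpm in SCHEDULE:
        interval = 60.0 / rpm  # seconds between requests at this rate
        for _ in range(minutes * rpm):
            send_request()
            total += 1
            sleep(interval)
    return total
```

One full trial under this schedule issues 10 × 200 + 5 × 300 + 10 × 200 = 5,500 requests.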

[Figure: Gatekeeper diagram]

The Gatekeeper proof of concept (POC) is a Python application with a client side and a server side. The client side sends requests with a payload (function name with user priority, region, and path) to the server on behalf of a predetermined number of users. The server side processes the requests from the client and executes them based on function priority. If a user has a low priority, their requests are limited and take longer to execute; this mechanism is what reduces the FRC peak for the Gatekeeper. A Python algorithm was used to send normal and FRC traffic to a Lambda function endpoint in AWS, and Amazon CloudWatch was used to produce line graphs tracking function invocations as the metric for requests per minute.
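The server side's priority-based execution can be illustrated with Python's standard `queue.PriorityQueue`. This is a hedged sketch of the idea, not the POC's code: the payload field names (`function`, `priority`, `region`, `path`) mirror the payload described above but are assumed, and the three-level priority mapping is an example.

```python
import queue

# Assumed priority levels: lower number = served sooner.
PRIORITY = {"high": 0, "normal": 1, "low": 2}

def process_requests(requests):
    """Enqueue client payloads by priority and drain them in priority
    order, so low-priority (unverified) requests execute last."""
    q = queue.PriorityQueue()
    for seq, payload in enumerate(requests):
        # seq breaks ties so equal-priority requests stay in arrival order
        # (and dict payloads are never compared directly).
        q.put((PRIORITY[payload["priority"]], seq, payload))
    served = []
    while not q.empty():
        _, _, payload = q.get()
        served.append(payload["function"])
    return served
```

A real server would additionally delay or drop low-priority work rather than merely reorder it, but the queue captures the core scheduling decision.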

Metrics for Evaluation

To evaluate and analyze performance, we can consider the following metrics:

  1. Requests per minute (RPM): Measures the rate of requests being sent.
  2. Response time: Time taken to get a response from the server.
  3. Success rate: Proportion of successful responses out of the total number of requests.
  4. Error rate: Proportion of failed requests.
  5. Rate limit hits: Number of times the requests are rate-limited.
  6. Retry count: Number of times requests are retried due to rate limiting.
  7. Latency: Time delay between sending a request and receiving a response.
  8. System load: CPU, memory, and network usage on the server handling the requests.
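Several of these metrics can be computed directly from per-request logs. The sketch below assumes a hypothetical record format with `ok`, `latency`, `rate_limited`, and `retries` fields; the field names and the summary function are illustrative, not part of the POC.

```python
def summarize(records):
    """Compute evaluation metrics from a list of request records.
    Each record is assumed to be a dict with: 'ok' (bool), 'latency'
    (seconds), 'rate_limited' (bool), and 'retries' (int)."""
    n = len(records)
    ok = sum(1 for r in records if r["ok"])
    return {
        "success_rate": ok / n,                                   # metric 3
        "error_rate": (n - ok) / n,                               # metric 4
        "rate_limit_hits": sum(1 for r in records if r["rate_limited"]),  # metric 5
        "retry_count": sum(r["retries"] for r in records),        # metric 6
        "avg_latency": sum(r["latency"] for r in records) / n,    # metric 7
    }
```

RPM (metric 1) falls out of timestamped logs or CloudWatch invocation counts, and system load (metric 8) comes from host-level monitoring rather than request records.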
