In today's digital landscape, web applications face ever-growing traffic volumes, commonly measured in queries per second (QPS) and requests per second (RPS). For organizations deploying self-hosted WAF solutions like SafeLine, understanding and optimizing performance under high concurrency is crucial to ensure both security and reliability.
Understanding QPS and RPS
- QPS (Queries Per Second) refers to the number of queries or HTTP requests processed by a system per second.
- RPS (Requests Per Second) is usually used interchangeably with QPS, though in some contexts it counts TCP-level connection requests rather than complete HTTP queries.
In a high-concurrency environment, distinguishing between QPS and RPS helps engineers evaluate whether performance bottlenecks arise at the network, transport, or application layer.
SafeLine WAF Performance Reference
SafeLine's official testing shows that each CPU core can handle roughly 800 QPS in typical deployment scenarios. For example, a WAF node with 100 CPU cores could theoretically process ~80,000 QPS under ideal conditions. However, real-world deployments often see lower throughput due to:
- Network latency between load balancers (e.g., HAProxy) and the WAF
- SSL/TLS handshake and encryption processing
- Rule evaluation complexity
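The per-core figure above can be turned into a rough capacity estimate. A minimal Python sketch — the ~800 QPS/core number comes from SafeLine's published testing, while the derating factor is an illustrative assumption standing in for TLS, latency, and rule-evaluation overhead:

```python
# Rough theoretical QPS ceiling for a SafeLine WAF node.
# QPS_PER_CORE is SafeLine's published per-core figure; the derating
# factor is an illustrative assumption, not an official number.

QPS_PER_CORE = 800

def estimate_qps(cores: int, derating: float = 0.7) -> int:
    """Return an estimated QPS ceiling for a node with `cores` CPU cores."""
    return int(cores * QPS_PER_CORE * derating)

print(estimate_qps(100, derating=1.0))  # ideal conditions: 80000
print(estimate_qps(100))                # derated estimate: 56000
```

Treat the output as a planning number only; measured throughput under your own rule set and traffic mix is the figure that matters.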
Testing High-Concurrency Scenarios
When performing QPS/RPS tests with SafeLine, consider the following:
- Environment Setup
  - Deploy SafeLine behind a load balancer.
  - Ensure WAF nodes have sufficient CPU and memory resources.
- Load Simulation
  - Use tools like `ab` (ApacheBench), `wrk`, or `hey` to simulate concurrent requests.
  - Gradually increase concurrency and track the QPS/RPS to identify bottlenecks.
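The load-simulation step can also be scripted. Below is a minimal Python sketch of a concurrent load generator that reports observed QPS; the target URL, request count, and concurrency are illustrative placeholders, and for serious testing the dedicated tools above (`wrk`, `hey`) remain the better choice:

```python
# Minimal concurrent load generator: fire N GET requests with C workers
# and report the observed QPS. Point it only at a staging endpoint
# behind SafeLine, never at production.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

def fetch(url: str) -> int:
    """Issue one GET request and return the HTTP status code."""
    with urllib.request.urlopen(url, timeout=5) as resp:
        return resp.status

def run_load(url: str, total: int = 1000, concurrency: int = 50) -> float:
    """Send `total` requests using `concurrency` workers; return observed QPS."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        statuses = list(pool.map(fetch, [url] * total))
    elapsed = time.perf_counter() - start
    ok = sum(1 for s in statuses if s == 200)
    qps = total / elapsed
    print(f"{ok}/{total} OK, {qps:.0f} req/s over {elapsed:.2f}s")
    return qps

# Example (hypothetical staging target):
# run_load("http://staging.example.internal/", total=10000, concurrency=200)
```

Ramping `concurrency` upward across runs, as the list above suggests, is what exposes the point where QPS stops scaling.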
- Monitoring Metrics
  - Track CPU and memory usage on WAF nodes.
  - Observe latency and error rates from both the WAF and upstream applications.
  - Monitor network throughput to ensure no packet loss occurs under load.
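Once latency samples and error counts have been collected, they can be condensed into the percentile and error-rate figures worth tracking. A small sketch using only the standard library — the sample values and error counts are illustrative, not measurements:

```python
# Summarise latency samples (in seconds) from a load test into
# p50/p95/p99 latency and an error rate.
import statistics

def latency_report(latencies: list[float], errors: int, total: int) -> dict:
    """Return p50/p95/p99 latency and error rate for a test run."""
    qs = statistics.quantiles(latencies, n=100)  # qs[k-1] is the k-th percentile
    return {
        "p50": qs[49],
        "p95": qs[94],
        "p99": qs[98],
        "error_rate": errors / total,
    }

# Illustrative samples: 10 ms to 109 ms, evenly spread.
samples = [0.010 + 0.001 * i for i in range(100)]
report = latency_report(samples, errors=3, total=1000)
print(report)
```

Tail percentiles (p95/p99) usually degrade well before the average does, so they are the earlier warning sign under rising concurrency.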
Optimization Tips
- Leverage Multi-Core Processing
  - Enable SafeLine's performance expansion mode on Pro licenses to utilize all CPU cores efficiently.
- Tune Proxy and TCP Settings
  - Adjust directives such as `proxy_connect_timeout` and `worker_connections` in Nginx (or their HAProxy equivalents) to handle high connection rates.
  - Use TCP keep-alive where appropriate to reduce handshake overhead.
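For Nginx, that tuning might look like the following fragment. The directive names are standard Nginx; the values and the upstream address are illustrative assumptions to be adjusted against your own load, not recommendations:

```nginx
# Illustrative Nginx tuning for high connection rates in front of SafeLine.
worker_processes auto;

events {
    worker_connections 65535;   # per-worker connection ceiling
}

http {
    keepalive_timeout 65s;      # reuse client-side connections
    upstream safeline_waf {
        server 10.0.0.10:80;    # hypothetical WAF node address
        keepalive 128;          # pooled keep-alive connections to the WAF
    }
    server {
        listen 80;
        location / {
            proxy_connect_timeout 5s;
            proxy_http_version 1.1;
            proxy_set_header Connection "";  # required for upstream keep-alive
            proxy_pass http://safeline_waf;
        }
    }
}
```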
- Rule Optimization
  - Avoid overly broad or complex WAF rules that require extensive computation per request.
  - Utilize SafeLine's strict mode selectively for high-risk endpoints.
- Distributed Deployment
  - For massive traffic scenarios, deploy multiple WAF nodes in active–active or active–passive configurations.
  - Note: each node requires its own Pro license.
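An active–active layout could be fronted by HAProxy along these lines. This is a sketch, not a vetted configuration: the node addresses are hypothetical, and the health-check path should match an endpoint your deployment actually serves:

```haproxy
# Illustrative HAProxy front end balancing two SafeLine WAF nodes
# (active-active). Each node needs its own Pro license.
frontend web_in
    bind *:80
    default_backend safeline_nodes

backend safeline_nodes
    balance roundrobin
    option httpchk GET /
    server waf1 10.0.0.11:80 check
    server waf2 10.0.0.12:80 check
```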
- Monitoring Tools
  - Integrate external traffic monitoring tools to validate real-world throughput and latency.
  - Compare observed QPS/RPS with theoretical expectations.
Conclusion
High-concurrency testing is essential for ensuring SafeLine WAF performs reliably under peak loads. By understanding QPS/RPS metrics, carefully monitoring system performance, and optimizing deployment parameters, organizations can maximize both security and throughput, ensuring seamless protection for all web applications.
Key Takeaway: Performance optimization in high-concurrency scenarios is a combination of adequate hardware, tuned system parameters, and well-planned WAF rule management. SafeLine provides flexible tools for administrators to balance security and scalability efficiently.
SafeLine Website: https://ly.safepoint.cloud/ShZAy9x
Live Demo: https://demo.waf.chaitin.com:9443/statistics
Discord: https://discord.gg/dy3JT7dkmY
Doc: https://docs.waf.chaitin.com/en/home
https://docs.waf.chaitin.com/en/GetStarted/Deploy
Github: https://github.com/chaitin/SafeLine