💡 This page summarizes how to install and use stress, plus common parameters and practical examples. It is meant as a quick reference for performance testing and bottleneck investigation.
Part 1: Overview
Basic Info
- Tool name: stress
- What it is: a Linux command-line stress testing tool that simulates high load by spawning worker processes.
- What it can stress: CPU, memory (VM), disk I/O, and mixed workloads.
- Source: CSDN: stress usage guide
Official docs and resources
- Official documentation: (to be added)
- GitHub repository: (to be added)
Part 2: How it works
Key features
- CPU stress: spawn compute-heavy workers.
- Memory stress: allocate, touch, and optionally keep memory.
- I/O stress: call sync() to generate disk I/O pressure.
- Mixed stress: combine CPU + memory + I/O to approximate real high-load scenarios.
Typical use cases:
- Validating system stability under pressure
- Comparing performance before and after tuning
- Finding resource bottlenecks with monitoring tools (top, iostat, vmstat, etc.)
Core idea
stress launches multiple worker processes. Each worker type corresponds to a resource dimension:
- CPU worker
- VM (memory) worker
- IO worker
- HDD worker
A key parameter, --vm-stride, changes the memory write stride, which can affect Copy-On-Write behavior and shift CPU time between user space (us) and kernel space (sy).
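As a quick preview (a minimal sketch; the individual flags are covered in Part 4), a single invocation can start one worker of each type:
stress --cpu 1 --vm 1 --io 1 --hdd 1 --timeout 30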
Part 3: Installation and usage
Requirements
- Linux (CentOS, Ubuntu, etc.)
- Installation permissions (sudo)
Install on CentOS 7 (EPEL)
sudo yum install epel-release
sudo yum install stress
stress --version
Install on Ubuntu
sudo apt install stress
stress --version
Basic syntax
stress <options>
Part 4: Common options (with examples)
1) CPU stress
- -c, --cpu N: start N CPU workers.
- --backoff N: wait N microseconds before the forked workers start.
Example: start 4 CPU workers
stress -c 4
Monitoring tip: use top to observe per-process CPU usage.
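For example (a minimal sketch, assuming a short 30-second run), you can put the workers in the background and grab a batch-mode top snapshot from the same terminal:
stress -c 4 -t 30 &          # 4 CPU workers for 30 seconds, in the background
sleep 5                      # give the workers a few seconds to ramp up
top -b -n 1 | head -n 20     # one non-interactive snapshot of the busiest processes
wait                         # block until stress exits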
2) Memory stress
- -m, --vm N: start N memory workers.
- --vm-bytes B: memory size per worker (for example 300M).
Example: 2 workers, 300MB each
stress -m 2 --vm-bytes 300M
Useful memory parameters:
- --vm-keep: keep memory allocated (instead of allocate/free loops).
stress --vm 2 --vm-bytes 300M --vm-keep
- --vm-hang N: sleep N seconds after allocation before freeing, then repeat.
stress --vm 2 --vm-bytes 300M --vm-hang 5
- --vm-stride B: set the memory write stride, i.e., touch one byte every B bytes (can change COW frequency and CPU behavior).
stress --vm 2 --vm-bytes 500M --vm-stride 64
--vm-stride and CPU us vs sy
- Small stride (for example 64 bytes): denser writes, more frequent COW, often higher user time (us).
stress --vm 2 --vm-bytes 500M --vm-stride 64
- Large stride (for example 1M): less frequent COW, but may increase kernel memory-management overhead, often higher system time (sy).
stress --vm 2 --vm-bytes 500M --vm-stride 1M
- Default stride: 4096 bytes (4KB) if not specified.
stress --vm 2 --vm-bytes 500M
Quick reference:
| --vm-stride | CPU tendency | Memory behavior | When to use |
|---|---|---|---|
| Small (e.g., 64) | Higher us | Frequent COW, dense writes | Simulate compute-heavy memory operations |
| Medium (e.g., 4K) | Balanced | Default-ish behavior | General memory stress testing |
| Large (e.g., 1M) | Higher sy | Less COW, more memory management overhead | Simulate kernel memory-management pressure |
Notes:
- COW (Copy-On-Write): pages are copied only when written.
- us/sy: CPU time spent in user space vs kernel space.
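To observe the us/sy split directly (a minimal sketch; actual numbers depend on your kernel and hardware), run a stride test in one terminal and watch vmstat's CPU columns in another:
# terminal 1: memory workers with a small stride
stress --vm 2 --vm-bytes 500M --vm-stride 64 -t 60
# terminal 2: the "us" and "sy" columns show user vs kernel CPU time
vmstat 2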
3) I/O stress
- -i, --io N: start N I/O workers; each repeatedly calls sync() to flush buffers to disk.
Example: start 4 I/O workers
stress -i 4
Monitoring tip (disk I/O):
iostat -x 2
Key metrics:
- %util: device utilization; near 100% means the device is saturated
- r/s, w/s: reads/writes per second
- rkB/s, wkB/s: read/write throughput
- await: average I/O latency
Disk-write workers:
- -d, --hdd N: start N workers that write data to disk.
- --hdd-bytes B: amount of data written by each HDD worker.
stress -d 1 --hdd-bytes 10M
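As a quick comparison (a minimal sketch), the two kinds of I/O workers can be exercised separately to see which one your disk actually reacts to:
# sync()-based pressure: repeatedly flushes whatever is already in the page cache
stress -i 4 -t 60
# write-based pressure: each worker writes (and then removes) 10MB of data
stress -d 2 --hdd-bytes 10M -t 60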
4) Mixed workload
Example: CPU + memory + I/O + disk write
stress --cpu 4 --io 4 --vm 2 --vm-bytes 100M --vm-keep --hdd 1 --hdd-bytes 10M
Meaning:
- 4 CPU workers
- 4 I/O workers
- 2 VM workers (100MB each), keeping the memory allocated
- 1 disk-write worker writing 10MB of data for disk pressure
5) Other handy options
- -t, --timeout N: run for N seconds
stress -c 4 -t 60
- -v, --verbose: verbose output
stress -c 4 -v
- -q, --quiet: quiet mode
stress -c 4 -q
- -n, --dry-run: print what would run, without actually stressing the system
stress -c 4 -n
Part 5: Monitoring recommendations
During stress tests, it is best to watch system metrics in parallel:
- top: CPU, memory, and per-process usage
- iostat: disk throughput and latency
- vmstat: memory and overall system behavior
Practical tip: run the stress command and monitoring commands in separate terminals so you can compare load with metric changes.
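A concrete layout for that tip (a minimal sketch, assuming a 60-second CPU + memory run) might look like this:
# terminal 1: generate the load
stress --cpu 4 --vm 2 --vm-bytes 300M -t 60
# terminal 2: overall CPU and memory behavior every 2 seconds
vmstat 2
# terminal 3: per-device disk utilization and latency
iostat -x 2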
Summary
Key takeaways
- stress is a simple, fast way to create CPU, memory, and I/O pressure.
- --vm-stride can significantly change memory write patterns and the CPU us/sy split.
- Stress testing is most valuable when paired with monitoring.
Recommendation
- Rating: ⭐⭐⭐⭐☆ (4/5)
- Why: easy to use, parameters are straightforward, good for quickly building pressure scenarios.
- Not ideal when: you need realistic end-to-end business traffic and request chains (use a dedicated load-testing platform/tool instead).