Shafayet Sadi

Originally published at shafayetsadi.dev

Understanding CPU Time (Real, User, System)

Introduction

CPU time (or process time) is the amount of time that a central processing unit (CPU) was used for processing instructions of a computer program or operating system. — Wikipedia

When I was working on a sandboxed code execution engine for one of my projects, I stumbled upon the concept of CPU time. I needed a way to measure how long a program was actually running, similar to how platforms such as Codeforces or LeetCode do. At first, I thought this was as simple as measuring the elapsed time with a stopwatch in a Docker container. But after digging deeper, I realized there are different ways to measure time, depending on what exactly you want to capture.

Later, while studying Operating Systems, I found that this distinction between different types of time is fundamental. When we run a program with the time command, we don’t just get one number — we get three:

  • real → the total elapsed (wall-clock) time
  • user → the CPU time spent executing your code in user mode
  • sys → the CPU time spent in the kernel on behalf of your process

Together, the CPU time consumed by your program is the sum of user and sys.

Well, that's the TL;DR! You can shoo now.


A Simple Example

#include <iostream>
#include <fstream>
#include <unistd.h>

int main() {
    // Heavy computation (CPU-bound loop)
    volatile double x = 0.0;
    for (long i = 0; i < 800000000; i++) {
        x += i * 0.000001;
    }

    // Lots of file operations (kernel calls)
    for (int i = 0; i < 800000; i++) {
        std::ofstream ofs("/tmp/testfile", std::ios::app);
        ofs << "Hello World\n";
    }

    // Sleep
    sleep(3);

    std::cout << "Done! Result = " << x << std::endl;
    return 0;
}

When we compile and run this program with /usr/bin/time, we get:

➜ cputime g++ cputime.cpp && /usr/bin/time -f "-----\nReal: %e\nUser: %U\nSys : %S\n-----" ./a.out
Done! Result = 3.2e+11

-----
Real: 7.34
User: 2.69
Sys : 1.67
-----
Timeline (scaled to 7.34s real time)

Real Time: |=================================================|  7.34s
User Time: |==================                               |  2.69s
Sys  Time: |==========                                       |  1.67s
Waiting  :                             |=====================|  ~2.98s (sleep + I/O wait)

Here, the program took about 7.34 seconds of real time, but only 2.69 seconds of user CPU time and 1.67 seconds of system CPU time. The rest of the time was spent waiting — in this case, mostly because of the sleep(3) call.

Note

A quick clarification: the GNU /usr/bin/time command and the shell built‑in time are not the same thing.

  • The GNU version (/usr/bin/time) supports useful options like -f (custom output format) and -v (verbose statistics).
  • The shell built‑in is more limited.
  • To be safe, always call it with the full path:
/usr/bin/time -v ./my_program

Caution

The example program appends to /tmp/testfile 800,000 times.
This can quickly grow the file, consume disk space, and also skew your timing results (due to disk caching and write‑back).

To avoid issues:

  • Remove /tmp/testfile after running the program.
  • Reduce the number of iterations.
  • Or, for testing purposes, write to a “black hole” like /dev/null instead of a real file.
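
For that last option, here's roughly what the write loop from the example looks like when pointed at /dev/null. The kernel still services every open and write (so system time is still exercised), but the null device simply discards the bytes:

#include <fstream>

int main() {
    // Same syscall pattern as before, but nothing accumulates on disk:
    // the null device throws away everything written to it.
    for (int i = 0; i < 800000; i++) {
        std::ofstream ofs("/dev/null", std::ios::app);
        ofs << "Hello World\n";
    }
    return 0;
}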

Real Time

The first number, real time, is the total elapsed time from when the program starts until it finishes. It includes everything: the time the CPU spends executing instructions, the time spent waiting for I/O, the time lost to context switches, and even the time the process is idle but waiting for resources.

We can think of real time as the time we’d measure with a stopwatch. If we run a program and it finishes in five seconds, then the real time is five seconds — regardless of how much actual CPU work was done during that period.
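
If you want that stopwatch inside your own code rather than relying on the time command, std::chrono::steady_clock measures exactly this kind of elapsed time. A minimal sketch:

#include <chrono>
#include <iostream>
#include <unistd.h>

int main() {
    auto start = std::chrono::steady_clock::now();

    sleep(2); // no CPU work at all, but the stopwatch keeps running

    auto end = std::chrono::steady_clock::now();
    std::chrono::duration<double> elapsed = end - start;
    std::cout << "Real (wall-clock) time: " << elapsed.count() << "s\n";
    return 0;
}

This prints roughly two seconds even though the process barely touched the CPU. Wall-clock time keeps ticking whether or not our program is doing any work.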


User CPU Time

The second number, user time, is the amount of CPU time spent executing our program’s own instructions in user mode. This is the time the CPU spends running our code directly — like loops, function calls, arithmetic operations, data processing, and so on.

If our program is doing heavy calculations, like matrix multiplications or number crunching, the user time will be high. In our example, the large loop at the beginning contributes to the user time.
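
If you want to measure this kind of CPU-bound work from inside the program, std::clock() is the simplest portable tool. One caveat: it reports the process's total CPU time (user plus sys), but for a pure computation loop like this one, that is almost entirely user time. A minimal sketch:

#include <ctime>
#include <iostream>

int main() {
    std::clock_t start = std::clock();

    // CPU-bound work, charged to user time
    volatile double x = 0.0;
    for (long i = 0; i < 100000000; i++) {
        x += i * 0.000001;
    }

    double cpu_seconds = double(std::clock() - start) / CLOCKS_PER_SEC;
    std::cout << "CPU time: " << cpu_seconds << "s\n";
    return 0;
}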


System CPU Time

The third number, system time, is the amount of CPU time the operating system spends in kernel mode on behalf of our program. This includes things like:

  • File I/O
  • Memory allocation
  • Page faults
  • Network communication
  • System calls such as open(), read(), write().

In our example, the repeated file writes contribute heavily to the system time. Even though the program itself is just calling ofstream << "Hello World", under the hood the kernel is doing the actual work of writing to disk.
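
A handy way to see where that system time goes is strace with the -c flag, which counts every system call the program makes and summarizes the time spent in each:

strace -c ./a.out

For our example, you should see calls like openat, write, and close dominating the counts (roughly one of each per loop iteration). Just keep in mind that strace itself adds significant overhead, so don't compare its timings with a plain /usr/bin/time run.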


Putting It Together

So how do these numbers relate?

  • real time is the total elapsed time.
  • user time is the CPU time spent in user mode.
  • system time is the CPU time spent in kernel mode.

For a single-threaded program on an idle system, real time is usually greater than or equal to user + system time, because real time also includes waiting. So,

  • Real Time >= User Time + System Time

But on a multi-core system, user + system can actually exceed real time. For example, if we run eight CPU-bound threads for two seconds on an eight-core CPU, the real time might be around two seconds, but the user time could be close to sixteen seconds (two seconds per thread, summed across cores). Therefore,

  • Logical Cores Used * Real Time >= User Time + System Time
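
To see this for yourself, here's a minimal sketch: eight threads, each busy-waiting for about two seconds. Compile it with g++ -O2 -pthread and run it under /usr/bin/time. On a machine with eight or more cores, real should be around two seconds while user approaches sixteen:

#include <chrono>
#include <thread>
#include <vector>

// Busy-wait (spin) for roughly two seconds of wall-clock time,
// keeping one core fully occupied in user mode.
void spin() {
    auto deadline = std::chrono::steady_clock::now() + std::chrono::seconds(2);
    volatile long sink = 0;
    while (std::chrono::steady_clock::now() < deadline) {
        sink = sink + 1;
    }
}

int main() {
    std::vector<std::thread> threads;
    for (int i = 0; i < 8; i++) {
        threads.emplace_back(spin);
    }
    for (auto& t : threads) {
        t.join();
    }
    return 0;
}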

How the OS Accounts CPU Time

At this point, you might be wondering: how does the operating system actually know how much time was spent in user mode versus kernel mode?

The answer lies in a combination of hardware timers and kernel bookkeeping. The kernel programs a hardware timer to fire periodic interrupts. Each time one fires, the kernel checks which process was running and whether it was in user mode or kernel mode, then charges that slice of time to the appropriate counter.

Whenever a context switch occurs — for example, when the scheduler moves the CPU from one process to another — the kernel also records how much CPU time the outgoing process consumed since it was last scheduled.

Over time, these small measurements add up. The kernel maintains per‑thread statistics, which are then aggregated into per‑process totals. This is what tools like time, getrusage, or ps report back to us.

On Linux, we can even peek under the hood ourselves:

  • /proc/<pid>/stat contains raw counters for user and system time (in clock ticks, or “jiffies”; divide by sysconf(_SC_CLK_TCK) to convert to seconds).
  • /proc/<pid>/task/*/stat shows the same breakdown per thread.
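
Since getrusage came up above, here's a minimal sketch of reading these counters from inside a process. The ru_utime and ru_stime fields correspond directly to the user and sys numbers that time prints:

#include <sys/resource.h>
#include <cstdio>

int main() {
    // Burn some user-mode CPU so the counters are non-zero.
    volatile double x = 0.0;
    for (long i = 0; i < 100000000; i++) {
        x += i * 0.000001;
    }

    struct rusage usage;
    getrusage(RUSAGE_SELF, &usage); // CPU-time statistics for this process

    std::printf("user: %ld.%06ld s\n",
                (long)usage.ru_utime.tv_sec, (long)usage.ru_utime.tv_usec);
    std::printf("sys : %ld.%06ld s\n",
                (long)usage.ru_stime.tv_sec, (long)usage.ru_stime.tv_usec);
    return 0;
}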

Note

This explanation is a very simplified overview. To fully understand how CPU time is accounted, you’d need to dive much deeper into operating system internals and kernel source code.

That involves studying topics like scheduler design, timer interrupts, context switching, and per‑CPU accounting — areas usually covered in advanced OS courses.


Interpreting Patterns

The difference between real, user, and system time can tell us a lot about our program’s behavior:

  • If real time is much larger than user + system, our program is probably I/O bound — waiting on disk, network, or locks.
  • If user time dominates, our program is CPU bound in user space, and we might need to optimize our algorithms or parallelize.
  • If system time is unusually high, our program is making many system calls or doing lots of small I/O operations.
  • If user + system is much larger than real, our program is effectively using multiple cores in parallel.

Why This Matters

Understanding CPU time is more than just an academic exercise. It helps us answer practical questions:

  • Is our program slow because it’s waiting on I/O, or because it’s burning CPU cycles?
  • Should we focus on optimizing algorithms, or on reducing system calls and I/O overhead?
  • Are we actually getting good parallel utilization from our threads?

These distinctions are crucial when profiling performance, debugging slowdowns, or designing efficient systems.


Closing Thoughts

This whole idea really clicked for me the first time I saw user + sys way bigger than real while running some multi‑threaded code in my sandbox engine. At first I thought, “Wait, how can CPU time be more than the actual time?” — and that’s when I figured out: Wall time isn’t the same as CPU time.

So next time you run time ./program, don’t just look at the “real” line and move on. Take a peek at the user and sys times too — they might surprise you, and they’ll definitely tell you more about what your code is really doing. 🖥️


Related Project

I first stumbled upon CPU time while building my sandboxed code execution engine.
If you’re curious, you can check out the source code here:

I'd also like to invite you to visit my personal site:

