DEV Community

Discussion on: Is Cooperative Concurrency Here to Stay?

Zuodian Hu

Microseconds sounds right. I wrote a context switcher in ARM assembly in college, and the switch itself is fairly mundane performance-wise: it's mostly moving register state between memory locations, so it's largely memory bound. However, a context switch implies a scheduler run, so you're additionally incurring the scheduler's own CPU time. The scheduler then has to access the data structure the kernel keeps to track schedulable entities (threads, in Linux's case).
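To make that concrete, here's a minimal sketch (in Python, purely for illustration; names like `TCB` and `Scheduler` are my own stand-ins, not real kernel structures) of the two costs mentioned above: copying register state to a saved area, plus walking a run queue to pick the next schedulable entity.

```python
from collections import deque

class TCB:
    """Toy task control block: saved register state plus a name."""
    def __init__(self, name):
        self.name = name
        self.saved_regs = [0] * 16  # stand-in for r0-r15 on ARM

class Scheduler:
    """Toy round-robin scheduler over a run queue of TCBs."""
    def __init__(self):
        self.run_queue = deque()

    def add(self, tcb):
        self.run_queue.append(tcb)

    def switch(self, current_regs):
        # Save the outgoing context (the memory-bound register copy),
        # then rotate the queue to pick the next task. The queue
        # access is the extra scheduler cost on top of the copy.
        prev = self.run_queue[0]
        prev.saved_regs = list(current_regs)
        self.run_queue.rotate(-1)
        return self.run_queue[0]
```

A real kernel does far more (priorities, per-CPU queues, memory-map switches), but the shape is the same: a register save/restore plus a scheduler data-structure walk.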

Now, you're switching to a kernel context and filling the L1 cache with kernel memory, running the scheduler, then switching to the new user context and filling the L1 with that user's memory.

Just thinking about it this way, you can easily hit microseconds on a typical context switch.
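You can roughly measure this yourself with the classic pipe ping-pong trick: two processes bounce a byte back and forth, forcing at least two context switches per round trip. A hedged sketch (assumes a POSIX system where `os.fork` exists; the numbers are very rough since they include syscall and pipe overhead, not just the switch):

```python
import os
import time

def measure_switch_us(iters=10000):
    """Estimate microseconds per context switch via pipe ping-pong."""
    p2c_r, p2c_w = os.pipe()  # parent -> child
    c2p_r, c2p_w = os.pipe()  # child -> parent
    pid = os.fork()
    if pid == 0:
        # Child: echo each byte back; every hop blocks, so the
        # kernel must switch to the other process to make progress.
        for _ in range(iters):
            os.read(p2c_r, 1)
            os.write(c2p_w, b"x")
        os._exit(0)
    start = time.perf_counter()
    for _ in range(iters):
        os.write(p2c_w, b"x")
        os.read(c2p_r, 1)
    elapsed = time.perf_counter() - start
    os.waitpid(pid, 0)
    # Each round trip costs at least two context switches.
    return elapsed / (iters * 2) * 1e6

if __name__ == "__main__":
    print(f"~{measure_switch_us():.1f} us per switch (upper bound)")
```

On a typical Linux desktop this lands in the low single-digit microseconds per switch, which lines up with the estimate above.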