Training Under Fire: How To Drill OT Operators For Cognitive Overload

Most OT training is built for a world that does not exist.
People sit in a classroom, stare at slides about past incidents, walk through procedures, sign an attendance sheet, and go back to a control room where alarms stack up, systems misbehave, phones will not stop, and production pressure never lets up.
The result is simple. Operators are trained for calm, but they live in chaos.
If you want operators to perform when the system is noisy, information is incomplete, and time pressure is real, you cannot train them only in quiet, clean conditions. You have to train them under fire.
This is not a nice idea or a soft skill project. It is a core part of OT cybersecurity. Cognitive overload during an incident is a direct risk to safety, production, and security. Ignoring it because it feels “too human” is a mistake.

The Problem With Classroom-Only Training

Classroom training is not useless. People do need concepts, awareness, and shared language. But it fails badly when it is the only mode of preparation.

In a classroom, there is no alarm noise. There is no conflicting information. There is no real-time pressure. There are no competing demands from production, maintenance, or management. Everyone nods, understands the theory, and gives the “correct” answers.

Then a real incident hits. Alerts from different systems arrive out of order. People are already tired from earlier tasks. Production wants answers now. The tools do not behave exactly the way the slides suggested. Half the shortcuts people actually use are not in any procedure.
The gap between the classroom and reality shows itself very fast. Under stress, the brain does not reach for what was on slide fourteen. It falls back on habits.

If you never trained those habits in realistic conditions, you do not have an incident response capability. You have a binder and a false sense of security.

What Realistic OT Cyber Drills Actually Look Like

Real training for cognitive overload does not live in a meeting room. It lives in the same environment where people actually work: the control room or a realistic replica that behaves like it.

A serious drill is not, “We pretend there is an incident and talk about it.” It has teeth. It uses the same tools, screens, and workflows that operators use on a normal shift.

A realistic drill should have:

- Real screens and real tools, not just printed scenarios and email prompts.
- Actual alerts in the interface, not imaginary ones read out loud.
- Conflicting or incomplete data that forces judgment, not multiple-choice answers.
- Time pressure that participants feel in their body, not just a timer written on a whiteboard.

You can start small. For example, inject a simulated security alert into the same tool operators normally use. At the same time, trigger one or two minor process alarms that would usually be handled without much thought. Then have someone call or message with a routine request during the drill.
Now tell operators to handle it exactly as they would on a regular shift. Do not over-explain, and do not guide them step by step. Watch what actually happens: where they hesitate, who they call, which window they ignore. You will learn more from those ten minutes than from any quiz.
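
You can even script the timing of those injects so every run of the starter drill is consistent and repeatable. Below is a minimal sketch in Python; the `inject_*` functions, the tag names, and the offsets are placeholders for whatever test or simulation hooks your own alerting and alarm tools actually provide, not a real API.

```python
import time
from dataclasses import dataclass
from typing import Callable


@dataclass
class Inject:
    """One timed event pushed at the operators during the drill."""
    offset_s: int               # seconds after drill start
    description: str            # what the facilitator triggers
    action: Callable[[], None]  # hook into your own test-injection tooling


# Placeholder hooks. In a real drill these would call whatever injection
# mechanism your alerting and alarm systems expose; printing stands in here.
def inject_security_alert() -> None:
    print("[inject] simulated security alert raised in the operator console")


def inject_minor_alarm(tag: str) -> None:
    print(f"[inject] minor process alarm on {tag}")


def routine_distraction_call() -> None:
    print("[inject] facilitator calls with a routine, unrelated request")


DRILL_PLAN = [
    Inject(0,   "simulated security alert",   inject_security_alert),
    Inject(60,  "first minor process alarm",  lambda: inject_minor_alarm("PUMP-101")),
    Inject(120, "second minor process alarm", lambda: inject_minor_alarm("TANK-3 level")),
    Inject(180, "routine distraction call",   routine_distraction_call),
]


def run_drill(plan: list[Inject]) -> None:
    start = time.monotonic()
    for inject in sorted(plan, key=lambda i: i.offset_s):
        # Wait until this inject's offset from drill start, then fire it.
        time.sleep(max(0.0, inject.offset_s - (time.monotonic() - start)))
        print(f"t+{inject.offset_s}s: {inject.description}")
        inject.action()


if __name__ == "__main__":
    run_drill(DRILL_PLAN)
```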

What You Should Really Measure In a Drill

Most organisations still score drills on checklists.
Did someone declare an incident? Did we send the notification email? Did we follow the documented steps?

This looks tidy, but it misses the point. When you are training for cognitive overload, you care about how humans cope with pressure and noise, not just whether a form was filled in.

More useful measures are things like:

- Time to notice the key alert among the other distractions.
- Time to understand what is really happening, not just read the message text.
- Time to take the first meaningful action that reduces risk, instead of just talking about it.
- The number of times people made wrong assumptions and had to backtrack.
- How often operators hesitated because they did not trust the tools in front of them.

You also want to capture where people got stuck. Was it unclear who should decide? Did they doubt the data? Did they have to dig through shared drives and documents to find the next step?

Those are not character flaws in your staff. Those are design problems in your process, tooling, and training. If your best operator gets lost in your own documentation during a drill, the documentation is the problem.
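
If you want those observations to outlive the drill, capture them as structured data rather than scattered notes. Here is a minimal sketch in Python; every field name is an assumption about what you might choose to record, not a standard.

```python
from dataclasses import dataclass, field


@dataclass
class DrillObservation:
    """Facilitator notes for one drill run: cognition and friction, not checklist compliance."""
    scenario: str
    time_to_notice_s: float        # key alert spotted among the distractions
    time_to_understand_s: float    # situation actually grasped, not just the message text read
    time_to_first_action_s: float  # first step that genuinely reduced risk
    wrong_assumptions: int         # times the team had to backtrack
    tool_hesitations: int          # pauses caused by distrust of the tools
    stuck_points: list[str] = field(default_factory=list)  # ownership gaps, missing data, document hunts


# Example record from a ten-minute starter drill (all values illustrative).
obs = DrillObservation(
    scenario="injected security alert during routine alarms",
    time_to_notice_s=95,
    time_to_understand_s=240,
    time_to_first_action_s=410,
    wrong_assumptions=2,
    tool_hesitations=3,
    stuck_points=[
        "unclear who is allowed to declare the incident",
        "next step buried in a shared-drive procedure",
    ],
)
```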

Drills That Target Cognitive Overload Directly

If you want to build real resilience, you need scenarios that stress the brain in the same way real incidents do. Not just big dramatic failures, but messy, distracting, half-broken situations where attention is constantly pulled in different directions.

Useful drill patterns include scenarios like multiple low-level alarms plus one critical signal. You trigger several minor alerts that are common in your environment, then inject one rare but serious event that does not look dramatic at first glance. The question is simple. Does it get lost in the noise?

You can run a partial automation failure drill by simulating the loss of part of your monitoring or logging. Force operators to rely more on process behaviour, other systems, or manual checks instead of a single consolidated dashboard. See how they adapt when automation does not hand them a neat picture.

Conflicting data across systems is another powerful pattern. One system says everything is fine, another shows suspicious activity, and a third is slow or intermittently unavailable. Watch how long it takes for someone to question the tools instead of waiting for perfect clarity that never comes.

A shift change in the middle of an incident is one of the most revealing scenarios you can run. Start a drill shortly before a scheduled handover. Watch how information is transferred, what gets lost, and how ownership is handled. Does the new shift truly understand the state of the incident, or do they quietly restart the analysis from scratch?
These are not games or theatrics. They are controlled ways to surface how humans behave when their attention is stretched and the system is not cooperating.
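
One practical way to keep these patterns reusable from one quarter to the next is a small scenario catalogue. The structure below is only a sketch: the injects and questions are lifted straight from the patterns above, and the keys and wording are arbitrary.

```python
# Illustrative scenario catalogue: each pattern lists its injects and the single
# question the facilitator is trying to answer about attention and judgment.
SCENARIOS = {
    "signal_in_noise": {
        "injects": ["several routine low-level alarms",
                    "one rare, serious event that does not look dramatic"],
        "question": "Does the critical signal get lost in the noise?",
    },
    "partial_automation_failure": {
        "injects": ["part of monitoring or logging made unavailable"],
        "question": "Can operators fall back on process behaviour and manual checks?",
    },
    "conflicting_data": {
        "injects": ["system A reports normal",
                    "system B shows suspicious activity",
                    "system C slow or intermittently unavailable"],
        "question": "How long before someone questions the tools instead of waiting for clarity?",
    },
    "mid_incident_handover": {
        "injects": ["drill started shortly before a scheduled shift change"],
        "question": "Does the incoming shift inherit the real state or restart the analysis?",
    },
}
```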

Building Operator Confidence, Not Fear

Poorly run drills create blame and fear. People feel tested and judged, so they play it safe, say as little as possible, and try not to stand out. That kills learning. It also kills honesty, because nobody wants to admit what they did not see or did not understand.
Good drills do the opposite. They build confidence by making hard situations familiar. They normalise the fact that confusion and uncertainty are part of real incidents, and give people a safe space to work through that.

The difference is in how you run the review afterwards.
Instead of asking, “Who made this mistake?”, you ask: What made this decision hard? What information was missing at that moment? Where did the tools confuse you or slow you down? Which alerts felt like noise and which felt real? Where did handovers or role boundaries get fuzzy?

Then you prove that you listened. You refine alerts, tighten roles, adjust procedures, and improve interfaces based on that feedback. You remove steps that nobody can realistically follow under pressure. You fix logins, permissions, and tool quirks that tripped people up.
When operators see that drills lead to better systems, not just criticism, they start giving you the truth instead of the polished version. That is where real improvement starts.

Making Drills a Habit, Not a Special Event

One big exercise a year looks impressive in reports, but it does not create muscle memory. People forget. Teams change. Systems move on.
A better model is to run short, focused scenarios every month and one larger integrated drill once or twice a year. You keep a simple log of what was learned and what changed after each one, so improvements do not vanish into meeting notes.

A monthly drill can be as basic as a single injected alert and a ten-minute response. The goal is repetition, not theatre. You want people to regularly practice noticing, prioritising, and acting under load.
Over time, you track trends. Are operators faster at spotting the real signal? Do they need fewer escalations for the same type of situation? Are there fewer hesitations about who should act? Are there still steps in procedures that nobody follows because they are impossible in real time?
That is how drills turn from a compliance tick box into a living part of your OT cyber defence.
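
If each drill's observations land in a simple log, the trend questions above become a few lines of analysis. The sketch below assumes a hypothetical `drill_log.csv` with one row per drill and a `time_to_notice_s` column; adjust the column name and window size to whatever you actually record.

```python
import csv
from statistics import mean


def notice_time_trend(path: str, window: int = 6) -> tuple[float, float]:
    """Average time-to-notice (seconds) for the previous window of drills vs the latest window."""
    with open(path, newline="") as f:
        rows = list(csv.DictReader(f))

    earlier = rows[-2 * window:-window]
    latest = rows[-window:]

    def avg(chunk):
        return mean(float(r["time_to_notice_s"]) for r in chunk)

    return avg(earlier), avg(latest)


if __name__ == "__main__":
    before, after = notice_time_trend("drill_log.csv")
    print(f"average time to notice: {before:.0f}s -> {after:.0f}s across the latest drills")
```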

Training For the World You Actually Live In

In OT, incidents do not wait for calm moments. They arrive during maintenance, during production crunch, during the night when people are tired, during shift changes, during audits, and sometimes during all of those at once.
If you only train people in comfortable, quiet sessions, you are not preparing them. You are rehearsing a fantasy.
Training under fire means teaching operators to handle alerts when their attention is already fragmented. Teaching them to act when tools are noisy, partial, or a little broken. Teaching them to trust their judgment and use the process safely when automation is uncertain or conflicting.
If you do that regularly, a real incident will not feel like the first time under pressure. It will feel like a harder version of something they have already survived in a drill.
You cannot remove cognitive overload from OT. But you can teach people to think clearly inside it. That is not a human resources initiative. It is cybersecurity.

About the author
Muhammad Ali Khan, ICS/OT Cybersecurity Specialist - AAISM | CISSP | CISA | CISM | CEH | ISO27001 LI | CHFI | CGEIT | CDCP
