
arnostorg

Posted on • Originally published at windows-cli.arnost.org

Incident Triage Without Context Switching: Bash, zoxide, PowerShell, and Win-CLI

Every incident has two clocks running: the production clock and your cognitive clock. Most teams obsess over the first and ignore the second. But in real outages, you don’t lose minutes because grep is slow. You lose minutes because you bounce between shells, paths, output formats, and half-remembered commands.

My take: incident triage should feel like one continuous workflow, even if you touch Bash, PowerShell, and classic Windows command-line tools in the same session. You can keep momentum if each tool has a clear job and you avoid unnecessary switching costs.

This is the workflow I use: jump fast, verify files, filter signal, inspect processes, and preserve artifacts.

A triage loop that works across shells

When an alert lands, I don’t start with “which shell is best?” I start with “what is the fastest route to signal?”

  1. Jump to the right project directory immediately (zoxide).
  2. Confirm relevant files exist before checking processes.
  3. Filter log signal quickly (Bash pipeline mindset).
  4. Inspect runtime state with readable output (PowerShell projection or Win-CLI text filters).
  5. Copy evidence files before cleanup or restart steps.

The point is continuity: same mental model, different command surface.
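Under the assumption of a Linux host with zoxide installed, the first steps of the loop can be sketched as a single Bash function. The function name `triage`, its argument layout, and the `/tmp/evidence-*` naming are my own illustration, not a tool from the article:

```shell
#!/usr/bin/env bash
# Hypothetical wrapper for steps 1-3 and 5 of the loop.
# Assumes zoxide is installed (it provides the z command).
triage() {
  local project="$1" logfile="$2"
  z "$project" || return 1                          # 1) jump via zoxide
  [ -f "$logfile" ] || { echo "missing: $logfile" >&2; return 1; }  # 2) confirm file exists
  grep -c warn "$logfile"                           # 3) filtered warning count
  cp "$logfile" "/tmp/evidence-$(date +%s).log"     # 5) preserve the artifact
}
```

Step 4 stays out of the sketch on purpose: process inspection is where you switch to PowerShell or Win-CLI, and a wrapper would only hide that decision.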

1) Use zoxide to kill navigation overhead

If you still burn time on deep cd chains during incidents, that’s pure waste. zoxide gives you muscle-memory navigation based on frequency and recency. It’s one of the highest ROI upgrades I’ve made for on-call work.

zoxide --version
zoxide add ~/projects/windows-command-shell
zoxide query windows
z windows

zoxide add seeds important paths ahead of time. zoxide query tells you where a jump will land. z windows gets you there fast. During incidents, this is the difference between staying in flow and fumbling through path history.

I also like that this stays practical: you can keep your existing shell habits and just remove friction where it hurts most.
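One setup note: the z command only exists after zoxide's shell hook is loaded, so this one-liner belongs at the end of your .bashrc (swap bash for zsh or fish as appropriate):

```shell
# Enable the z (and zi) commands in Bash sessions
eval "$(zoxide init bash)"
```

Without the hook, `zoxide query` still works, but the fast `z windows` jump does not.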

2) Use Bash pipes to extract incident signal fast

For raw text logs, Bash remains unbeatable for quick slicing. The key principle is simple and still underrated: each command does one job.

cat events.log | grep warn
cat events.log | grep warn | wc -l

That’s often enough to answer two critical triage questions:

  • What warnings are present right now?
  • How many warning lines are we dealing with?

Pipes let each command do one job well. This composability is core to Bash productivity and keeps complex workflows understandable.

A common mistake is counting the wrong stream. If you run wc -l on unfiltered input, you get total line count, not warning count. In incidents, that mistake can inflate severity and send your team down the wrong branch.
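To make the pitfall concrete, here is a minimal reproduction; the sample `events.log` content is invented for illustration:

```shell
# Build a tiny sample log: 3 lines total, 2 of them warnings.
printf 'info: service started\nwarn: disk usage high\nwarn: memory pressure\n' > events.log

wc -l < events.log            # total lines: 3 (the misleading number)
grep warn events.log | wc -l  # warning lines only: 2
grep -c warn events.log       # same answer in one command
```

The filtered count and the raw count differ by a full line here; on a real log the gap is orders of magnitude, which is exactly how severity gets inflated.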

3) Use Win-CLI for quick host and file operations

In Windows-heavy environments, classic commands are still very useful for immediate checks, especially on older hosts or constrained sessions.

dir /a
copy report.txt backup\report.txt
tasklist | findstr powershell
schtasks /query /fo table

What this gives you fast:

  • dir /a: hidden/system-inclusive view when expected files “disappear.”
  • copy ...: quick backup of an artifact before experimenting.
  • tasklist | findstr powershell: lightweight process presence check.
  • schtasks /query /fo table: scheduled-task visibility in human-readable form.

Is this elegant? Not always. Is it practical during an incident bridge? Absolutely.
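Two refinements worth keeping in muscle memory, since both are standard cmd behavior: findstr matches case-sensitively unless told otherwise, and tasklist can filter without a pipe at all:

```cmd
REM /i makes findstr case-insensitive, so PowerShell.exe still matches
tasklist | findstr /i powershell

REM tasklist's native filter avoids the pipe entirely
tasklist /fi "imagename eq powershell.exe"
```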

4) Use PowerShell when structured output matters

Bash shines for text streams. PowerShell shines when objects and projection matter. In triage, readable output is speed.

Get-ChildItem | Where-Object {$_.Extension -eq '.log'} | Select-Object Name
Get-Process | Where-Object {$_.Name -eq 'pwsh'} | Select-Object Name, WorkingSet64

Two habits here pay off every time:

  1. Check files first, process second. Running process checks before confirming logs exist can mislead the investigation.
  2. Always project (Select-Object) for scanability. Raw objects are noisy. In incidents, noisy output is latency.

My rule: if I can’t visually parse the output in two seconds, the command is not done.
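Raw byte counts fail that two-second test, so for memory I reach for a calculated property. The syntax is standard PowerShell; the `MemMB` column label is my own:

```powershell
# Project working-set memory into megabytes for fast visual parsing
Get-Process |
  Where-Object { $_.Name -eq 'pwsh' } |
  Select-Object Name, Id, @{ Name = 'MemMB'; Expression = { [math]::Round($_.WorkingSet64 / 1MB) } }
```

The point of the calculated property is the same as the projection rule: you decide the unit and the column name before the output hits your eyes.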

Troubleshooting anecdote: a fake “warning storm”

A real example from a late-night triage: I saw an alert tied to warning volume and quickly ran a line count on the log. The number looked huge, and for five minutes we treated it like a major warning spike.

Observable symptom: terminal count showed a massive “warn” volume, but dashboard trend didn’t match.

The problem was embarrassingly simple: I had counted all lines in the file in one check, then mentally mapped that to warnings. I fixed it by forcing the pipeline order and making the filtered count explicit:

cat events.log | grep warn | wc -l

Then I validated file context before process checks:

Get-ChildItem | Where-Object {$_.Extension -eq '.log'} | Select-Object Name

Once we aligned filtered counts with actual files, severity dropped from “possible flood” to “localized warning burst.” That saved us from a noisy rollback discussion and moved us back to focused remediation.

The lesson was not “use Bash” or “use PowerShell.” The lesson was: sequence matters more than shell loyalty.

Train this workflow in short reps

If you want this to feel automatic under pressure, practice the exact command patterns in short, repeated reps until the sequence is muscle memory.

Related reading from the same site:

  • Fast Navigation and Safer File Moves: Bash, zoxide, PowerShell, and Win-CLI

