For the first two days of my rebuild, everything I wrote in Python faced outward.
My scripts interacted with networks. They talked to other machines, asked questions, and interpreted replies. Ports opened, services responded, banners revealed versions. It felt like exploration, almost like mapping a coastline from a boat.
But today I realized something uncomfortable.
When real security incidents happen, nobody cares about open ports.
They care about history.
Not “what services exist?”
But “what happened here?”
And most computers are terrible historians.
They do millions of operations every minute, yet when a breach occurs, administrators often face a blank wall. The system is running, files exist, users can log in… and yet there is a persistent feeling that something already went wrong hours or days earlier.
The problem isn’t detection.
The problem is memory.
So today I stopped building a scanner and started building a witness.
The moment that changed my direction
If you read breach reports, they rarely begin with a sophisticated exploit. The beginning is usually mundane.
A document arrives in email.
A user downloads a “patch”.
A compressed archive is extracted.
A script is run.
That’s it.
Before lateral movement, before persistence, before data exfiltration, there is a very quiet moment when a machine changes state in a simple way:
A file that did not exist a second ago now exists… and it can be executed.
That single moment is incredibly important.
Because after execution, the attacker can:
- create new users
- modify startup services
- install backdoors
- clean traces
But the arrival of the first runnable file is much harder to hide. It is the earliest physical footprint on the system.
I decided to make my Python program notice exactly that.
The design idea
Instead of periodically scanning the system, I wanted the operating system to notify my script whenever something changed.
Modern operating systems actually support this. They expose filesystem events. When a directory changes, a program can subscribe and receive notifications immediately.
So my program doesn’t “check” folders.
It listens.
I used a filesystem observer that watches a specific path (for example the Downloads directory). This location is important. Attackers rarely drop their initial payload into protected system directories. They rely on writable places where normal users save files.
The program simply waits.
No loops reading files.
No constant disk scanning.
No high CPU usage.
It behaves like a sleeping sensor.
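As a rough sketch of that idea, here is what the sensor could look like using the third-party watchdog library (an assumption — the post never names its tooling). To stay self-contained, the demo watches a temporary directory rather than Downloads:

```python
import tempfile
import time
from pathlib import Path

from watchdog.events import FileSystemEventHandler
from watchdog.observers import Observer

class NewFileHandler(FileSystemEventHandler):
    """Invoked by the observer thread whenever the watched path changes."""
    def __init__(self):
        self.seen = []

    def on_created(self, event):
        # Only files matter here; new sub-directories are ignored.
        if not event.is_directory:
            self.seen.append(event.src_path)

handler = NewFileHandler()
observer = Observer()

# Demo: a temporary directory stands in for something like
# Path.home() / "Downloads" in the real tool.
with tempfile.TemporaryDirectory() as watch_dir:
    observer.schedule(handler, watch_dir, recursive=False)
    observer.start()                      # background thread; main thread idles
    Path(watch_dir, "update.sh").write_text("#!/bin/sh\n")
    time.sleep(2)                         # give the event time to arrive
    observer.stop()
    observer.join()

print(handler.seen)
```

The main thread does nothing but sleep; the operating system pushes events to the handler, which is exactly the "sleeping sensor" behavior described above.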
Turning activity into a security event
When the OS reports a new file, the script performs a small investigation.
It gathers context:
Which user is currently logged in?
What is the full path?
What is the filename?
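All three answers come straight from the standard library; a minimal sketch (`gather_context` is a hypothetical helper name, not necessarily what the project uses):

```python
import getpass
from datetime import datetime
from pathlib import Path

def gather_context(path_str: str) -> dict:
    """Collect the who/where/what triplet for a single filesystem event."""
    p = Path(path_str)
    return {
        "time": datetime.now().strftime("%Y-%m-%d %H:%M:%S"),
        "user": getpass.getuser(),   # the currently logged-in user
        "path": str(p),
        "name": p.name,
    }

ctx = gather_context("/home/hfz/Downloads/update.sh")
print(ctx["user"], ctx["path"])
```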
Then comes the key question:
Can this file run?
There are two ways a file becomes runnable:
- Its extension suggests an executable script or binary
- The operating system marks it with execute permission
Most files on a computer cannot run. Images, text files, PDFs, music, and videos are data. A runnable file is different. It is an instruction set the operating system will obey.
So instead of logging every file, the program classifies.
A photo download is just recorded as a normal creation.
A runnable file becomes a warning event.
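Both checks are cheap to express. The sketch below combines them; `RUNNABLE_EXTS` is an illustrative, non-exhaustive list, and the event names mirror the log format shown later:

```python
import os
from pathlib import Path

# Extensions that suggest a script or binary (illustrative, not exhaustive).
RUNNABLE_EXTS = {".sh", ".py", ".pl", ".bin", ".run", ".exe", ".bat", ".ps1"}

def is_runnable(path_str: str) -> bool:
    """A file counts as runnable if its extension looks executable
    or the operating system has set an execute bit on it."""
    p = Path(path_str)
    if p.suffix.lower() in RUNNABLE_EXTS:
        return True
    return p.exists() and os.access(p, os.X_OK)

def classify(path_str: str) -> str:
    """Map a new file to either a normal creation or a warning event."""
    return "NEW_EXECUTABLE" if is_runnable(path_str) else "NEW_FILE"
```

A `photo.jpg` with default permissions falls through both checks and stays a `NEW_FILE`; an `update.sh`, or any file with an execute bit, becomes a `NEW_EXECUTABLE` warning.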
The script then writes a structured log entry to a dedicated file.
Not a paragraph. Not a sentence.
A record.
2026-02-16 22:41:03 | WARNING | NEW_EXECUTABLE | user=hfz | path=/home/hfz/Downloads/update.sh
That line is more valuable than it looks. It contains time, identity, and location. In incident response, those three things reconstruct timelines.
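The standard-library logging module can produce that exact shape; `sentinel.log` is a hypothetical filename, not necessarily the post's actual choice:

```python
import logging

# A dedicated logger writing "time | level | message" records to a file.
logger = logging.getLogger("sentinel")
logger.setLevel(logging.INFO)
handler = logging.FileHandler("sentinel.log")
handler.setFormatter(logging.Formatter(
    fmt="%(asctime)s | %(levelname)s | %(message)s",
    datefmt="%Y-%m-%d %H:%M:%S",
))
logger.addHandler(handler)

logger.warning("NEW_EXECUTABLE | user=%s | path=%s",
               "hfz", "/home/hfz/Downloads/update.sh")
handler.flush()
```

Using `logger.warning` for runnable files and `logger.info` for everything else gives the classification a severity level for free, so the log can be filtered later.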
The bug that taught the real lesson
While testing, I created a script file manually: first the file, then, a few seconds later, the execute permission.
The logger stayed silent.
At first I thought my program was broken.
But it wasn’t broken. It was naive.
It was watching only the instant of creation. When the file was born, it wasn’t executable yet. Seconds later, permissions changed and the file became runnable, but my script wasn’t watching that transition.
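The fix is to treat permission changes as events too. Assuming the watchdog library again: on Linux's inotify backend a chmod surfaces as a modification event (this is platform-dependent), so a handler that re-checks executability on both creation and modification catches the transition:

```python
import os

from watchdog.events import FileSystemEventHandler

class ExecTransitionHandler(FileSystemEventHandler):
    """Notices a file *becoming* executable, not just being born executable."""
    def __init__(self):
        self.known_executable = set()

    def _check(self, path):
        # Flag a path the first time it is seen with an execute bit set.
        if os.path.isfile(path) and os.access(path, os.X_OK):
            if path not in self.known_executable:
                self.known_executable.add(path)
                print(f"NEW_EXECUTABLE {path}")

    def on_created(self, event):
        if not event.is_directory:
            self._check(event.src_path)

    def on_modified(self, event):
        # On Linux, a chmod arrives here (inotify IN_ATTRIB) just like a
        # content write does, so the same check covers both cases.
        if not event.is_directory:
            self._check(event.src_path)
```

The `known_executable` set is the crucial part: it is a small piece of remembered state, so the handler reports the transition exactly once instead of re-warning on every later modification.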
This revealed something deeper than a coding mistake.
Security monitoring is not about observing isolated events.
It is about observing state changes over time.
A harmless file can become dangerous later. An attack is not a single action. It is a sequence of small, individually innocent operations that together form malicious behavior.
That realization shifted the project from “log file creation” to “watch system behavior”.
What this resembles in the real world
I initially thought I was writing a small utility.
But I unintentionally reproduced the core idea behind endpoint detection systems.
Modern security agents rarely depend on known malware signatures. They observe behavior:
- processes starting in unusual locations
- files becoming executable in user directories
- unexpected privilege usage
- persistence modifications
They don’t just search for bad programs.
They watch the story a machine is telling.
My script currently observes only one part of that story: the first executable foothold.
Yet even this small piece matters.
If an incident occurs tomorrow, the system can answer:
“Here is the exact minute a runnable file first appeared.”
Without that, investigators guess. With it, they build timelines.
Why this project mattered to me
I started this 30-day rebuild to relearn Python syntax. Functions, modules, subprocess handling. I expected technical exercises.
Instead, today felt closer to learning how investigators think.
Cybersecurity isn’t only about preventing attacks. Prevention fails eventually. Patches are missed, users make mistakes, and attackers are patient.
The real difference between chaos and recovery is visibility.
A computer that cannot describe its past cannot be trusted in the present.
This tiny logger doesn’t block malware. It doesn’t quarantine files. It doesn’t stop intrusions.
What it does is quieter and more fundamental.
It gives the machine a memory.
And once a system can remember, an incident stops being a mystery and becomes a timeline.

