
PC_Workman 1.6.8: When "Quick Fix" Became the Biggest Update Since Alpha

How I rebuilt the data engine, added AI context awareness, gained 70% performance, and learned that scope creep isn't always bad


// Marcin Firmuga | Solo Developer | HCK_Labs

Version 1.6.8 wasn't supposed to be this big.

The plan was simple: fix the temporary chart system that couldn't show historical data. Maybe two days of work. Update the changelog. Push to testers. Done.

Three weeks later, I'd rebuilt the entire data aggregation engine, added context-aware AI that learns your patterns, optimized performance by 70%, fixed two critical bugs, polished the UI, and deleted 500+ lines of dead code.

This is that story. Technical deep-dive, honest struggles, lessons learned. The works.


The Problem That Started Everything

PC_Workman alpha shipped in January. Seven testers across different hardware. Bug reports coming in. Most were small - UI glitches, missing labels, the usual polish stuff.

Then one tester asked: "Can I see what happened yesterday? Like, what was eating CPU at 3 PM?"

No. The app couldn't do that.

Why? Because I'd built the chart system using temporary in-memory data structures. Great for showing "right now." Terrible for "what about three hours ago."

The charts were literally fake. Placeholder data that looked good in screenshots but had no historical persistence. Classic MVP shortcut that became a blocker.

I could've dismissed it. "Alpha feature, coming later." But the request made sense. A system monitor that can't look back in time isn't really monitoring - it's just displaying the present.

So I decided to fix it. "Two days," I told myself. "Build proper data persistence, hook it to the charts, done."

I was wrong about the timeline. But right about needing to fix it.


Building HCK_STATS_ENGINE: The Data Foundation

The core problem: how do you store months of system telemetry without killing the database?

PC_Workman collects data every second. CPU usage, RAM, GPU, temperatures, processes, everything. That's 86,400 data points per day per metric. Multiply by 10-20 metrics. That's nearly 2 million rows per day if you're naive about it.

SQLite can handle that, but querying becomes slow. And I was building this on a 2014 laptop that hits 94°C under load. Can't afford bloat.
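A quick back-of-envelope check on that figure (taking the upper end of the 10-20 metric range):

```python
# Naive approach: one row per metric per second
SAMPLES_PER_DAY = 24 * 60 * 60            # 86,400 samples/day/metric
METRICS = 20                              # upper end of 10-20 metrics
rows_per_day = SAMPLES_PER_DAY * METRICS  # 1,728,000 - "nearly 2 million"
```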

The Aggregation Pipeline

The solution: in-memory accumulation + tiered aggregation.

Here's how it works:

Level 1: In-Memory Accumulator

# Structure: {(hour_timestamp, process_name): {metrics}}
accumulator = {
    (1738368000, "chrome.exe"): {
        "cpu_sum": 245.3,      # Total CPU seconds
        "cpu_max": 45.2,       # Peak usage
        "ram_sum": 1024000,    # Total RAM bytes
        "count": 3600,         # Sample count
        "active_secs": 3543    # Seconds active
    }
}


Every second, accumulate_second() updates the dict. Lightweight. Just dict operations. No I/O.
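The article doesn't show accumulate_second() itself; here's a minimal sketch of what it might look like. The signature and the active-seconds heuristic are my assumptions:

```python
from collections import defaultdict

accumulator = defaultdict(lambda: {
    "cpu_sum": 0.0, "cpu_max": 0.0, "ram_sum": 0, "count": 0, "active_secs": 0
})

def accumulate_second(ts: int, name: str, cpu: float, ram: int):
    # Bucket by hour boundary: key is (hour_timestamp, process_name)
    key = (ts - ts % 3600, name)
    m = accumulator[key]
    m["cpu_sum"] += cpu
    m["cpu_max"] = max(m["cpu_max"], cpu)
    m["ram_sum"] += ram
    m["count"] += 1
    if cpu > 0:           # heuristic: count a second as "active" if CPU was used
        m["active_secs"] += 1
```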

Level 2: Hourly Flush

def flush_hourly_processes():
    # At each hour boundary (00:00, 01:00, etc.):
    # write the accumulator to the process_hourly_stats table, then clear it.
    # If we crash mid-write, the transaction rolls back - data stays safe.
    ...

Level 3: Daily Aggregation

def aggregate_daily_processes():
    # At the day boundary (midnight):
    # sum hourly stats into process_daily_stats.
    # Keep hourly data for 7 days; keep daily data forever (or until disk full).
    ...

Result: Real historical data. Queryable. Fast. Safe.
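The 7-day hourly retention implies a periodic pruning step. A sketch under stated assumptions: the table name matches the flush comments above, and the `hour_ts` column name is my guess:

```python
import sqlite3
import time

def prune_old_hourly(db: sqlite3.Connection, days_to_keep: int = 7):
    # Hourly rows older than the window are already rolled up into daily stats
    cutoff = int(time.time()) - days_to_keep * 86400
    with db:  # atomic: a crash mid-delete rolls back cleanly
        db.execute("DELETE FROM process_hourly_stats WHERE hour_ts < ?", (cutoff,))
```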

Stability First

The engine runs on the scheduler thread. Separate from UI. Every operation wrapped in try/except. If SQLite fails (disk full, permission issue, cosmic ray), the app doesn't crash. It falls back to CSV logging like before.

try:
    db.execute(query, params)
    db.commit()
except sqlite3.Error as e:
    logger.error(f"Stats engine failed: {e}")
    fallback_to_csv(data)  # App still works

Why this matters: System monitoring tools can't afford to crash. If your monitoring crashes, you're blind. So we built defense in depth:

  • WAL mode (write-ahead logging) - concurrent reads/writes
  • Atomic transactions (crash-safe)
  • Graceful degradation (SQLite fail → CSV works)
  • No new dependencies (sqlite3 is Python stdlib)
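Those bullets translate into a few lines of connection setup. A minimal sketch; `open_stats_db` is a hypothetical helper name, not the project's actual API:

```python
import sqlite3

def open_stats_db(path: str = "hck_stats.db") -> sqlite3.Connection:
    conn = sqlite3.connect(path, timeout=5.0)  # wait on locks instead of failing
    conn.execute("PRAGMA journal_mode=WAL")    # readers don't block the writer
    conn.execute("PRAGMA synchronous=NORMAL")  # safe with WAL, faster than FULL
    return conn
```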

Files added:

  • hck_stats_engine/constants.py - config, table schemas
  • hck_stats_engine/db_manager.py - connection pooling, migrations
  • hck_stats_engine/aggregator.py - core aggregation logic
  • hck_stats_engine/process_aggregator.py - process-specific stats
  • hck_stats_engine/query_api.py - read interface for UI
  • hck_stats_engine/events.py - anomaly detection hooks

Six new files. ~1,200 lines of code. But now the charts show real data going back weeks.


Making AI Actually Intelligent

While building the data engine, I kept opening hck_GPT (the AI assistant panel) for testing. Every time, it greeted me the same way:

"Hello! I'm hck_GPT, your PC companion. How can I help?"

Static. Boring. Useless.

I'd just implemented a system that knows everything about the computer's behavior. CPU spikes. Heavy apps. Gaming sessions. Work patterns. And the AI assistant was ignoring all of it.

That felt wrong. So I fixed it.

Context Awareness: InsightsEngine

The breakthrough was realizing: timing matters more than data.

It's not enough to know Chrome used 40% CPU. You need to know:

  • When? (During work hours? Gaming time?)
  • Context? (Is this normal for this time of day?)
  • Pattern? (Does this happen every Tuesday at 3 PM?)
  • Action? (Should I suggest closing tabs? Or is this expected?)

Enter InsightsEngine - 300 lines of pattern recognition and contextual intelligence.

What it does:

1. Time-aware greetings:

def get_greeting(self):
    hour = datetime.now().hour

    if hour < 12:
        return "Morning. Coffee loaded? CPU looking good at 8%."
    elif hour < 17:
        return "Afternoon. RAM at 52%, up from morning. Chrome again?"
    else:
        # Evening - check for gaming patterns
        if self._detect_game_pattern():
            return "Evening. Ready for another round of Battlefield? GPU warmed up."
        return "Evening. System's quiet. 12% CPU, 45% RAM."

2. Real-time spike detection:

def detect_spike(self, metric, current_value):
    # Compare against a 7-day rolling baseline
    baseline = self._get_baseline(metric)
    threshold = baseline * 1.5  # 50% above normal

    if current_value <= threshold:
        return None  # nothing notable

    # Find the likely cause
    heavy_process = self._get_top_process(metric)
    return f"{metric.upper()} SPIKE: {current_value}% (normal: {baseline}%). {heavy_process} is the cause."

3. Process categorization:

CATEGORIES = {
    "gaming": ["battlefield", "cod", "cyberpunk", "steam"],
    "browser": ["chrome", "firefox", "edge"],
    "dev_tools": ["vscode", "pycharm", "docker", "node"]
}

def categorize_process(name):
    for category, keywords in CATEGORIES.items():
        if any(kw in name.lower() for kw in keywords):
            return category
    return "system"

4. Pattern learning (7-day habit tracking):

def get_teaser(self):
    # Evening gaming pattern detected
    if self._is_evening() and self._played_game_last_3_days("battlefield"):
        return "Ready for another round of Battlefield? GPU's warmed up."

    # Morning work pattern
    if self._is_morning() and self._opened_vscode_last_5_days():
        return "VS Code from yesterday still open. Pick up where you left off?"

UI Integration

The panel now:

Opens with context:

  • Morning: mentions coffee, shows overnight stats
  • Afternoon: references morning activity
  • Evening: suggests gaming if pattern detected

Dynamic banner:

  • Normal: "CPU 12% | RAM 45% | All quiet"
  • Spike: "CPU 87% SPIKE | Chrome (3.2GB RAM)"
  • Gaming: "Battlefield running | GPU 78%"

New commands:

  • stats - 7-day habit summary
  • alerts - anomaly report (24h)
  • insights - what's notable right now
  • teaser - proactive suggestion based on patterns

Color-coded reports (directly in chat, no separate window):

  • CPU values: red
  • GPU values: blue
  • RAM values: yellow
  • Category badges: [Gaming] red, [Browser] blue, [Development] green
  • Alert status: yellow banner

The Difference

Before:

User: "What's using CPU?"

hck_GPT: "Chrome is using 42% CPU."

After:

[Auto-greeting at 7 PM]

hck_GPT: "Evening. Battlefield pattern detected - last 3 nights, 7-10 PM. GPU ready at 45°C. RAM at 54%, up from afternoon (Chrome still open with 15 tabs)."

That's intelligence. Not just answering questions. Anticipating needs based on learned patterns.


The Performance Crisis Nobody Saw

While testing the new data engine and AI context, I noticed something embarrassing:

PC_Workman was using 15-20% CPU.

A system monitoring tool. That monitors CPU usage. Was itself eating 15-20% CPU.

The irony wasn't lost on me.

Finding the Bottlenecks

I profiled the app. The culprits:

1. Main loop cadence: 300ms

  • UI updated every 0.3 seconds
  • 200 updates/minute
  • Unnecessary for human perception

Solution: 1000ms (1 second)

  • Still smooth
  • 70% less overhead
  • Nobody notices the difference
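The cadence change in numbers:

```python
OLD_MS, NEW_MS = 300, 1000

updates_per_min_old = 60_000 // OLD_MS  # 200 updates/minute
updates_per_min_new = 60_000 // NEW_MS  # 60 updates/minute
reduction = 1 - updates_per_min_new / updates_per_min_old  # 0.7, i.e. 70% less overhead
```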

2. Widget destruction hell:

# Old way (every 2 seconds):
for widget in top5_panel.winfo_children():
    widget.destroy()  # Expensive!

for process in top_processes:
    create_new_widget(process)  # More expensive!

Destroying and recreating 5 widgets every 2 seconds = memory fragmentation + CPU waste.

Solution: Widget pooling

# New way:
if not self.process_widgets:
    # First time: create 5 reusable widgets
    self.process_widgets = [create_widget() for _ in range(5)]

# Every update: just update the text in place
for widget, process in zip(self.process_widgets, top_processes):
    widget.config(text=f"{process.name}  {process.cpu:.0f}%")

Reuse > recreate. Performance improved by orders of magnitude.

3. Redundant syscalls:

# Old way (every update):
cpu_count = psutil.cpu_count()  # Why call this every second?
total_ram = psutil.virtual_memory().total  # This never changes!

Hardware doesn't change at runtime. Cache it once at startup.

# New way:
class HardwareConstants:
    CPU_COUNT = psutil.cpu_count()  # Once
    TOTAL_RAM = psutil.virtual_memory().total  # Once

    @classmethod
    def get_cpu_percent_per_core(cls):
        return 100 / cls.CPU_COUNT

4. Chart rendering nightmare:

The old chart system used PhotoImage pixel manipulation:

# Old: ~70,000 iterations per frame
for x in range(width):
    for y in range(height):
        if should_draw_pixel(x, y):
            img.put(color, (x, y))  # Slowwww

Solution: Canvas object reuse

# New: Create objects once
self.chart_lines = [
    canvas.create_line(0, 0, 0, 0, fill="red", width=2)
    for _ in range(num_datapoints)
]

# Update: Just move coordinates
for i, (line, point) in enumerate(zip(self.chart_lines, datapoints)):
    canvas.coords(line, x1, y1, x2, y2)  # Fast!

Orders of magnitude faster. Literally.

5. UI thread blocking:

Heavy telemetry collection (iterating 200+ processes) was blocking the UI thread.

Solution: Background daemon

# Startup: Launch background thread
telemetry_thread = threading.Thread(
    target=collect_telemetry_loop,
    daemon=True  # Dies with main app
)
telemetry_thread.start()

# UI thread: Just read snapshot (non-blocking)
def update_ui():
    snapshot = telemetry.read_snapshot()  # Instant
    render_data(snapshot)

No more drag lag. No more freezes.
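That non-blocking snapshot read implies some synchronization between the two threads. A minimal sketch of the idea; `TelemetryStore` is a hypothetical name for illustration:

```python
import threading

class TelemetryStore:
    # Writer thread updates; UI thread reads a consistent copy
    # without ever waiting on the (slow) collection itself.
    def __init__(self):
        self._lock = threading.Lock()
        self._data = {}

    def write(self, data: dict):
        # Called by the background daemon after each collection pass
        with self._lock:
            self._data = dict(data)

    def read_snapshot(self) -> dict:
        # Called by the UI thread: a cheap copy, never blocks on collection
        with self._lock:
            return dict(self._data)
```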

Results

Before:

  • Main loop: 300ms
  • CPU usage: 15-20%
  • Chart updates: laggy
  • Widget updates: memory spikes

After:

  • Main loop: 1000ms (70% reduction)
  • CPU usage: 3-5% (75% improvement)
  • Chart updates: smooth
  • Widget updates: stable memory

The lesson: Building on a dying 94°C laptop forced optimization thinking. Constraints = innovation.


Bug Fixes Nobody Sees (But Everyone Benefits From)

Two critical bugs fixed in 1.6.8. Not sexy. But important.

Bug A: Data Aggregation Overlap

Problem: get_summary_stats() was double-counting data.

When computing "lifetime uptime," it would:

  1. Sum daily stats (past days)
  2. Sum hourly stats (today)
  3. Sum minute stats (current hour)

But hourly stats include minute stats. And daily stats include hourly stats. Double-counting everywhere.

Result: Lifetime uptime showed 847 hours when real time was 423 hours.

Fix: Multi-tier fallback with overlap prevention:

def get_summary_stats():
    # Layer 1: Daily (complete days only)
    daily_sum = sum(daily_stats[:-1])  # Exclude today

    # Layer 2: Hourly (today, complete hours only)
    hourly_sum = sum(hourly_stats[:-1])  # Exclude current hour

    # Layer 3: Minute (current hour, real-time)
    minute_sum = sum(minute_stats)

    # No overlap: daily + hourly + minute = accurate
    return daily_sum + hourly_sum + minute_sum

Now lifetime uptime is accurate from the first session.
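A worked example with made-up numbers (seconds of uptime) shows why excluding the in-progress bucket at each tier prevents overlap:

```python
daily_stats  = [86400, 86400, 45000]  # last entry = today, still accumulating
hourly_stats = [3600, 3600, 1800]     # last entry = current hour, still accumulating
minute_stats = [60] * 30              # current hour, real-time

total = (sum(daily_stats[:-1])        # complete days only: 172,800
         + sum(hourly_stats[:-1])     # today's complete hours only: 7,200
         + sum(minute_stats))         # current hour: 1,800
```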

Bug B: System Noise Pollution

Problem: System processes triggering false alerts.

"System Idle Process" would show 800% CPU (8 cores * 100% idle). "Memory Compression" would spike to 90% RAM during compaction. "Interrupts" would randomly hit 1000% (kernel threads).

These aren't problems. They're normal system behavior. But InsightsEngine was treating them as anomalies.

Fix: Strict filtering at multiple layers:

Layer 1: Telemetry (process_aggregator.py)

SYSTEM_NOISE = [
    "system idle process",
    "system interrupts", 
    "memory compression",
    "registry",
    "dwm.exe"  # Desktop Window Manager
]

def is_system_noise(process_name):
    return any(noise in process_name.lower() 
               for noise in SYSTEM_NOISE)

Layer 2: Insights (InsightsEngine)

def _is_system_noise(self, process_name):
    # Additional filter for user-facing alerts
    return is_system_noise(process_name)

def _cap_cpu(self, process_cpu):
    # Cap at 100% per process (prevents overflow displays like "800% CPU")
    return min(process_cpu, 100)

Result: Clean telemetry. No false alerts. Accurate health checks.
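Putting the filter to work on a sample process list (the noise table and function repeated here from the snippet above, for a self-contained example):

```python
SYSTEM_NOISE = [
    "system idle process", "system interrupts",
    "memory compression", "registry", "dwm.exe",
]

def is_system_noise(process_name: str) -> bool:
    return any(noise in process_name.lower() for noise in SYSTEM_NOISE)

processes = ["chrome.exe", "System Idle Process", "Memory Compression", "code.exe"]
visible = [p for p in processes if not is_system_noise(p)]
# Only user-relevant processes survive into alerting
```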


Polish: The Details That Matter

Big features get headlines. Polish gets ignored. But polish is what makes software feel professional vs amateur.

UI Refinements

Info Section redesign:

  • Height: 50px (compact, not cramped)
  • Accent color: #a78bfa (purple, not blue)
  • Font: Consolas (mono, technical readability)
  • Typewriter animation: 70ms typing, 3-char burst deletion, longer hold time

Why Consolas? Monospaced fonts make numbers align. "CPU: 8%" lines up with "RAM: 64%" visually. Small detail. Huge readability improvement.

Dashboard-only updates:

def _update_hardware_cards(self):
    if self.current_view != "dashboard":
        return  # Don't update if not visible

Eliminates errors from updating widgets that don't exist. Defensive coding.

winfo_exists() guards everywhere:

def _update_widget(self, widget):
    if not widget.winfo_exists():
        return  # Widget destroyed, skip update

Prevents crashes when closing panels mid-update.

Cleanup: Less Is More

Deleted:

  • 60 LOC: _animate_button_shimmer() (CPU overhead, nobody noticed)
  • 3 modules: file_utils, net_utils, system_info (unused)
  • 1 component: ExpandableProcessList (replaced by widget pooling)
  • settings/ directory (obsolete configs)
  • System artifacts: _nul, nul (Windows junk)

Why cleanup matters:

  • Faster builds (less to compile)
  • Less cognitive overhead (fewer files to track)
  • Maintainability (obvious what's actually used)

The philosophy: Every line of code is a liability. If it's not pulling weight, delete it.


Lessons Learned (The Real Value)

1.6.8 taught me more than any previous update. Here's what stuck:

1. Scope Creep Can Be Good

Started with: Fix chart data persistence.

Ended with: New data engine + AI context + 70% performance + bug fixes + polish.

Lesson: Sometimes the "quick fix" reveals deeper problems. Don't fight the scope creep if it's making the product better. Just communicate the timeline honestly.

2. Constraints Force Innovation

Building on a 94°C laptop isn't ideal. But it forced me to optimize obsessively. Widget pooling, background threading, canvas reuse - these weren't optional. They were necessary.

Lesson: Constraints aren't obstacles. They're forcing functions for creativity. Unlimited resources = lazy solutions.

3. Context > Features

hck_GPT gained intelligence not from more features, but from using existing data contextually. Time-of-day + process patterns + user habits = proactive assistant.

Lesson: Before adding features, ask: "Am I using existing data optimally?" Context multiplies value without adding complexity.

4. Measure, Then Optimize

I guessed the main loop was slow. Profiling showed widget destruction was the real bottleneck. Guessing = wasted effort. Measuring = targeted fixes.

Lesson: Profile first. Optimize second. Never guess.

5. Stability Is a Feature

Try/except everywhere. Graceful degradation. Defensive coding. These aren't "nice to haves." They're the difference between a tool people trust vs one they uninstall after the first crash.

Lesson: System monitoring tools can't afford to crash. Build defense in depth.

6. Real Users = Real Priorities

The "show me yesterday's data" request came from a tester. Not from my roadmap. But it was the right priority.

Lesson: Ship early. Listen closely. Real feedback > imagined use cases.

7. Technical Debt Compounds

Temporary charts were a 2-hour shortcut to ship alpha. Fixing them properly took 3 weeks. The 2-hour shortcut cost 21 days later.

Lesson: Pay tech debt early. It only gets more expensive.


What's Next

1.6.8 is live for testers now. Two-week stability testing window. Collecting feedback. Handling bug reports (same-day turnaround).

Coming in future updates:

InsightsEngine v2:

  • Deeper pattern learning (monthly trends, not just weekly)
  • Predictive suggestions ("Battlefield usually crashes after 2h, save your game")
  • Anomaly alerts (SMS/email when system spikes while you're away)

Cross-platform:

  • macOS support (different syscalls, same architecture)
  • Linux support (easier than macOS, tbh)

Plugin system:

  • Community extensions
  • Custom metrics (track your own apps)
  • Share configs (gaming profile, work profile, etc.)

Mobile companion:

  • View stats remotely
  • Get alerts on phone
  • Start/stop monitoring

But first: Stability. Performance. Polish. The boring work that makes software reliable.


Try It Yourself

PC_Workman is open source. Free forever. MIT license.

GitHub: github.com/HuckleR2003/PC_Workman_HCK
A star would make my day!

Download: Alpha available on GitHub Releases + SourceForge

System requirements:

  • Windows 10/11 (macOS/Linux coming)
  • Python 3.9+ (if running from source)
  • .exe available (no Python needed)

Verified security:

  • VirusTotal scanned (70 engines, clean)
  • Sigstore signed (cryptographic proof of authenticity)
  • CodeQL analyzed (GitHub security scanning)

Want to contribute?

  • Bug reports: GitHub Issues
  • Feature requests: GitHub Discussions
  • Code: Pull requests welcome

Follow the build-in-public journey:

  • Twitter/X: Updates, screenshots, lessons
  • Medium: Long-form technical articles
  • Dev.to: Tutorials, deep-dives
  • Hashnode: Build-in-public series

The Real Story

1.6.8 wasn't planned. It emerged from fixing one thing, discovering another, and following the thread.

Started: "Fix charts in 2 days"

Ended: "Rebuilt data engine, added AI intelligence, gained 70% performance in 3 weeks"

Was it scope creep? Yes.
Was it worth it? Absolutely.

This is what building in public looks like. Not the polished "I had a vision" narrative. The messy "I found a bug, then another, then realized the whole system needed rethinking" reality.

PC_Workman is better because I didn't stick to the plan. I followed the problems to their root and fixed them properly.

Next update might be small. Might be huge. Don't know yet. That's the fun part.


Stack: Python 3.9, Tkinter, SQLite (WAL mode), psutil, Threading, Canvas rendering

Development time: 3 weeks (estimated 2 days, lol)
Lines added: ~1,500
Lines deleted: ~500
Laptop temperature during build: Still 94°C, still alive
Coffee consumed: Too much
Lessons learned: Infinite


Building PC_Workman on a dying laptop between warehouse shifts. Documenting everything. Shipping weekly. This is update 1.6.8.


About the Author
Marcin Firmuga | Solo Developer | HCK_Labs Founder

I built PC_Workman from scratch on dying hardware during warehouse shifts in the Netherlands.

Before this:

Game translations for Polish communities
IT technician internships (2 months, 2022)
Warehouse operations (Poland, Netherlands, multiple contracts)
Twelve failed projects (all quit at 70–85%)
But this one stuck.

680+ hours of code. 4 complete UI rebuilds. 16,000 lines deleted. 3 AM sessions. Energy drinks and toast #7.

And finally: an app I wouldn’t close in 5 seconds.
That’s the difference between building and shipping.

PC_Workman is the result.
Currently: Searching for first tech role while building in public.

Warehouse to Developer | Building Despite Everything | HCK_Labs

If you want to be the first person to support me: I'm here <3
