hiyoyo
Every Swift Call Cost Me 50ms. I Killed That Cost Entirely With a Resident Daemon. [Devlog #7]

All tests run on an 8-year-old MacBook Air.

100 thumbnails. Each one spawning a new Swift process. 50ms per spawn.

That's 5 seconds of pure process-creation overhead before a single pixel renders.

The fix: stop spawning. Keep one process alive and pipe commands into it.


The problem in code

Every call to macOS native APIs went through a Swift CLI binary:

use std::process::Command;

// ~50ms of process-spawn overhead on every single call
let output = Command::new("/path/to/swift-util")
    .arg("--task")
    .arg("render-page")
    .arg(page_num.to_string())
    .output()?;

One call: fine. A hundred calls: the UI is dead for 5 seconds.
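You can isolate that spawn cost directly. A minimal sketch, using `/bin/echo` as a stand-in for the Swift binary (the helper name is mine):

```rust
use std::process::Command;
use std::time::{Duration, Instant};

// Time `runs` cold spawns of a trivial binary — this isolates
// process-creation overhead from any real work the binary does.
fn measure_spawn(runs: u32) -> Duration {
    let start = Instant::now();
    for _ in 0..runs {
        Command::new("/bin/echo")
            .arg("hi")
            .output()
            .expect("spawn failed");
    }
    start.elapsed() / runs
}

fn main() {
    println!("avg spawn cost: {:?}", measure_spawn(20));
}
```

On slow hardware the per-call number adds up fast; multiply it by your call count before deciding a CLI-per-request design is acceptable.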


Ghost Engine: spawn once, pipe forever

Old:
Rust → spawn (50ms) → process → exit → spawn (50ms) → ...

Ghost Engine:
App start: Rust → spawn daemon (once)
After:     Rust → IPC pipe → daemon → response (<1ms)

One spawn at startup. Zero after that.


GhostEngineManager

use std::process::{Child, Command, Stdio};
use std::io::{Write, BufRead, BufReader};
use std::sync::{Arc, Mutex};

pub struct GhostEngineManager {
    child: Arc<Mutex<Child>>,
}

impl GhostEngineManager {
    pub fn spawn() -> Result<Self, std::io::Error> {
        let child = Command::new("/path/to/ghost-engine-daemon")
            .stdin(Stdio::piped())
            .stdout(Stdio::piped())
            .spawn()?;

        Ok(Self {
            child: Arc::new(Mutex::new(child)),
        })
    }

    pub fn send_command(&self, cmd: &str) -> Result<String, String> {
        let mut child = self.child.lock().map_err(|e| e.to_string())?;

        if let Some(stdin) = child.stdin.as_mut() {
            writeln!(stdin, "{}", cmd).map_err(|e| e.to_string())?;
        }

        if let Some(stdout) = child.stdout.as_mut() {
            // Note: a fresh BufReader per call can discard read-ahead bytes.
            // Fine for a strict one-line-response protocol; otherwise keep
            // one persistent reader.
            let mut reader = BufReader::new(stdout);
            let mut response = String::new();
            reader.read_line(&mut response).map_err(|e| e.to_string())?;
            return Ok(response.trim().to_string());
        }

        Err("daemon communication failed".to_string())
    }
}
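The real daemon is a Swift binary, but the wire protocol is just newline-delimited text over stdin/stdout, so a stand-in in any language can exercise the Rust side. A hypothetical minimal echo daemon (my sketch — not the actual Ghost Engine protocol):

```rust
use std::io::{self, BufRead, Write};

// Turn one command line into one response line.
// (Hypothetical protocol; the real daemon dispatches to macOS APIs.)
fn handle_command(cmd: &str) -> String {
    match cmd.trim() {
        "ping" => "pong".to_string(),
        other => format!("ok:{}", other),
    }
}

fn main() {
    let stdin = io::stdin();
    let mut stdout = io::stdout();
    // One request line in, one response line out — the shape
    // GhostEngineManager::send_command expects.
    for line in stdin.lock().lines() {
        let line = line.expect("read failed");
        writeln!(stdout, "{}", handle_command(&line)).expect("write failed");
        stdout.flush().expect("flush failed"); // respond immediately, don't buffer
    }
}
```

The explicit `flush` matters: if the daemon buffers its stdout, the Rust side blocks in `read_line` waiting for a response that never arrives.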

Registered as a Tauri singleton:

fn main() {
    let ghost_engine = GhostEngineManager::spawn()
        .expect("failed to start Ghost Engine");

    tauri::Builder::default()
        .manage(ghost_engine)
        .invoke_handler(tauri::generate_handler![render_page, generate_thumbnail])
        .run(tauri::generate_context!())
        .expect("error while running tauri application");
}
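A command handler can then reach the managed singleton through `tauri::State`. A sketch under the assumption that the daemon understands a `render-page <n>` line command (the wire format here is mine, not confirmed by the post):

```rust
use tauri::State;

// Hypothetical handler — assumes GhostEngineManager from above is in scope
// and the daemon accepts a "render-page <n>" line.
#[tauri::command]
fn render_page(engine: State<GhostEngineManager>, page: u32) -> Result<String, String> {
    engine.send_command(&format!("render-page {}", page))
}
```

Because `send_command` takes `&self` and locks internally, the handler needs no extra synchronization; concurrent invocations serialize on the mutex.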

Intelligent Prefetch becomes practical

With no spawn overhead, aggressive prefetching is now viable:

const handleScroll = (e: React.UIEvent<HTMLDivElement>) => {
  const scrollTop = e.currentTarget.scrollTop;
  // Pixels moved since the last event — a rough proxy for scroll velocity
  const velocity = Math.abs(scrollTop - prevScrollTop.current);

  // Fast scroll: prefetch 12 pages ahead; otherwise just 3
  const prefetchCount = velocity > 500 ? 12 : 3;
  const prefetchTargets = getNextPages(currentPage, prefetchCount);

  prefetchTargets.forEach(page => {
    // Daemon already alive — reaches it in <1ms
    invoke('render_page', { page });
  });

  prevScrollTop.current = scrollTop;
};

Pages are ready before the user's eyes reach them.
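One wrinkle: scroll events fire rapidly, so the same page can be requested many times in a burst. A small in-flight guard on the Rust side keeps duplicate work off the daemon — a sketch only; this helper is mine and not part of the GhostEngineManager shown above:

```rust
use std::collections::HashSet;
use std::sync::Mutex;

// Tracks pages already requested so repeat prefetches become no-ops.
// (Hypothetical helper; deduping in the frontend works too.)
pub struct PrefetchGuard {
    in_flight: Mutex<HashSet<u32>>,
}

impl PrefetchGuard {
    pub fn new() -> Self {
        Self { in_flight: Mutex::new(HashSet::new()) }
    }

    /// Returns true only the first time a page is claimed.
    pub fn try_claim(&self, page: u32) -> bool {
        self.in_flight.lock().unwrap().insert(page)
    }

    /// Call when a render completes (or fails) so the page can be retried.
    pub fn release(&self, page: u32) {
        self.in_flight.lock().unwrap().remove(&page);
    }
}
```

`HashSet::insert` already returns whether the value was new, so claim-and-check is a single atomic step under the lock.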


Results

| Metric | Before | Ghost Engine |
| --- | --- | --- |
| Per-call overhead | ~50ms | <1ms |
| 100 thumbnails | ~5s | ~0.1s |
| Scroll jank | Visible | Gone |

Next devlog

Publisher — building a booklet imposition engine so home printers can produce proper saddle-stitched booklets. Surprisingly tricky page ordering math involved.


Hiyoko PDF Vault → https://hiyokoko.gumroad.com/l/HiyokoPDFVault
X → @hiyoyok
