Compared to previous commands, copy was relatively straightforward to build. The options — recursive, force, dry-run, verbose, skip-existing, no-overwrite, pattern filtering — all came together without major friction. Testing was clean. No dramatic bugs this time.
The real challenge was the threading model.
The original implementation had a fundamental flaw: threads would exit their loop as soon as the work queue was empty, even if other threads were still discovering new directories to process. In practice, almost everything was running on a single thread. The fix required rethinking the entire approach — instead of threads dying prematurely, they now wait intelligently using condition variables and notifications. They only exit when there's genuinely nothing left to do. A whole new layer of complexity, but the right kind.
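The pattern described above — workers that block on a condition variable when the queue is empty, and only exit once the queue is empty *and* no task is still in flight — can be sketched roughly like this. This is a minimal illustration in Python, not kron's actual code; the `WorkQueue` class, `in_flight` counter, and `worker` function are all hypothetical names for the sake of the example.

```python
import threading
from collections import deque

class WorkQueue:
    """Work queue where consumers are also producers: processing one
    directory may enqueue more. Workers must not exit while any task
    is still in flight, because that task may add new work."""

    def __init__(self):
        self._items = deque()
        self._in_flight = 0                 # taken but not yet finished
        self._cond = threading.Condition()

    def put(self, item):
        with self._cond:
            self._items.append(item)
            self._cond.notify()             # wake one waiting worker

    def get(self):
        """Return the next item, or None when all work is truly done."""
        with self._cond:
            while not self._items:
                if self._in_flight == 0:
                    return None             # empty queue, nothing pending
                self._cond.wait()           # in-flight work may add more
            self._in_flight += 1
            return self._items.popleft()

    def task_done(self):
        with self._cond:
            self._in_flight -= 1
            if self._in_flight == 0 and not self._items:
                self._cond.notify_all()     # release waiters so they exit

def worker(q, handle):
    """Loop until get() signals that no work remains anywhere."""
    while (item := q.get()) is not None:
        try:
            handle(item)                    # may call q.put() for subdirs
        finally:
            q.task_done()
```

The key detail is the `in_flight` counter: an empty queue alone is not a termination signal, because a thread still processing a directory may enqueue its children. Without that counter you get exactly the bug described above, where threads exit early and one thread ends up doing all the work.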
The bigger unsolved problem is performance on large directories. Unlike list or inspect, copy can't cache anything — every file operation requires constant syscalls, and syscalls are expensive. The more files, the more it hurts. I have some ideas for specific cases, but I'm not interested in point solutions. I want something more fundamental before I consider the problem addressed.
--preserve is intentionally not implemented yet. Preserving metadata means more syscalls, and adding that on top of an already expensive operation didn't make sense until I find a way to reduce that cost first.
Kron is still being built from the void.
github.com/TheNobelVoid/kron