
Alin Pisica


Acta Non Verba: Building a GTD Task Manager by Conversation with GitHub Copilot

GitHub Copilot CLI Challenge Submission

This is a submission for the GitHub Copilot CLI Challenge

What I Built

A while ago I read David Allen's Getting Things Done, and it changed the way I do things.
When I saw this challenge, it felt like the perfect opportunity to combine pleasure with work, so I created Acta.

Acta is a command-line task manager built around the Getting Things Done (GTD) methodology, shaped by my current needs (I use an external tool for tracking tasks at the moment, but I often find myself needing to add a new task right from the terminal while I work - it removes as much friction as possible and keeps me in my flow). The name comes from the Latin phrase "Acta non verba" - actions, not words - which felt fitting for a tool designed to get things off your mind and into motion.

The workflow mirrors the GTD capture-process-do loop:


CAPTURE → PROCESS → DO
   ↓         ↓       ↓
Inbox    Actionable  Work
  1. Capture - Dump a task into the inbox instantly, with optional @context tags (e.g., @project, @article, @research)
  2. Process - Review the inbox: assign a due date and priority, or mark as done/remove
  3. Do - Work through tasks in priority order: overdue first, then today's, then upcoming
  4. List - Filter tasks by context, keyword, regex, or field (e.g., priority:high)
  5. Stats - Analytics dashboard: completion rate, context breakdown, overdue count, priority distribution
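The "Do" ordering in step 3 can be sketched in a few lines of C#. This is my own illustration - `TaskItem` and `DoQueue` are hypothetical names, not necessarily Acta's actual types:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical task shape for this sketch; Acta's real model has more fields.
public record TaskItem(string Title, DateOnly? Due);

public static class DoQueue
{
    // Overdue first, then today's tasks, then upcoming - the "do" order.
    public static IEnumerable<TaskItem> Order(IEnumerable<TaskItem> tasks, DateOnly today) =>
        tasks.Where(t => t.Due is not null)
             .OrderBy(t => t.Due < today ? 0 : t.Due == today ? 1 : 2)
             .ThenBy(t => t.Due);
}
```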

The app is built with .NET 10, Spectre.Console for the terminal UI, and compiled as a NativeAOT binary - a single, fast executable with no runtime dependency, cross-compiled for Linux, Windows, and macOS.

I hadn't written a console app since university, and for a long time I had wanted to try out Spectre.Console. The interactive menu greets you with a FigletText banner reading "Acta, non verba", and I tried to work in color wherever I could without making it distracting.
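For the curious, the banner is essentially a one-liner with Spectre.Console's built-in widgets (a sketch that assumes the Spectre.Console NuGet package; the exact color is my guess, not necessarily what Acta uses):

```csharp
using Spectre.Console;

// FigletText renders the ASCII-art banner; Rule draws a horizontal divider with a title.
AnsiConsole.Write(new FigletText("Acta, non verba").Color(Color.Purple));
AnsiConsole.Write(new Rule("[grey]Productivity for the weak-willed[/]") { Justification = Justify.Left });
```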

What makes this project meaningful to me is that I didn't write most of the code. I guided it into existence with GitHub Copilot, one natural-language prompt at a time. It was fun, and a perfect showcase of Copilot CLI's capabilities: it built a tool I needed, based on my requirements, with as little input as possible from my side.


Demo

Project Repository: Acta - GitHub

Builds for specific platforms are uploaded under the releases/ folder.

The GTD loop in action

# Capture tasks anywhere, any time
acta capture "Renew car insurance @admin"
acta capture "Review PR #42 @work"
acta capture "Buy coffee beans @shopping"

# Process the inbox (assign dates, priorities, remove noise)
acta process

# Work through today's tasks
acta do

# Focus on a specific context
acta do @work

# Search and filter
acta list "@work priority:high"
acta list "/renew|review/"      # regex

# Analytics at a glance
acta stats

Interactive mode

Run acta with no arguments to get the full interactive menu:

  ___        _
 / _ \      | |
/ /_\ \ ___| |_ __ _   _ __   ___  _ __    __   _____ _ __| |__   __ _
|  _  |/ __| __/ _` | | '_ \ / _ \| '_ \   \ \ / / _ \ '__| '_ \ / _` |
| | | | (__| || (_| |_| | | | (_) | | | |   \ V /  __/ |  | |_) | (_| |
\_| |_/\___|\__\__,_(_)_| |_|\___/|_| |_|    \_/ \___|_|  |_.__/ \__,_|

── Productivity for the weak-willed ──────────────────────────────────────

What do you feel like doing?
> Capture
  Process
  List
  Do
  Stats
  Help
  Exit

What the task list looks like

 #   Pri  Title                          Context        Due         Status
 ─   ───  ─────────────────────────────  ─────────────  ──────────  ──────
 1   ❗❗ Review PR #42                   @work         2026-02-15  →
 2   ❗   Write article for Dev.To        @article      2026-02-20  →
 3   ·    Upload content on Git           @dev          -           ○ ⏸2h



My Experience with GitHub Copilot CLI

What was fun? Well, everything.

I felt like I was the project manager, product owner and architect, while Copilot acted like an army of developers. I wrote the requirements in plain English; Copilot wrote the code, spotted its own bugs, refactored its own architecture, and generated its own documentation. What follows is the story of how that actually played out across three phases.

Phase 1 - Building from scratch, prompt by prompt

I started with a simple instruction:

"Create a new .NET console project named Acta, add Spectre.Console package, and enable NativeAOT in the csproj file. The project will be a CLI-based tasks management based on 'Getting Things Done' idea."

Copilot scaffolded the project correctly. From there, I directed it on how the model should look, how data should be stored, and how it should be processed. It went smoothly, with minimal corrections. I didn't delegate the decision making - I made the calls and directed it as simply as possible, leaving it plenty of room to work. I also prompted it every time to ask clarifying questions if needed.

From there I fed it one feature at a time. Each prompt described the behavior, not the implementation:

  • "Create a StorageService that reads and writes a list of TaskItem to a JSON file in the local AppData folder using System.Text.Json source generation."
  • "Populate the Capture choice. The user should be asked for the task's input text. He can provide context by specifying elements with '@' inside the task."
  • "Allow the user to also directly capture tasks using acta capture "Insert task text here" directly from CLI, without running the executable."
  • "Create the 'list' action. It should list elements based on filters or contexts. @home shows home tasks, make bed @home shows tasks matching 'make bed' with context @home. Multiple contexts, multiple words, all lowercase."

The StorageService prompt is worth highlighting. Copilot's first attempt failed to compile. More interesting: after the build error, it didn't just fix the error - it scanned the rest of the code for null-safety warnings it hadn't been asked to fix, and resolved them proactively. That was the moment I realized this wasn't just autocomplete.
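For reference, the source-generated JSON storage that prompt describes looks roughly like this (a sketch - the `TaskItem` shape, file name, and `ActaJsonContext` are my assumptions, not the project's exact code):

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using System.Text.Json;
using System.Text.Json.Serialization;

public record TaskItem(string Title, DateTime? Due);

// Source generation: the serializer code is emitted at compile time,
// which is what lets System.Text.Json work under NativeAOT (no reflection).
[JsonSerializable(typeof(List<TaskItem>))]
public partial class ActaJsonContext : JsonSerializerContext { }

public class StorageService
{
    private readonly string _path = Path.Combine(
        Environment.GetFolderPath(Environment.SpecialFolder.LocalApplicationData),
        "acta", "tasks.json");

    public List<TaskItem> Load() =>
        File.Exists(_path)
            ? JsonSerializer.Deserialize(File.ReadAllText(_path), ActaJsonContext.Default.ListTaskItem) ?? new()
            : new();

    public void Save(List<TaskItem> tasks)
    {
        Directory.CreateDirectory(Path.GetDirectoryName(_path)!);
        File.WriteAllText(_path, JsonSerializer.Serialize(tasks, ActaJsonContext.Default.ListTaskItem));
    }
}
```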

By the end of Phase 1 the app had:

  • capture (inline and interactive)
  • process (inbox review with date assignment)
  • list (context and keyword filtering)
  • do (task queue with overdue → today → upcoming priority order)

Everything was working and I had my "mini-tool" ready to go into my dev environment and day-to-day tasks.

Phase 2 - Copilot refactors itself

As expected after the initial phase, everything was in one place: commands, display logic, filtering, menu handling. I asked Copilot to look at it and generate summaries of everything that existed.
I identified the problems (tight coupling, mixed concerns, poor testability) based on its explanations (leaving no room for interpretation), proposed a structure (Commands / Services / Models / Utilities), and listed the file structure I wanted - along with what would stay the same.

Then it did the refactoring:

  • Program.cs shrank from 354 lines to 54
  • 9 new source files were created across 4 directories
  • TaskFilterService, TaskDisplayService, CommandDispatcher, and ContextParser emerged as focused, single-responsibility classes
  • Every existing command worked identically after the change - zero breaking changes

Phase 3 - Copilot plans its own roadmap, then implements it

After the refactoring, I went down the classic path of a software product: improving it. I asked Copilot to produce a nextsteps.md: a prioritized list of missing features and UI improvements, each with an impact assessment, an effort estimate, and a specific list of files to modify. It even laid out a 5-phase implementation roadmap. I did not suggest anything; I wanted to test its limits and see how far it could go. It amazed me. Did it create something revolutionary that did not exist before? No, of course not - we all know how it works (right? right? it can't invent new things... YET). Did it create a list of features that improved the product and cut down the time I'd have spent defining and describing them? Yes.

I picked some of them and asked for implementations (I updated the .md file with my own changes).

  • Advanced search: regex syntax (/pattern/), fuzzy matching, field-specific filters (priority:high, context:@work), exclusion with -
  • Task priority levels: Critical / High / Medium / Low / None, with visual indicators and color coding throughout the UI
  • Snooze: defer tasks by 1h/2h/4h/tomorrow/next week/custom, with automatic filtering and a ⏸ indicator showing remaining time
  • Analytics dashboard: completion rate, context distribution, priority breakdown, overdue counts - displayed as a formatted multi-table panel via Spectre.Console
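The search tokens above map naturally onto predicates; here is a minimal sketch of how that dispatch might look (my own illustration - the `Priority` enum and the names are assumptions, not Acta's actual code):

```csharp
using System;
using System.Text.RegularExpressions;

public enum Priority { None, Low, Medium, High, Critical }
public record TaskItem(string Title, Priority Priority);

public static class SearchFilter
{
    // One token -> one predicate; a full query is the AND of all its tokens.
    public static Func<TaskItem, bool> For(string token) => token switch
    {
        _ when token.StartsWith('/') && token.EndsWith('/') && token.Length > 1 =>
            t => Regex.IsMatch(t.Title, token.Trim('/'), RegexOptions.IgnoreCase),   // /pattern/
        _ when token.StartsWith("priority:") =>
            t => t.Priority.ToString().Equals(token["priority:".Length..],
                                              StringComparison.OrdinalIgnoreCase),   // priority:high
        _ when token.StartsWith('-') =>
            t => !t.Title.Contains(token[1..], StringComparison.OrdinalIgnoreCase),  // exclusion
        _ => t => t.Title.Contains(token, StringComparison.OrdinalIgnoreCase),       // plain keyword
    };
}
```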

In the same pass it added colored status indicators, a task detail view, keyboard shortcut help, and context autocomplete infrastructure. 5 new services, 1 new command, 11 modified files - all building cleanly with zero warnings. In the meantime, I made a coffee.

What I actually learned

The prompt is the design doc. The quality of what Copilot produced was directly proportional to how precisely I described the user experience I wanted. Vague prompts produced vague code. "Allow me to filter by context" gave me something workable; "I can use make bed @home and it should show all elements containing 'make bed' with the @home context, searched in any word order, case-insensitive" gave me exactly what I needed.
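That second prompt translates almost mechanically into code. A minimal sketch of the rule it describes (the names are mine, not Acta's):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public static class ListFilter
{
    // "make bed @home": every keyword must appear in the title (any order,
    // case-insensitive) and every @context must be attached to the task.
    public static bool Matches(string title, IEnumerable<string> taskContexts, string query)
    {
        var tokens = query.Split(' ', StringSplitOptions.RemoveEmptyEntries);
        var contexts = tokens.Where(t => t.StartsWith('@'));
        var keywords = tokens.Where(t => !t.StartsWith('@'));
        return contexts.All(c => taskContexts.Contains(c, StringComparer.OrdinalIgnoreCase))
            && keywords.All(k => title.Contains(k, StringComparison.OrdinalIgnoreCase));
    }
}
```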

It catches its own mistakes - if you give it feedback. The failed StorageService build was a normal failure mode. What wasn't normal was watching Copilot read the build output, identify the root cause, fix it, and then scan for related issues it hadn't been asked about.

The developer becomes the curator. My job shifted from writing code to knowing what to ask for, recognizing when the output was right, and deciding when to push back. The code I guided into existence is better-structured and more thoroughly documented than most code I write from scratch, simply because Copilot's defaults lean toward clean architecture. All code was reviewed and checked by me. I do not think that any code written by or with an AI tool should end up in a production-ready environment without the manual review of a developer.

Fun fact: Copilot helped me review this post, rephrase some awkward passages, and correct some grammar mistakes. It does more than coding, if used under supervision. It can be an amazing tool, but no hammer has ever replaced the blacksmith.

