Kingson Wu

Driving AI CLI Tools to Write Code: A Semi-Automated Workflow

Lately, I’ve been experimenting with a semi-automated programming workflow.

The idea is simple: let AI tools continuously write code in a controlled environment, while I stay in charge of architecture, quality, and reviews. Think of it as engineering field notes — practical patterns and lessons learned.


Why Semi-Automation?

We already have plenty of AI coding tools — Claude Code, Gemini CLI, QWEN, and many others that integrate with CLI workflows. They boost productivity, but prompting them manually, step by step, only goes so far.

Instead, my approach is to:

  • Use scripts to orchestrate and manage AI tools;
  • Keep sessions alive with tmux;
  • Automatically send structured prompts, collect responses, and keep the AI working until a task is done.

The goal: a tireless “virtual developer” coding 24/7, while I focus on design, architecture, and quality control.
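
This loop can be sketched in Python driving tmux. A minimal sketch, assuming the AI CLI is already running inside a tmux session; the session name ai-dev, the spec path, and the prompt wording are all illustrative:

```python
import subprocess
import time

SESSION = "ai-dev"  # hypothetical tmux session name

def send_keys_cmd(text: str, session: str = SESSION) -> list:
    """Build the tmux command that types `text` into the pane and presses Enter."""
    return ["tmux", "send-keys", "-t", session, text, "Enter"]

def send_prompt(prompt: str) -> None:
    """Deliver a structured prompt to the AI CLI running inside tmux."""
    subprocess.run(send_keys_cmd(prompt), check=True)

def capture_pane(session: str = SESSION) -> str:
    """Read back whatever the AI has printed to the pane."""
    out = subprocess.run(["tmux", "capture-pane", "-t", session, "-p"],
                         capture_output=True, text=True, check=True)
    return out.stdout

if __name__ == "__main__":
    send_prompt("Implement specifications/task_specs/example.md, then reply 'Done.'")
    while "Done" not in capture_pane():  # keep working until the task finishes
        time.sleep(60)
```

Because tmux owns the session, the AI keeps running even if the orchestration script restarts.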


The Overall Approach

This workflow has four main stages, each anchored by human review. That’s the secret sauce for keeping things sane.


1. Project Initialization: Specs and Skeleton First

Before coding, you need solid guidelines and structure. That’s what makes semi-automation possible.

  • Create a new GitHub repository.
  • Start with a baseline project doc (e.g., cpp-linux-playground), then rewrite it for your tech stack (e.g., TypeScript) and save as PROJECT.md.
  • Plan ahead:
    • Tech stack (languages, tools, standards)
    • Task verification (tests, QA)
    • Static analysis & code quality tools
    • Project structure
    • Git commit conventions

👉 Pro tip: rename docs/ to something more precise (like specifications/) to avoid random file dumping.

AI can help draft this documentation, but every detail should be human-approved.


2. Break Tasks Into Detailed Specs

Every feature or bug fix deserves its own spec under specifications/task_specs/.

  • No coding yet — just detailed planning.
  • Each spec should define:
    • Functional description
    • Implementation steps
    • Inputs and outputs
    • Test cases
    • Edge cases and risks

This reduces ambiguity and dramatically improves the AI's code quality.
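
A spec following this outline might look like the sketch below; the task, file names, and every detail here are invented purely for illustration:

```markdown
# Task: Add a `parseArgs` helper

## Functional description
Parse command-line flags into a typed options object.

## Implementation steps
1. Define an `Options` interface in `src/cli/options.ts`.
2. Implement `parseArgs(argv: string[]): Options`.

## Inputs and outputs
- Input: the raw `argv` array.
- Output: an `Options` object; throws on unknown flags.

## Test cases
- Empty `argv` yields defaults.
- `--verbose` sets `options.verbose = true`.

## Edge cases and risks
- Duplicate flags; flags with missing values.
```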


3. Automate the Coding Process

With specs in hand, the real semi-automation begins:

  • Use Python scripts to orchestrate AI CLI sessions.
  • Keep sessions running via tmux.
  • Send structured prompts to AI tools (Claude, Gemini, QWEN, etc.).
  • Enforce these rules:
    • Never auto-commit code
    • Run validation after every iteration
    • Sync project progress into TODO.md, linked from PROJECT.md

Workflows can borrow from ForgeFlow, which demonstrates prompt pipelines and programmatic handling of AI responses.
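
A minimal sketch of how the rules above can be enforced in the orchestration script, assuming npm test as the per-iteration validation command; the prompt wording is illustrative:

```python
import subprocess

def run_validation() -> bool:
    """Run the project's test pipeline after every iteration; non-zero exit means failure."""
    return subprocess.run(["npm", "test"]).returncode == 0

def next_prompt(validation_passed: bool) -> str:
    """Pick the next structured prompt. Committing is always left to the human reviewer."""
    if validation_passed:
        return ("Update TODO.md with the current progress and stop. "
                "Do NOT run git commit; a human will review and commit.")
    return "Validation failed. Read the test output, fix the errors, and rerun the tests."
```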

👉 Pro tip: If a task runs for more than an hour, send an “ESC” signal to re-check progress.
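
One way to implement that watchdog, assuming the orchestration script records each task's start time; the session name is illustrative:

```python
import subprocess
import time

TASK_TIMEOUT = 60 * 60  # one hour, per the rule of thumb above

def overdue(started_at: float, now: float, timeout: float = TASK_TIMEOUT) -> bool:
    """True once a task has been running longer than the timeout."""
    return now - started_at > timeout

def interrupt(session: str = "ai-dev") -> None:
    """Send the Escape key into the tmux pane so the AI pauses for a progress check."""
    subprocess.run(["tmux", "send-keys", "-t", session, "Escape"], check=True)

# In the orchestration loop:
# if overdue(task_start, time.time()):
#     interrupt()
#     # ...then send a "summarize your progress" prompt
```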


4. Clear Definition of “Done”

A task is done only when:

  • All code matches the plan;
  • Unit tests pass;
  • Automation scripts and prompts are updated;
  • Build and test pipelines run cleanly;
  • Git changes are committed;
  • The next task can begin.

At the very end, the AI should respond with nothing but “Done.”
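
Parts of this checklist can be verified mechanically. A sketch, assuming an npm-based project such as ts-playground; the exact build and test commands are placeholders:

```python
import subprocess

def is_clean(porcelain_output: str) -> bool:
    """`git status --porcelain` prints one line per pending change; empty means committed."""
    return porcelain_output.strip() == ""

def worktree_clean() -> bool:
    out = subprocess.run(["git", "status", "--porcelain"],
                         capture_output=True, text=True, check=True)
    return is_clean(out.stdout)

def pipelines_pass() -> bool:
    """Build and test pipelines must both exit zero."""
    for cmd in (["npm", "run", "build"], ["npm", "test"]):
        if subprocess.run(cmd).returncode != 0:
            return False
    return True

def task_done() -> bool:
    """'Done' means pipelines are green and every git change is committed."""
    return pipelines_pass() and worktree_clean()
```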


Project Example: ts-playground

The ts-playground project serves as:

  • A structured playground for mastering TypeScript;
  • A CI/CD-enabled environment;
  • A practical use case of AI-assisted, semi-automated programming.


Semi-Automation vs. Full Automation

This workflow is semi-automated, not fully automated — intentionally:

  • Specs and architecture still need human input.
  • Prompts and scripts are evolving — you won’t cover every case at first.
  • Code quality checks remain essential — AI output isn’t always stable.

Semi-automation is cheap, reusable, and controlled. Full automation would require multi-agent systems and heavy context management — overkill for now.


Why Context Management Matters

The AI stays productive only if the project context is well-structured:

  • Organize guidelines by category and directory;
  • Keep task specs structured for easy reference;
  • Feed the AI only the relevant context per task.

This way, the AI acts like a real assistant instead of just a fancy autocomplete.
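
Feeding only the relevant context can be as simple as concatenating the project guidelines with exactly one task spec. The paths follow the conventions above; the helper itself is illustrative:

```python
from pathlib import Path

def build_prompt(task_spec: str, root: str = ".") -> str:
    """Assemble the per-task context: global guidelines plus a single spec, nothing else."""
    base = Path(root)
    parts = [
        (base / "PROJECT.md").read_text(),
        (base / "specifications" / "task_specs" / task_spec).read_text(),
        "Implement this task. Run validation when finished and reply with 'Done.'",
    ]
    return "\n\n---\n\n".join(parts)
```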


A Bit of Philosophy

This workflow reframes roles:

  • AI = the “coder + assistant,” executing granular tasks.
  • You = the “tech lead,” designing systems, reviewing work, and managing quality.

AI doesn’t replace developers. Instead, it amplifies us — pushing humans toward higher-level thinking, decision-making, and problem-solving.


TL;DR

Semi-automated programming in plain English:

  1. Set up a strong project skeleton and docs.
  2. Break work into reviewable, detailed specs.
  3. Automate execution with Python scripts, tmux, and AI CLIs.
  4. Define “done” clearly and iterate.

It’s a practical, low-cost way to experiment with AI-driven coding — perfect for solo developers or small teams who want speed without losing control.
