Michael Nikitochkin

Building Team A: An AI System for Turning Volunteer Chaos into Structured Engineering Work

Volunteer Systems and Why They Break Differently

Lately, I’ve been spending part of my free time helping a volunteer organization that coordinates people with 3D printers to support manufacturing efforts for those in need.

Very quickly, one thing became obvious: volunteer-driven projects behave nothing like traditional software organizations.

There are no permanent engineering roles, no dedicated product teams, no formal onboarding, and no guarantee of continuity. Everything depends on people contributing spare time and motivation.

In some ways, this resembles the "startup ideal": fast execution, direct communication, minimal process, and visible impact. The difference is that startups eventually stabilize. Volunteer systems often do not.

Contributors constantly rotate in and out. Priorities shift without warning. Documentation is often incomplete, not because it is ignored, but because the people who understand the system are busy keeping it running.

Most of these projects are also not truly open source in practice. That creates an additional friction point: new contributors arrive without context, without architecture knowledge, and often without anyone available to guide them.

Over time, a pattern becomes clear:

A significant amount of volunteer energy is lost not due to lack of motivation, but because onboarding and context acquisition are too expensive.

Team A: Reducing the Cost of Contribution

To address this, I started building a project called Team A.

The goal is simple: reduce operational chaos and make it easier for volunteers who only have a few hours per week to contribute effectively.

From informal messages to structured tasks

One of the first problems Team A tackles is task creation.

A non-technical contributor can describe an issue in a few sentences — similar to a support request. The system then transforms this into a structured engineering task that includes:

  • clearer problem description
  • additional context and assumptions
  • improved requirements
  • suggested next steps

The user can then review and edit the generated issue before it moves forward.
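
As a rough illustration, that transformation step could look like the sketch below. The `StructuredTask` fields mirror the list above; the prompt wording and the `complete` callable standing in for an LLM client are assumptions, not Team A's actual interface.

```python
# Illustrative sketch only: field names, prompt, and the `complete`
# callable are assumptions, not Team A's actual API.
import json
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class StructuredTask:
    problem: str                                        # clearer problem description
    context: List[str] = field(default_factory=list)    # additional context and assumptions
    requirements: List[str] = field(default_factory=list)
    next_steps: List[str] = field(default_factory=list)

PROMPT = """Rewrite the following volunteer message as an engineering task.
Respond with JSON containing: problem, context, requirements, next_steps.

Message:
{message}"""

def structure_task(message: str, complete: Callable[[str], str]) -> StructuredTask:
    """Ask an LLM client to convert an informal message into a structured task."""
    data = json.loads(complete(PROMPT.format(message=message)))
    return StructuredTask(
        problem=data["problem"],
        context=data.get("context", []),
        requirements=data.get("requirements", []),
        next_steps=data.get("next_steps", []),
    )
```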

Context enrichment

The next stage enriches the task further by identifying:

  • relevant parts of the codebase
  • related services or dependencies
  • potential implementation areas

This reduces the need for repeated clarification cycles and significantly lowers the cognitive overhead for new contributors.
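
To give a sense of what this step involves, here is a deliberately naive sketch of the enrichment idea: rank files by how often they mention terms from the task. The file pattern and scoring are assumptions; the actual retrieval logic in Team A is more involved.

```python
# Naive keyword-based relevance scan; a stand-in for real retrieval.
from pathlib import Path
from typing import Dict, List

def find_relevant_files(repo_root: str, keywords: List[str], limit: int = 10) -> List[str]:
    """Rank source files by how many task keywords they mention."""
    scores: Dict[str, int] = {}
    for path in Path(repo_root).rglob("*.py"):  # file pattern is an assumption
        try:
            text = path.read_text(errors="ignore").lower()
        except OSError:
            continue
        hits = sum(text.count(k.lower()) for k in keywords)
        if hits:
            scores[str(path)] = hits
    return sorted(scores, key=scores.get, reverse=True)[:limit]
```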

For small or well-scoped tasks, contributors can also request partial implementation assistance directly through the system.

Design

Technically, the system evolved into something more complex than initially expected.

Team A runs as a distributed, container-based orchestration layer where multiple AI roles collaborate in a workflow resembling a virtual engineering team. In the ideal model, a “team lead” component assigns tasks to specialized roles responsible for research, planning, implementation, and documentation.

In the current MVP, this orchestration is intentionally simplified. The user acts as the coordinator and explicitly selects which agent to run for each step. Task management is handled through GitHub Issues, with support for additional vendors planned after the initial release.

Agent assignment is currently managed via labels. Future iterations may introduce more expressive constructs, such as lightweight agent profiles or avatars. However, this would require a more advanced project management layer, which is explicitly out of scope for now.
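
A minimal sketch of that label-based dispatch, assuming hypothetical `agent:*` label names; the agent registry below is a placeholder, not the real implementation.

```python
# Hypothetical label-to-agent dispatch; label names and agents are placeholders.
from typing import Callable, Dict, List

AGENTS: Dict[str, Callable[[int], None]] = {
    "agent:research": lambda issue: print(f"researching issue #{issue}"),
    "agent:plan": lambda issue: print(f"planning issue #{issue}"),
    "agent:implement": lambda issue: print(f"implementing issue #{issue}"),
}

def dispatch(issue_number: int, labels: List[str]) -> None:
    """Run the agent selected by the first `agent:*` label on the issue."""
    for label in labels:
        if label in AGENTS:
            AGENTS[label](issue_number)
            return
    raise ValueError(f"no agent label found on issue #{issue_number}")

dispatch(42, ["bug", "agent:plan"])  # prints: planning issue #42
```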

The design principle here is deliberately conservative: keep the system usable and understandable before introducing additional abstraction.

Multi-Client LLM Integration

Initially, the system was designed around a single native LLM-based solution. This approach quickly proved insufficient in practice.

The core limitation was not model capability, but everything surrounding it: orchestration, context management, tool integration, and operational reliability.

Several practical constraints emerged:

  • limited context windows and token budgets
  • coordination overhead between multiple agents
  • long-running workflow reliability
  • cost vs. quality trade-offs in context selection
  • fragmented and incomplete project knowledge
  • risk of task drift away from original intent

Different workflows required different strengths, and no single approach covered all cases well enough for production use.

To address this, the system evolved into a multi-client LLM integration layer. It currently supports:

  • OpenCode client (https://opencode.ai/) — for structured code generation and more deterministic execution flows
  • Claude Code client (https://claude.ai) — for flexible reasoning and natural language task decomposition

This separation allows the system to balance predictability and creative reasoning depending on the task, instead of forcing all workloads through a single abstraction layer.
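
Conceptually, the integration layer boils down to a shared client interface plus a router. The class names and routing rule below are assumptions, and the actual vendor calls are elided.

```python
# Conceptual sketch: shared interface + per-task routing. Vendor calls elided.
from typing import Protocol

class LLMClient(Protocol):
    def run(self, task: str) -> str: ...

class OpenCodeClient:
    def run(self, task: str) -> str:
        # would invoke the OpenCode client here (structured code generation)
        return f"[opencode] {task}"

class ClaudeCodeClient:
    def run(self, task: str) -> str:
        # would invoke the Claude Code client here (flexible reasoning)
        return f"[claude-code] {task}"

def route(task: str, kind: str) -> str:
    """Pick the more deterministic client for codegen, the more flexible one otherwise."""
    client: LLMClient = OpenCodeClient() if kind == "codegen" else ClaudeCodeClient()
    return client.run(task)
```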

In practice, this hybrid approach has been more stable, easier to extend, and more adaptable as new requirements appear.

An additional benefit is deployment flexibility: agents can run on distributed machines, allowing contributors to execute workflows using their own infrastructure and tokens. This significantly lowers the barrier to participation while distributing compute and operational cost.

A key design goal going forward is to fully support "run-on-your-own-machine" execution as a first-class model.

Setup and Environment Management

One of the remaining challenges is onboarding new machines and environments.

The goal is to make setup closer to modern CI/CD workflows (e.g. Woodpecker, CircleCI, GitHub Actions), where environments can be provisioned, executed, and torn down in a controlled and repeatable way.

Ideally, contributors should be able to:

  • spin up a local or remote containerized dev environment
  • run assigned tasks in isolation
  • execute agent workflows
  • tear down the environment cleanly after completion
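
A minimal sketch of that lifecycle, assuming Docker as the container runtime; the image and command are placeholders.

```python
# Sketch of the provision / execute / tear-down cycle using the Docker CLI.
# Requires a local Docker daemon; image and command are placeholders.
import subprocess

def run_in_ephemeral_env(image: str, command: list[str]) -> str:
    """Provision a throwaway container, run a command in it, then clean up."""
    container = subprocess.run(
        ["docker", "run", "-d", image, "sleep", "infinity"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    try:
        result = subprocess.run(
            ["docker", "exec", container, *command],
            capture_output=True, text=True, check=True,
        )
        return result.stdout
    finally:
        subprocess.run(["docker", "rm", "-f", container], check=False)

print(run_in_ephemeral_env("python:3.12-slim", ["python", "--version"]))
```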

The system currently depends on a set of predefined tools and integrations, including:

  • GitHub access and issue management
  • OpenCode / Claude Code tokens and clients
  • local repository setup and git workflows
  • environment configuration for running agents
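
As an illustration, a startup check over that configuration might look like the snippet below; the variable names are assumptions, not Team A's actual settings.

```python
# Hypothetical configuration check; variable names are illustrative only.
import os

REQUIRED_ENV = [
    "GITHUB_TOKEN",        # GitHub access and issue management
    "OPENCODE_API_KEY",    # OpenCode client credentials (name assumed)
    "ANTHROPIC_API_KEY",   # Claude Code client credentials
    "TEAM_A_REPO_PATH",    # local repository checkout (name assumed)
]

missing = [name for name in REQUIRED_ENV if not os.environ.get(name)]
if missing:
    raise SystemExit(f"missing configuration: {', '.join(missing)}")
```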

This part is still evolving, but the long-term direction is clear: enable multi-repository workflows that can be initialized from scratch inside ephemeral, reproducible environments.

In other words, the goal is to move toward a model where complex contribution work can be executed safely in a controlled virtual workspace without requiring deep local setup knowledge.

Closing Thoughts

The project is still evolving, but it already improves collaboration in meaningful ways.

It reduces onboarding time, lowers communication overhead, and enables more async contribution without requiring perfect upfront specifications.

More broadly, volunteer infrastructure feels like an underexplored engineering problem.

Building systems without stable teams, fixed ownership, or guaranteed continuity forces a different set of constraints: simplicity becomes critical, knowledge transfer becomes a core feature, and adaptability matters more than optimization for scale.

That constraint space turns out to be surprisingly interesting to work in.
