DEV Community

Ted Enjtorian

[TOK-02] What is TOCA? The Core Loop of Task-Oriented Cognitive Architecture

Defining tasks is just the beginning. Making tasks continuously operate and evolve within a cognitive system — that is the real breakthrough.

If You Only Have Definitions, Tasks Are Just Documents

In the previous article, we discussed TOK (Task Ontology Kernel) — defining what a task "is." But definitions alone aren't enough.

It's like defining an elegant Class without a Runtime to execute it: the definition remains just a document.

Tasks are the same. After being defined, we need to answer a more fundamental question:

How do tasks operate within a cognitive system?

This is exactly what TOCA (Task-Oriented Cognitive Architecture) is designed to solve.

Why Traditional Models Fall Short

The computational model we're used to looks like this:

Input → Process → Output

Simple and intuitive. But it has a fatal flaw: no evolution.

Every execution is independent. The experience from the previous run cannot automatically feed back into the next one. You have to manually adjust Prompts, modify code, and reconfigure tools.

In an AI-native environment, what we need is not a straight line, but a closed loop.

TOCA: The Five-Step Core Loop

TOCA breaks down task operation into five steps, forming a continuously evolving closed loop:

Capture (Capture Intent)
    ↓
Dispatch (Dispatch Task)
    ↓
Execute (Execute Task)
    ↓
Validate (Validate Results)
    ↓
Evolve (Evolve Strategy)
    ↓
Dispatch (Next round...)
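As a minimal sketch, the loop above can be driven in code. Everything here (the `steps` dict, the field names, the version counter) is an illustrative assumption, not a published TOCA API:

```python
def run_loop(task, steps, rounds=2):
    """Drive one task through Dispatch -> Execute -> Validate -> Evolve.

    `steps` is a hypothetical dict of callables standing in for the real
    TOCA runtime; Capture happens once, before the loop starts.
    """
    for _ in range(rounds):
        agent = steps["dispatch"](task)            # pick an executor
        result = steps["execute"](task, agent)     # do the work
        ok = steps["validate"](result, task)       # check against Intent
        task = steps["evolve"](task, result, ok)   # feed experience back
    return task

# Toy runtime: each step is a trivial stand-in.
steps = {
    "dispatch": lambda task: "llm_agent",
    "execute": lambda task, agent: f"result@v{task['version']}",
    "validate": lambda result, task: True,
    "evolve": lambda task, result, ok: {**task, "version": task["version"] + 1},
}

task = {"intent": "analyze api performance", "version": 1}
final = run_loop(task, steps, rounds=3)
print(final["version"])  # → 4
```

The point the sketch makes is structural: the output of Evolve is the *input* of the next Dispatch, which is exactly what the linear Input → Process → Output model lacks.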

Let's break them down one by one.

TOCA Loop

Step 1: Capture — Capturing Intent

Structuring a vague idea into a Task Object.

Human intentions are often ambiguous: "Analyze the performance for me," "This module needs refactoring."

Capture's job is to transform these vague intentions into structured Task Objects — including explicit Intent, Context, Strategy, and Evaluation.

This is the translation layer between humans and the system. In the POG ecosystem, this can be accomplished through conversation, a VS Code Plugin, or by directly writing YAML.
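A minimal sketch of what Capture produces, assuming the four layers map to four fields (the schema and helper names are hypothetical):

```python
from dataclasses import dataclass, field

@dataclass
class TaskObject:
    """Structured representation of a captured intent.

    The field names mirror the four layers named in the article;
    the exact schema is an assumption for illustration.
    """
    intent: str                                      # what the human wants
    context: dict = field(default_factory=dict)      # boundaries, inputs
    strategy: list = field(default_factory=list)     # steps the agent may follow
    evaluation: list = field(default_factory=list)   # criteria for Validate

def capture(raw_intent: str) -> TaskObject:
    """Turn a vague request into a structured Task Object (toy version)."""
    return TaskObject(
        intent=raw_intent.strip(),
        context={"source": "conversation"},
        strategy=["plan", "execute", "report"],
        evaluation=["result matches intent"],
    )

task = capture("Analyze the performance for me")
print(task.intent)  # → Analyze the performance for me
```

A real Capture layer would use an LLM or a form-filling dialog to populate these fields; the structure, not the extraction method, is the point.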

Step 2: Dispatch — Dispatching Tasks

Assigning tasks to the most suitable executor.

Not all tasks are suitable for LLMs. Some require human judgment; others need specific toolchains.

Dispatch's responsibility is to select the most appropriate execution unit based on the nature of the task:

  • LLM Agent: Suitable for analysis, writing, planning, reasoning
  • Toolchain: Suitable for compilation, deployment, testing
  • Human: Suitable for creative decisions, final review, ethical judgment

In TOK's YAML definition, this corresponds to the execution.agent setting.
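The routing logic can be sketched as a simple lookup. The mapping below restates the three categories above; in a real system this would come from the task's `execution.agent` field rather than a hard-coded table:

```python
def dispatch(task_kind: str) -> str:
    """Pick an executor for a task, mirroring the three categories above.

    The keys and the fallback are illustrative assumptions.
    """
    routes = {
        "analysis": "llm_agent",
        "writing": "llm_agent",
        "planning": "llm_agent",
        "compile": "toolchain",
        "deploy": "toolchain",
        "test": "toolchain",
        "review": "human",
        "ethics": "human",
    }
    return routes.get(task_kind, "human")  # default to human judgment

print(dispatch("deploy"))    # → toolchain
print(dispatch("analysis"))  # → llm_agent
```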

Step 3: Execute — Executing Tasks

The Agent executes within Context boundaries, producing results and recording a complete trace.

This is the step where things actually get done. The key point is:

The Agent doesn't simply execute scripts. The Agent autonomously decides how to execute.

Task → Agent reads Task Object
     → Agent decides execution strategy
     → Agent uses tools (Shell, API, LLM reasoning)
     → Agent produces results
     → Agent records complete execution trace

The fundamental difference from traditional automation is: the Agent can dynamically select tools, adjust strategies, and even create new subtasks.

In POG Task, execution traces are recorded in record.md — not as Logs, but as a reasoning process that can be reviewed by humans.
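A toy version of a trace-recording executor, assuming a task carries a `strategy` list and a registry of tools (both names are hypothetical):

```python
def execute(task, tools):
    """Run a task step by step while recording a human-readable trace.

    `task["strategy"]` lists step names and `tools` maps each name to a
    callable; both conventions are assumptions for this sketch.
    """
    trace = []
    results = []
    for step in task["strategy"]:
        tool = tools[step]
        out = tool()
        # The trace records *what was decided and why it ran*, not raw logs.
        trace.append(f"step={step} tool={tool.__name__} -> {out}")
        results.append(out)
    return results, trace

def query():
    return "42 rows"

def write_report():
    return "report.md written"

task = {"strategy": ["query", "report"]}
tools = {"query": query, "report": write_report}
results, trace = execute(task, tools)
print(results)  # → ['42 rows', 'report.md written']
```

A real Agent would choose tools dynamically instead of reading a fixed registry; the constant here is that every step leaves a reviewable entry behind.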

Step 4: Validate — Validating Results

Determining whether the Intent has been achieved based on the Evaluation layer.

Producing results alone isn't enough. TOCA requires that every execution must pass validation:

  • Automated tests: Unit tests, integration tests
  • Semantic Alignment Check: Does the result truly align with the original Intent?
  • Human feedback: Final judgment by humans when necessary

Failing validation doesn't mean the task is dead — it means evolution is needed.
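As a sketch, a validator might combine an automated check with a semantic-alignment check. The keyword overlap below is a deliberately naive stand-in for a real LLM-based alignment check:

```python
def validate(result: str, task: dict) -> bool:
    """Check a result against the task's Evaluation layer (toy version).

    `task["intent"]` is an assumed field; the keyword match stands in
    for a genuine semantic-alignment check.
    """
    automated_ok = bool(result)  # stand-in for unit/integration tests
    aligned = any(
        word in result.lower() for word in task["intent"].lower().split()
    )
    return automated_ok and aligned

task = {"intent": "generate performance report"}
print(validate("weekly performance report generated", task))  # → True
print(validate("", task))                                     # → False
```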

Step 5: Evolve — Evolving Strategy

Feeding execution experience back into the Ontology to optimize the next Strategy.

This is TOCA's most powerful step, and the part completely missing from traditional models.

After execution, the system doesn't simply "start over." Instead, it writes this experience into the Strategy layer, making the next execution automatically better:

  • "Python script was too slow last time; switch to direct SQL queries next time"
  • "Three steps last time, but they can actually be merged into two"
  • "Validation criteria were too loose last time; need to add integration tests"

Tasks are not just executed — they evolve.
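The bullet points above can be sketched as strategy rewrites. Here each lesson is a "replace this step with that step" rule; the real Evolve step would derive such rules from the execution trace, so this rewrite mechanism is purely illustrative:

```python
def evolve(task: dict, lessons: dict) -> dict:
    """Fold execution experience back into the Strategy layer.

    `lessons` maps a step that underperformed to its replacement; this
    is a simplification of whatever reasoning the real Evolve performs.
    """
    task["strategy"] = [lessons.get(step, step) for step in task["strategy"]]
    return task

task = {"strategy": ["download_logs", "python_analysis", "markdown_report"]}
# Last run's lesson: the Python analysis was too slow, query SQL directly.
task = evolve(task, {"python_analysis": "sql_query"})
print(task["strategy"])  # → ['download_logs', 'sql_query', 'markdown_report']
```

The next Dispatch then reads the updated Strategy, which is what makes the loop close.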

Why "Cognitive Architecture" Instead of "Workflow Engine"?

You might ask: how is this different from Airflow or Temporal?

The difference is fundamental:

|  | Workflow Engine | TOCA |
|---|---|---|
| Core Concept | DAG / Nodes | Task Object |
| Executor | Fixed scripts | Autonomous Agent |
| Evolution Capability | ❌ None | ✅ Automatic strategy evolution |
| State | Pipeline state | Cognitive state (persistent and evolvable) |

A workflow engine is an Automation Pipeline.

TOCA is Cognition Infrastructure.

A workflow engine lets you run through pre-written steps once. TOCA lets tasks learn how to run better on their own.

A Complete Example

Suppose you have a task: "Analyze API performance weekly and generate a report."

Week 1 (Capture + Execute):

  • You define a Task Object, with the Strategy set to "Download logs → Python analysis → Generate markdown report"
  • The Agent executes and produces the report. Validation passes.

Week 2 (Evolve + Execute):

  • The Evolve step from last time discovered: Python analysis took 15 minutes, but could be done directly with SQL queries
  • Strategy auto-updates: "SQL query → Generate markdown report"
  • Execution time drops from 15 minutes to 30 seconds

Week 3 (Evolve + Execute):

  • Evolve discovers the report format can incorporate charts
  • Strategy adds a new tool: "SQL query → matplotlib charts → markdown report"

The task is continuously evolving. No manual human adjustment needed.

The True Significance of TOCA

TOCA isn't about "automating some steps."

It's about making thinking itself something that can be saved, reused, and continuously evolved between humans and AI.

In the past, cognition was brain-bound.
Now, cognition can be task-bound.

TOCA is a cognitive architecture with tasks as its core persistent unit, enabling humans and AI to collaboratively execute, evolve, and reuse structured cognitive processes.

Conclusion: Tasks Are Not Just Executed — They Evolve

Defining tasks (TOK) is only the first step. Making tasks continuously operate, learn, and grow stronger within a cognitive system (TOCA) — that is the real watershed between the tool era and the AI-native era.

In the next article, we'll look back at how this entire journey unfolded — from POG's Prompt governance, to POG Task's task execution, to TOK's ontological core.

👉 Next: From POG to TOK: A Natural Evolution Path


Full content: https://enjtorian.github.io/task-ontology-kernel
