The Decomposition Problem: Why Breaking Tasks into Agent-Sized Pieces Is Harder Than It Looks
Every operator who has worked with autonomous agents has experienced this: you carefully decompose a complex task into clean, discrete subtasks, hand them to an agent, and watch it reconstruct them into something that doesn't resemble your original intent. The decomposition looked logical on your whiteboard. The execution looked logical from the agent's perspective. But the output is wrong in ways that are hard to diagnose.
The problem isn't the agent's capability. It's the decomposition itself.
Why Human Decomposition Fails
Human beings decompose tasks based on linear causality. We draw diagrams where A leads to B leads to C, and each step has a clear input-output relationship. This works for physical tasks and for well-defined software workflows.
But agent tasks rarely have clean linearity. They have loops, feedback cycles, and implicit context that humans absorb unconsciously but agents must reconstruct explicitly.
Consider: you want an agent to "research competitor pricing and draft a pricing strategy memo." You break this into steps: (1) gather competitor prices, (2) analyze pricing patterns, (3) draft recommendations. It sounds reasonable. But step 3 requires knowledge that isn't in step 2's output—things like your product's positioning, your sales team's discount patterns, your enterprise customers' willingness to pay. The agent doesn't know to pull this context unless you tell it to.
This is the decomposition problem: humans split tasks along how the work feels sequentially; agents act only on what's actually in each subtask's payload.
The Atomic Unit Fallacy
The instinct when things go wrong is to decompose further. Make the tasks smaller. More discrete. More atomic. This usually makes things worse.
When you break a task into units that are too small, you lose the coherence that makes the task tractable. A research subtask that says "find competitor pricing" is executable but lacks the guiding context of "find competitor pricing so we can identify underpriced segments." Without that context, the agent optimizes for the wrong objective. It returns comprehensive pricing data instead of actionable pricing insights.
The agent's cost function is implicit in how you phrase the task. Atomic tasks strip away the cost function.
The Thick Slice Principle
The better framing is thick slices rather than atomic units.
A thick slice contains:
- The objective — what decision this work informs
- The context — what background knowledge the agent needs
- The constraints — what success looks like, including what to avoid
- The output format — how the agent should structure its response
A thin slice contains only the action: "find competitor pricing." A thick slice contains: "find competitor pricing for our top 5 rivals in the SMB segment, focusing on entry-level tiers and bundling patterns. I need this to inform a pricing decision next week. Return a structured comparison with per-feature pricing breakdown, not just list prices."
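One way to make those four components concrete is to treat a slice as structured data rather than a bare instruction string. Here's a minimal sketch in Python; the `TaskSlice` name and fields are illustrative, not from any particular agent framework:

```python
from dataclasses import dataclass


@dataclass
class TaskSlice:
    """A thick slice: the action plus the context that shapes it."""
    action: str             # what to do
    objective: str          # what decision this work informs
    context: list[str]      # background the agent can't infer on its own
    constraints: list[str]  # what success looks like, including what to avoid
    output_format: str      # how the agent should structure its response

    def to_prompt(self) -> str:
        """Render the slice as the single prompt the agent receives."""
        return "\n".join([
            f"Task: {self.action}",
            f"Objective: {self.objective}",
            "Context:",
            *[f"- {item}" for item in self.context],
            "Constraints:",
            *[f"- {item}" for item in self.constraints],
            f"Output format: {self.output_format}",
        ])


pricing_slice = TaskSlice(
    action="Find competitor pricing for our top 5 rivals in the SMB segment",
    objective="Inform a pricing decision next week",
    context=["Focus on entry-level tiers and bundling patterns"],
    constraints=["Identify underpriced segments, not just list prices"],
    output_format="Structured comparison with per-feature pricing breakdown",
)
```

The point isn't the class. It's that a thin slice now fails loudly: leaving out the objective or constraints is a visible gap at construction time rather than a silent one at execution time.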
Thick slices are more work upfront. They require you to think through what you actually need, not just what feels like the logical first step. But they dramatically reduce the reconstruction cost on the back end.
Failure Modes When Decomposition Goes Wrong
The most common failure mode isn't task failure—it's tangential success. The agent completes the decomposed subtasks with high fidelity, but the completion is irrelevant to the original goal. The research was thorough. The analysis was sound. The recommendations were confidently wrong for your specific market.
This happens because decomposed subtasks get their own optimization targets. Each subtask becomes "do this subtask well" rather than "move toward the original goal." The agent can't see the forest for the trees, not because it's stupid, but because you inadvertently made each tree a separate objective.
Another failure mode is context fragmentation. When tasks are broken into disconnected units, each unit loses the surrounding context. The agent working on step 7 doesn't know what step 3 found unless you explicitly wire that information flow. In human teams, this flow happens naturally through shared context and whiteboards. In agent systems, you have to build it explicitly.
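What "wiring it explicitly" can look like in practice: a sketch that threads each step's findings into the prompts of later steps. The `run_agent` callable is a placeholder for whatever actually executes one subtask and returns its output as text.

```python
from typing import Callable


def run_pipeline(steps: list[str], run_agent: Callable[[str], str]) -> str:
    """Execute subtasks in order, forwarding earlier findings to later steps."""
    findings: list[str] = []
    for i, step in enumerate(steps, start=1):
        prompt = step
        if findings:
            # Make prior results explicit instead of assuming shared context.
            shared = "\n\n".join(
                f"[Step {j} found]\n{out}"
                for j, out in enumerate(findings, start=1)
            )
            prompt = f"{shared}\n\nYour task (step {i}): {step}"
        findings.append(run_agent(prompt))
    return findings[-1]
```

Forwarding everything verbatim won't scale past a few steps; a real system would summarize or select the relevant findings. But the principle stands: if step 7 needs what step 3 found, that dependency has to appear in step 7's payload.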
The Decomposition Review
Before sending work to an agent, run a decomposition review. For each subtask, ask:
- Does this subtask have an explicit connection to the final goal, or only an implicit one?
- Is there context this subtask needs that lives in other subtasks?
- What would this subtask's output look like if it were perfectly executed but irrelevant to the goal?
- What information does the next subtask need from this one that isn't currently specified?
If you find gaps, thicken the slice. Add context. Add constraints. Add output specifications.
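If slices are structured data, part of this review can be mechanical. Building on the hypothetical `TaskSlice` above, a rough sketch; these checks are heuristics, not a guarantee of a sound decomposition:

```python
def review_slice(slice_: TaskSlice) -> list[str]:
    """Flag thin spots in a task slice before it reaches an agent.

    Returns human-readable gaps; an empty list means no obvious thinness.
    """
    gaps = []
    if not slice_.objective.strip():
        gaps.append("No objective: the agent can't connect this work to the goal.")
    if not slice_.context:
        gaps.append("No context: implicit background must be made explicit.")
    if not slice_.constraints:
        gaps.append("No constraints: 'perfectly executed but irrelevant' stays possible.")
    if not slice_.output_format.strip():
        gaps.append("No output format: the next subtask can't consume this reliably.")
    return gaps
```

The questions about dependencies between subtasks still need a human eye. What this catches is the slice that is all action and no cost function.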
The goal isn't to remove the need for agent judgment—it's to give the agent the context it needs to exercise judgment correctly.
Breaking tasks into agent-sized pieces isn't a sizing exercise. It's a reasoning exercise. And most of us are doing it backwards.