Agentic coding here means treating the LLM as part of a system: you supply constraints (rules), reusable procedures (skills), and split responsibilities (subagents), then merge and review (reflection) instead of hoping one giant prompt does it all. This repository is a small, concrete rehearsal of that pattern.
The runnable piece is a User Analytics dashboard—enough UI and mock data to force real decisions about structure and state. The artifact matters less than the orchestration layer you see below: the same ideas written as code-shaped docs you can skim like a technical post, not a pile of file links.
## The thesis (one glance)

```text
┌─────────────────────────────────────────────────────────────┐
│ Runnable app ≈ 25% of what you optimize for                 │
│ Agentic layer (rules, skills, subagents, reflection) ≈ rest │
│                                                             │
│ Goal: repeatable, inspectable workflows—not one-off prompts │
└─────────────────────────────────────────────────────────────┘
```
## Rules: project manifest
Rules are non-negotiables: what kind of product this is allowed to be. Think of this block as the spirit of AGENTS.md—the constitution in miniature.
```yaml
# agents.manifest — frontend-only dashboard sandbox
constraints:
  backend: none           # never pretend a real service exists
  database: none
  data_source: simulated  # APIs are mocked; responses are structured
principles:
  - Always simulate API responses (shape, timing, errors)
  - Use mock data generators with internal consistency
  - State and metrics must feel like one snapshot, not random literals
  - UX realism > technical pedantry when they disagree
deliverables_expectation:
  - Clean React components, modular layout
  - Reusable hooks for async-shaped flows
  - Dummy data that could fool a demo audience briefly
workflow_outline:
  - Understand the feature
  - "Split: UI + state + data (+ polish)"
  - Delegate to roles (see subagents)
  - Validate consistency
  - Reflect and refine UX
```
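The "internal consistency" and "one snapshot" principles can be sketched as a seeded generator whose KPIs are derived from the same rows the table renders, so cards and charts can never disagree. This is a hypothetical sketch; the type, field, and function names are mine, not the repo's:

```typescript
type User = { id: string; name: string; signupAt: string; sessions: number };

// Seeded PRNG (mulberry32) so every reload shows the same "facts".
function mulberry32(seed: number): () => number {
  return () => {
    seed |= 0; seed = (seed + 0x6d2b79f5) | 0;
    let t = Math.imul(seed ^ (seed >>> 15), 1 | seed);
    t = (t + Math.imul(t ^ (t >>> 7), 61 | t)) ^ t;
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
  };
}

function generateSnapshot(seed = 42, count = 20) {
  const rand = mulberry32(seed);
  const generatedAt = "2024-01-15T09:00:00Z"; // one snapshot time for everything
  const users: User[] = Array.from({ length: count }, (_, i) => ({
    id: `u_${1000 + i}`,
    name: `User ${i + 1}`,
    // Signups land 0–89 days *before* the snapshot, never after it.
    signupAt: new Date(
      Date.parse(generatedAt) - Math.floor(rand() * 90) * 86_400_000
    ).toISOString(),
    sessions: 1 + Math.floor(rand() * 40),
  }));
  // KPIs are derived, never hand-typed, so they always agree with the rows.
  const kpis = {
    totalUsers: users.length,
    totalSessions: users.reduce((sum, u) => sum + u.sessions, 0),
    generatedAt,
  };
  return { users, kpis };
}
```

The point is the derivation, not the PRNG: any literal a KPI card shows should be computable from the same array the table iterates over.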
## Skills: when to reach for which playbook
Skills are not philosophy; they are repeatable procedures for recurring jobs. Each entry is short on purpose—it is muscle memory, not a tutorial.
```yaml
# .agents/skills — local playbooks
component-generation:
  when: building or extending UI pieces
  good_finish:
    - Clear purpose and props contract
    - Structure before ornament
    - Styles and patterns reusable across the dashboard
dashboard-design:
  when: composing a screen, not a single widget
  good_finish:
    - Cards, KPIs, chart, clear hierarchy
    - Layout breathes; density feels intentional
mock-data:
  when: anything the UI will render as "facts"
  good_finish:
    - IDs, timestamps, plausible variation
    - No lazy filler that breaks suspension of disbelief
state-patterns:
  when: async UX (even when nothing hits the network)
  good_finish:
    - loading → success → error paths read as one story
    - Retry and optimism where they make sense for the scenario
```
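The loading → success → error story from `state-patterns` can be sketched as one discriminated union plus a reducer, so the UI has exactly one source of truth for which branch to render. A minimal sketch under assumed names (the repo's actual hook may differ):

```typescript
// One union drives the whole async story; the UI can't render half-states.
type AsyncState<T> =
  | { status: "idle" }
  | { status: "loading" }
  | { status: "success"; data: T }
  | { status: "error"; message: string; retry: true };

type AsyncEvent<T> =
  | { type: "FETCH" }
  | { type: "RESOLVE"; data: T }
  | { type: "REJECT"; message: string };

function asyncReducer<T>(state: AsyncState<T>, event: AsyncEvent<T>): AsyncState<T> {
  switch (event.type) {
    case "FETCH":   return { status: "loading" };
    case "RESOLVE": return { status: "success", data: event.data };
    case "REJECT":  return { status: "error", message: event.message, retry: true };
  }
}
```

In React this would sit behind `useReducer`; the Retry button simply dispatches `FETCH` again, which is why "one story" falls out of the type rather than from discipline.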
## Subagents: who owns what
Splitting work is a feature. The orchestrator holds intent and merges; specialists own slices. Reflection is a mandatory last pass—not a bonus round.
```text
ORCHESTRATOR (.agents/orchestrator.md)
──────────────────────────────────────
• Parse intent
• Decompose into sub-tasks
• Assign → merge
• Guard cross-cutting consistency

DELEGATION MAP (orchestrator → specialist)
──────────────────────────────────────────
UI work       → UI Builder
State / async → State Manager
Data / API    → Mock API Generator
Polish        → UX Enhancer
```
```yaml
# specialists — one job each
ui_builder:
  focus: React surfaces, hooks, composition
  avoid: hardcoded data where props or hooks should feed the tree
state_manager:
  focus: loading, errors, fake delays, optimistic updates
  goal: async that feels real without a wire
mock_api:
  focus: JSON shape, latency, failure modes
  rule: always structured enough to drive real UI branches
ux_enhancer:
  focus: skeletons, motion, empty states, layout polish
  goal: demo-credible, not merely functional
```
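The Mock API specialist's contract (shape, latency, failure modes) could look like this fetch-shaped helper. All names are hypothetical, and the injectable `roll` parameter exists only so tests can force the success or failure branch deterministically:

```typescript
// Structured result: the UI branches on `ok`, exactly like a real response.
type ApiResult<T> = { ok: true; data: T } | { ok: false; error: string };

function mockFetch<T>(
  data: T,
  opts: { latencyMs?: number; failRate?: number; roll?: () => number } = {}
): Promise<ApiResult<T>> {
  const { latencyMs = 300, failRate = 0.1, roll = Math.random } = opts;
  return new Promise((resolve) =>
    setTimeout(() => {
      resolve(
        roll() < failRate
          ? { ok: false, error: "Simulated 500: try again" } // exercises the error + Retry path
          : { ok: true, data }
      );
    }, latencyMs)
  );
}
```

Because latency and failures are real (just simulated), skeletons, error regions, and Retry buttons get exercised in every demo run instead of existing as dead code.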
```yaml
# reflection — "does it still feel like one app?"
reflection_pass:
  consistency:
    - KPIs and time series agree (e.g. same snapshot / generatedAt story)
  ux_realism:
    - Latency + occasional errors + Retry exercise the real flow
  polish:
    - Dark analytics read, skeletons, accessible error region
  persistence_story:
    - "Tracked accounts: actions survive refresh (e.g. localStorage) without contradictions"
```
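The `persistence_story` check might be backed by a thin wrapper over a Storage-shaped interface, so tracked accounts survive refresh and a corrupt payload degrades to an empty list instead of crashing the UI. A sketch under assumed names (the key and helpers are mine, not the repo's):

```typescript
// Anything with getItem/setItem works: window.localStorage in the browser,
// an in-memory stub in tests.
interface KVStore {
  getItem(key: string): string | null;
  setItem(key: string, value: string): void;
}

const TRACKED_KEY = "dashboard.trackedAccounts"; // hypothetical storage key

function loadTracked(store: KVStore): string[] {
  try {
    return JSON.parse(store.getItem(TRACKED_KEY) ?? "[]");
  } catch {
    return []; // corrupt payloads reset quietly, never contradict the UI
  }
}

function toggleTracked(store: KVStore, id: string): string[] {
  const current = loadTracked(store);
  const next = current.includes(id)
    ? current.filter((x) => x !== id)
    : [...current, id];
  store.setItem(TRACKED_KEY, JSON.stringify(next));
  return next;
}

// In the browser: toggleTracked(window.localStorage, "u_1001")
```

Keeping persistence behind an interface is also what lets the reflection pass assert "survives refresh without contradictions" in a test instead of by eyeball.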
## End-to-end flow
The loop is simple; agentic coding pays off when you run it every time—not only on “big” tasks.
```mermaid
flowchart LR
  userReq[UserRequest]
  orch[Orchestrator]
  ui[UIBuilder]
  state[StateManager]
  api[MockAPI]
  ux[UXEnhancer]
  merge[MergeOutputs]
  refl[Reflection]
  out[Deliverable]

  userReq --> orch
  orch --> ui
  orch --> state
  orch --> api
  orch --> ux
  ui --> merge
  state --> merge
  api --> merge
  ux --> merge
  merge --> refl
  refl --> out
```
```text
Plan → Execute → Review → Refine
       ↑_______________________|
       (reflection closes the loop)
```
## Steal this shape for your own repo
You only need a thin stack: one manifest-style rules doc, a few skill stubs, subagent one-pagers, and a reflection checklist. If you use Cursor, wire that layer into your session before the editor fills with components—for example, a Plan Mode block that pulls in the same manifest, skill names, and subagent paths so the model loads constraints first.
```text
Example session preamble (trim to taste)

CONTEXT:
- Frontend-only; no real API or DB.
- Follow the project manifest (rules): simulated data, consistent state, UX realism.
- Use skills: component-generation, dashboard-design, mock-data, state-patterns.
- Delegate mentally or explicitly: UI Builder, State Manager, Mock API, UX Enhancer.
- Finish with reflection: consistency, believable errors/retry, polish, persistence.

OUTPUT:
- Components + hooks + mock API shapes + UX notes in one coherent story.
```
## Where this lives in the tree
Everything above is documented in-repo; this block is just a map.
```text
agentic-coding.md          # this essay (agentic layer walkthrough)
AGENTS.md                  # rules (constitution)
.agents/orchestrator.md    # coordinator + delegation
.agents/subagents/*.md     # UI, state, mock API, UX
.agents/reflection.md      # last-pass checklist + project specifics
.agents/skills/*/SKILL.md  # playbooks
README.md                  # runbook + Plan Mode prompt example
```
Small app on purpose: a sandbox where agentic coding—rules, skills, subagents, reflection—stays visible and easy to copy into your own projects.
Note: This project isn’t exactly the one discussed here, but it works on a similar idea. You can check out the details and source code on GitHub:
## Comments
The 75/25 split framing (agentic layer vs runnable app) is the insight most teams miss. They obsess over getting the model to write better code in a single prompt, when the real leverage is in the orchestration — rules that prevent drift, skills that encode reusable patterns, subagents that decompose complex tasks.
The reflection step is particularly undervalued. In my experience, having the agent review its own output against explicit acceptance criteria catches 60-70% of the issues that would otherwise surface in human code review. It's not perfect, but it dramatically reduces the review burden.
One pattern I'd add: context windowing. As agentic workflows get longer, the model's effective context degrades. Structuring the workflow so each subagent gets a focused, minimal context (rather than the full conversation history) is the difference between agents that work on toy examples and agents that work on real codebases.
Thank you so much for this valuable feedback; comments like this truly mean a lot. In my next post, I'm planning to dive deeper into context engineering, because in agentic coding, as the number of subagents grows, losing context and over-engineering become genuinely common pain points worth exploring in detail. Really appreciate your contribution!