
Yongsik Yun


Four Multipliers for Using AI Well: My Working Model

Summary

I don’t think “good prompts” are enough. In my experience, outcomes improve when I understand what my keywords actually mean and can roughly predict what the model will return.

This post shares my current model in a report style, but the views are personal and context-dependent.


Why this matters (my take)

I keep returning to four areas that change results the most:

  1. Tool Understanding
    Knowing model limits, how to manage context, and I/O formats reduces avoidable iteration.

  2. Requirements Understanding
    Clear problem statements, success criteria, and non-functional needs (security, performance, operations) keep direction stable.

  3. Design & Architecture Understanding
    Setting boundaries, controlling dependencies, and being able to explain trade-offs lower the cost of change.

  4. Organization & Process Understanding
    Understanding roles, collaboration flow, and the realities of deployment and operations increases execution efficiency.


The 4× model

I think results behave like a product of four “pillars”:

Outcome ≈
(Depth of prior learning)
× (Understanding of the problem context)
× (Design & architecture skill)
× (Environment awareness: org, infra, process)

If any one term is near zero, the whole outcome drops sharply. That’s what I’ve observed, not a universal law.
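
As a toy illustration of this multiplicative behavior (not a calibrated metric), here is a short Python sketch; the pillar names and the 0-to-1 scores are hypothetical:

```python
from math import prod

# Hypothetical 0-to-1 self-assessment per pillar; the names and numbers
# are illustrative only, not a measurement method.
pillars = {
    "prior_learning": 0.8,
    "problem_context": 0.7,
    "design_architecture": 0.9,
    "environment_awareness": 0.2,  # one weak term
}

# Multiplicative model: the product collapses when any single term is near zero.
outcome = prod(pillars.values())
print(f"outcome ≈ {outcome:.2f}")  # ≈ 0.10, dominated by the weakest pillar

# An additive average would hide the weak pillar.
average = sum(pillars.values()) / len(pillars)
print(f"average ≈ {average:.2f}")  # ≈ 0.65
```

The point of the sketch is only the shape: a product punishes the weakest pillar far more than an average does.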


Human ↔ AI split (what I’m experimenting with)

I’m actively designing how to split work between AI agents and myself across the four pillars. It’s a work in progress.

| Pillar | What it means | Delegate to AI agents | Keep human-led (for now) |
| --- | --- | --- | --- |
| Tool | Prompt scaffolds, format transforms, test data generation | Patterned refactors, doc drafting, spec-to-code skeletons | Choosing models, context strategy, safety/limits |
| Requirements | Clarify terms, map examples, validate acceptance criteria | Requirement clustering, duplicate detection, glossary drafts | Final problem framing, success metrics, risk acceptance |
| Design & Arch | Option listing, RFC skeletons, sequence/state diagrams | Alternative enumeration, boilerplate architectures | Final boundary decisions, trade-off ownership |
| Org & Process | Checklists, runbooks, review templates | Routine updates, status summaries, meeting minutes | Incentives, role design, escalation paths |
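
To make the agent/human split explicit per pillar, one option I'm trying is to keep it as a small, reviewable data structure rather than tribal knowledge. This is a minimal sketch under my own assumptions; the class and field names are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class PillarSplit:
    """One row of the table above: what goes to agents, what stays human-led."""
    pillar: str
    delegate_to_agents: list[str] = field(default_factory=list)
    human_led: list[str] = field(default_factory=list)

splits = [
    PillarSplit(
        pillar="Tool",
        delegate_to_agents=["patterned refactors", "doc drafting", "spec-to-code skeletons"],
        human_led=["choosing models", "context strategy", "safety/limits"],
    ),
    PillarSplit(
        pillar="Requirements",
        delegate_to_agents=["requirement clustering", "duplicate detection", "glossary drafts"],
        human_led=["final problem framing", "success metrics", "risk acceptance"],
    ),
]

for s in splits:
    print(f"{s.pillar}: agents → {s.delegate_to_agents}; humans → {s.human_led}")
```

Keeping the split in a file like this makes it easy to revisit in review whenever the boundary shifts.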

Practical checklist

  • Do I know the model’s constraints and how I’ll manage context?
  • Are success criteria and non-functionals explicit?
  • Can I explain my trade-offs like I would in a design review?
  • Does the plan reflect team roles and deployment reality?
  • For each pillar, what is agent-do vs human-decide?
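
If it helps, the checklist can also be run as a literal pre-flight gate before handing work to an agent. A minimal sketch, assuming a simple yes/no answer per question (the wording and structure are my own):

```python
# Hypothetical pre-flight gate: each checklist question maps to a yes/no
# answer filled in before delegating work.
checklist = {
    "Model constraints and context strategy known?": True,
    "Success criteria and non-functional requirements explicit?": True,
    "Trade-offs explainable as in a design review?": False,
    "Plan reflects team roles and deployment reality?": True,
    "Agent-do vs human-decide defined for each pillar?": True,
}

open_questions = [q for q, ok in checklist.items() if not ok]
if open_questions:
    print("Not ready to delegate yet:")
    for q in open_questions:
        print(f"  - {q}")
else:
    print("Ready to hand the work to an agent.")
```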

Closing

This is the frame I’m using right now:
Prior learning × Context understanding × Design skill × Environment awareness.
I expect efficiency to drop when any one term weakens. My current focus is to make the agent/human split explicit in each pillar.

If you use a different split or model, I’d like to learn from it.
