synthaicode
Define AI Usage by “Principles”: Not Tips, but a Design That Preserves Responsibility

Introduction

As generative AI becomes a common tool in modern teams, the conversation often centers on:

  • “The perfect prompt”
  • “How to ask effectively”
  • “Autonomous AI / AI Agents”

Yet despite this strong focus on techniques and tooling,
many real-world projects still fail in predictable ways.

Most of these failures aren't caused by weak AI performance.
They are caused by a lack of shared principles regarding how AI should be integrated into the workflow.

In this article, I outline principles for using AI without compromising human responsibility.
This is not a collection of tips; it is a framework for collaboration.


Why tips and know-how are not enough

“Tips” are fragile for several reasons:

  • Context-dependency: They often aren't reproducible in different environments.
  • Human-dependency: They collapse when team members change.
  • Model-dependency: They stop working the moment the AI model is updated.
  • False confidence: Success stories can invite dangerous misuse.

What we need are not context-dependent tricks, but conditions that do not break.


Split AI work into five phases

Collaboration with AI can be decomposed into five distinct phases.

This is not just a step-by-step procedure; it is a separation of responsibility.

  1. Request
  2. Consult
  3. Plan
  4. Execute
  5. Verify

If you blur these phases, human judgment and responsibility disappear.
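One way to keep the phases from blurring is to make them explicit in your own tooling or checklists. As a minimal, illustrative sketch (the `Phase` enum and `next_phase` helper are my own names, not part of any framework), the five phases can be modeled as an ordered sequence where skipping ahead is simply not allowed:

```python
from enum import Enum


class Phase(Enum):
    """The five phases of AI collaboration, in order."""
    REQUEST = 1
    CONSULT = 2
    PLAN = 3
    EXECUTE = 4
    VERIFY = 5


def next_phase(current: Phase) -> Phase:
    """Advance exactly one phase; jumping (e.g., Request -> Execute) is impossible."""
    if current is Phase.VERIFY:
        raise ValueError("Verify is the final phase; a new cycle starts with a new Request")
    return Phase(current.value + 1)
```

The point of the sketch is the constraint, not the code: each transition is a deliberate, visible human decision rather than an implicit drift.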


Principles for each phase

1. Request: Define the Goal, Not the Solution

  • State only the goal.
  • Embrace vagueness if necessary.
  • Do not dictate the solution yet.

Avoid overthinking here.
Simply state what you want to achieve.


2. Consult: Discuss the “How”

  • Explore multiple paths to achieve the goal.
  • Clarify assumptions, constraints, and options.

AI tends to gravitate toward a single, “neat” conclusion.
Therefore, human responsibility includes:

  • Introducing divergent perspectives.
  • Expanding or narrowing the scope.
  • Deciding what not to consider yet.

If you hand over leadership to the AI here, your original intent may be lost or distorted.


3. Plan: Use Language You Truly Understand

The most critical question in the planning phase is:

Do I actually understand this?

  • Do not allow vague buzzwords or convenient phrasing to pass.
  • The plan must clearly state what will and will not be done.

If a plan sounds “workable” but feels misaligned with your intent, ask for alternatives rather than forcing it.

Planning is for you, not the AI.
That is how you take responsibility for the outcome.


4. Execute: Manage, Don’t Just Delegate

Execution is not just “doing the work”; it is managing the alignment:

  • Is it aligned with the agreed plan?
  • Have the assumptions or scope shifted during the process?

The trap:

Sometimes the output matches the plan but diverges from your intent.
If this happens, stop immediately.
Share the discrepancy and revise the plan before continuing.


5. Verify: Restore Consistency

In the verification phase, check only one thing:

whether the original request has been satisfied as intended.

If there is a gap:

  • Identify it.
  • Apply the fix.

Verification is not a simple pass/fail judgment.
It is the process of restoring consistency between intent and reality.


Principles common to all phases

Principle 1: Protect the Context

  • Do not change the purpose, assumptions, or scope implicitly.
  • Always make phase transitions explicit (e.g., “The plan is approved. Now, let's move to execution.”).

AI cannot truly own context.
Maintaining it is a human responsibility.


Principle 2: Accept “I don’t know”

  • Do not guess.
  • Do not fill gaps with hallucinations.
  • Do not wrap uncertainty in “professional-sounding” fluff.

“I don’t know” is a normal, productive state.


Principle 3: Let AI Produce All Deliverables

Whether it is code, documents, or summaries, let the AI generate the draft every time.
The purpose is not just to save effort.
The purpose is to make misunderstandings visible.

Humans should act as editors and decision-makers:

  • Provide context.
  • Reject what is unclear.
  • Iterate until the output is convincing.
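This editor-and-decision-maker role can be sketched as a simple loop. This is a hypothetical illustration, not a real API: `generate` stands for any call that asks the AI for a draft given feedback, and `review` stands for the human judgment that either approves the draft or returns feedback without ever writing the deliverable itself.

```python
def produce_deliverable(generate, review, max_rounds=5):
    """Let the AI draft every deliverable; the human only provides
    context, rejects what is unclear, and iterates until convinced.

    generate(feedback) -> draft   (AI produces the draft each round)
    review(draft) -> (approved: bool, feedback: str)   (human judges)
    """
    feedback = None
    for _ in range(max_rounds):
        draft = generate(feedback)          # AI always writes the draft
        approved, feedback = review(draft)  # human edits and decides
        if approved:
            return draft
    # Repeated rejection is a signal, not a failure of effort:
    # the misunderstanding likely lives upstream, in the plan.
    raise RuntimeError("No convincing draft after max_rounds; revisit the plan")
```

Notice that the human never patches the draft directly; every gap is routed back through the AI, which is exactly what makes misunderstandings visible.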

Why these are “Principles”

These principles remain valid even if models change, tools evolve, or the team grows.
They meet the demands of technical rigor, design thinking, and operational responsibility at the same time.


Closing

AI is a powerful executor, but it is not a decision-maker.
Success is not about making AI “smarter” but about preserving human judgment.

These principles serve as a compass for my future self, and hopefully, for your team as well.
This article intentionally focuses on principles.
Concrete examples and practical applications will follow in a separate post.
