Why High-Context Culture Makes AI Coding Harder — A Japanese Developer's Perspective

I'm a software engineer in Japan. I've been using AI coding assistants — Claude Code, Cursor, Copilot — for about a year now.

At some point I started keeping informal notes on how many prompt revisions it took to get production-quality output. After a few months, a pattern was hard to ignore.

For tasks I described in Japanese: 4–6 revisions on average.
For colleagues who worked primarily in English: 1–3.

Same AI. Same model. Roughly similar task complexity. Different language — and apparently, different results.

I started asking why.


The Most Invisible Problem in a High-Context Language

Japanese ranks among the world's most high-context languages. What that means practically:

  • Subject omission is grammatically normal. "Did it" is a complete sentence. Who did what to whom lives in context.
  • Vague expressions are socially preferred over explicit ones. "Somehow it feels off" is a complete critique in a code review.
  • Politeness layers add indirection at every level. Direct requests sound rude. Softened requests carry meaning only if you know what's being softened.

None of this is a flaw. It's a communication system that works — between humans who share context.

AI assistants do not share your context.

When I write "please handle this appropriately," I know what "appropriately" means in my codebase. My AI assistant does not. It makes its best guess, which is informed by the global distribution of code it was trained on — not by the six months of implicit decisions my team has accumulated.

The result is plausible-looking code that misses the point in ways that take hours to diagnose.


The 6 Tacit Knowledge Zones

After talking to engineers across Japanese dev teams, I found the same blind spots appearing in almost every organization:

1. Documentation that doesn't document
Formats vary by author. Files go months without updates. The real spec lives in Slack DMs and someone's memory.

2. Team rules that exist only in senior developers' heads
Coding conventions exist in practice — enforced silently in code review, never written down because "everyone knows."

3. Security as an afterthought
Security is something you consider before release, not during development. The AI doesn't know this is your team's operating assumption. It will skip security considerations unless explicitly prompted — which you won't do if you don't think about it mid-task.

4. Task management via word of mouth
The real priority queue is the one your team lead carries in their head and communicates in standup. The ticket system is a formality.

5. Testing without test cases
You test the happy path manually before shipping. Edge cases are discovered in production. The AI will generate code that matches this standard — because you didn't specify otherwise.

6. Operations knowledge locked in one person
When something breaks at 2am, one person knows what to do. That person's knowledge doesn't exist anywhere else.

Notice what these have in common: they're not the absence of knowledge. They're knowledge that lives in exactly one place — a person's head — and will disappear when that person leaves.


Why AI Makes This Worse, Not Better

The instinctive argument: "AI will fix this — it generates more documentation, writes more consistent code."

The reality is the opposite.

AI is excellent at transforming explicit knowledge into more explicit knowledge. Give it a well-defined spec and it produces well-defined code. Give it a type definition and it generates a correctly-typed implementation.

What AI cannot do is extract tacit knowledge from the environment. It cannot interview your senior engineer. It cannot observe your code review culture. It cannot infer that your team decided six months ago that useEffect for data fetching was banned after a painful production incident — because that decision exists only in the memory of the three people who were there.

When you give an AI assistant a vague prompt, it fills the gaps. It uses the global distribution of code patterns it learned during training. That distribution doesn't include your team's specific decisions.

The higher your tacit knowledge density — and high-context organizations tend to have higher tacit knowledge density — the larger the gap between what you intend and what the AI produces.


A Scenario That's More Common Than It Should Be

The following scenario is fictional but composite, assembled from failure patterns I've seen and read about. The system and the people are made up. The mechanisms are real.

A team builds a medical appointment system. Development moves fast — AI-assisted, one month to MVP.

What gets missed in the first week: domain knowledge. The engineers don't know that insurance category codes have a specific format. They don't know that doctor license numbers have validation rules. The AI doesn't ask. No one thinks to tell it.

By month one: the same appointment conflict logic has been implemented in three different places, each slightly differently, because three developers each asked the AI to "handle booking conflicts" without knowing the others had done it. The AI gave each of them a reasonable implementation.

Security holes accumulate invisibly. Patient records are accessible without proper authorization checks because the prompt said "fetch appointments for this user" and didn't say "verify the requesting user has access to this patient's records." The AI did exactly what was asked.

The system ships. In the first week, a security researcher discovers they can access any patient's records by modifying a URL parameter. The system goes offline.

When the team tries to fix it, they discover that the implicit business logic — the stuff that existed only in the product manager's head — was never written down. No one knows what the correct behavior is for edge cases anymore.

The CTO spends the next three months rebuilding from scratch, this time writing the spec before writing the code.

The lesson isn't "don't use AI." The lesson is: the bottleneck in AI-assisted development isn't writing code. It's making implicit knowledge explicit before the AI touches anything.


What Actually Shifts the Outcome

I've been working on an open-source framework called AI Dev OS that's essentially a structured answer to this problem. But the framework is less important than the underlying principle.

1. Prompt in the language that minimizes ambiguity for the task

This is uncomfortable to write as a Japanese developer, but: for technical specifications, English or highly explicit Japanese outperforms natural conversational Japanese. Not because English is better, but because technical communication benefits from low-context precision.

❌  「いい感じに処理してください」
    ("Please handle this nicely.")

✅  「ユーザーがPOSTした画像ファイルを、
    サーバーサイドでWebP形式に変換して
    S3バケット {env.BUCKET_NAME} に保存してください。
    ファイルサイズ上限は5MB。超過はValidationErrorを返すこと。」
    ("Convert the image file the user POSTs to WebP on the server side
    and save it to the S3 bucket {env.BUCKET_NAME}. The file size limit
    is 5MB; return a ValidationError if it's exceeded.")
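To make the difference concrete, here's a minimal sketch of what the explicit version pins down. This assumes the sharp image library and the AWS SDK v3 S3 client; the function name and the ValidationError class are illustrative, taken from the prompt rather than from any real codebase. Every vague word in the first prompt becomes a checkable line here.

```typescript
// Sketch only: assumes `sharp` and `@aws-sdk/client-s3` are installed.
// Names like saveUploadedImage and ValidationError come from the prompt above,
// not from a real codebase.
import sharp from "sharp";
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";

const MAX_UPLOAD_BYTES = 5 * 1024 * 1024; // 5MB limit, stated in the prompt

class ValidationError extends Error {}

const s3 = new S3Client({});

export async function saveUploadedImage(file: Buffer, key: string): Promise<void> {
  // Explicit constraint: reject files over 5MB with a ValidationError
  if (file.byteLength > MAX_UPLOAD_BYTES) {
    throw new ValidationError("File exceeds the 5MB upload limit");
  }

  // Explicit constraint: convert to WebP on the server side before storing
  const webp = await sharp(file).webp().toBuffer();

  // Explicit constraint: store in the bucket named by env.BUCKET_NAME
  await s3.send(
    new PutObjectCommand({
      Bucket: process.env.BUCKET_NAME,
      Key: `${key}.webp`,
      Body: webp,
      ContentType: "image/webp",
    })
  );
}
```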

2. Write down the thing that "everyone knows" before you prompt

The most valuable five minutes before any significant AI-assisted task: write one paragraph of the rules that your senior developer would enforce in code review. Not a full spec — just the implicit decisions that would otherwise live in their head and filter silently through review.

Put it in a CLAUDE.md file or .cursorrules. Make it persistent. Now the AI has access to your context.
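As a concrete example, such a file might look like this. The useEffect ban and the ActionResult rule are the real examples from this post; the other rules are invented to show the shape:

```markdown
# CLAUDE.md: team conventions (hypothetical example)

- `useEffect` for data fetching is banned (decision after a production incident).
  Fetch in server components or Server Actions instead.
- Server Actions always return `ActionResult<T>`. Never throw from a Server Action.
- All user-facing strings live in `locales/`; never hardcode them inline.
- Database access goes through the repository layer in `src/db/`;
  no raw queries in route handlers.
```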

3. Spec before code, always

Ask the AI to write the specification first: input/output types, error cases, security assumptions, business rules. Review it. Then ask for the implementation.
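In practice, a useful spec is often just types plus enumerated error cases. A hypothetical example, with every name invented for illustration:

```typescript
// Hypothetical spec for a "cancel appointment" feature, written before any
// implementation. All names here are invented for illustration.

type CancelAppointmentInput = {
  appointmentId: string;
  requestedBy: string; // taken from the session, never from the request body
};

type CancelAppointmentResult =
  | { ok: true }
  | { ok: false; error: "NOT_FOUND" | "FORBIDDEN" | "ALREADY_CANCELLED" | "TOO_LATE" };

// Error cases:
//   NOT_FOUND          appointment id does not exist
//   FORBIDDEN          caller is neither the patient nor clinic staff
//   ALREADY_CANCELLED  cancelling twice must not change state (idempotent)
//   TOO_LATE           business rule: no cancellations within 24 hours of the slot
//
// Security assumption: authorization is checked server-side on every call;
// the client is never trusted to assert its own role.
```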

The friction this adds is real. The bugs it prevents are also real — and much more expensive.

4. Treat tacit knowledge as an organizational risk, not a personal style

When the person who holds the knowledge leaves, it's gone. An AI tool that depends on that person's mental model in every prompt is inheriting that fragility.

The act of writing down team conventions isn't documentation overhead. It's risk reduction.


The Question Worth Asking Your Team

What's the most important unwritten rule in your codebase right now?

Not the stuff in your linting config or your CONTRIBUTING.md. The real rule — the one a senior engineer would catch in review but that exists nowhere in text.

If you can name it, you can write it down. If you can write it down, your AI assistant can follow it.

That one rule, made explicit, is worth more than any prompt engineering technique I've found.

For what it's worth: the thing I wrote down that made the biggest difference was two sentences. "Server Actions always return ActionResult<T>. Never throw from a Server Action." Six months of drift, fixed by two sentences in a markdown file.
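For readers outside the Next.js ecosystem, here is roughly what that convention looks like. ActionResult is a project-local type, not a Next.js API; this is one plausible shape, and the action body and db helper are invented for illustration:

```typescript
// One plausible definition of the ActionResult<T> convention. ActionResult is a
// project-local type, not part of Next.js; db.renameProject is a stand-in helper.
export type ActionResult<T> =
  | { ok: true; data: T }
  | { ok: false; error: string };

declare const db: {
  renameProject(id: string, name: string): Promise<{ id: string; name: string }>;
};

// A Server Action that follows both rules: it returns ActionResult<T>,
// and failures become return values rather than thrown errors.
export async function renameProject(
  id: string,
  name: string
): Promise<ActionResult<{ id: string; name: string }>> {
  "use server";
  if (name.trim().length === 0) {
    return { ok: false, error: "Project name cannot be empty" };
  }
  try {
    const project = await db.renameProject(id, name);
    return { ok: true, data: project };
  } catch {
    return { ok: false, error: "Failed to rename project" };
  }
}
```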

Start there.


I built AI Dev OS to systematize this process for Next.js + TypeScript projects. It's open source — the framework, the templates, the reasoning. If this resonated, that's where the implementation lives.

What's the unwritten rule in your codebase? I'm genuinely curious — drop it in the comments.
