
BHUVANESH M


🧠 When All LLMs Write the Same C Code: A Curious Case of “Alice, Bob, Charlie, Diana”

Date: November 4, 2025


💡 Introduction

Recently, while testing multiple AI coding assistants — ChatGPT, Copilot, Claude, Gemini, Grok, DeepSeek, Kimi, Meta, Qwen, and Perplexity — I noticed something curious.

Whenever I asked each model to “Write a Stack program in C with push and pop operations”, they all generated almost identical outputs — including the same names in the example code:

push(&s, "Alice");
push(&s, "Bob");
push(&s, "Charlie");
push(&s, "Diana");

Different models. Different companies. Same names.
So what’s going on here?


🧩 The Setup

I ran the same base prompt across 10 LLM-based coding tools:

“write a c prgm to push and pop, stack prgm with example of names”

Each model produced syntactically correct code, with minor stylistic differences — but nearly all used the same sequence of names: Alice, Bob, Charlie, Diana.

Some even printed the same formatted output structure:

Current stack (top to bottom):
Diana
Charlie
Bob
Alice
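
For reference, here is a minimal sketch of the kind of program that produces this output. It is my own reconstruction, assuming a fixed-size array of name strings; the constants, struct layout, and helper names are illustrative rather than copied from any single chat:

#include <stdio.h>
#include <string.h>

#define MAX 10        /* maximum number of names the stack can hold */
#define NAME_LEN 50   /* maximum length of each stored name */

/* A simple fixed-size stack of name strings. */
typedef struct {
    char names[MAX][NAME_LEN];
    int top;                      /* index of the top element, -1 when empty */
} Stack;

void init(Stack *s) {
    s->top = -1;
}

/* Push a name onto the stack; warn and ignore the call if the stack is full. */
void push(Stack *s, const char *name) {
    if (s->top == MAX - 1) {
        printf("Stack overflow: cannot push %s\n", name);
        return;
    }
    strncpy(s->names[++s->top], name, NAME_LEN - 1);
    s->names[s->top][NAME_LEN - 1] = '\0';
}

/* Pop the top name off the stack and report it. */
void pop(Stack *s) {
    if (s->top == -1) {
        printf("Stack underflow: nothing to pop\n");
        return;
    }
    printf("Popped: %s\n", s->names[s->top--]);
}

/* Print the stack contents from top to bottom. */
void display(const Stack *s) {
    printf("Current stack (top to bottom):\n");
    for (int i = s->top; i >= 0; i--)
        printf("%s\n", s->names[i]);
}

int main(void) {
    Stack s;
    init(&s);

    push(&s, "Alice");
    push(&s, "Bob");
    push(&s, "Charlie");
    push(&s, "Diana");

    display(&s);   /* Diana, Charlie, Bob, Alice */

    pop(&s);       /* removes Diana */
    display(&s);

    return 0;
}

Apart from the minor stylistic differences mentioned above, this is roughly the shape every model produced.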

You can check the screenshots and chat logs here:
🔗 https://chatgpt.com/share/6909995f-2ef4-800b-a811-373f144d8cde
🔗 https://copilot.microsoft.com/shares/H5n3ZdH455JZsejbt8xTX
🔗 https://chat.qwen.ai/s/c5435fa7-4e7f-4111-8405-9f1b8a3a7264?fev=0.0.237
🔗 https://grok.com/share/c2hhcmQtMw%3D%3D_090f0c4b-6b16-4c80-aca5-d216d12fc118
🔗 https://chat.deepseek.com/share/kzmuks1r5jyr9lu4a4
🔗 https://gemini.google.com/share/760142d445e0
🔗 https://www.kimi.com/share/d44q1eiav1fdpe56ml90
🔗 https://www.meta.ai/share/bd3NRucWiAt/
🔗 https://claude.ai/share/691c0dd9-74f5-465c-b5bd-b6db775d97b1
🔗 https://www.perplexity.ai/search/write-a-c-prgm-to-push-and-pop-oG.NzePvRbugHB6FtP.J8w#1


🧠 Why This Happens

This isn’t coincidence — it’s convergence.
Here’s why it happens across models:

  1. Shared Training Data
    Most open and commercial LLMs are trained on overlapping public datasets — including open-source code, Stack Overflow posts, and textbook-style programming examples.
    Many of those sources reuse familiar placeholder names (Alice, Bob, Charlie, Diana, Eve), especially for networking or data structure examples.

  2. Prompt Safety & Predictability
    When asked for code without context, models bias toward canonical, safe, and commonly seen examples — minimizing the risk of generating unpredictable or offensive outputs.

  3. Reinforced Learning Patterns
    These names form a semantic cluster the models recognize as “typical names used in examples.”
    Thus, across different LLMs, you get parallel recall of the same patterns.


🧩 What It Tells Us About LLM Behavior

This small observation reveals some big truths:

  • 🤖 LLMs don’t “copy” each other — they converge because they learn from overlapping human data.
  • 🧱 Training data shapes creativity — when training data is generic, outputs tend to look generic too.
  • 🔭 Prompt design matters — adding context (“use sci-fi character names” or “simulate a warehouse stack system”) yields much more diverse results.
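
As a quick illustration, steering the prompt with something like “use sci-fi character names” tends to replace the default sequence; a hypothetical result of that steering might look like this (the names below are illustrative, not taken from the logged chats):

push(&s, "Ripley");   /* hypothetical names after adding context to the prompt */
push(&s, "Neo");
push(&s, "Trinity");
push(&s, "Leia");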

🔍 Implications for Developers and Researchers

  • For developers: Don’t be surprised by repetition. If you need originality, steer the model with unique context or seed data.
  • For educators: LLM outputs reflect common teaching examples. This can reinforce learning consistency — but may hide creativity potential.
  • For AI researchers: This is a neat case of emergent alignment — showing how independent models can produce near-identical examples due to shared priors.

💬 Closing Thoughts

When ten different AI models write the same C program — complete with Alice, Bob, Charlie, and Diana — it’s not laziness; it’s learning convergence in action.

What started as a trivial stack example turned into a fascinating glimpse of how AI models internalize patterns from the same educational DNA.

Sometimes, the code says more about the data behind the model than the model itself.



#AI #LLM #MachineLearning #Programming #C #OpenAI #Qwen #Claude #Gemini #GitHubCopilot #AIObservations


🎉 Fun Fact

Meta AI breaks the pattern! It stood out here by generating more diverse example names and output, moving away from the “Alice, Bob, Charlie, Diana” sequence. This could indicate differences in the training datasets, sampling strategies, or reinforcement techniques used by Meta’s model.


📎 Bonus

If you want to replicate this experiment:

  1. Ask multiple AI tools the same prompt.
  2. Compare the names and output each model produces.
  3. Watch the convergence unfold. 🧩
  4. Leave your comments below!👇
