My experience with LLMs
Working with large language models (LLMs) has been fascinating, but to use them effectively it helps to understand some of their common traits. Here is some of what I have learned.
1. Training data influence ("bias")
Have you heard of AI hallucinations? Hallucinations and conversational bias often share the same root cause: training data. Because models reflect patterns from the data they were trained on, they inherit human biases and habits. This shows up a lot in coding.
As the joke goes: “I started discussing a problem with an LLM, and now I have two problems.”
Example:
You: Check this test case and figure out why it isn't passing. Also follow our security guidelines.
AI: (after several failed attempts) Let me just comment out the test case and run it again.
Why? Because in real-world training data, developers often comment out code while debugging, and the model mirrors that pattern.
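One practical countermeasure is to state the unwanted pattern explicitly up front, rather than hoping the model avoids it. Below is a minimal sketch using the OpenAI Python SDK; the model name and the exact guardrail wording are my own assumptions, and the same idea applies to any chat API:

```python
# Minimal sketch: a system-level guardrail against "comment out the test" fixes.
# Assumes the OpenAI Python SDK (pip install openai) and an OPENAI_API_KEY
# in the environment; the model name below is illustrative only.
from openai import OpenAI

client = OpenAI()

SYSTEM_GUARDRAIL = (
    "You are a debugging assistant. Never comment out, skip, or delete "
    "failing tests. If a test cannot be made to pass, explain why instead."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption: any chat-capable model works here
    messages=[
        {"role": "system", "content": SYSTEM_GUARDRAIL},
        {"role": "user", "content": "This test keeps failing. Can you fix it?"},
    ],
)
print(response.choices[0].message.content)
```

This does not guarantee the behavior disappears, but naming the pattern you don't want tends to steer the model away from the most common training-data shortcut.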
2. Mirroring / Parroting
LLMs also tend to echo your tone and phrasing. It’s their way of keeping the conversation natural and aligned with you.
For example, if you greet the model with “Howdy partner!” a few times, it may start greeting you the same way — even if it doesn’t “understand” cowboy culture.
In short: when the AI echoes human terms, it is parroting human linguistic habits, sometimes without any deeper reasoning behind them.
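You can observe this mirroring directly by keeping the full conversation history and resending it on every turn, so each reply is conditioned on your earlier phrasing. A minimal sketch, again assuming the OpenAI Python SDK and an illustrative model name:

```python
# Minimal sketch of tone mirroring, assuming the OpenAI Python SDK.
# The model name is an assumption; the strength of the effect varies by model.
from openai import OpenAI

client = OpenAI()
history = []  # the full chat history is resent each turn


def chat(user_text: str) -> str:
    history.append({"role": "user", "content": user_text})
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any chat-capable model
        messages=history,
    )
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply


# Greet in the same style a few times; later replies often echo it.
for _ in range(3):
    print(chat("Howdy partner! How's the code corral today?"))
```

Because each request includes your earlier "Howdy partner!" turns, the model picks up the register from context, not from any understanding of cowboy culture.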
—
Recognizing these patterns helps you set better expectations and work with LLMs more effectively.