Jay Fox

LLM: Predictability VS Determinism

If you search for “how to make an LLM deterministic,” you might find advice like:

“Set temperature to 0, fix the seed, use top-p = 1 or top-k = 1.”

This advice conflates two separate ideas: predictability and determinism.

  • Predictability: the model tends to give similar outputs because it’s “playing it safe” (low temperature, tight top-k/top-p limits).
  • Determinism / reproducibility: the model gives the exact same output every time, which requires a fixed seed (along with the same prompt, model, and inference stack).

Think of the seed like a Minecraft world seed: it doesn’t make the landscape “more likely,” it just makes it repeatable. Same seed + same prompt = same output, every time.
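The Minecraft analogy can be sketched with Python’s own `random` module standing in for the model’s sampler (a toy illustration, not real LLM code):

```python
import random

# A fixed seed pins down the entire random sequence, the same way a
# Minecraft world seed pins down the terrain.
def roll_tokens(seed: int, n: int = 5) -> list[int]:
    rng = random.Random(seed)  # seeded generator
    return [rng.randint(0, 99) for _ in range(n)]

run_a = roll_tokens(42)
run_b = roll_tokens(42)  # same seed -> identical sequence
run_c = roll_tokens(7)   # different seed -> a different "world"

print(run_a == run_b)  # True
print(run_a == run_c)
```

Nothing about the seed makes any particular number more likely; it only makes the whole sequence replayable.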

Options like temperature, Mirostat, top-k, and top-p control style, variety, and “wildness”. They can make outputs more predictable in practice (low temperature = fewer surprising tokens), but they do not guarantee reproducibility. The seed is the only knob that truly locks the path.
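To make the “fewer surprising tokens” point concrete, here is a toy softmax-with-temperature over made-up logits (the tokens and numbers are invented for illustration; real models do this over tens of thousands of tokens):

```python
import math

# Toy next-token distribution: raw logits for four candidate tokens.
logits = {"the": 4.0, "a": 3.5, "dragon": 1.0, "zeppelin": 0.2}

def softmax_with_temperature(logits: dict, temperature: float) -> dict:
    # Lower temperature sharpens the distribution (more predictable),
    # higher temperature flattens it (more surprising tokens).
    scaled = {t: l / temperature for t, l in logits.items()}
    z = sum(math.exp(v) for v in scaled.values())
    return {t: math.exp(v) / z for t, v in scaled.items()}

for temp in (0.2, 1.0, 2.0):
    probs = softmax_with_temperature(logits, temp)
    print(temp, {t: round(p, 3) for t, p in probs.items()})
```

At temperature 0.2 almost all the probability mass piles onto “the”; at 2.0 the rarer tokens get a real chance. Either way, which token actually gets drawn still depends on the random sequence, i.e. the seed.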

In other words: you can have a wild, creative response that is fully replayable if you fix the seed. That’s why reproducibility in LLMs is really about the seed, not temperature.
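A minimal sketch of that claim, using a seeded toy sampler in place of a real LLM: the temperature is high, so the draws are “wild”, yet the fixed seed makes them exactly replayable.

```python
import math
import random

# Toy vocabulary and logits, invented for illustration.
VOCAB = ["the", "a", "dragon", "zeppelin"]
LOGITS = [4.0, 3.5, 1.0, 0.2]

def sample(seed: int, temperature: float, n: int = 8) -> list[str]:
    rng = random.Random(seed)  # the seed locks the random path
    weights = [math.exp(l / temperature) for l in LOGITS]
    return rng.choices(VOCAB, weights=weights, k=n)

# High temperature = lots of variety in the draws...
# ...but the same seed replays the exact same "creative" sequence.
print(sample(42, temperature=2.0) == sample(42, temperature=2.0))  # True
```

Creative and reproducible are not opposites: temperature shapes *what* is likely, the seed decides *which* path through the randomness is taken.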

Example using the Python Ollama client (the model name and prompt are just illustrations; use whatever model you have pulled locally):

```python
import ollama

response = ollama.chat(
    model="llama3",  # hypothetical model name; substitute your own
    messages=[{"role": "user", "content": "Tell me a short story."}],
    options={"seed": 42},  # fixed seed -> the same reply on every run
)
print(response["message"]["content"])
```

TL;DR

  1. Temperature, top-k, top-p, Mirostat → control style and predictability, not determinism.
  2. Seed = true reproducibility. Want the exact same output every time? Lock the seed.

Try it yourself and share your findings!
