Can LLMs Power Randomizers?

The type of randomizer I'm referring to is something that can randomly return an item from a given category like music, movies, food, etc.

For example, can I prompt ChatGPT “return me a random music artist” and get a truly random recommendation? And if I do this repeatedly, will I get a different artist every time?

Uhh, no, not at all; that’s not how they work. Instead, about a third of the time it returns the artist Tame Impala.

LLMs tend towards the average of their training data, so apparently Tame Impala is the artist most strongly correlated with people’s idea of a “random” artist.

This is no bueno… I want true random, I want that niche shit! To try to solve this, I’ve devised a simple prompting algorithm that can increase the output pool to thousands… nay TRILLIONS!

It takes advantage of the recursive structure of branching subclassifications. For example, music is easily subclassified by genre. So the algorithm works like this:

Ask an LLM API to return every subgenre of music, then (in code) randomly select one of the returned subgenres and have the LLM subclassify that too. Then simply repeat!

This recursively subclassifies each randomly selected genre branch until you are several levels down the tree at a highly specific genre (like Cowpunk).

Only after reaching the final depth of classification do you ask for examples of music artists in that genre. See diagram:

Genre tree diagram
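
Here’s a minimal sketch of that loop in Python, assuming the OpenAI client; the model name, prompts, depth, and line-by-line parsing are all illustrative, not the exact code behind the site:

```python
import random
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
MODEL = "gpt-4o-mini"  # illustrative model choice

def subclassify(category: str) -> list[str]:
    """Ask the LLM to list subcategories of `category`, one per line."""
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[{
            "role": "user",
            "content": f"List every subgenre/subcategory of '{category}'. "
                       "One per line, names only, no numbering.",
        }],
    )
    lines = resp.choices[0].message.content.splitlines()
    return [line.strip("-• ").strip() for line in lines if line.strip()]

def random_leaf(category: str, depth: int = 4) -> str:
    """Walk one random branch of the classification tree `depth` levels down."""
    for _ in range(depth):
        category = random.choice(subclassify(category))
    return category

# Only at the leaf do we ask for an actual artist.
genre = random_leaf("music")
resp = client.chat.completions.create(
    model=MODEL,
    messages=[{"role": "user",
               "content": f"Name one music artist in the genre '{genre}'."}],
)
print(genre, "->", resp.choices[0].message.content)
```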

Due to the exponential nature of these classification trees, they grow to thousands of possibilities at the final depth (a branching factor of just 10 over 4 levels already yields 10^4 = 10,000 leaf genres). I’ve found this method varied enough that it truly does feel quite random and can INDEED power a randomizer.

I used this strategy to build a randomizer that I’ve been using to find new music and interesting new dishes from all around the world, among other things.

Try it out yourself over at lifeisrandom.io

Does this matter?

Fuck, I dunno, but it’s cool. I imagine having LLMs run recursive algorithms like this could greatly expand their creativity and the variety of their outputs.

This could be helpful for LLM use cases where variety matters, e.g. brainstorming, LLM personalities, storytelling, and much more…

BONUS: Why not just adjust the temperature?

LLM APIs often expose a “temperature” parameter specifically for controlling the randomness of the model’s output. Set it to 0 and it will return (nearly) the same thing every time; set it to 2 and it maximizes the creativeness/randomness of the output.
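
For example, with the OpenAI Python client (illustrative; other APIs have an equivalent knob), the parameter is passed per request:

```python
from openai import OpenAI

client = OpenAI()
resp = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model
    temperature=2.0,      # 0 = most deterministic, 2 = most random
    messages=[{"role": "user", "content": "return me a random music artist"}],
)
print(resp.choices[0].message.content)
```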

Sooo what if we had just run the prompt “return me a random music artist” at a higher temperature?

To test this, I ran that prompt 100 times at each temperature setting, creating this wonderful graph of total Tame Impala outputs vs temperature.

Tame Impala vs Temperature graph
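
For reference, the test harness was roughly this shape, building on the single call above (a sketch assuming the OpenAI client; the temperature steps and string match are illustrative):

```python
from openai import OpenAI

client = OpenAI()
PROMPT = "return me a random music artist"

for temp in (0.0, 0.5, 1.0, 1.5, 2.0):  # illustrative temperature steps
    hits = 0
    for _ in range(100):
        resp = client.chat.completions.create(
            model="gpt-4o-mini",
            temperature=temp,
            messages=[{"role": "user", "content": PROMPT}],
        )
        if "tame impala" in resp.choices[0].message.content.lower():
            hits += 1
    print(f"temperature {temp}: {hits}/100 Tame Impalas")
```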

The graph shows that even at its most random (temperature 2), the model still returned Tame Impala 5 times out of 100.

But that is NOT good enough! A true randomizer should have an output pool at least in the thousands, meaning Tame Impala (or any specific artist) should rarely show up repeatedly.
