Pseudo-Random Numbers in Python: From Arithmetic to Probability Distributions

Walker Harrison

Randomness is something that we tend to take for granted in our daily lives. "That's so random!" we'll say, when someone does something abnormal or unexpected, even though there's evidence that humans can't consciously achieve randomness. For truly stochastic processes, we turn to nature: the growth of bacteria, the timing of radioactive decay, and the thermal noise created by electric currents all lay claim to a true random sequence.

There isn't always some Plutonium-239 lying around when we need true randomness, though, so computer scientists developed close approximations called pseudo-random number generators, or PRNGs. Today these algorithms are ubiquitous in software development. As Wikipedia notes, "PRNGs are central in applications such as simulations (e.g. for the Monte Carlo method), electronic games (e.g. for procedural generation), and cryptography."

Radioactive decay is considered a truly random process.

Today I'm going to walk through a relatively simple kind of PRNG called a Linear Congruential Generator (LCG). Most modern implementations rely on more sophisticated PRNGs, but the LCG gives us a solid foundation and will be sufficiently random for our modest purposes.

As a prerequisite, you should be familiar with modular arithmetic, which is essentially just representing numbers by their remainders after division by a "modulus." In other words, numbers increase until they hit the modulus and then wrap around: 5 modulo 3 is equal to 2, 25 modulo 2 is equal to 1. Most clocks are on a modulo-12 system.
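Python's % operator does exactly this, so you can check those examples yourself:

# remainders after division by the modulus
5 % 3    # => 2
25 % 2   # => 1
17 % 12  # => 5, i.e. 17:00 is 5 o'clock on a 12-hour dial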

For an LCG we need a modulus, m, an initial value (or "seed"), X₀, a multiplier, a, and an increment, c. From there, to get from one number Xₙ to the next, Xₙ₊₁, you simply execute the following algorithm:

Xₙ₊₁ = (a·Xₙ + c) mod m

Let's write this as a function in Python and then assign some simple values: a modulus of 10, a multiplier of 3, an increment of 1, and a seed value of 5.

def lcg(n, m, a, c, seed):
    """Generate n pseudo-random integers via a linear congruential generator."""
    sequence = []
    Xn = seed
    for i in range(n):
        Xn = (a * Xn + c) % m  # the LCG recurrence
        sequence.append(Xn)
    return sequence

lcg(10, 10, 3, 1, 5)
# => [6, 9, 8, 5, 6, 9, 8, 5, 6, 9]

As you can see, we asked for ten random numbers and were given the sequence [6, 9, 8, 5, 6, 9, 8, 5, 6, 9]. Now while this meets the requirements of an LCG, it's a truly horrible PRNG. First of all, in modulo 10 there are only ten possible values a number can take on, and furthermore there is an unbreakable 6-9-8-5 loop in our case that reduces our state space to just four numbers.
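In fact we can measure that cycle length directly. Here's a quick throwaway sketch (period_of is a helper written for this post, not a standard routine) that iterates until a value repeats:

def period_of(m, a, c, seed):
    """Return the cycle length the LCG falls into from a given seed."""
    seen = {}
    Xn = seed
    for i in range(m + 1):  # an LCG's period can never exceed m
        Xn = (a * Xn + c) % m
        if Xn in seen:
            return i - seen[Xn]  # steps since this value first appeared
        seen[Xn] = i

period_of(10, 3, 1, 5)
# => 4, the 6-9-8-5 loop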

What we need is a much larger modulus and the assurance that every possible value will be produced exactly once per cycle, no matter the seed, or what's known as having "full period." Enter advanced number theory. By the Hull–Dobell theorem, an LCG will have full period if:

  1. c and m are relatively prime (i.e., the only positive integer that divides both c and m is 1).

  2. If q is any prime number that divides m, then q also divides a – 1.

  3. If 4 divides m, then 4 also divides a – 1.

You can read the original 1962 paper proving this theorem here. Instead of trying to come up with some numbers that work, we'll steal the ones that are actually used for random number generation in some languages, including C++: m = 2³², a = 22695477, c = 1.
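Conveniently, the three conditions are mechanical enough to check in code, and our stolen constants pass. A minimal sketch (prime_factors and has_full_period are throwaway helpers written for this post):

from math import gcd

def prime_factors(n):
    """Set of prime factors of n, found by trial division."""
    factors = set()
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.add(d)
            n //= d
        d += 1
    if n > 1:
        factors.add(n)
    return factors

def has_full_period(m, a, c):
    """True if (m, a, c) satisfy all three Hull-Dobell conditions."""
    relatively_prime = gcd(c, m) == 1                                # condition 1
    divides_a_minus_1 = all((a - 1) % q == 0 for q in prime_factors(m))  # condition 2
    four_ok = m % 4 != 0 or (a - 1) % 4 == 0                         # condition 3
    return relatively_prime and divides_a_minus_1 and four_ok

has_full_period(2**32, 22695477, 1)
# => True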

But before trying out these new inputs, we need to address the deterministic nature of our LCG: plug in the same seed and you will get the same output, which sort of invalidates any sense of randomness that we'd claimed. This is a fundamental flaw in PRNGs in general.
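You can see this determinism for yourself with the toy generator from earlier:

# same seed in, same "random" sequence out, every time
lcg(5, 10, 3, 1, 5)  # => [6, 9, 8, 5, 6]
lcg(5, 10, 3, 1, 5)  # => [6, 9, 8, 5, 6]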

However, there is an upside to their predictability — namely that by setting and sharing a seed, one can create reproducible results, which are critical in the sciences — and also a few ways around it. By tying the seed to something unseen and ever-changing, like the current number of microseconds past the second, we can reintroduce some faux-randomness. So along with our new inputs, we'll use the built-in Python datetime library for our seed:

from datetime import datetime
lcg(10, 2**32, 22695477, 1, datetime.now().microsecond)
# => [1090430687, 498347756, 2363706845, 1780651778, 1345777131,
#     826090344, 275756681, 2092763550, 396794679, 3763540772]

That looks a lot better. Usually, however, we're not so much interested in a random number between 0 and 2³² as in a random number between 0 and 1 — the standard uniform distribution U(0,1). Since this distribution is bounded at 0 and 1, it can be multiplied by larger numbers to create larger uniform distributions, and it also maps smoothly to probability spaces, which will come in handy later.

By dividing our output by the modulus (or, to be precise, the modulus minus 1), we can transfer our random numbers onto the line between 0 and 1. Technically, only a finite set of numbers can be output, so we haven't achieved a true uniform distribution, where every conceivable real number between 0 and 1 is possible and there's no point mass anywhere. But this distinction is negligible with a modulus as large as 2³². Let's rewrite our LCG to spit out numbers between 0 and 1, and then graph a sequence of a thousand random numbers against a thousand generated by Python's built-in random() function:

from random import random
import matplotlib.pyplot as plt

def uni(n, m, a, c, seed):
    """Generate n pseudo-random numbers in [0, 1] by scaling LCG output."""
    sequence = []
    Xn = seed
    for i in range(n):
        Xn = (a * Xn + c) % m
        sequence.append(Xn / (m - 1))  # map 0..m-1 onto the unit interval
    return sequence

x = range(1000)
y_1 = uni(1000, 2**32, 22695477, 1, datetime.now().microsecond)
y_2 = [random() for i in range(1000)]

plt.plot(x, y_1, "o", color="blue")
plt.show()

plt.plot(x, y_2, "o", color="red")
plt.show()

Two pseudo-random outputs: our LCG on the left and Python's built-in PRNG on the right.

On the left are the thousand random numbers graphed in the sequence we produced them, and on the right are the thousand emitted by Python's built-in random() function, which, for the record, relies on the Mersenne Twister, a relatively modern algorithm that today is the gold standard for PRNGs. While any computer scientist would tell you the second algorithm is much more robust, to the naked eye they appear comparable.
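Incidentally, Python's built-in generator exposes the reproducibility upside we mentioned earlier through random.seed, which fixes the Mersenne Twister's internal state:

import random

random.seed(2018)
first = [random.random() for _ in range(3)]
random.seed(2018)  # reset to the identical state
second = [random.random() for _ in range(3)]
first == second
# => True: identical seeds, identical sequences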

Moreover, the standard uniform distribution acts as a gateway to all sorts of other, more sophisticated distributions because it operates on the probability space from 0 to 1 (everything has somewhere between a 0% and 100% probability of happening). So as long as we can find a way to map our events to the uniform distribution, we can randomly sample from them.
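One classical recipe for that mapping is inverse transform sampling: feed a uniform draw through the inverse CDF of the distribution you want. As a quick sketch (exponential_sample is just an illustrative helper, not something we'll use below), the exponential distribution has the closed-form inverse CDF F⁻¹(u) = -ln(1 - u)/λ:

from math import log

def exponential_sample(n, rate):
    """Draw n samples from Exponential(rate) by inverting its CDF."""
    uniforms = uni(n, 2**32, 22695477, 1, datetime.now().microsecond)
    # -ln(1 - u) / rate; u == 1 would blow up, but is vanishingly rare with m = 2**32
    return [-log(1 - u) / rate for u in uniforms]

exponential_sample(3, rate=2.0)
# => three positive floats, varying with the seed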

As a simple example, we can simulate flipping a coin with the uniform distribution by calling any random number greater than 0.5 a heads and anything less a tails. Here's a function that takes a number of flips and returns how many heads turned up (formally, a binomial distribution with n trials and p = 0.5). We'll run a thousand trials of a hundred flips each and build a histogram with the results:

def coin_flips(n):
    """Flip a fair coin n times and return the number of heads."""
    flips = uni(n, 2**32, 22695477, 1, datetime.now().microsecond)
    heads = sum(flip > 0.5 for flip in flips)  # anything above 0.5 counts as heads
    return heads

trials = [coin_flips(100) for i in range(1000)]

plt.hist(trials, bins=range(min(trials), max(trials) + 2))
plt.show()

A histogram of a thousand trials of our PRNG flipping a coin a hundred times and counting the heads.

As you can see, our trial data is unimodal and nearly symmetrical, which we'd expect from a binomial random variable.
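We can put a number on that impression by comparing the sample moments to the theoretical ones: a Binomial(100, 0.5) has mean np = 50 and standard deviation √(np(1-p)) = 5.

from statistics import mean, stdev

print(mean(trials))   # should land near np = 50
print(stdev(trials))  # should land near sqrt(np(1-p)) = 5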

Now, as we've mentioned, these PRNGs are indisputably flawed. They are deterministic and not truly uniform. Famed mathematician and computer scientist John von Neumann once said that "anyone who considers arithmetical methods of producing random digits is, of course, in a state of sin."

Still, I find it pretty remarkable that with addition and multiplication, the most basic mathematical building blocks, we can approximate a sample from a random variable like the binomial distribution.
