Turing completeness is a concept from theoretical computer science. It tells you how powerful a programming language is. Not in terms of performance or maintainability or how rich its ecosystem is. A programming language is Turing complete if you can implement any possible algorithm with it.

Think of pure HTML, for example. You simply cannot implement merge sort with HTML (add CSS and you can, haha!). Thus, HTML is not Turing complete.

Now you might think you have to get clever to design a programming language capable of running **any possible** algorithm. Especially because most introductions to Turing completeness are pretty math heavy. However, most programming languages out there are Turing complete and if you were to create your own programming language you would probably make it Turing complete by accident.

In fact, a lot of stuff that's not even a programming language later turned out to be Turing complete. PowerPoint animations, for example.

Yes, that means you could, in principle, implement Google's search algorithm entirely with PowerPoint animations.

That sounds crazy, right? How am I supposed to even... you know... make HTTP requests and so on? But that misses the point. Turing completeness is about something much deeper. Whether you can make HTTP requests or access the file system is not a property of the programming language itself; it's a question of the APIs that come with it.

Think of JavaScript. In the browser you don't have direct access to the file system; in Node.js you do. Same programming language, different APIs.

Let's dive a little bit deeper into what really distinguishes Turing complete and non-Turing complete programming languages.

Consider the following programming language with only these syntax features:

```
n = 1 + 1
loop n:
    # do something n times
```

We can use addition, variable assignment and a simple loop. That's it! There are no strings and no other data types; only positive integers. For the loop, the number of iterations is fixed at the start, no matter what happens inside! So even this:

```
n = 10
loop n:
    n = n + 1
```

runs exactly 10 times. Not more, not less.
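This fixed-iteration behavior has a direct analogue in Python: `range(n)` is evaluated once when the loop starts, so reassigning `n` inside the body doesn't change the iteration count. A small sketch:

```python
n = 10
iterations = 0
for _ in range(n):  # range(10) is fixed when the loop starts
    n = n + 1       # reassigning n has no effect on the loop
    iterations += 1

print(iterations)  # 10
print(n)           # 20
```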

With this programming language we can already implement nearly everything.

We literally don't need anything else. Not even the other arithmetic operators (`-`, `*`, `/`). We can emulate them all with just loops and addition.
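Multiplication, for instance, is just repeated addition. Sketched in Python, using a fixed-count loop the same way our toy language would:

```python
def multiply(a, b):
    # a * b, using only addition and a loop with a fixed iteration count
    result = 0
    for _ in range(a):  # "loop a:"
        result = result + b
    return result
```

`multiply(3, 4)` adds 4 to the result three times, yielding 12.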

An if-else-block like this:

```
if n > 0:
    # ...
else:
    # ...
```

can be emulated like this:

```
n_if = 0
n_else = 1
loop n:
    n_if = 1
    n_else = 0
loop n_if:
    # ...
loop n_else:
    # ...
```
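Translated into Python (with a list of branch labels standing in for the two branch bodies), the same trick looks like this:

```python
def branch(n):
    # emulate `if n > 0 ... else ...` with only fixed-count loops
    n_if = 0
    n_else = 1
    for _ in range(n):       # runs at least once exactly when n > 0
        n_if = 1
        n_else = 0
    taken = []
    for _ in range(n_if):    # the "if" branch
        taken.append("if-case")
    for _ in range(n_else):  # the "else" branch
        taken.append("else-case")
    return taken
```

`branch(0)` takes the else-case only; any `branch(n)` with `n > 0` takes the if-case only.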

Try to digest what's happening here. If `n = 0`, the first loop runs 0 times. Thus `n_if` stays 0 and `n_else` stays 1. So we skip the second loop (which represents the if-case) and run the third loop once (which represents the else-case). Do you see what happens when `n > 0`?

But how on earth can you do subtraction using addition?

```
x = x - 1
```

The trick is to increment another variable until you reach `x`, but stop one incrementation earlier:

```
y = 0
loop x:
    x = y
    y = y + 1
```

If you want to subtract 10 instead of 1, you have to do this whole process 10 times.
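Here is the same trick as a Python sketch, again using only addition and fixed-count loops (note that, like in our toy language, the result never drops below zero):

```python
def decrement(x):
    # computes max(x - 1, 0) using only addition and a fixed-count loop
    y = 0
    for _ in range(x):  # "loop x:" -- iteration count fixed at entry
        x = y           # x trails one step behind y
        y = y + 1
    return x

def subtract(x, k):
    # x - k (clamped at 0), by decrementing k times
    for _ in range(k):
        x = decrement(x)
    return x
```

For example, `decrement(5)` returns 4, and `subtract(10, 3)` returns 7.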

Of course it's ridiculous to program like this, but remember that we don't care about performance or readability. We're just interested in what's possible in principle.

Our programming language is very capable, but notice that it's impossible to write an infinite loop. We can use multiple loops, even nested loops, but each loop always runs a fixed, finite number of iterations.

That's why this language is not Turing complete.

Let's introduce a new kind of loop to our programming language:

```
for i until n:
    # ...
```

`i` is incremented after each iteration and the loop continues as long as `i < n`.

Now we can make infinite loops. Just keep resetting `i` so it never reaches `n`:

```
i = 0
for i until 2:
    i = 0
```
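To see why this loop is fundamentally different, here's one way to model its semantics in Python (the `for_until` helper is my own naming, not part of the toy language): because the body can overwrite `i`, the number of iterations is no longer fixed upfront. To keep the demo terminating, the body below only resets `i` for the first few passes.

```python
def for_until(n, body):
    # model of `for i until n`: run the body, then increment i,
    # and continue as long as i < n; the body may overwrite i
    i = 0
    while i < n:
        i = body(i)
        i += 1

passes = 0
def reset_a_few_times(i):
    global passes
    passes += 1
    return 0 if passes < 5 else i  # reset i on the first four passes

for_until(2, reset_a_few_times)
print(passes)  # 5 -- more than n, because the body kept resetting i
```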

Our programming language just became Turing complete! Even if we were only allowed to use the for-loop once in an entire program, the language would still be Turing complete.

Infinite loops are not very impressive. Most of the time we probably want to avoid them. What really makes the for-loop powerful is that we can control how long a loop runs from within the loop itself.

A program controlling its own flow: that's the key feature that makes a programming language Turing complete. By the way, we don't necessarily need loops for that. The same thing can be achieved with recursion, GOTO statements or a thing called the *Y combinator*, which is maybe the most primitive concept that can still deliver Turing completeness.
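As a taste of that last point, here is a sketch of the Y combinator's strict-language variant (the Z combinator) in Python. It gives an anonymous function the ability to call itself, which is all the unbounded looping power we need:

```python
# Z combinator: the Y combinator adapted for eagerly evaluated languages
Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# factorial without named recursion and without explicit loops
factorial = Z(lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1))

print(factorial(5))  # 120
```

Note that neither lambda refers to itself by name; the self-reference is manufactured entirely by `Z`.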

## Top comments (3)

Thanks for this nice reflection.

How can we express this in a discrete math symbolism or theorem?

I was deliberately not rigorous here, but if you are interested: the presented "programming languages" are inspired by Uwe Schöning's LOOP, WHILE and GOTO.