
89th day of Haskell: meditations on structures of programs

What does a program look like in time and space?

How does a program emerge in the mind of a programmer? What is its abstract shape, like an image of a process of computation: a calculation rewrites itself from expressions to subexpressions, arguments are passed as parameters, we substitute one thing for another, passing a function to a function. What kind of space does it consume, and what is its relation with time? Can we picture it within our mind's eye, can we see and feel abstractly the whole image of its operations, like seeing a painting all at once, the whole of it upon a wall, consuming a part of our space, written on a piece of canvas, its image two dimensional?

What a program looks like, the code we write, the code that represents a program's operations, could be compared with music. There is a notion of the spacetime complexity of a program which could be compared with the way music unfolds in space. Music has a time and a space component: the description of how fast or slow it moves, its beats and rhythms, and its notation, musical notes, to write it down, a code so to say, that we can share with others, that we can read, sing and play with our instruments. We could say we are the computer, we are the compiler translating the code of musical notes into the machine code soundscape of our space and time. The way music emerges in space is through combinations of intervals, the tiny atomic relations between just two notes, which evolve into larger structures called chords. But this is not a music lesson, just a way to show how abstract thought maps into time and space. But can a program sing? What does a program sound or look like then? How do we map its transformations and see them unfold in our minds?

What is the shape of a program, not just some program but any program? What does it even mean to program, to describe a process of computation? What does this process, which comes into the world after the thought, look like? How do we describe its complexity, and how come, once described, it somehow begins to move on its own, bringing results, outputting other programs which output programs, or in some cases giving us an error, an unsolved mystery?

What is a program?

Some say a program is a sequence of instructions, a series of clearly described events, tiny processes, that solve a problem. Or maybe it is a sequence of transformations, a sequence of expressions that transform some data? But then what is data? Can a program transform itself without any data, can it move by itself without a problem to be solved? Maybe there are different ways of building programs, different ways we can describe the steps in play, like different perspectives, frames of reference: maybe some computations are described as abstract structures in time and space, while others are described as a sequence of commands that build those structures. How do we tell the difference? Take a moment and really ponder: what does a program look like? Is a program just something to which we give a problem to solve, something to encode a business problem or publish a website, or is there something deeper within, a notion of self-replicating awareness that keeps on calculating, surprising us, interacting with us and with our environment?

Interacting with a Haskell interpreter

Haskell is a functional programming language: pure, lazy, and statically typed. It is functional because programs written in Haskell are more than just a sequence of steps to be carried out; they are formed out of pure expressions, pure functions, functions that operate on a total space, functions that produce other functions, functions that create larger structures of functions, like a self-replicating machine, like an infinity mandala. Haskell programs are beautiful, abstract mathematical entities. When we say mathematical we mean symmetrical, we mean there is an underlying logic within them that can be proved, that can be mapped and applied, like an empty variable, to any outside thing or process we know. Functional programs do not destructively update the state of the program; they make copies of themselves when we change them, and so we can track and observe how they evolve through time and how they consume the space of computation.
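A minimal sketch of that non-destructive character (the names `double`, `xs` and `ys` are just illustrative): a pure function always gives the same output for the same input, and "changing" a value produces a new value while the original stays intact.

```haskell
-- A pure function: same input, same output, every single time.
double :: Int -> Int
double x = x * 2

-- "Updating" a value never destroys the original; we build a new one.
xs :: [Int]
xs = [1, 2, 3]

ys :: [Int]
ys = map double xs  -- a new list; xs is left untouched

main :: IO ()
main = do
  print xs  -- [1,2,3]
  print ys  -- [2,4,6]
```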

Let's begin with a simple program, with a simple expression. Let's write a number, because numbers are simple, meaning numbers are something common to us all: we count things, we count fingers, we have two eyes, we have a basic understanding of a numberness of some sort. It makes no difference whether we are great at math or not, just think of a number, any number you like. Good.

For instance, let's write the number two, 2. We have a symbol 2 representing some abstract quantity, an abstract context. I say abstract because we do not know what the number two represents: two of which, two of what? We only have this idea of twoness, like a context, a space which we can fill with names and meanings, like left and right, like the sun and the moon, like two people, or maybe just the number two, 2. We could write any other number too, like 1 or 3. So into the Haskell interpreter, we write the number 2.

λ> 2

And Haskell answers back with 2. So our conversation, our inquiry, could be described as a process, a function, some relation from the number 2 to the number 2, 2 -> 2. We use the arrow symbol, ->, so that we can realize the connectedness of this process: something from something into something, from a variable to some variable, from an a to an a. Like a question and then an answer, it is more than just 2 and some number 2, not quite like 2 = 2; we can reason about 2 -> 2 like input -> output, its only equality being the fact that something returns what was already given. I am using the words giving and taking, though the main idea here is the idea of an identity. What else could it answer to our simple inquiry? The Haskell interpreter could have given us the number 3, but then we would not know if we were communicating at all. The meaning would not be referentially transparent. If I call you, that means I would like to talk to you and not to someone else. I only need to save your number once, and then each time I call you, you will answer. So basically our relation is referentially transparent. We could also say that in our program, if we define some variable or a function, then this definition will not change; it cannot, it makes no sense to change. Otherwise we would no longer know whom to trust, what is real and what is not.
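Referential transparency can be sketched in a few lines (the names `two`, `a` and `b` are just for illustration): a defined name can always be replaced by its definition without changing the program.

```haskell
-- Referential transparency: a name may always be replaced by its definition.
two :: Int
two = 2

-- Because 'two' is just 2, these two expressions mean the same thing:
a :: Int
a = two + two  -- substitute the name...

b :: Int
b = 2 + 2      -- ...for its definition; the result cannot differ

main :: IO ()
main = print (a == b)  -- True
```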

I mean, if I just give you an apple without asking anything in return, and you give me back a pear one time and a lecture in mathematics the next, what does that tell us about this exchange? It is impure; it is not clear what I will get. It is like talking to a madman: we never know what we will get. Just stating a number seems like we are not even trying to communicate anything to Haskell, and our Haskell artificial intelligence machine is merely listening, tuning in, its output repeating our input. Like listening to a completely random unknown event, we can only give back what was given; or like two people commenting on the beauty of an event: "the Moon is so beautiful," Alice tells Bob, still looking at the Moon. Bob replies, "yes, the Moon is so beautiful." Like identifying an identity, like the identity function id :: a -> a, which describes a temporal event, some form of knowing, represented as: if I give you some a, then I get back a. So I have a unique relation with oneself, which could be described abstractly as a -> a. Could we write b -> b? Is there any difference between these two variables? Well sure, if all we know is a -> a and b -> b, then maybe those two unknown relations do mean something. But then all we have is just a -> a, and who knows, maybe in some other world we would be writing * -> *. Is * -> * the same as a -> a? Syntactically it is not, since we are seeing two different symbols, but the relation they represent is semantically the same; they both represent the same meaning. Syntax is how something looks, and semantics is what it means.
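The identity function described above already exists in Haskell's Prelude as `id :: a -> a`. As a sketch, we can also write our own version (the name `identity` is ours, not standard) and see that it gives back exactly what it was given, whatever the type:

```haskell
-- Our own identity function; the Prelude already provides 'id :: a -> a'.
identity :: a -> a
identity x = x

main :: IO ()
main = do
  print (identity 2)                -- 2
  print (identity "Moon")           -- "Moon"
  print (identity 2 == id 2)        -- True: it behaves like the Prelude's id
```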

So, in the language of category theory, imagine a field, a space, an abstract category of objects. I am only using the word field so that I can intuitively picture the category as some empty space filled with objects, or even with just one object. Each object in this category has a unique arrow going from itself to itself. We call this arrow an identity arrow.

Like shooting an arrow at oneself: you could probably do it once, but then you would be dead, while you could shoot infinitely many arrows into something which is not you. This unique arrow that you shoot into yourself is an identity arrow, a unique morphism directed at you, an object, identifying you as a unique target; you become both the source of the arrow and its target, source -> target. There is only one identity arrow from my name into my name, and countless more arrows from my name into other names.
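In Haskell the identity arrow at every object (every type) is `id`, and composing any function with it changes nothing, which is exactly the categorical identity law. A small sketch (the function `shout` is just an example of ours):

```haskell
import Data.Char (toUpper)

-- An arbitrary arrow from String to String.
shout :: String -> String
shout = map toUpper

-- The identity laws of a category: id . f = f and f . id = f.
main :: IO ()
main = do
  print ((id . shout) "moon")  -- "MOON"
  print ((shout . id) "moon")  -- "MOON"
  print (shout "moon")         -- "MOON": all three agree
```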
