Christopher Smith

OOP-Camp 4.1: The Secret Lives of Primitive Types

In the Beginning, The Nerds Made 1 and 0 And Saw that it Was Good

I used to tell my high school students that what programmers did was literal magic: we inscribed runes on rocks, pushed lightning through them, and as a result, I could be called new slurs on social media by college students in Bangladesh. It's incredible what abstraction can do for you. We'll talk more about the idea of abstraction later, but I want to focus on the "lightning" bit.

At the heart of a computer is a very simple idea: either a certain part of the computer has electricity going through it, or it doesn't. 1 or 0, on or off. In a lot of ways, it's like a 1x1 Lego piece: you can put something on top of it, or something underneath it. However, like the humble 1x1, it doesn't do much on its own.

Add in a second bit (10) and you can now represent 4 values. Add in another one (101) and you get 8.

You can represent this mathematically: n bits give you 2^n distinct values, with a maximum value of 2^n - 1 because 0 takes up one of the slots. Get up to 8 bits (10110100) and you have 1 byte. Why 8 bits? Here's a good video explaining why, and this video goes a "bit" deeper into the backstory. Computerphile is a great "lunchtime video" resource and I recommend its interviews with famous computer scientists like Brian Kernighan. Back to the idea at hand: regardless of how we feel about the byte, it is the core standard in most computers, and we often describe our CPUs by how many bytes they can process at a time: a 32-bit (x86) CPU has 4-byte registers, a 64-bit (x64) CPU has 8-byte registers, and so on and so forth. However, at the time of the invention of C, the 1-byte char was the standard unit, meaning you had 256 values (0 through 255) that you could represent. Similarly, memory was (and still is) addressed in 1-byte intervals.
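
If you want to see that doubling for yourself, here's a minimal Java sketch (the class name is mine, purely for illustration) that prints how many values each bit count can represent:

public class BitValues {
    public static void main(String[] args) {
        // Each extra bit doubles the number of representable values: 2^n.
        for (int n = 1; n <= 8; n++) {
            int values = 1 << n; // shifting 1 left by n places computes 2^n
            System.out.println(n + " bit(s): " + values + " values, from 0 to " + (values - 1));
        }
    }
}

At n = 8 you land on 256 values, the familiar 0-255 range of a single byte.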

Now, we don't want to dive too deeply into computer architecture or computer systems, but we do need to cover enough that you can grasp this idea: our most basic data is represented directly in memory as a byte, or a few bytes. When we talk about primitive types, we are referring to the simplest, most basic data: types whose value we can read directly out of memory as a binary number.

To really demonstrate what I mean, fire up your favorite IDE and paste in the following code for either Java or C#:

Java:

public class FooBar {
    public static void main(String[] args) {
        char fooChar = 'N';
        int barInt = 10;

        if (fooChar == 'N' && barInt == 10) {
            System.out.println("Everyone was Kung FooBar Fighting!");
        }

        int fooBar = fooChar; // implicit widening: a char is just a number (code point 78 for 'N')

        System.out.println("fooChar " + fooChar + " is equal to int type " + fooBar + ". Hiyah!");
    }
}


C# with Top-Level Statements:

char fooChar = 'N';
int barInt = 10;

if (fooChar == 'N' && barInt == 10)
{
    Console.WriteLine("Everyone was Kung FooBar Fighting!");
}

int fooBar = fooChar; // implicit conversion: a char is just a number (code point 78 for 'N')

Console.WriteLine($"fooChar {fooChar} is equal to int type {fooBar}. Hiyah!");

When you run it, you will see that our char type has a numeric equivalent (78, the code point for 'N'). This is because our primitive types are stored directly in memory as raw binary values (or as constants in bytecode for Java and C#, if you want to get granular). Something else about these primitive types is that they can't be broken down into distinct, smaller parts. I say distinct because you can technically break an int down into a series of bits, but at that point, nothing makes one bit unique from any other bit. Only once you group them together into something like a byte does the collection become somewhat unique. Therefore, I say that a primitive type is any data type that cannot be meaningfully broken down into something smaller that is more unique or distinct than the data type itself.
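
If you want to peek at those indistinct bits yourself, here's a minimal Java sketch (again, the class name is just for illustration):

public class BitsDemo {
    public static void main(String[] args) {
        char fooChar = 'N';
        int asInt = fooChar; // widening conversion: the code point, 78

        // The same value, printed as the raw bits that memory holds.
        System.out.println(Integer.toBinaryString(asInt)); // prints 1001110
    }
}

Any single bit of 1001110 tells you nothing on its own; only the whole group together means 'N'.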

To go back to food: You can take a cookie (or biscuit if you prefer) and break that down into its ingredients: sugar, egg, milk, flour, etc. but you can't really meaningfully break down the ingredients into their component molecules without losing the ingredient's distinctiveness.

Of course, this conversation really only works when thinking about languages like C/C++, Java, Rust, Go, and C#, and not languages like Python, Ruby, or JavaScript. Those languages have "primitives," but they are actually abstract, compound data types that behave in a primitive way. While this isn't necessarily a bad thing, we can't use them as good examples to discuss primitives as they really are.

I. Declare. VALUES! (I didn't say it, I declared it!)

In languages like C, C#, and Java, you have to declare a primitive, then assign it a value that matches the type.

int fooBarBaz;   // declaration: set aside space for an int

fooBarBaz = 42;  // assignment: give it a value

In the modern day, most languages let you do this all in one go: int fooBarBaz = 42; but it is still common to see a variable, property, or field declared, then assigned later. At this point, we can start to dig more into terminology, so don't be scared when I use words you might not be familiar with. Just know I'm slowly warming you up to some new vocabulary so that we can use it later on.

When we assign a value to a variable, we are giving it a state or condition of being. Just like we have discussed previously, anything can have a state and that state can change based on conditions:

int stateIsCool = 37;

int changingStateIsFun = stateIsCool + 10;

stateIsCool += changingStateIsFun;


What we have done here is create two int variables: stateIsCool and changingStateIsFun. The first is assigned 37 and the other is assigned the value of stateIsCool plus 10, making it 47. We can then set stateIsCool to a new value: the current value of stateIsCool plus the value of changingStateIsFun.

We can represent it numerically like this:

int stateIsCool = 37;

int changingStateIsFun = 37 + 10;

stateIsCool = 37 + 47;

Why would we want to do this? First, it allows us to have more complex behavior. For example, when I am playing a popular first-person shooter and we are picking maps, it gives us options to vote on, then loads the map that wins; that can only happen when we can change the state of an existing value. Second, changing a state is just a matter of doing mathematics: GPUs do linear algebra to display your screen or the video game you are thinking about playing, and CPUs do lots of arithmetic to change the values of bytes. Without this, you don't have a computer, and thus no way to read what I wrote (which might be a blessing in disguise for you, actually!)
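
As a toy version of that map-vote idea (the map names and vote counts are made up for illustration), notice that the winner emerges purely from state changes on a couple of ints:

public class MapVote {
    public static void main(String[] args) {
        int votesForDocks = 0;
        int votesForFactory = 0;

        // Each vote is a state change to an existing value.
        votesForDocks += 1;
        votesForFactory += 1;
        votesForDocks += 1;

        String winner = (votesForDocks > votesForFactory) ? "Docks" : "Factory";
        System.out.println("Loading map: " + winner);
    }
}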

So when you are writing code, whether it is functional OCaml code, procedural C code, or OOP-based Java code, go ahead and start to think about your program as a series of explicit state changes. For now, we are only doing simple changes, but the further out into abstraction we go, the more interesting our changes can be.

With this in mind, let's go ahead and see how we can build better lego pieces in order to make cooler and cooler creations!

Up Next: 4.2: Entering the Compound - Making New Data Types Out of Primitives
How do we get Strings? What are they really? Can we group primitives together into something meaningful? What is the meaning of life, the universe, and everything? All this, and more, answered in the next post!
