As previously mentioned, computers store data in a 'linear' array of bytes. Now that we have covered the fundamentals of how data is stored within a computer, we can move on to how information is encoded.
Integer: a real number (not complex/imaginary) that has no decimal places, i.e. a whole number.
Within this post I will only cover unsigned integers, meaning integers that cannot be negative.
A bit can only store a 1 or a 0; however, we can use multiple bits in conjunction to store bigger numbers, a bit like how Western Arabic numerals (0-9) use multiple digits to notate numbers bigger than 9.
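To illustrate how multiple bits combine (a quick sketch in Python, using the built-in `format` to render numbers in binary):

```python
# Print the first few numbers in decimal alongside their binary form.
# Three bits are enough to represent 0 through 7.
for n in range(6):
    print(n, format(n, "b").zfill(3))
# 0 000
# 1 001
# 2 010
# 3 011
# 4 100
# 5 101
```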
We could simply count up in both binary and Western Arabic numerals to show which numbers correlate, or we could think about it another way. For big binary numbers you do not want to manually count all the way up each time you need to check a value; instead, we can create a little routine for ourselves to make this a simple process:
- Index all of your bits starting from zero on the right and counting up.
- For each binary digit, calculate the value of the bit multiplied by 2 to the power of its index.
- Add all the values together.
Note: we skipped writing out the indexing phase below; the indices appear as the exponents. Taking the binary number 1001011 as an example:

|1 × 2^6||0 × 2^5||0 × 2^4||1 × 2^3||0 × 2^2||1 × 2^1||1 × 2^0|

Adding the values together gives 64 + 8 + 2 + 1 = 75, so 1001011 in binary is 75 in decimal.
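The routine above can be sketched as a short Python function (the name `bits_to_int` is my own, chosen for illustration):

```python
def bits_to_int(bits: str) -> int:
    """Apply the routine: index the bits from zero starting on the
    right, multiply each bit by 2 to the power of its index, and
    add all the values together."""
    total = 0
    for index, bit in enumerate(reversed(bits)):
        total += int(bit) * 2 ** index
    return total

print(bits_to_int("1001011"))  # 64 + 8 + 2 + 1 = 75
```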
Not all CPUs are created the same, and since integer addition is an operation baked into the CPU's physical design, your numbers may be encoded slightly differently depending on the CPU you are programming for.
When you write down a decimal number you write it big-endian, meaning that the most significant or largest digit comes first (and the least significant digit last). Most modern desktop CPUs (such as x86) instead store their integers little-endian, with the least significant byte first, while some (mainly older) CPUs store them big-endian. Meanwhile, some CPUs (i.e. ARM) allow for bi-endianness, meaning they can process numbers in either endianness.
It is important to note that endianness only changes the ordering at a byte level, not a bit level, meaning that the bits 110 within a byte won't change into 011, but the two-byte value FF00 would change into 00FF (using hexadecimal to notate bigger numbers, because it would be impractical to write out 16 binary digits).
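You can see this byte-level reordering using Python's standard `struct` module, whose `>` and `<` prefixes select big- and little-endian packing (`H` is an unsigned 16-bit integer):

```python
import struct

value = 0xFF00

# Big-endian: most significant byte first, like how we write numbers.
big = struct.pack(">H", value)
# Little-endian: least significant byte first.
little = struct.pack("<H", value)

print(big.hex())     # ff00
print(little.hex())  # 00ff
```

Note that the bytes themselves (FF and 00) are unchanged; only their order differs.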