Integers are the most fundamental data structure in computing, if we can even call them a "structure." Our job as programmers is to give meaning to these numbers. No matter how complex the software, in the end it's all just integers, and your processor only understands integers.
When we need negative numbers, we use two's complement. When we need fractional numbers, we use a sort of scientific notation and, boom, we have a float. At the end of the day, there's no escaping zeros and ones.
## A Little History of Integers
In C, `int` is almost the natural type. Although compilers might complain, with a few flags here and there, most will allow you to write something like this:
```c
main(void) {
    return 0;
}
```
Technically, this is the same as:
```c
int main(void) {
    return 0;
}
```
This behavior comes from a time when it seemed like common sense that, if the programmer didn't specify a type, an integer was a reasonable default.
C was designed with this idea in mind. Initially, `int` didn't have a standard size. The PDP-11, the processor for which C was originally created, used 16-bit addressing, so it was assumed that it made sense for an `int` to also be 16 bits. The idea was that the size of `int` would grow as processors evolved.
## The Mysterious Size
This approach created some problems. If the size of `int` varies between platforms, programs compiled for different processors can behave differently. This undermined the idea of C as an "agnostic" language that compiles to diverse architectures.
Unlike `int`, `char` has always had a well-defined size in practice: 8 bits, usually signed by default (strictly speaking, the standard only guarantees at least 8 bits and leaves plain `char`'s signedness to the implementation). Despite its name, `char` is not an abstract type for text characters; it's just an 8-bit number. For example, the literal `'a'` is converted at compile time to the number 97 (its ASCII code), plain and simple.
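You can check this yourself; here is a tiny sketch (assuming an ASCII platform) that treats a character literal as the number it really is:

```c
#include <stdio.h>

int main(void) {
    char c = 'a';
    /* Printed as an integer, 'a' is just its ASCII code */
    printf("%d\n", c);        /* 97 */
    printf("%d\n", 'a' + 1);  /* 98, arithmetic works as with any number */
    return 0;
}
```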
And what about other types, like `short` and `long`? The idea was straightforward:

```
short <= int <= long
```
Compiler implementers had complete freedom to decide the specific sizes.
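To see what that freedom turned into on your machine, a quick `sizeof` check is enough. A minimal sketch; the sizes in the comments assume a typical 64-bit Linux build and may differ on your platform:

```c
#include <stdio.h>

int main(void) {
    /* sizeof reports the size in bytes (more precisely, in chars) */
    printf("short: %zu bytes\n", sizeof(short)); /* typically 2 */
    printf("int:   %zu bytes\n", sizeof(int));   /* typically 4 */
    printf("long:  %zu bytes\n", sizeof(long));  /* typically 8 */
    return 0;
}
```

Whatever the numbers are, they respect `short <= int <= long`.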
## ANSI C (1989) Brings Some Order
With the ANSI C standard, some rules were established:
- `char`: at least 8 bits
- `short`: at least 16 bits
- `int`: the size of a `short` or larger (16 or 32 bits)
- `long`: at least 32 bits
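Those guarantees are exposed through the `limits.h` header; a small sketch to inspect the actual ranges your compiler gives you:

```c
#include <limits.h>
#include <stdio.h>

int main(void) {
    /* CHAR_BIT is the number of bits in a char (guaranteed to be at least 8) */
    printf("bits in a char: %d\n", CHAR_BIT);
    printf("short range: [%d, %d]\n", SHRT_MIN, SHRT_MAX);
    printf("int range:   [%d, %d]\n", INT_MIN, INT_MAX);
    printf("long range:  [%ld, %ld]\n", LONG_MIN, LONG_MAX);
    return 0;
}
```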
This organization helped, but the size of `int` remained confusing, to say the least. Things improved with the C99 standard, which introduced the `stdint.h` header.
Now we have fixed-size types:
- `int8_t`: 8 bits
- `int16_t`: 16 bits
- `int32_t`: 32 bits
- `int64_t`: 64 bits
From then on, it was up to the compiler to implement this header with fixed-size types.
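In practice it looks like this; a minimal sketch that also pulls in the matching printf format macros from `inttypes.h` (the variable names are just illustrative):

```c
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void) {
    int32_t temperature = -40;  /* exactly 32 bits, signed  */
    uint8_t flags = 0xFF;       /* exactly 8 bits, unsigned */
    int64_t big = INT64_MAX;

    /* inttypes.h provides portable format macros for the fixed-size types */
    printf("temperature = %" PRId32 "\n", temperature);
    printf("flags       = %" PRIu8 "\n", flags);
    printf("big         = %" PRId64 "\n", big);
    return 0;
}
```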
## The Current State of Integers
Today, with modern compilers like GCC and Clang, sizes are more predictable:
| Type | Size |
|---|---|
| `char` | 8 bits |
| `short` | 16 bits |
| `int` | 32 bits |
| `long` | 64 bits (32 bits on 32-bit systems) |
| `long long` | 64 bits |
Although `long long` is still somewhat peculiar, at least it brings some consistency (I even find `long long` stylish, to be honest).
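If your code relies on these typical sizes, you can make that assumption explicit instead of just hoping for it; a small sketch using C11's `_Static_assert`:

```c
#include <limits.h>

/* Fail at compile time if the platform doesn't match the table above. */
_Static_assert(CHAR_BIT == 8, "char is not 8 bits");
_Static_assert(sizeof(int) == 4, "int is not 32 bits");
_Static_assert(sizeof(long long) == 8, "long long is not 64 bits");

int main(void) {
    return 0;
}
```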
## What to Do?
Today, we are well equipped with headers like `stddef.h` and `stdint.h`. Use `int` only where necessary, like for the return type of the `main` function. For anything beyond prototyping, prefer the fixed-size integers from `stdint.h`, and for array indices or loops, use `size_t` from `stddef.h`. I hope this spares you some headaches down the road.
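Putting that advice together, here is a minimal sketch of what everyday code can look like (the array and its contents are just placeholders):

```c
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

int main(void) {
    /* Fixed-size data: we know exactly how wide each element is. */
    int32_t samples[] = {10, -3, 42, 7};

    /* size_t for sizes and indices: it fits any object size on the platform. */
    size_t count = sizeof(samples) / sizeof(samples[0]);

    int64_t sum = 0;
    for (size_t i = 0; i < count; i++) {
        sum += samples[i];
    }

    printf("sum of %zu samples: %lld\n", count, (long long)sum);
    return 0;
}
```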
Thanks for making it this far — see you next time!