DEV Community


Discussion on: The 8 Primitive Types in Java

orenovadia

Thanks Jeremy.

I have done a little research on the matter.
First of all, even on 64-bit systems, memory is byte-addressable: pointer addresses have byte granularity (this is why you can index a specific char in a char array in C). Because of this, the hardware can directly reference variables shorter than 64 bits, so using int instead of long really does save memory.
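As a quick sanity check of the widths involved (a minimal sketch; exact heap layout of arrays depends on the JVM, but the primitive sizes themselves are fixed by the Java spec):

```java
public class PrimitiveSizes {
    public static void main(String[] args) {
        // The Java spec fixes int at 32 bits and long at 64 bits,
        // regardless of whether the CPU is 32- or 64-bit.
        System.out.println("int:  " + Integer.BYTES + " bytes"); // 4
        System.out.println("long: " + Long.BYTES + " bytes");    // 8

        // For a large array, the choice roughly halves the data footprint
        // (ignoring the small, JVM-specific array header):
        int n = 1_000_000;
        long intArrayBytes  = (long) n * Integer.BYTES; // ~4 MB
        long longArrayBytes = (long) n * Long.BYTES;    // ~8 MB
        System.out.println("int[1M]:  ~" + intArrayBytes  + " bytes");
        System.out.println("long[1M]: ~" + longArrayBytes + " bytes");
    }
}
```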

Regarding runtime and performance: arithmetic on 32-bit and 64-bit integers (on a 64-bit CPU) is done in hardware by the ALU. The time difference should not be significant; still, it is possible that adding two 32-bit ints is slightly faster than adding two longs because of hardware-level optimizations.
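A naive way to poke at this (a rough sketch only: without a proper harness like JMH, JIT warm-up and dead-code elimination make such timings unreliable, so treat the numbers as illustrative, not conclusive):

```java
public class AddLoopTiming {
    public static void main(String[] args) {
        final int N = 100_000_000;

        // Sum with 32-bit int arithmetic (will wrap around on overflow).
        long t0 = System.nanoTime();
        int intSum = 0;
        for (int i = 0; i < N; i++) intSum += i;
        long t1 = System.nanoTime();

        // Same sum with 64-bit long arithmetic (no overflow here).
        long longSum = 0;
        for (long i = 0; i < N; i++) longSum += i;
        long t2 = System.nanoTime();

        System.out.printf("int  loop: %d ms%n", (t1 - t0) / 1_000_000);
        System.out.printf("long loop: %d ms%n", (t2 - t1) / 1_000_000);
        // longSum holds the true value N*(N-1)/2; intSum holds it modulo 2^32.
        System.out.println("longSum = " + longSum);
    }
}
```

On most modern 64-bit hardware both loops run at essentially the same speed per addition, which matches the point above: the win from int is memory footprint, not arithmetic throughput.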

Regarding BigInteger: I have to disagree on this one. BigInteger is an abstraction implemented in software. It must be represented by something like an array of ints, and therefore arithmetic on BigInteger is not O(1) in time or space. It is probably logarithmic in the magnitude: adding two BigIntegers of value N requires walking an array whose length grows like log(N).
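A small illustration of that representation (java.math.BigInteger does store its magnitude internally as an int array; bitLength() and toByteArray() let you watch the size grow with the value, which is exactly why its operations cannot be O(1)):

```java
import java.math.BigInteger;

public class BigIntegerGrowth {
    public static void main(String[] args) {
        // 2^1000 -- far beyond what a 64-bit long can hold (max ~2^63 - 1).
        BigInteger a = BigInteger.ONE.shiftLeft(1000);

        // The number of bits (and hence backing array words) grows with the value:
        System.out.println("bitLength:       " + a.bitLength());           // 1001
        System.out.println("byte array size: " + a.toByteArray().length);  // ~126 bytes

        // Addition touches every word of that array, so it is O(log N), not O(1):
        BigInteger sum = a.add(a); // 2^1001
        System.out.println("sum bitLength:   " + sum.bitLength());         // 1002
    }
}
```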

renegadecoder94
Jeremy Grifski Author

Thanks for the follow-up! I think a lot of what you shared followed my intuition, although I didn't know the details. Thanks again.

In terms of the BigInteger topic, I totally agree. I was trying to pose the question "why not always go bigger?", and you nailed the response. I figured the answer to that question would be similar to your original question.