DEV Community

Discussion on: 🚀 Demystifying memory management in modern programming languages

André Jacques (havarem) • Edited

The statement that the stack is not faster than the heap is only true in the sense that, once you have the address, the time to read/write is the same. The problem is that, semantically, stack addresses are already known (through the Stack Pointer, and the Frame Pointer as well). As the author said, there is no such thing as a Heap Pointer (it just wouldn't make sense). With that in mind, let's say we have 2 integers in C like this:

int a = 3;                     /* lives on the stack */
int *b;
b = malloc(sizeof(int));       /* needs <stdlib.h>; the storage for *b lives on the heap */
*b = 4;

First of all, it is clear that creating an element on the stack is way faster than on the heap, since malloc needs to find a contiguous block of free memory for those 32 bits (or 64 bits), whereas the stack just needs to look at the Stack Pointer, write the element and move the Stack Pointer accordingly.

After that, modifying an element on the stack is easy since its location is basically an offset from the Stack Pointer. But for the heap you first need to find the pointer on the stack, from that pointer you get the address in the heap, and only then can you read or write at that address.
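
To make it concrete (just a rough sketch; the comments describe what a compiler typically does, and the exact instructions depend on the compiler and the CPU):

#include <stdlib.h>

void sketch(void)
{
    int a = 3;                     /* lives in the current stack frame */
    int *b = malloc(sizeof(int));  /* b is on the stack, *b is on the heap */
    a = 5;   /* usually one store at an offset from the Stack/Frame Pointer */
    *b = 5;  /* usually two steps: load b from the stack, then store through it */
    free(b);
}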

And I didn't even talk about the difference between addressing by offset vs. by full address, since a lot of CPUs have instructions that are too small to encode a full address in a single instruction.

In conclusion, the stack is faster than the heap only because of the Stack Pointer.

Deepu K Sasidharan (deepu105)

This is what I have been trying to say, but I think you said it better :)

Sergio Pedri (sergio0694) • Edited

I believe there has been a misunderstanding here. If you go back and read my original comment, I made it clear that when I said the stack was as fast as the heap, I meant direct read/write operations to a memory location in either of those two segments. You can't compare a read/write to a local value type variable with one to a reference variable, for obvious reasons - those are handled differently. And you can't take the overhead of a malloc call into consideration, or the garbage collection cost; that was completely beside my point, and I did mention that in my original message.

What I said was that reading or writing to a specific memory location always has the same performance, regardless of where that memory location is situated, be it on the stack or the heap.

As a practical example, suppose you allocate a buffer of int values on the stack, and one on the heap, and end up with two int* pointers: one points to the buffer on the stack, the other to the one on the heap. If you read/write through either of those two pointers, the operation will be exactly the same down to the assembly level, and the performance will be exactly the same too.
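
Something like this in C (just a sketch; the size and the names are arbitrary):

#include <stdlib.h>

void demo(void)
{
    int stack_buf[16];
    int *heap_buf = malloc(16 * sizeof(int));
    int *p1 = stack_buf;  /* points into the stack */
    int *p2 = heap_buf;   /* points into the heap */
    /* These two writes compile to the same kind of store; the cost of the
       access itself does not depend on which segment the pointer targets. */
    p1[3] = 42;
    p2[3] = 42;
    free(heap_buf);
}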

As another example, suppose you have a local value type variable, and you pass it by reference to another function. That function will have no way of knowing whether that reference points to a variable in some other stack frame, or to a memory location in the heap, and that makes absolutely no difference at all. Again, both cases are compiled to exactly the same instructions; it's just a pointer of some type anyway.
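
For example (again just a sketch; the callee genuinely has no way to tell where the int lives):

#include <stdlib.h>

void increment(int *value)
{
    /* Identical instructions are emitted here whether *value sits in a
       caller's stack frame or somewhere on the heap. */
    *value += 1;
}

void caller(void)
{
    int local = 10;
    int *heap = malloc(sizeof(int));
    *heap = 10;
    increment(&local);  /* reference into the stack */
    increment(heap);    /* reference into the heap */
    free(heap);
}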

In conclusion, the speed of an operation is not intrinsically linked to where a value is located in the virtual memory address space, but just to how you're accessing that location.

I hope this clears things up regarding my original point on that 👍

EDIT: just to clarify, you do have a pointer lookup in both cases though. If you have a pointer to a heap location, e.g. in %ecx, and want to read that value, you'll do movl (%ecx), %eax. If you want to read the value at the top of the stack, you'll do movl (%esp), %eax. In both cases the operation is exactly the same and you have an address lookup just the same. So saying that you never look up addresses when accessing the stack is just not the case. You're correct in saying that if you have a pointer stored on the stack that points into the heap, you need two dereferences there, sure. But even in the best case scenario on the stack, you still need at least one anyway. And if you have a pointer in a given register, reading from that location has exactly the same cost no matter whether it points somewhere in the heap or the stack.

Deepu K Sasidharan (deepu105)

Your last edit is exactly the point. From a language point of view we care about the cost of an operation when we talk about speed, and the cost of an operation on the stack is lower than on the heap, and that is why we say the stack is faster than the heap. The actual speed of reading/writing any part of memory is indeed the same, but for the heap there are more operations involved than for the stack when reading/writing a value, making the cost of the operation higher for the heap.

André Jacques (havarem)

"Suppose you allocate a buffer of int values on the stack, and one on the heap, and end up with two int* pointers."

That is not quite true, since on the stack everything is an offset from the Stack Pointer, which sits in a CPU register. So, for example on ARM CPUs, it will take something like 3 instructions to retrieve any of the ints in the buffer sitting on the stack. For the one on the heap, the first element will take something like 4 instructions, since you need to load a register with the address you find on the stack (the pointer to your heap buffer), and AFTER that the other elements will be read AS FAST AS the stack, since the other int elements of the buffer are just an offset from that value. And like I said earlier, if you have the address as a label, you will need 2 instructions to set up your register before you can LDR/STR that memory block.
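
Roughly, in C terms (the comments are approximate; an optimizing compiler may well keep the base address in a register the whole time):

#include <stdlib.h>

void access_cost(void)
{
    int stack_buf[8];
    int *heap_buf = malloc(8 * sizeof(int));  /* heap_buf itself sits on the stack */
    stack_buf[2] = 1;  /* a store at an offset from the Stack Pointer */
    heap_buf[0]  = 1;  /* one extra load to fetch heap_buf off the stack, then the store */
    heap_buf[2]  = 1;  /* once the base address is in a register, the other elements
                          are just offsets from it, as fast as the stack accesses */
    free(heap_buf);
}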

I think what you are trying to say is that once you have the address loaded in a register, the read time will be the same, which is true. The danger is that people might believe it is always the best solution to use the heap since it is so large, without accounting for the way it is handled. Embedded systems are very sensitive to this considering their low frequency (very often below 100 MHz), no OS and small memory footprint. The good practice in that case is to use the heap only for objects allocated at startup, so there is NO malloc anywhere after initialization, making sure the system stays deterministic. I know that on a 12-core i9 CPU clocked at 4 GHz, the impact on most of the systems people work on (except OSes, for that matter, or video games) will be negligible.
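
In practice that pattern looks something like this (a sketch; the names and sizes are made up):

#include <stdlib.h>

static int *samples;  /* heap buffer, allocated once at startup */

void system_init(void)
{
    /* The only malloc in the whole program: after this point the memory
       layout never changes, so timing stays deterministic. */
    samples = malloc(1024 * sizeof(int));
}

void sample_task(int index, int value)
{
    samples[index] = value;  /* no allocation in the steady-state code path */
}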