Understanding Memory: Part 2
In Part 1, we demonstrated a basic memory model and showed that to modify a variable, you must know its address. In this section, we will contrast pointers and arrays, and introduce the heap and the stack.
Pointers vs Arrays
You may have heard already that pointers are arrays and arrays are pointers. That is close to true, but the differences matter. First, consider the difference between value arithmetic and pointer arithmetic:
```c
int var = 5;     // var is 4 bytes starting at 0x10
var = var + 1;   // var is now 6

int* ptr = &var; // ptr is 8 bytes starting at 0x14,
                 // containing the value 0x10
ptr = ptr + 1;   // ptr now contains the value 0x14
```
What on earth happened here? Pointer arithmetic operates in address space; the basic unit is the size of the pointed-to type. In this case the size of an `int` is 4 bytes, so `ptr + 1` means "advance `ptr` by the size of 1 `int`."
When we think of arrays, we typically picture:
```c
int arr[4] = {1, 2, 3, 4}; // 4 ints (16 bytes) starting at 0x20
// note: arr names the whole 16-byte block; no separate
// pointer object is stored for it
```
We can then use the subscript operator, `[]`, to access individual elements:
```c
arr[3] = 6; // arr is now {1, 2, 3, 6}
```
But the subscript operator doesn't really apply to "arrays"; it applies to pointers! The following are equivalent ways to access the memory at `0x28`:
```c
arr[2] = 10;     // subscript operator
*(arr + 2) = 10; // dereference offset pointer (by 2)
```
The subscript operator combines an offset (scaled by the size of the pointed-to type) and a pointer dereference into a single operation. The following equalities hold:
```c
// (illustrative equalities, not compilable C expressions)
*(arr + 1) == arr[1] == 2     // second element in array
 (arr + 1) == &arr[1] == 0x24 // address of second element in array
```
The type of `arr` may be `int[4]`, but this type decays to `int*`, which means it can be used anywhere an `int*` is allowed. The reverse is not true in a useful sense: a plain `int*` carries no length information, so the compiler cannot determine that the data pointed to is four elements long. (In fact, in C a function parameter declared as `int[4]` is silently adjusted to `int*`, so the length is lost at every function boundary.)
Stack vs Heap
So far we have been using arrays allocated on the stack, a region of memory whose creation and destruction is managed by the compiler. This memory is extremely fast to use: its layout is determined statically (at compile time) rather than dynamically (at run time), so it incurs no allocation overhead. The compiler can also optimize stack-allocated memory significantly because it has more information about object lifetimes. Typically, splitting a complicated expression into several intermediate variables incurs no runtime performance penalty: the compiler can determine the limited scope of each intermediate variable and simply substitute its value directly into the next expression.
The other type of memory available is the heap, a set of dynamically (runtime) allocated regions of memory made available to your program by the OS's kernel. This chain of allocation from your program to the kernel and CPU is incredibly complicated (but super cool) and so won't be covered here. The size of the stack varies, but ~8MB is typical. The heap on a modern computer is typically 1000x larger.
Let's see now how to make use of the heap:
```c
#include <stdlib.h> // for malloc() and free()

int* darr;
darr = malloc(sizeof(*darr) * 5); // create memory for 5 * sizeof(int)

// ... use the memory for something ...

// free the memory region
// note that darr still points to the same address
// (which is now invalid). This is called a "dangling pointer"
free(darr);
```
The C standard library function `malloc()` takes a size argument, the number of bytes to allocate, and returns a pointer to the first byte. In C, the return value is an untyped `void*` that can be assigned directly to a typed pointer, such as `darr`, allowing the memory to be interpreted as the desired data type. Usage of heap-allocated memory is no different from that of stack-allocated regions: the same subscript and pointer arithmetic operations are valid.
The difference is the lifetime of the memory: stack-allocated memory is managed implicitly by the compiler, whereas heap-allocated memory is managed explicitly by the programmer. Just as you allocate memory with `malloc()`, you deallocate it with `free()`. This fundamental difference opens up a whole aspect of software design that impacts everything from CPU design to application architecture: memory management. Significant engineering and theoretical work has gone into making memory management fast, safe, and easy to use. As an exercise for the reader, consider that `free()` takes only the original address, not a size argument, and what this must mean about the implementation of `malloc()` and `free()`.
The code sample above mentions a dangling pointer: a pointer whose memory has been freed but whose address is still held, so the pointer can still be dereferenced. The results of doing so are undefined; the address may be completely out of bounds, in which case any access causes application termination, or it may fall inside another valid region due to subsequent allocations, silently corrupting data. These kinds of memory errors can be very difficult to track down without proper tools.
Fundamentally, memory manipulation underlies all runtime operations of a program. In all but trivial cases, application designers must be aware of how memory is allocated, used, and destroyed, and must carefully balance runtime performance against memory utilization, especially in resource-constrained environments. Much of data structure design is based on how to manipulate memory efficiently. A solid understanding of the computer memory model, and of how an application uses memory, is absolutely critical. Being able to conceptualize a routine in terms of its memory requirements, usage, and layout aids understanding and enables complex algorithm design.