#include <iostream>
#include <cstdlib>   // for system()

int* growArray(int* p_values, int* size);
void printArray(int* p_values, int size, int elements_set);

int main()
{
    int next_element = 0;
    int size = 10;
    int* p_values = new int[size];
    int val;

    std::cout << "Please enter a number: ";
    std::cin >> val;
    while (val > 0)
    {
        if (size == next_element + 1)
        {
            p_values = growArray(p_values, &size);
        }
        p_values[next_element] = val;
        next_element++;
        std::cout << "Current array values are: " << std::endl;
        printArray(p_values, size, next_element);
        std::cout << "Please enter a number (or 0 to exit): ";
        std::cin >> val;
        system("CLS");
    }
    delete[] p_values;
    return 0;
}

void printArray(int* p_values, int size, int elements_set)
{
    std::cout << "The total size of the array is: " << size << std::endl;
    std::cout << "Number of slots set so far: " << elements_set << std::endl;
    std::cout << "Values in the array: " << std::endl;
    for (int i = 0; i < elements_set; ++i)
    {
        std::cout << "p_values[" << i << "] = " << p_values[i] << std::endl;
    }
}

int* growArray(int* p_values, int* size)
{
    *size *= 2;
    int* p_new_values = new int[*size];
    for (int i = 0; i < *size; ++i)
    {
        p_new_values[i] = p_values[i];
    }
    delete[] p_values;
    return p_new_values;
}
Ok so this code compiles and works, but I don't understand how. Well, I understand what is happening and how everything is working except for these lines:
*size *= 2;
int* p_new_values = new int[*size];
for (int i = 0; i < *size; ++i)
{
    p_new_values[i] = p_values[i];
}
The size has gone from 10 to 20, but my "p_values" array was initialized with 10 slots, not 20. In the for loop, I use size's new value of 20 as the stopping condition. I'm confused because p_values only has 10 slots, yet the loop accesses more than 10 of them and it still works.
For example, on the 11th iteration the line would be
p_new_values[10] = p_values[10];
but p_values is only supposed to range from 0 to 9. Has p_values' range changed?
The idea of dynamic memory is that you do not need to know in advance exactly how many elements to initialize an array with. The memory is allocated as needed, based on whatever rules you want to apply.
In the 7 lines of code that you isolated, line by line:
1. The variable called size is doubled (notice that size is passed in as a pointer, so doubling it affects the original variable in main).
2. A new array is created that is twice the size of the old one. This is where "new" matters: without it, you could only use a constant expression as the array size; ordinary variables aren't allowed in a normal array declaration.
3-7. The for loop then copies the contents of the old array into the new one.
Finally, the new array is returned to the calling function, where the pointer you had to the old array is reassigned to point at the new, larger array.
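The first two steps can be checked in isolation. This is just a sketch with made-up helper names (doubleSize, makeArray), not code from the book:

```cpp
// Step 1: writing through a pointer parameter changes the caller's variable.
void doubleSize(int* size)
{
    *size *= 2;  // main's own variable is modified, not a copy
}

// Step 2: new[] accepts a size known only at run time, where a plain
// "int arr[size]" declaration would require a constant expression.
int* makeArray(int size)
{
    return new int[size]();  // trailing () value-initializes (zeros) the slots
}
```

Calling doubleSize(&size) from main really does turn 10 into 20 in main's variable, which is why growArray can report the new size back without returning it.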
Yep. The data is being allocated on the heap rather than the stack, and the data shoved into the array beyond the previous array's size is just garbage [edit: which JLBorges points out is undefined behavior. Compilers tend to handle it in different ways: some load garbage, some load zeros, and conceivably some may choose to fail with an error.] If you modify the print function to print the entire size of the array, you will see either all zeros or random numbers...
Note: This may be a way to hack into older memory on some systems, if you knew what you were looking for.
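For reference, the out-of-bounds read goes away if the copy loop stops at the old element count. This is a hypothetical rewrite of growArray, not the book's version:

```cpp
// Safer variant: remember the old size before doubling, and copy only
// the elements that actually exist in the old array.
int* growArray(int* p_values, int* size)
{
    int old_size = *size;                  // the old array has this many slots
    *size *= 2;                            // caller's size variable is doubled
    int* p_new_values = new int[*size]();  // () zero-initializes every slot
    for (int i = 0; i < old_size; ++i)     // stop at the OLD bound
    {
        p_new_values[i] = p_values[i];
    }
    delete[] p_values;
    return p_new_values;
}
```

The second half of the new array is then well-defined zeros instead of indeterminate values, and no read ever goes past the end of the old allocation.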
But wouldn't the same problem persist in your example:
int* growArray(int* p_values, int* size)
{
    *size *= 2; // size is now 20
    int* p_new_values = new int[*size]; // new memory allocated for an array with 20 slots
    for (int i = 0; i < *size; ++i) // size here is 20, which means that i will go from 0 to 19
    {
        p_new_values[i] = p_values[i]; // shouldn't p_values here still be unable to go past 9, because it was allocated to store only 10 slots?
    }
    delete[] p_values;
    return p_new_values;
}
Ok so newbieg was right. It's just shoving random numbers in there:
Perhaps think about what std::vector does: allocate space to accommodate the larger capacity, std::move the original data to the new location, place any new data at the end after the original data, and set the end iterator to one place past the data. That way the garbage in any of the unused spots after the data is never accessed. Calls to push_back, and anything else that inserts or removes data, also update the end iterator.
Edit:
STL containers have two separate concepts: the size (the number of elements in use, up to the end iterator) and the capacity (the actual allocated space available before any reallocation is needed).
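You can watch the two concepts move independently with the standard std::vector interface; a small sketch (the helper name sizeVsCapacity is made up):

```cpp
#include <vector>
#include <cstddef>
#include <utility>

// Reserve room for 10 elements, push 4, and report {size, capacity}:
// size tracks elements actually stored, capacity tracks allocated room.
std::pair<std::size_t, std::size_t> sizeVsCapacity()
{
    std::vector<int> v;
    v.reserve(10);               // capacity is now at least 10; size is still 0
    for (int i = 0; i < 4; ++i)
        v.push_back(i);          // size grows to 4; no reallocation is needed
    return { v.size(), v.capacity() };
}
```

Only v[0] through v[size()-1] are real elements; the remaining capacity is raw room for growth that the container never lets you read.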
Perhaps the author of the text wanted to point out the weird behavior; I really don't know why they chose to load garbage data into the array, especially since they could have copied only up to half of "size", which is the end of the original array.
I am in complete agreement with JLBorges that this is bad coding practice. Undefined behavior works only until it doesn't, and then you are left with code that is riddled with bugs (and, in some cases, the possibility of damaging your machine). It is always worth avoiding undefined behavior.
It seemed to me that the code example only demonstrated how to dynamically assign the size of an array. In doing so, the size of the array would be increased and the old data would be assigned, but the remaining slots, being uninitialized, would contain whatever happened to be in those memory locations (indeterminate data). I can see how in an academic exercise it might be assumed that the reason for increasing the size of an array would be to accommodate data which would subsequently be assigned to those memory locations.
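To make the std::vector point concrete: the whole grow-and-copy dance from the exercise disappears if the input loop is redone with a vector. This is a sketch under my own assumptions (the function collect and its sentinel rule mirror the original while loop; they are not the textbook's code):

```cpp
#include <vector>

// Hypothetical rewrite of the exercise's input loop: push_back grows the
// storage automatically (reallocating and copying behind the scenes), and
// only elements that were actually stored can ever be read back, so no
// indeterminate values are touched.
std::vector<int> collect(const std::vector<int>& inputs)
{
    std::vector<int> values;
    for (int val : inputs)
    {
        if (val <= 0) break;    // same "0 to exit" sentinel as the original
        values.push_back(val);  // no manual new[]/delete[] anywhere
    }
    return values;
}
```

The vector still over-allocates internally, exactly as described above, but its size/capacity split guarantees the uninitialized tail is never handed to the user.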