Confusion understanding dynamic memory in C++

Dynamic memory in C++ means we allocate only the memory we need, right? So in a sense it is created at run time: for example, we can get the size we need as input from the user and then allocate that much memory.

So here I allocate memory for one element:

int *x = new int[1];
This allocates an array of size 1, which should hold only one element. But the program I wrote below works for 100 elements too. Why?

#include <iostream>

int main() {
    int *x = new int[1];
    for (int j = 0; j < 100; j++) {
        x[j] = j;
    }
    for (int k = 0; k < 100; k++) {
        std::cout << x[k] << std::endl;
    }

    return 0;
}

Here is the complete code link: http://cpp.sh/8oyb7

The above program gives an error in MS Visual Studio, but works with other compilers, like Clang and GNU C++ (GCC).

Sorry if the question is stupid, but I want to understand this concept clearly. Thank you so much.

> it works for 100 elements too why?
Because here, "works" really means "appears to work".

Never confuse "it works" with "bug free".

It's basically luck because you only call new once, and you never call delete. You trash the memory pool with your buffer overrun, but you typically only find out when the next call to new/delete comes along.

As soon as you start adding more allocations, and start freeing the memory after you're done, you're on a very slippery slope to disaster.

For example.
#include <iostream>
int main() {
    int *x = new int[1];
    for (int j = 0; j < 20; j++) {
        x[j] = j;
    }
    int *y = new int[1];
    for (int k = 0; k < 20; k++) {
        std::cout << x[k] << std::endl;
    }
    delete [] x;
    delete [] y;
    return 0;
}

This produces garbage output, because the second new claimed what is rightfully its own memory: the region that the overrun of x had already scribbled into.

Whereas this crashes and burns on exit, because the internal pool structure for the y pointer was trashed by overrunning x.
#include <iostream>
int main() {
    int *x = new int[1];
    int *y = new int[1];
    for (int j = 0; j < 20; j++) {
        x[j] = j;
    }
    for (int k = 0; k < 20; k++) {
        std::cout << x[k] << std::endl;
    }
    delete [] x;
    delete [] y;
    return 0;
}


> The above program gives an error in MS Visual Studio, but works with other compilers, like Clang and GNU C++ (GCC).
Yes, differing results between compilers are a sure sign your code is wrong.
Only if all three compilers produce code that gives you the same result (actually six builds, if you test both debug and release) is your code in pretty good shape.


> it works for 100 elements too why?
It does not. You're writing into memory that is not yours; that is undefined behavior, so what happens during execution is not predictable. That's probably the worst-case scenario for a program: these bugs are hard to find, and the crash may happen much later.

For dynamic memory, the best option is to use std::vector or std::string.
> The above program gives an error in MS Visual Studio, but works with other compilers, like Clang and GNU C++ (GCC).

The program does not work with any compiler.

MS Visual Studio (in debug mode?) generates additional code for some checks. The additional code makes the program slower, but it does spot your logic error.
The other compilers trust that you know what you are doing.

With std::vector you can choose between "trust me" and "check me":
std::vector<int> x( 1 );
x[ 2 ] = 42;     // undefined behaviour
x.at( 2 ) = 42;  // throws std::out_of_range

(MSVS debug build probably has checks in x[2] too.)
Topic archived. No new replies allowed.