unexplained behavior/unexpected result

Consider the following code. It exhibits undefined behavior when I add the block below (the lines from cout<<"this block added & below-->program UB"; onward) to the bottom of the program:
delete p2;
cout<<"this block added & below-->program UB";
p2 = 0;
cout<<" ----------p2 to be deleted and neutralized as a null pointer"<<endl;
I am pretty sure it's operator error, but it's been a while since I worked with pointers. It works fine up to delete p2.

I can only rid the code of the behavior by restarting the machine, editing it, and building it again. No amount of editing the file helps until I reboot. I could strip the affected file down to just int main() and it would still produce the error. Thanks for any help.

 1	#include<iostream>
 2	
 3	using namespace std;
 4	
 5	int main()
 6	{
 7		cout<<"hello"<<endl;
 8		double *p_ptr = new(double);
 9		double * p1 = new(double);
10		*p1 = 345.43;
11		double * p2 = new(double);
12		p2 = p1; //two pointers to same value
13		cout<<*p2<<endl;
14		cout<<"----------------------------fine to here"<<endl;
15	
16		delete p1;
17		p1 = 0;						//neutralize pointer
18		cout<<"--------------------add in p1 delete, null ptr here, fine"<<endl;
19		delete p2;
20		cout<<"this block added & below-->program UB";
21		p2 = 0;
22		cout<<" ----------p2 to be deleted and neutralized as a null pointer"<<endl;
23		//
24		return 0;
25	}
Um, what? You're leaking some memory (on line 12 you overwrite your only pointer to the memory allocated on line 11) and double-freeing other memory (line 12 points both p2 and p1 at the same place, and then you delete both, on lines 16 and 19).
I don't see what your question is here. You've written some bad code that exhibits undefined behaviour by deleting the same allocated memory block twice (the memory allocated on line 9 is deleted twice), and has two memory leaks (the memory allocated on lines 8 and 11 is never deleted). What do you want us to do about it?

Is your question "why is this undefined behaviour"? Or is it "how am I deleting the same memory twice"?

double *p_ptr = new(double);
double * p1 = new(double);
*p1 = 345.43;
double * p2 = new(double);
p2 = p1; //two pointers to same value 

Say hello to your new friend, Memory Leak: https://en.wikipedia.org/wiki/Memory_leak


	delete p1;
	p1 = 0;	
	cout<<"--------------------add in p1 delete, null ptr here, fine"<<endl;
	delete p2; //p2 isn't pointing to a valid item anymore 

You've already deleted p1. p1 and p2 are pointing to the same thing, and since the thing p1 was pointing to was just deleted a few lines ago, the thing p2 is pointing to is no longer valid.
You've made another friend, Dangling Pointer: http://stackoverflow.com/questions/17997228/what-is-a-dangling-pointer

If you try to 'delete' a dangling pointer, it won't go over well. Deleting the same block twice is undefined behaviour: in practice the runtime often detects the corrupted heap and aborts your program with a memory access violation, but nothing about the outcome is guaranteed.

Why do you want two pointers to point to the same value? It's generally a good idea to avoid this. If you must share ownership, consider smart pointers.

You're also not deleting all of your memory. You forgot about 'p_ptr'. Both it and the double that p2 was originally pointing to are leaked.

All of this was already said by the two above. I should just refresh the page before posting to check whether others have said the same thing as me.
Hi,

First up, note that all of what I am about to say falls into the category of: Most of the time it's not advised to do these things, but there are some circumstances where experts do have a need for them. Some other things that fit into the same category are: globals, goto.

The big question here for me is: Why are you using pointers / new / delete with C++ at all? See below about C programming.

IMO, there is no need for any of those in modern C++; in fact they create problems, namely code never reaching a delete because an exception is thrown. That is quite apart from simply forgetting to match each new with a delete.

Instead, so much can be done with the STL containers: the memory management is done for you, there is bounds checking, and there are heaps of other really helpful facilities such as the algorithms. References are also a big thing; they are better than raw pointers.

With polymorphism one needs a container of pointers, but one can't directly have a container of references, so there is std::reference_wrapper to achieve this. It essentially wraps a reference in a copyable, pointer-like object.

This all comes about because of the evolution of the languages. In C there was malloc and free; then C++ eventually had new and delete. Then the problems with those became apparent, so it was better to use smart pointers. Smart pointers had been around for a while, and it turned out they were a better solution. But even better was the STL, which is designed to work properly and do the right thing in various situations.

The trouble is that lots of teachers, lecturers and authors still teach new & delete.

C++ also has move semantics and perfect forwarding, so in certain circumstances one can use pass by value, where a move constructor is available or copy elision is employed by the compiler.

There is heaps of good stuff to read here, unfortunately the internal links don't work:

https://github.com/isocpp/CppCoreGuidelines/blob/master/CppCoreGuidelines.md


C Programming

This is where pointers are used a lot, but be aware that C is a whole different mindset. Many of the bells & whistles that come with C++ aren't there in C, so you will find yourself being responsible for all kinds of checking: diligence with initialisation, bounds checking, detailed knowledge of how the library functions work, and const correctness, to name a few.

So if you are keen to learn about pointers, try doing it with C, rather than using new & delete - which just leads to bad C++ habits IMO.
Ok, I cleaned up the pointers. Now the program works fine. I'm still trying to understand memory leaks: are there any left?

The trouble is that lots of teachers, lecturers and authors still teach new & delete.

STL containers, etc., are coming later. Thx for github link.

You've written some bad code that exhibits undefined behaviour by deleting the same allocated memory block twice (the memory allocated on line 9 is deleted twice), and has two memory leaks...


Yes, it is nasty code, but thank you for your patience; I learned something from it.

#include<iostream>

using namespace std;

int main()
{
	cout<<"hello"<<endl;
	
	double *p_ptr = new(double);
	double * p1 = new(double);
	*p1 = 345.43;
	double * p2 = new(double);
	*p2 = 3.14159;
	cout<<*p2<<endl;
	
	delete p1;
	p1 = 0;						
	delete p2;
	p2 = 0;
	delete p_ptr;
	p_ptr = 0;
    return 0;
}
Hey IDM

By graduating to the more sophisticated pointer, memory management and container methods in C++ that you mention, are you losing speed by not using raw pointers, etc.? A specific example: you are writing an application in C++ for processing video using the OpenCV library. OpenCV is already optimized, but a lot of other code has to be written to make it all come together. Will staying away from raw pointers decrease performance? Is that a tradeoff?
By graduating to more sophisticated pointer, memory management and container methods in c++ that you mention, are you losing speed by not using raw pointers, etc?

You may or may not be, depending on the usage. For most usage of raw pointers, you have to do the bookkeeping that is automatically done for you when using another method, so you don't gain anything by using them.

On the other hand there are cases where the extra overhead isn't needed, in which case you may gain something by using raw pointers. But, you should prefer to not use manual memory management until and unless it is found to be a performance problem that needs to be addressed.
Topic archived. No new replies allowed.