Free heap block modified after freed

So, I've seen a few (okay, quite a few) posts about this. I know that it means I'm referencing pointers that have been deleted - I'm pretty sure I even know what is wrong with my code. Sadly what I do not know is how to correctly fix it.

The goal of the project is simple learning (re-learning, at that, as I've been away from my compiler for several years).

The project is to create a simple feed-forward artificial neural network. Once I get the concept working correctly, I'll play around with the structure to make it do something useful.

The implementation is to create a neuron class, then create x number of arrays of them, then link layers to other layers depending on where they are logically in my mind.

I have a class... 'Neuron'. Each neuron contains your typical stuff... inputs, weights, and an output.

class Neuron{
public:
	Neuron(int type);
	~Neuron();
	float GenerateOutput();
	bool LinkInput(Neuron* Source);

	bool SetInput(float input);
private:
	Neuron* inputs;
	float* inputWeights;
	float output;
	int inputCount;
	int MyType;
	int myRand;
};


The way I was hoping to implement the inputs was to create an array of Neurons (which in my code is simply Neuron* inputs;).

Within my main function, I create the layers and attempt to link them (this code is truncated and does not include all layers):
	Neuron* InputLayer = (Neuron*)calloc(Layer_Input_NodeCount,sizeof(Neuron));
	for (int i=0;i<Layer_Input_NodeCount;i++)
		InputLayer[i]= Neuron(Node_Type_Input);
	Neuron* HiddenLayer = (Neuron*)calloc(Layer_Hidden_NodeCount,sizeof(Neuron));
	for (int i=0;i<Layer_Hidden_NodeCount;i++){
		HiddenLayer[i]=Neuron(Node_Type_Hidden);
		for (int j=0;j<Layer_Input_NodeCount;j++)
			HiddenLayer[i].LinkInput(&InputLayer[j]);
	}


And this is where I think my problem lies... I think that the pointer I'm passing into the below function is going by the wayside after the function returns? Is this correct?

bool Neuron::LinkInput(Neuron* Source){
	if (inputCount>=Node_MaximumInputs){
		cout<<"Maximum Input Count has been reached\r\n";
		return false;
	}
	if (MyType==Node_Type_Input){
		cout<<"Cannot link node to an Input\r\n";
		return false;
	}

	inputs[inputCount++]=*Source;

	return true;
};
"I think that the pointer I'm passing into the below function is going by the wayside after the function returns?"
Sorry, I can't understand what you mean; could you rephrase it?

Did you reserve space for inputs? I suggest using a container, like std::vector.
You are storing a copy, and that's bad. If you want to link the layers, you should store pointers.

It may be a good idea to create a layer class, as all the neurons in the same layer share the same inputs.
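
For instance, a rough sketch of what such a layer class could look like (illustration only; the names are placeholders, and it assumes Neuron ends up with a proper copy constructor/assignment/destructor):

#include <cstddef>
#include <vector>

// Illustration: a layer owns its neurons (Neuron is your class from above)
// and can link every one of its neurons to every neuron of a previous layer.
class Layer{
public:
	Layer(int nodeCount, int type){
		neurons.reserve(nodeCount);
		for (int i=0;i<nodeCount;i++)
			neurons.push_back(Neuron(type));
	}
	void LinkTo(Layer& previous){
		for (std::size_t i=0;i<neurons.size();i++)
			for (std::size_t j=0;j<previous.neurons.size();j++)
				neurons[i].LinkInput(&previous.neurons[j]);
	}
private:
	std::vector<Neuron> neurons;	// the layer owns its neurons
};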
To rephrase, as requested:

If I create a variable in a function, say as below:
void blah(){
	int i=1;
}

The variable "i" gets automatically obliterated when the function returns... at least this is my understanding?

Relevant to my post, my question is whether *Source in the LinkInput function remains a valid pointer after the function returns.

You asked about reserving space for inputs; this is my constructor, where I allocate the maximum size right when the neuron is created:
Neuron::Neuron(int type){
	MyType=type;
	srand(time(NULL));
	if(type!=Node_Type_Input){
		inputs = (Neuron*)calloc(Node_MaximumInputs,sizeof(Neuron));
		inputWeights=(float*)calloc(Node_MaximumInputs,sizeof(float));
		for (int i=0;i<Node_MaximumInputs;i++){
			inputWeights[i]=(float)rand()/RAND_MAX;
		}
		inputCount=0;
	}
	else if (type==Node_Type_Input){
		inputWeights=(float*)calloc(1,sizeof(float));
		inputWeights[0]=(float)rand()/RAND_MAX;
		inputCount=1;
	}
	output=0;
	myRand=rand()%10;
};


You mention creating a layer class; I wanted to avoid this, so as to increase the flexibility and allow for recurrent ANNs without re-coding (for example).

Lastly, you mention that I'm storing a copy rather than linking... I had thought I was linking! Could you please let me know what I should be using as a parameter in my LinkInput function, and how I should be referencing that parameter when I call the function from main?
Yes, when a variable goes out of scope, it gets 'destroyed'.
"my question is whether *Source in the LinkInput function remains a valid pointer after the function returns."
*Source is not a pointer, but a Neuron.
Source is a pointer.

So you should have
class Neuron{
  std::vector<Neuron*> inputs; //storing pointers (if you prefer, Neuron **inputs;)
//...
bool Neuron::LinkInput(Neuron* Source){
//...
	inputs[inputCount++]=Source; //adding a pointer (with the vector you would call inputs.push_back(Source) instead)
	return true;
};
The link will remain valid as long as the pointed-to object exists (as long as you don't free(InputLayer);).

IIRC, free() will not call the destructors, so you will have memory leaks.
You may want to use new[]/delete[] or a container instead.
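
For instance, a minimal sketch of the new/delete route, reusing the names from your code (untested, just to show the idea):

// Allocate each Neuron with new so its constructor runs,
// and release it with delete so its destructor runs.
Neuron** InputLayer = new Neuron*[Layer_Input_NodeCount];
for (int i=0;i<Layer_Input_NodeCount;i++)
	InputLayer[i] = new Neuron(Node_Type_Input);

// ... build and use the network ...

for (int i=0;i<Layer_Input_NodeCount;i++)
	delete InputLayer[i];	// destructors run here
delete[] InputLayer;		// then release the array of pointers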
Thank you - I do prefer Neuron **inputs over the use of a vector... would rather learn the pointers and how to use them properly first.

I implemented the following changes:
class Neuron{
//...
private:
	Neuron **inputs;  //previously Neuron *inputs;
//...
};

bool Neuron::LinkInput(Neuron* Source){
//...
	inputs[inputCount++]=Source; //previously inputs[inputCount++]=*Source;
//...
};


Stepping through the debugger one line at a time, it's interesting... the exception happens at the line marked below, during the second iteration of the outer for loop (controlled by variable i):
Neuron* HiddenLayer = (Neuron*)calloc(Layer_Hidden_NodeCount,sizeof(Neuron));
	for (int i=0;i<Layer_Hidden_NodeCount;i++){
		HiddenLayer[i]=Neuron(Node_Type_Hidden);	// <-- exception thrown here, on the second pass through i
		for (int j=0;j<Layer_Input_NodeCount;j++)
			HiddenLayer[i].LinkInput(&InputLayer[j]);
	}

I guess it is an issue with the destructor and the assignment operator.
HiddenLayer[i]=Neuron(Node_Type_Hidden); constructs an object, copies it, and destroys the temporary.
If you are making a shallow copy, that means the reserved space was freed before you used it.
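
To make it concrete, here is a stripped-down model of the same situation (not your actual code, and I'm assuming your ~Neuron frees 'inputs'):

#include <cstdlib>

// A class that owns a raw buffer but relies on the compiler-generated
// (shallow) copy and assignment.
struct Owner{
	float* buf;
	Owner()  { buf=(float*)calloc(4,sizeof(float)); }
	~Owner() { free(buf); }
};

int main(){
	Owner a;		// a.buf -> block A
	a = Owner();	// the temporary owns block B; the shallow operator= copies
					// the pointer, so a.buf -> block B (block A leaks); then the
					// temporary is destroyed and its destructor frees block B
	a.buf[0]=1.0f;	// writing into freed memory: "heap block modified after freed"
}					// a's destructor frees block B a second time (double free)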
After reading up on shallow vs. deep copies, do you then think that defining an = operator which copies the individual values in would work?

for example:
Neuron& Neuron::operator=(const Neuron &SourceNeuron){
	output=SourceNeuron.output;
	MyType=SourceNeuron.MyType;
	for (int i=0;i<SourceNeuron.inputCount;i++)
		inputs[i]=SourceNeuron.inputs[i];
	//....carry on for all other values in the class....//
	return *this;
};
There is another issue: calloc() doesn't call the constructor, it just sets the memory to 0.
That means the pointers are NULL.

new does call the constructor, but as you don't have a default constructor, array creation will get a little more complex (you will end up emulating std::vector).

Another possibility is to use a default constructor that does nothing and later call an init() method, but that is error prone.
Ah, so if I understand you correctly, the following is happening:
1) I allocate memory... say, 10 bytes (no idea what it actually is) with calloc
2) I assign a neuron to the first element in the array
3) I try to assign a neuron to the second element of the array, only to find that, because my Neurons have dynamically assigned memory and end up bigger than the calloc expected, my array is out of allocated space and overflowing.

Further google searching, now that I know more about what to search for, has shown that what I am trying to do in its current form is impossible.

You had suggested using vectors earlier, which I suspect I'll have little choice about now. As I understand it, vectors manage their memory dynamically?

I'm in the process of implementing vectors now, but I've hit a bump, and I'm surprised at how little there is on Google... it probably means I'm searching for the wrong terms.

If I want to call a member function of one of the objects in my vector, for example:
	vector<Neuron*> HiddenLayer;
	for (int i=0;i<Layer_Hidden_NodeCount;i++){
		HiddenLayer.push_back(new Neuron(Node_Type_Hidden));
		for (int j=0;j<Layer_Input_NodeCount;j++)
			HiddenLayer[i].LinkInput(&InputLayer[j]);
	}


How would I do so? My compiler claims that HiddenLayer[i] is not a class type when I try to call LinkInput().

EDIT:
Nevermind... -> instead of .
A few things.
If you use dynamic allocation inside your class you will need to write the copy constructor, the assignment operator, and the destructor (as a rule, 'if you need any of them, you need all three').
Otherwise you risk memory leaks and double deletes. Or you could use smart pointers or containers that already handle that.
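
For reference, a minimal sketch of the rule of three for a class that owns a raw buffer (illustration only, not your Neuron class):

// Copy constructor, assignment operator and destructor all manage the same
// dynamically allocated buffer, so copies are deep and nothing is freed twice.
class Buffer{
public:
	Buffer(int n) : size(n), data(new float[n]()) {}
	Buffer(const Buffer& other) : size(other.size), data(new float[other.size]){
		for (int i=0;i<size;i++) data[i]=other.data[i];	// deep copy
	}
	Buffer& operator=(const Buffer& other){
		if (this!=&other){
			float* fresh = new float[other.size];
			for (int i=0;i<other.size;i++) fresh[i]=other.data[i];
			delete[] data;	// release the old buffer only after the copy succeeded
			data = fresh;
			size = other.size;
		}
		return *this;
	}
	~Buffer(){ delete[] data; }	// release the owned buffer exactly once
private:
	int size;
	float* data;
};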

An std::vector works as a dynamically sized array. It has a capacity; if you try to exceed it, the vector will reallocate all its members. That could cause trouble with links: if the object you were pointing to was reallocated, then the pointer is invalid.
You can avoid reallocation if you reserve() enough space.

vector<Neuron> HiddenLayer; //storing objects
HiddenLayer.reserve(Layer_Hidden_NodeCount); //to avoid reallocation
	for (int i=0;i<Layer_Hidden_NodeCount;i++){
		HiddenLayer.push_back( Neuron(Node_Type_Hidden) );
		for (int j=0;j<Layer_Input_NodeCount;j++)
			HiddenLayer[i].LinkInput(&InputLayer[j]);
	}
Here you want ownership of the neurons (if the container dies, all the members are killed).

class Neuron{
  std::vector<Neuron*> inputs; //storing pointers 
Here you don't have ownership. You just need links, but you are not responsible for the objects.
There is no risk of invalid links (if all were well constructed) as long as all the 'neurons' live as long as the 'network'.
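
Putting the two pieces together, roughly (untested sketch, reusing your names):

// The InputLayer becomes an owning vector too, and it is reserve()'d up front
// so the &InputLayer[j] pointers handed to LinkInput are never invalidated by
// a reallocation.
vector<Neuron> InputLayer;
InputLayer.reserve(Layer_Input_NodeCount);
for (int i=0;i<Layer_Input_NodeCount;i++)
	InputLayer.push_back( Neuron(Node_Type_Input) );

vector<Neuron> HiddenLayer;
HiddenLayer.reserve(Layer_Hidden_NodeCount);
for (int i=0;i<Layer_Hidden_NodeCount;i++){
	HiddenLayer.push_back( Neuron(Node_Type_Hidden) );
	for (int j=0;j<Layer_Input_NodeCount;j++)
		HiddenLayer[i].LinkInput(&InputLayer[j]);	// stores a pointer into InputLayer
}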
"I do prefer Neuron **inputs over the use of a vector... would rather learn the pointers and how to use them properly first."


Why? It's like learning how to bleed brake lines or gap spark plugs before learning how to drive a car. While theoretically beneficial, it won't get you on the road any faster.
Ne555, thank you for your assistance - I'm running along with the vector now, implementing your suggestions - they make sense.

Cubbi, you're right that a vector will get me on the road faster, to a working program. But it won't really increase my knowledge (though in truth I am learning about vectors, so that's not too bad). My logic is that "vector, do this for me" involves less learning than coding things manually. I'm re-inventing (or reverse engineering) the wheel, so to speak, to further my understanding.

The program itself has no purpose yet... I'll probably think of something, possibly market analysis (during which point I'll re-code it and use CUDA), but as of now it's just an ANN without a training dataset.