Let's say I have two vectors a and b, and I want to compute the result vector c[i] = a[i]*b[i]. The parts of a and b are constructed on different processors. For each part, I'm going to use MPI_Scatterv to distribute the elements of the vector to all processors, then use MPI_Gatherv to collect each local vector on the current processor. Then I'm going to use MPI_Allreduce to sum up the vectors across all processors, so that every processor has the final result. (The vectors on the different processors and the result vector all have the same size.)
I don't know why the code hangs when I run it. The problem starts when I pass myrank as the root in the MPI_Gatherv call.
"use gatherv to collect each local vector in the current processor."
No: collect the scattered data back onto root, not myrank, in the MPI_Gatherv call. It's a master-slave relationship (are we allowed to say that any more?). All processors need to issue this call, but most of them will be sending chunks of data, not receiving them.
If all you are going to do is sum the data, it's questionable whether you need MPI_Gatherv at all. You don't need to send the data back to root for that.
Thanks for your answer. I read in the MPI guidelines that the receiver can't be any processor other than root. I understand what you are saying, but I'm looking for a way to do something like this:
The vectors a and b are as follows:
These two vectors are created in parts on different processes.
Now each processor needs to do the multiplication. I want something that distributes each processor's part of the data to all processors, computes a[i]*b[i], collects the results into a vector on the current processor, and then sums them all up across all processors.
How can I do something like this? If I collect the pieces on the root processor, how can I sum them?
You are already summing them with the
MPI_Allreduce( ... MPI_SUM ...);
Each processor will do its own sum. The "reduce" part will combine those sums. The "All" part will then distribute the total sum to all processors.