Vector size

I have a function like the one below.
At line 49 the size of p is 6, but inside the while loop, when the MPI_MxV function is called, the size of p becomes zero and I don't know why. Any help would be appreciated.


	if (myrank == 0) {

		for (int ii = 0; ii < Diag.size(); ii++) {
			r[ii] = b[ii];
		}

		while ((err > tol) && (iter < iter_max)) {

			rz = dotproduct(r, z);
			pAp = dotproduct(p, Ap);
			double alpha = rz / pAp;
			double normR = 0.0;
			d = MPI_dotproduct(z, r);
			std::cout << b.size() << std::endl;
			for (int ii = 0; ii < b.size(); ii++) {

				x[ii] += alpha * p[ii];
				normR += r[ii] * r[ii];
			}
			double beta = dnew / (d + 1e-10);

			for (size_t ii = 0; ii < b.size(); ii++) {

				p[ii] = z[ii] + beta * p[ii];
			}
			c = d;
			iter = iter + 1;
			err = normR * inormB;
		}
	}
}


Only the root processor (myrank==0) has a clue what p is. None of the other processors run anything inside that if block.
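
To see why this matters, here is a minimal sketch of the failure mode. I'm assuming your MPI_dotproduct wraps a collective such as MPI_Allreduce (a guess at its internals); a collective must be called by every rank in the communicator, so if only rank 0 reaches it inside the if block, rank 0 waits forever for ranks that never arrive:

#include <mpi.h>
#include <cstddef>
#include <vector>

// Hypothetical helper mirroring the poster's MPI_dotproduct: a distributed
// dot product built on MPI_Allreduce. MPI_Allreduce is a collective, so
// EVERY rank in the communicator must call it.
double MPI_dotproduct(const std::vector<double>& a, const std::vector<double>& b)
{
	double local = 0.0;
	for (std::size_t i = 0; i < a.size(); ++i)
		local += a[i] * b[i];

	double global = 0.0;
	MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);
	return global;
}

int main(int argc, char** argv)
{
	MPI_Init(&argc, &argv);
	int myrank;
	MPI_Comm_rank(MPI_COMM_WORLD, &myrank);

	std::vector<double> r(6, 1.0), z(6, 1.0);

	if (myrank == 0) {
		// Only rank 0 reaches the collective inside MPI_dotproduct;
		// the other ranks never call MPI_Allreduce, so rank 0 blocks
		// here indefinitely.
		double d = MPI_dotproduct(z, r);
		(void)d;
	}

	MPI_Finalize();
}

So the fix is structural: either every rank executes the solver loop, or everything stays on rank 0 and MPI does nothing for this part.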
Should I broadcast p?


The rest of the processors currently don't see anything inside your if block, so it is irrelevant whether you broadcast p or not.
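
For completeness: if you did restructure things so that every rank runs the loop, broadcasting p is a single collective that all ranks must call. A sketch, assuming p has already been resized to the same length on every rank:

#include <mpi.h>
#include <vector>

// Sketch only: call this on EVERY rank, outside any "if (myrank == 0)" guard.
// After the call, all ranks hold rank 0's contents of p.
void broadcast_p(std::vector<double>& p)
{
	MPI_Bcast(p.data(), static_cast<int>(p.size()), MPI_DOUBLE, 0, MPI_COMM_WORLD);
}

If the ranks don't yet agree on the length, broadcast the size first, resize, and then broadcast the data.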

I'm sorry, but this application is not one in which MPI is going to help you. You will spend far too long sending and receiving bits of data between processors. It will run many times slower than the equivalent serial version.

MPI works best when:
- you can partition your domain into essentially independent chunks (not sparse matrices);
- the amount of information sent between parts is minimal.
MPI_Send and MPI_Recv are expensive operations compared with local computation.
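
For contrast, this is the shape of problem where MPI does pay off: each rank owns an independent chunk and a single collective combines the partial results. A sketch with made-up sizes:

#include <mpi.h>
#include <cstdio>
#include <vector>

int main(int argc, char** argv)
{
	MPI_Init(&argc, &argv);
	int myrank, nprocs;
	MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
	MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

	// Each rank owns an independent chunk; no per-element communication.
	const int chunk = 1000;
	std::vector<double> a(chunk, 1.0), b(chunk, 2.0);

	double local = 0.0;
	for (int i = 0; i < chunk; ++i)
		local += a[i] * b[i];

	// One collective combines the partial sums: minimal communication.
	double global = 0.0;
	MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

	if (myrank == 0)
		std::printf("dot = %f\n", global);   // expect 2 * chunk * nprocs

	MPI_Finalize();
}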

Yes, you are right. Thanks