MPI Printing result

I have a vector of vectors and I sent it to all processors. For example:
main()
{
    if (myrank == 0)
    {
        MPI_Send();
    }
    else
    {
        MPI_Recv();

        printf("process %d printing vector:\n", myrank);

        for (int i = 0; i < vector.size(); i++)
        {
            for (int j = 0; j < vector[i].size(); j++)
                printf("%d\t", vector[i][j]);
            printf("\n");
        }
    }

But it prints nothing. How can I see the matrix on each processor?
1:32: warning: ISO C++ forbids declaration of 'main' with no type [-Wpedantic]
 In function 'int main()':
3:30: error: 'myrank' was not declared in this scope
5:42: error: 'MPI_Send' was not declared in this scope
9:42: error: 'MPI_Recv' was not declared in this scope
11:56: error: 'printf' was not declared in this scope
13:29: error: 'vector' was not declared in this scope
20:1: error: expected '}' at end of input

You say that you have a vector, but we can't see it.
(For example, if vector.size()==0, then the loops have nothing to do.)

What do the MPI_Send() and MPI_Recv() do?
The code is long; that's why I only explained my question. It's not real code. In the question above I just want to know whether other processors can do some printing, or whether only the master processor can do this kind of stuff.
I just want to know whether other processors can do some printing

Now that is a question.

Imagine that something starts a process on a remote machine that has no monitor, no keyboard, no terminal, and no shell. Where are the stdin and stdout of that process connected?

A process could open a file and write to it. Note, though, that if multiple processes share a filesystem, then each should write to its own unique file; appending text asynchronously to a common file is mind-boggling at best.
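For illustration, a minimal sketch of that one-file-per-process idea (the out_rank_<rank>.txt naming scheme is just an assumption):

#include <fstream>
#include <string>
#include "mpi.h"

int main( int argc, char* argv[] )
{
   MPI_Init( &argc, &argv );
   int rank;
   MPI_Comm_rank( MPI_COMM_WORLD, &rank );

   // Each rank writes to its own file, so output from different processes never interleaves
   std::ofstream out( "out_rank_" + std::to_string( rank ) + ".txt" );
   out << "Hello from rank " << rank << '\n';

   MPI_Finalize();
}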
@resabzr,
If you want to send anything to all processors, use MPI_Bcast, not MPI_Send.
Or, if you want to distribute parts of it, use MPI_Scatter or MPI_Scatterv (see the sketch at the end of this post).

Either way, you have been told multiple times that you can't send a vector of vectors directly; you can only send a 1-D contiguous array.

Yes, every other processor can write to the screen (or anywhere else), but you have no control over the order in which they do so, so this is potentially a shambles.

Your "pseudocode" gives insufficient information to answer your question.
@lastchance thanks for your answer. I understood what you said in response to my first question about send and receive, when you gave me an example of sending a flattened vector. Here I wrote a function; inside the function I flattened the vector of vectors into a 1-D vector and then sent it to the other processors. I also have another function for receiving, which is called on the other processors. Now each processor has received one flattened vector, and inside this receiving function I use push_back to rebuild the vector of vectors (roughly like the sketch below). After this step I want to print the vector of vectors. So I am asking why I don't see anything on screen when I put the print code on the other processors.
I am not using MPI_Bcast or MPI_Scatter because I need to send each part of the vector of vectors to a different processor.
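For reference, a minimal sketch of what such send/receive helpers could look like; the helper names send_rows/recv_rows, the message tags, and the sample data are invented and are not the actual code from the thread:

#include <cstdio>
#include <vector>
#include "mpi.h"
using namespace std;

// Hypothetical helper: flatten M and send it to rank 'dest' (row count, row sizes, then data)
void send_rows( const vector<vector<int>> &M, int dest )
{
   int nrows = M.size();
   vector<int> sizes, flat;
   for ( const auto &row : M )
   {
      sizes.push_back( row.size() );
      flat.insert( flat.end(), row.begin(), row.end() );
   }
   MPI_Send( &nrows, 1, MPI_INT, dest, 0, MPI_COMM_WORLD );
   MPI_Send( sizes.data(), nrows, MPI_INT, dest, 1, MPI_COMM_WORLD );
   MPI_Send( flat.data(), (int)flat.size(), MPI_INT, dest, 2, MPI_COMM_WORLD );
}

// Hypothetical helper: receive the flattened data from 'src' and rebuild the vector of vectors
vector<vector<int>> recv_rows( int src )
{
   MPI_Status stat;
   int nrows;
   MPI_Recv( &nrows, 1, MPI_INT, src, 0, MPI_COMM_WORLD, &stat );
   vector<int> sizes( nrows );
   MPI_Recv( sizes.data(), nrows, MPI_INT, src, 1, MPI_COMM_WORLD, &stat );
   int total = 0;
   for ( int s : sizes ) total += s;
   vector<int> flat( total );
   MPI_Recv( flat.data(), total, MPI_INT, src, 2, MPI_COMM_WORLD, &stat );

   vector<vector<int>> M;
   int pos = 0;
   for ( int s : sizes )
   {
      M.push_back( vector<int>( flat.begin() + pos, flat.begin() + pos + s ) );   // rebuild row by row
      pos += s;
   }
   return M;
}

int main( int argc, char* argv[] )
{
   MPI_Init( &argc, &argv );
   int rank;
   MPI_Comm_rank( MPI_COMM_WORLD, &rank );

   if ( rank == 0 )
   {
      vector<vector<int>> M = { {1, 2, 3}, {4, 5}, {6} };   // made-up data
      send_rows( M, 1 );                                    // needs at least 2 ranks
   }
   else if ( rank == 1 )
   {
      vector<vector<int>> M = recv_rows( 0 );
      printf( "process %d printing vector:\n", rank );
      for ( size_t i = 0; i < M.size(); i++ )
      {
         for ( size_t j = 0; j < M[i].size(); j++ ) printf( "%d\t", M[i][j] );
         printf( "\n" );
      }
      fflush( stdout );   // make sure buffered output actually reaches the terminal
   }
   MPI_Finalize();
}

If the rebuilt matrix looks right but still nothing shows up, flushing stdout (as above) is worth trying, since output from non-root ranks is usually buffered before mpiexec forwards it to your terminal.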
Sending to ALL processors. Note that, despite the MPI_Barrier call, you cannot guarantee the order of output.


Example using Microsoft MPI: https://docs.microsoft.com/en-us/message-passing-interface/microsoft-mpi
The compile and run commands are given in the output for anyone who wants to try it out with the g++ compiler. (I have batch files set up to simplify these commands for a given .cpp file.)

#include <iostream>
#include <vector>
#include "mpi.h"
using namespace std;

int main( int argc, char* argv[] )
{
   int rank, nproc;
   MPI_Status stat;

   MPI_Init( &argc, &argv );
   MPI_Comm_size( MPI_COMM_WORLD, &nproc );
   MPI_Comm_rank( MPI_COMM_WORLD, &rank  );
   int root = 0;

   vector<double> A;
   int n;

   if ( rank == root )                           
   {
      A = { 10, 20, 30, 40, 50 };                                    // Only root knows A and n
      n = A.size();
   }

   // ALL processors must make these calls
   MPI_Bcast( &n, 1, MPI_INT, root, MPI_COMM_WORLD );                // Send size from root to ALL
   A.resize( n );                                                    // Should have no effect on root
   MPI_Bcast( A.data(), n, MPI_DOUBLE, root, MPI_COMM_WORLD );       // Send data from root to ALL

   for ( int p = 1; p < nproc; p++ )
   {
      MPI_Barrier( MPI_COMM_WORLD );                                 // ONLY USE FOR DEBUGGING PURPOSES (like this)
      if ( rank == p )
      {
         cout << "Processor " << rank << " received " << n << " pieces of data: ";
         for ( double x : A ) cout << x << " ";
         cout << '\n';
      }
   }
   MPI_Finalize();
}


C:\c++>g++ -I"C:\Program Files (x86)\Microsoft SDKs\MPI\Include" -o test.exe test.cpp "C:\Program Files (x86)\Microsoft SDKs\MPI\Lib\x64\msmpi.lib" 

C:\c++>"C:\Program Files\Microsoft MPI\bin"\mpiexec -n 24 test.exe

Processor 22 received 5 pieces of data: 10 20 30 40 50 
Processor 21 received 5 pieces of data: 10 20 30 40 50 
Processor 14 received 5 pieces of data: 10 20 30 40 50 
Processor 23 received 5 pieces of data: 10 20 30 40 50 
Processor 7 received 5 pieces of data: 10 20 30 40 50 
Processor 1 received 5 pieces of data: 10 20 30 40 50 
Processor 9 received 5 pieces of data: 10 20 30 40 50 
Processor 17 received 5 pieces of data: 10 20 30 40 50 
Processor 18 received 5 pieces of data: 10 20 30 40 50 
Processor 20 received 5 pieces of data: 10 20 30 40 50 
Processor 19 received 5 pieces of data: 10 20 30 40 50 
Processor 16 received 5 pieces of data: 10 20 30 40 50 
Processor 3 received 5 pieces of data: 10 20 30 40 50 
Processor 10 received 5 pieces of data: 10 20 30 40 50 
Processor 2 received 5 pieces of data: 10 20 30 40 50 
Processor 13 received 5 pieces of data: 10 20 30 40 50 
Processor 8 received 5 pieces of data: 10 20 30 40 50 
Processor 15 received 5 pieces of data: 10 20 30 40 50 
Processor 6 received 5 pieces of data: 10 20 30 40 50 
Processor 12 received 5 pieces of data: 10 20 30 40 50 
Processor 11 received 5 pieces of data: 10 20 30 40 50 
Processor 5 received 5 pieces of data: 10 20 30 40 50 
Processor 4 received 5 pieces of data: 10 20 30 40 50 