I'm currently working on a lattice Boltzmann (D3Q27) code parallelized with MPI. I've set up a 3D MPI topology for the communication (a minimal sketch of what I mean by that follows the code block), and the snippet below handles the exchange in the x direction (right/left):
```cpp
void Simulation::Communicate(int iter) {
    // One tag per communication direction (y and z tags are used in the blocks not shown here).
    int tag_xp = 0;
    int tag_xm = 1;
    int tag_yp = 2;
    int tag_ym = 3;
    int tag_zp = 4;
    int tag_zm = 5;
    MPI_Status status;

    // Exchange with the right (+x) neighbour.
    if (SubDomain_.my_right_ != MPI_PROC_NULL) {
        // Pack the last interior x-plane (i = my_Nx_ - 2); nullptr cells contribute zeros.
        std::vector<double> send_data;
        for (int k = 0; k < SubDomain_.my_Nz_; k++) {
            for (int j = 0; j < SubDomain_.my_Ny_; j++) {
                if (SubDomain_.lattice_[SubDomain_.my_Nx_ - 2][j][k] == nullptr) {
                    for (int dir = 0; dir < _nLatNodes; dir++) {
                        send_data.push_back(0.0);
                    }
                }
                else {
                    for (int dir = 0; dir < _nLatNodes; dir++) {
                        send_data.push_back(SubDomain_.lattice_[SubDomain_.my_Nx_ - 2][j][k]->m_distributions[dir]);
                    }
                }
            }
        }
        std::vector<double> recv_data(send_data.size());
        MPI_Sendrecv(send_data.data(), send_data.size(), MPI_DOUBLE, SubDomain_.my_right_, tag_xp,
                     recv_data.data(), recv_data.size(), MPI_DOUBLE, SubDomain_.my_right_, tag_xm,
                     MPI_COMM_WORLD, &status);
        // Unpack into the ghost plane i = my_Nx_ - 1.
        int index = 0;
        for (int k = 0; k < SubDomain_.my_Nz_; k++) {
            for (int j = 0; j < SubDomain_.my_Ny_; j++) {
                for (int dir = 0; dir < _nLatNodes; dir++) {
                    SubDomain_.lattice_[SubDomain_.my_Nx_ - 1][j][k]->m_distributions[dir] = recv_data[index];
                    index++;
                }
            }
        }
    }

    // Exchange with the left (-x) neighbour.
    if (SubDomain_.my_left_ != MPI_PROC_NULL) {
        // Pack the first interior x-plane (i = 1); nullptr cells contribute zeros.
        std::vector<double> send_data;
        for (int k = 0; k < SubDomain_.my_Nz_; k++) {
            for (int j = 0; j < SubDomain_.my_Ny_; j++) {
                if (SubDomain_.lattice_[1][j][k] == nullptr) {
                    for (int dir = 0; dir < _nLatNodes; dir++) {
                        send_data.push_back(0.0);
                    }
                }
                else {
                    for (int dir = 0; dir < _nLatNodes; dir++) {
                        send_data.push_back(SubDomain_.lattice_[1][j][k]->m_distributions[dir]);
                    }
                }
            }
        }
        std::vector<double> recv_data(send_data.size());
        MPI_Sendrecv(send_data.data(), send_data.size(), MPI_DOUBLE, SubDomain_.my_left_, tag_xm,
                     recv_data.data(), recv_data.size(), MPI_DOUBLE, SubDomain_.my_left_, tag_xp,
                     MPI_COMM_WORLD, &status);
        // Unpack into the ghost plane i = 0.
        int index = 0;
        for (int k = 0; k < SubDomain_.my_Nz_; k++) {
            for (int j = 0; j < SubDomain_.my_Ny_; j++) {
                for (int dir = 0; dir < _nLatNodes; dir++) {
                    SubDomain_.lattice_[0][j][k]->m_distributions[dir] = recv_data[index];
                    index++;
                }
            }
        }
    }
}
```
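For reference, the neighbour ranks (SubDomain_.my_right_, SubDomain_.my_left_, and so on) come from the Cartesian topology. My actual setup code isn't shown; the standalone sketch below only illustrates the kind of setup I mean (all names in it are made up for this post), where MPI_Cart_shift returns MPI_PROC_NULL at non-periodic boundaries, which is what the `!= MPI_PROC_NULL` checks in Communicate() rely on:

```cpp
// Standalone sketch of a 3-D Cartesian topology setup (illustrative names only,
// not my actual Simulation/SubDomain code).
#include <mpi.h>
#include <cstdio>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int nprocs;
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    int dims[3] = {0, 0, 0};      // let MPI factor the process grid
    int periods[3] = {0, 0, 0};   // non-periodic: boundary ranks get MPI_PROC_NULL neighbours
    MPI_Dims_create(nprocs, 3, dims);

    MPI_Comm cart_comm;
    MPI_Cart_create(MPI_COMM_WORLD, 3, dims, periods, /*reorder=*/0, &cart_comm);

    // Neighbour ranks in each direction; at a non-periodic boundary MPI_Cart_shift
    // returns MPI_PROC_NULL.
    int my_left, my_right, my_back, my_front, my_down, my_up;
    MPI_Cart_shift(cart_comm, 0, 1, &my_left, &my_right);   // x
    MPI_Cart_shift(cart_comm, 1, 1, &my_back, &my_front);   // y
    MPI_Cart_shift(cart_comm, 2, 1, &my_down, &my_up);      // z

    int rank;
    MPI_Comm_rank(cart_comm, &rank);
    printf("rank %d: x-neighbours (%d, %d)\n", rank, my_left, my_right);

    MPI_Finalize();
    return 0;
}
```

Because reorder is 0 in this sketch, the ranks in the Cartesian communicator coincide with the MPI_COMM_WORLD ranks, so neighbour ranks obtained this way can be used with MPI_COMM_WORLD as in Communicate().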
The front-back and up-down exchanges use exactly the same structure; the sketch right below condenses the pattern that all six exchanges share.
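Each of those blocks boils down to the same three steps: pack one lattice plane into a flat buffer, exchange it with the neighbour in that direction via a blocking MPI_Sendrecv, and unpack the received buffer into the corresponding ghost plane. The condensed sketch below shows only the exchange step (ExchangePlane is a hypothetical helper name, not something in my code):

```cpp
// Hypothetical helper illustrating the exchange step that is repeated for all six directions.
#include <mpi.h>
#include <vector>

std::vector<double> ExchangePlane(const std::vector<double>& send_data,
                                  int neighbour, int send_tag, int recv_tag,
                                  MPI_Comm comm)
{
    // Blocking, symmetric exchange with one neighbour. With neighbour == MPI_PROC_NULL
    // both the send and the receive complete immediately and recv_data stays untouched.
    std::vector<double> recv_data(send_data.size());
    MPI_Sendrecv(send_data.data(), static_cast<int>(send_data.size()), MPI_DOUBLE,
                 neighbour, send_tag,
                 recv_data.data(), static_cast<int>(recv_data.size()), MPI_DOUBLE,
                 neighbour, recv_tag,
                 comm, MPI_STATUS_IGNORE);
    return recv_data;
}
```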
By printing the sent and received buffers I can confirm that the exchange itself happens and that the transferred values are not zero. However, when I visualize the velocity components obtained from the Lattice Boltzmann Method (LBM) after each iteration, the flow is only resolved on the processor that contains the inlet boundary condition; every other processor shows zero velocity. So despite the buffers looking correct, the data do not seem to reach (or be used by) the neighbouring subdomains as expected.
I have a few concerns:

1. Could data corruption arise from using blocking communication (MPI_Sendrecv) like this?
2. Is explicit diagonal communication necessary for D3Q27? My understanding is that once the exchanges in the normal directions (x, y, and z) are in place, the diagonal (edge/corner) information is transferred implicitly; see the sketch after this list.
3. I'm also uncertain about the ordering of the exchanges. Do they all happen simultaneously, or sequentially (right-left, then front-back, then up-down)? If they are not simultaneous, would diagonal communication be required?
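To make the diagonal question concrete: the packing loops in Communicate() run over the full j and k ranges (0 … my_Ny_ - 1 and 0 … my_Nz_ - 1), and assuming the y and z blocks do the same for their tangential indices, every plane I send already contains the ghost cells filled by the exchanges that ran before it. The standalone 2D toy program below (completely separate from my solver; all names are illustrative) is the reasoning behind my assumption that corner/edge data then arrives without explicit diagonal messages:

```cpp
// Standalone 2-D toy (not my solver): each rank stores a 3x3 patch whose centre is its
// own rank id, surrounded by ghost cells. After a blocking x-exchange followed by a
// y-exchange that sends the FULL middle row (x-ghosts included), the corner ghost ends
// up holding the diagonal neighbour's rank, i.e. diagonal data arrives implicitly.
#include <mpi.h>
#include <cstdio>
#include <vector>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    // Periodic 2-D process grid so every rank has all four neighbours.
    int dims[2] = {0, 0}, periods[2] = {1, 1};
    MPI_Dims_create(size, 2, dims);
    MPI_Comm cart;
    MPI_Cart_create(MPI_COMM_WORLD, 2, dims, periods, 0, &cart);
    int left, right, down, up;
    MPI_Cart_shift(cart, 0, 1, &left, &right);
    MPI_Cart_shift(cart, 1, 1, &down, &up);

    // a[row * N + col], row = y, col = x; the centre (1,1) is the only "interior" cell.
    const int N = 3;
    std::vector<double> a(N * N, -1.0);
    a[1 * N + 1] = rank;

    // 1) x-exchange of the interior value into the left/right ghost cells.
    double centre = a[1 * N + 1], fromLeft, fromRight;
    MPI_Sendrecv(&centre, 1, MPI_DOUBLE, right, 0, &fromLeft, 1, MPI_DOUBLE, left, 0,
                 cart, MPI_STATUS_IGNORE);
    MPI_Sendrecv(&centre, 1, MPI_DOUBLE, left, 1, &fromRight, 1, MPI_DOUBLE, right, 1,
                 cart, MPI_STATUS_IGNORE);
    a[1 * N + 0] = fromLeft;
    a[1 * N + 2] = fromRight;

    // 2) y-exchange of the WHOLE middle row, ghost columns included.
    double rowSend[3] = {a[1 * N + 0], a[1 * N + 1], a[1 * N + 2]};
    double rowFromDown[3], rowFromUp[3];
    MPI_Sendrecv(rowSend, 3, MPI_DOUBLE, up, 2, rowFromDown, 3, MPI_DOUBLE, down, 2,
                 cart, MPI_STATUS_IGNORE);
    MPI_Sendrecv(rowSend, 3, MPI_DOUBLE, down, 3, rowFromUp, 3, MPI_DOUBLE, up, 3,
                 cart, MPI_STATUS_IGNORE);
    for (int i = 0; i < N; i++) {
        a[0 * N + i] = rowFromDown[i];
        a[2 * N + i] = rowFromUp[i];
    }

    // a[0 * N + 0] is the down-left corner ghost: it now holds the diagonal neighbour's rank.
    printf("rank %d: down-left corner ghost = %g\n", rank, a[0 * N + 0]);

    MPI_Finalize();
    return 0;
}
```

Run with, e.g., mpirun -np 4: each rank should print the rank of its down-left diagonal neighbour, even though no diagonal message was ever sent.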
I'd appreciate any insights to clarify these points of confusion. Thank you!