Finite element matrix assembly in CSR format using MPI

I am trying to assemble the global finite element stiffness matrix in CSR format. With one processor everything is fine, but with two processors the value of n_nz, which is the number of non-zero elements in the dense matrix, changes. Can anyone help me with this?
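For context, a common reason the nonzero count changes with the processor count is that element contributions targeting the same (row, col) entry land on more than one rank, so summing raw per-rank counts over-counts the global nnz; duplicates must be merged first. A minimal sketch (not the poster's code; names are illustrative), assuming assembly from (row, col, value) triplets:

```cpp
#include <cstddef>
#include <map>
#include <utility>
#include <vector>

// ((row, col), value) contribution from some element matrix
using Triplet = std::pair<std::pair<int, int>, double>;

// Merge duplicate (row, col) contributions, as CSR assembly must do,
// and return the number of distinct nonzero positions.
std::size_t count_nnz(const std::vector<Triplet>& contributions)
{
    std::map<std::pair<int, int>, double> merged;
    for (const auto& t : contributions)
        merged[t.first] += t.second;   // entries at the same position add up
    return merged.size();
}
```

With two elements sharing a node, the shared entry appears in both contribution lists, so the merged count is smaller than the sum of the individual counts; the same effect occurs across MPI ranks at partition interfaces.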

@lastchance
@resabzr,
Please learn to debug - all you have to do is print out some location points to pin down exactly where your code crashes. Then print out the value of variables just before that line.

In this instance, if you change the line
double *local_NNval = { new double[n_nz] {0} };
to
        cout << "OK: n_nz = " << "  " << n_nz << "\n";
        double *local_NNval = { new double[n_nz] {0} };

then you will see exactly why it crashes on a bad allocation length.
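Beyond printing, the allocation itself can be guarded so a bad size fails loudly instead of crashing. A sketch of that idea (the function name is hypothetical, not from the thread), using `std::vector` so cleanup is automatic:

```cpp
#include <stdexcept>
#include <vector>

// Validate the computed size before allocating the CSR value array.
std::vector<double> make_value_array(long long n_nz)
{
    if (n_nz <= 0)
        throw std::invalid_argument("n_nz must be positive");
    return std::vector<double>(static_cast<std::size_t>(n_nz), 0.0); // zero-initialised
}
```

If the miscounted `n_nz` reaches this function, the exception message points straight at the bad value rather than producing an obscure `bad_alloc` or heap corruption later.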

What happens after you fix that error, I have no idea, but I expect it will unleash many more.



Please:
(1) Learn to debug. As above.
(2) Do not write vast amounts of code before you compile and run it - develop incrementally and test regularly.
(3) Do not dump vast amounts of code in this forum without any explanation what it does.
(4) Refactor your code completely - you should not need that huge number of global variables. In fact the only reasonable global variables would be the number of processors (num_procs) and the individual process number (myrank); then you wouldn't have to keep issuing introspective calls to find out what they are. Also, cut down the number of headers.
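A minimal sketch of that last point, assuming standard MPI (the names `num_procs` and `myrank` come from the advice above; this needs an MPI environment to build and run):

```cpp
#include <mpi.h>

// The only two reasonable globals: set once at startup, read everywhere.
int num_procs = 0;
int myrank    = 0;

void init_parallel(int *argc, char ***argv)
{
    MPI_Init(argc, argv);
    MPI_Comm_size(MPI_COMM_WORLD, &num_procs);  // total number of processes
    MPI_Comm_rank(MPI_COMM_WORLD, &myrank);     // this process's rank
}

int main(int argc, char **argv)
{
    init_parallel(&argc, &argv);
    // ... assembly code reads num_procs / myrank directly ...
    MPI_Finalize();
    return 0;
}
```

After `init_parallel`, no other function needs to call `MPI_Comm_size` or `MPI_Comm_rank` again.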
Topic archived. No new replies allowed.