Passing by reference in an OpenMP parallel region

I have a simple function called poli_intersecting_return which works well and passes a value back by reference. I am trying to call that function in parallel using the code below, but without success: I always get 0. Could someone spot the mistake?

bool
Box::check_poli_intersection_return (Poli& poli,
                                     int32_t& im)
{
    auto ptr_polis = _poli.data();   // pointer to the polis stored in this box
    auto ptr_poli  = &poli;          // pointer to the poli passed in by reference

    auto npol = static_cast<int32_t>(_poli.size());

    bool check = false;

    int32_t imm;   // shared out-parameter for the by-reference result

#pragma omp parallel shared(ptr_polis, ptr_poli, npol, check, imm)
    {
#pragma omp single nowait
        {
            for (int32_t i = 0; i < npol; i++)
            {
                if (check) break;

#pragma omp task firstprivate(i)
                {
                    if (ptr_poli->poli_intersecting_return(ptr_polis[i], imm))
                    {
#pragma omp atomic write
                        check = true;
#pragma omp atomic write
                        im = imm;
                    }
                }
            }
        }
    }
    return check;
}
Do you get a different result if you remove the OpenMP directives?
Yes. The serial analogue of the function works well.

bool
UnitBox::check_polymer_intersection_return (Polymer& polymer,
                                            int32_t& im)
{
    for (int32_t i = 0; i < static_cast<int32_t>(_polymers.size()); i++)
    {
        if (_polymers[i].polymer_intersecting_return(polymer, im))
        {
            return true;
        }
    }
    return false;
}
I meant: do you get the same result if you run the exact same code, but without the #pragma omp directives?
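Stripped of the OpenMP directives, the code from the first post reduces to roughly the sketch below (the _stripped suffix on the name is only for illustration). Note which object the method is invoked on, compared with the serial version above.

// The parallel version with the OpenMP directives removed.
// The method is invoked on the passed-in poli, with the stored
// elements as arguments, which is the reverse of the serial version.
bool
Box::check_poli_intersection_return_stripped (Poli& poli,
                                              int32_t& im)
{
    auto ptr_polis = _poli.data();
    auto ptr_poli  = &poli;
    auto npol      = static_cast<int32_t>(_poli.size());

    bool check = false;
    int32_t imm;

    for (int32_t i = 0; i < npol; i++)
    {
        if (check) break;

        if (ptr_poli->poli_intersecting_return(ptr_polis[i], imm))
        {
            check = true;
            im = imm;
        }
    }
    return check;
}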
Hmmm, you are right, it does not work.
But why? That structure was the only one I found that at least runs in parallel, and I have used it successfully for other functions where no pass-by-reference value was needed.

EDIT: OK, I see the mistake. Thanks a lot for the good advice to stop thinking about the parallelism. Good point; that probably comes from a lot of experience :)
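For anyone finding this thread later: a minimal corrected sketch, assuming the mistake was the reversed call (the serial version invokes the method on each stored element, passing the poli as the argument). It also makes imm local to each task, since a shared by-reference out-parameter would otherwise be written concurrently by several tasks.

bool
Box::check_poli_intersection_return (Poli& poli,
                                     int32_t& im)
{
    auto ptr_polis = _poli.data();
    auto ptr_poli  = &poli;
    auto npol      = static_cast<int32_t>(_poli.size());

    bool check = false;

#pragma omp parallel shared(ptr_polis, ptr_poli, npol, check, im)
    {
#pragma omp single nowait
        {
            for (int32_t i = 0; i < npol; i++)
            {
                bool stop;
#pragma omp atomic read
                stop = check;              // read the flag race-free
                if (stop) break;           // stop spawning tasks once a hit is found

#pragma omp task firstprivate(i)
                {
                    int32_t imm = 0;       // task-local: no data race on the out-value

                    // Call on the stored element, as in the serial version.
                    if (ptr_polis[i].poli_intersecting_return(*ptr_poli, imm))
                    {
#pragma omp atomic write
                        check = true;
#pragma omp atomic write
                        im = imm;
                    }
                }
            }
        }
    }
    return check;
}

One difference from the serial loop is worth knowing: if several elements intersect, im ends up holding whichever task wrote last, whereas the serial loop always returns the first match. For a pure yes/no intersection test that is usually acceptable.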