An array definition I'm unfamiliar with

Hi all,

I've been around C++ for a while but I'm trying to understand some code in C and there's some syntax I can't figure out.

Ok, so I have a file with some functions and at the beginning it has (Real is a defined struct)

static Real *holder = NULL;

and then later inside a function call,

for (i = i0 - extra; i <= i1 + extra; i++) {
    holder[i] = pG->U[k0][j0][i].B1c;
}


When I search through the rest of the code, holder never shows up getting initialized or allocated anywhere else.

So this bothers me for two reasons:

1) I (naively, apparently) thought that you always had to define an array's size before you even thought about its elements

2) What if i0-extra = 4 and i1+extra = 7? Will this array only have indices 4, 5, 6, and 7? Or will it waste memory by including indices 0 through 3 anyway?
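To make (2) concrete, here's roughly the situation I'm picturing, with made-up sizes and a plain double standing in for the project's Real struct:

#include <stdlib.h>

typedef double Real;   /* stand-in for the project's Real struct, for this sketch only */

int main(void)
{
    int i0 = 5, i1 = 6, extra = 1;   /* made-up values, so i runs from 4 to 7 */
    Real *holder = (Real *)malloc((size_t)(i1 + extra + 1) * sizeof(Real));  /* 8 elements, indices 0..7 */
    int i;

    if (holder == NULL)
        return 1;

    for (i = i0 - extra; i <= i1 + extra; i++)
        holder[i] = 0.0;             /* if this is how it works, elements 0..3 exist but never get touched */

    free(holder);
    return 0;
}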

Finally, I want to use (for GPU computing) a call equivalent to malloc (cudaMalloc) to allocate the necessary memory. But am I going to walk into trouble if I try to allocate an array with indices 4, 5, 6, and 7 only? I guess I could just modify everything that uses that array instead, but it's a big program and I'd prefer not to go on a wild safari.
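And here's a sketch of what I had in mind for the cudaMalloc side; Real, i1, extra and the host-side holder come from the existing code, while d_holder and alloc_device_holder are just names I made up for this example:

#include <cuda_runtime.h>

/* Sketch only: allocate from index 0 up to i1+extra on the device rather
 * than trying to allocate indices 4..7 alone, then copy the host data over
 * so the indexing stays identical on the GPU. */
static Real *alloc_device_holder(const Real *host_holder, int i1, int extra)
{
    Real *d_holder = NULL;
    size_t n = (size_t)(i1 + extra + 1);   /* room for indices 0 .. i1+extra */

    if (cudaMalloc((void **)&d_holder, n * sizeof(Real)) != cudaSuccess)
        return NULL;

    cudaMemcpy(d_holder, host_holder, n * sizeof(Real), cudaMemcpyHostToDevice);
    return d_holder;
}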

Thanks! (and I hope I'm knowledgeable enough to have at least asked the right questions)

-CC


closed account (zb0S216C)
holder must have been allocated memory at some point, or, when the function is called, the function may assume that holder already points to allocated memory.
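Something along these lines must be happening somewhere before that loop runs. This is only an illustration, not your actual code; init_holder and nmax are names I made up:

#include <stdlib.h>

/* Illustration only: some init routine must do roughly this before the
 * posted loop runs, otherwise holder[i] dereferences NULL.
 * nmax stands for whatever upper bound the grid setup computes. */
static void init_holder(size_t nmax)
{
    holder = (Real *)malloc(nmax * sizeof(Real));
    if (holder == NULL)
        exit(EXIT_FAILURE);   /* allocation failed */
}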

Could you post the entire function body?

Wazzak
Just a silly idea, but having done some reverse engineering myself, it's worth checking whether the function is even getting called. My guess from what you've said is that it's a remnant from an earlier version that isn't actually used.

Framework is correct: if that function is actually used, holder must be getting initialized somewhere, or you would get a segfault.
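It's also worth grepping for a lazy-allocation pattern, where nothing happens at file scope and the first caller allocates the buffer. Just a sketch, with a made-up size expression:

/* Lazy-allocation pattern worth searching for: the buffer is allocated
 * the first time through, not at startup. */
if (holder == NULL) {
    holder = (Real *)malloc((size_t)(i1 + extra + 1) * sizeof(Real));
    if (holder == NULL)
        return;   /* or however the code reports errors */
}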

Another note: if you're porting the algorithm to CUDA, it will be much more sensible to start from scratch, as it's a completely different way of thinking (logarithmic reductions, for example, have no equivalent). You won't get the speed benefits if you just translate directly from C/OpenMP.
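To give a flavour of what I mean by a logarithmic reduction, here's a minimal sketch of my own (assuming the block size is a power of two), where each pass halves the number of active threads:

/* Sums n values per block in log2(blockDim.x) passes instead of a serial loop. */
__global__ void block_sum(const float *in, float *out, int n)
{
    extern __shared__ float s[];                 /* one float per thread */
    int tid = threadIdx.x;
    int i   = blockIdx.x * blockDim.x + threadIdx.x;

    s[tid] = (i < n) ? in[i] : 0.0f;             /* load, padding with zeros */
    __syncthreads();

    /* tree reduction: the stride halves on every pass */
    for (int stride = blockDim.x / 2; stride > 0; stride /= 2) {
        if (tid < stride)
            s[tid] += s[tid + stride];
        __syncthreads();
    }

    if (tid == 0)
        out[blockIdx.x] = s[0];                  /* one partial sum per block */
}
/* launched as: block_sum<<<grid, threads, threads * sizeof(float)>>>(d_in, d_out, n); */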

I don't know if NVIDIA are still doing it, but I went on a two-day course with them about programming in CUDA a couple of years ago. It was very good. It's a shame I don't use it, though, as I've completely forgotten just about all of it.