Assuming unsigned ints are 32 bits in size, I think it is obvious [is it?] that this is bad:
unsigned int x = (1 << 32);
But is the following any different, or is it just as bad:
unsigned int x = ((1 << 32) - 1);
After all, the intermediate value is still outside the range of an unsigned int. As you can tell, I want x to have all 1's. Using #define for a constant seems safer, but I'm just wondering whether the expression above is a bad idea.
I have been using it for some time and it seems to be ok, but I'm wondering if I've just been lucky and the compilers I've been using have just been accepting of what I'm doing...
Thank you all for your replies! What I was doing was undefined, as I had suspected... And thank you for giving me 4 options -- none of which I had thought of. I'll be sure to give them a try; thanks!