Binary operations

I need to understand these statements:
#define LOWORD(l) ((WORD)(l))
#define HIWORD(l) ((WORD)(((DWORD)(l) >> 16) & 0xFFFF)) 


Suppose I have this code:
DWORD someDwordNumber = 45001001;
WORD someWordNumber = LOWORD(someDwordNumber);


1. As far as I can see (from printing the numbers in binary):
DWORD 00000010 10101110 10101001 00101001 
WORD  00000000 00000000 10101001 00101001


LOWORD copies the last/rightmost 16 binary digits from the DWORD into the WORD variable, right?

2. To extract the HIWORD from a DWORD, I understand that we need to shift right 16 times, which produces this:
DWORD 00000010 10101110 10101001 00101001
>> 16 00000000 00000000 00000010 10101110

but I don't understand why it then needs the "& 0xFFFF" part.
I'm guessing it cuts off anything beyond the 32nd bit, just in case DWORD is defined as more than 32 bits on some system.
But then, if DWORD is defined as something other than 32 bits, wouldn't we need to shift more than 16 times?

Why doesn't it say:
 
#define HIWORD(l) ((WORD)(((DWORD)(l) >> sizeof(WORD) * 8) & 0xFFFF))  

assuming that the WORD size is always half the DWORD size.
Depends on what you want: the 16 highest bits, or bits 17 to 32.

I'm not really experienced in the deeper magic of computers, but it could be a couple of things:

First, some type definitions differ from environment to environment. There are some guarantees (minimum sizes for certain types, as well as some relations between types), but everything else can differ from platform to platform. If there is no guarantee for the sizes of WORD and DWORD, or for the relationship between the two, you can't rely on an assumption like "sizeof(WORD) == sizeof(DWORD)/2". I imagine the latter is guaranteed, but I'm not sure.

Secondly, the meaning of a bit shift can differ from place to place. Again, I'm not sure whether there's a guaranteed implementation, but when I was reading the Wiki* page on bit shifts, I discovered there are several types of shift, which can have entirely different effects (i.e. the value of the inserted bits may differ). By keeping only the last 16 bits, you ensure that bits 17 to 32 have no effect. This way, you don't need to assume that bits 17 to 32 will be 0.

* http://en.wikipedia.org/wiki/Bitwise_operation#Arithmetic_shift
One more question (although the ones above were left unanswered), quoting from your wiki link:

...in a right arithmetic shift, the sign bit is shifted in on the left, thus preserving the sign of the operand.

It appears that in their example the sign bit is 0, so I can't tell whether this statement is correct.

1. Is the sign bit the leftmost bit?
 
10001010 11100010

2. If we shift a signed variable, the compiler is smart enough to preserve the sign bit, right? Is this guaranteed?
Why not simply test it?
As a beginner I don't like to assume things by myself, as they might turn out wrong.
I would just like confirmation about this from an experienced user.