Depends on what you want: the 16 highest bits, or bits 17 to 32.
I'm not really experienced in the deeper magics of computers, but it could be one of a few things:
First, some things are implementation-defined and differ from environment to environment. The standard gives a few guarantees (minimum sizes for certain types, and some relations between types), but everything beyond that can differ from platform to platform. If there were no guarantee for the sizes of WORD and DWORD, or for the relationship between the two, you couldn't rely on an assumption like "sizeof(WORD) == sizeof(DWORD)/2". In this particular case there is such a guarantee: the Windows API defines WORD as a 16-bit and DWORD as a 32-bit unsigned type, but the general point stands.
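For illustration, here is a minimal C sketch of extracting the high 16 bits, assuming the usual Windows widths (16-bit WORD, 32-bit DWORD); the typedefs are just stand-ins so the example compiles without <windows.h>:

```c
#include <stdio.h>
#include <stdint.h>

typedef uint16_t WORD;   /* stand-in for the Windows typedef (16-bit) */
typedef uint32_t DWORD;  /* stand-in for the Windows typedef (32-bit) */

int main(void)
{
    DWORD value = 0xDEADBEEF;

    /* Shift the high 16 bits down, then mask so only those 16 bits remain.
       For an unsigned 32-bit value the mask is redundant; it matters when
       the source is signed or wider (see the second sketch below). */
    WORD high = (WORD)((value >> 16) & 0xFFFFu);
    WORD low  = (WORD)(value & 0xFFFFu);

    printf("high = 0x%04X, low = 0x%04X\n", (unsigned)high, (unsigned)low);
    return 0;
}
```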
Secondly, the behaviour of bit shifts can differ from place to place. Again, this isn't fully pinned down: as the Wikipedia page on bit shifts* explains, there are several kinds of shift, and they can have entirely different effects (i.e. the value of the bits shifted in may differ; an arithmetic right shift copies the sign bit in, a logical right shift inserts zeros, and in C a right shift of a negative signed value is implementation-defined). By taking only the 16 lowest bits after the shift, you ensure that bits 17 to 32 have no effect; that way you don't need to assume they will be 0 (see the sketch after the footnote).
* http://en.wikipedia.org/wiki/Bitwise_operation#Arithmetic_shift
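To make that second point concrete, here is a sketch of how the shift type can matter, assuming a signed 32-bit value; the exact unmasked result is implementation-defined:

```c
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    int32_t value = (int32_t)0x80000000u;   /* negative: the top bit is set */

    /* Without a mask: an arithmetic shift copies the sign bit in, so on a
       typical platform this prints 0xFFFF8000 (the result of right-shifting
       a negative signed value is implementation-defined in C). */
    uint32_t unmasked = (uint32_t)(value >> 16);

    /* With the mask: everything above the low 16 bits is forced to 0,
       regardless of how the platform implements the shift. */
    uint32_t masked = (uint32_t)(value >> 16) & 0xFFFFu;

    printf("unmasked = 0x%08X\n", (unsigned)unmasked);
    printf("masked   = 0x%08X\n", (unsigned)masked);   /* 0x00008000 */
    return 0;
}
```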