base 16 macro definitions

Sometimes I see header files in which macro definitions use base 16 numbers even when it doesn't seem to matter. Is there a particular reason for this? (Maybe for easier mapping of the numbers onto hardware control bits?)
Two hexadecimal digits correspond exactly to one byte.
And since 16 = 2^4, each hex digit maps to exactly four bits, which makes hex simpler to read when working with flags.
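
For example, here is a minimal sketch of the idea (the flag names are invented for illustration, not from any real header):

#include <stdio.h>

/* Hypothetical device-status flags. Each hex digit covers exactly
   4 bits, so the bit position of each flag is obvious at a glance. */
#define FLAG_READY  0x01  /* bit 0 */
#define FLAG_BUSY   0x02  /* bit 1 */
#define FLAG_ERROR  0x04  /* bit 2 */
#define FLAG_DONE   0x08  /* bit 3 */
#define FLAG_IRQ    0x10  /* bit 4 -- in decimal this would be 16, hiding the bit */

int main(void)
{
    unsigned status = FLAG_READY | FLAG_IRQ;  /* 0x11 */

    if (status & FLAG_IRQ)
        printf("interrupt pending, status = 0x%02X\n", status);
    return 0;
}

Compare 0x10 with its decimal form 16: the hex version tells you immediately which bit is set, the decimal one doesn't.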

http://www.cplusplus.com/doc/hex.html
That's what I thought...thanks.