When you enter a number, the compiler only sees its binary equivalent; converting literals into their binary form is one of the first things it does.
For example, let's say I write the number 13. The compiler will recognize that as 0000 1101 in binary.
If I write the number 015, the compiler will recognize that as octal and will see it as 0000 1101 in binary.
If I write the number 0xd, the compiler will recognize that as hex and will also see it as 0000 1101 in binary.
These are all exactly the same value, and it makes no difference which one you use.
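A quick sketch (just an illustration of the point above, not code from any particular project) shows that all three spellings compile to the same value:

#include <stdio.h>

int main(void)
{
    /* Decimal, octal, and hex spellings of the same bit pattern 0000 1101 */
    printf("%d %d %d\n", 13, 015, 0xd);          /* prints: 13 13 13 */
    printf("%d\n", (13 == 015) && (015 == 0xd)); /* prints: 1 */
    return 0;
}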
Why would you use one over the other? If I am interested in the number itself, I'll use decimal. If I am packing individual bits to represent discrete options for a function, I'll use hex, since hex makes it easy to see which bits are where. I rarely use octal unless I am dealing with a strange standard when interfacing with a module.
Example: I want to make a function that takes a series of options in one integer. I would do the following:
#define OPTION1  0x00000001
#define OPTION2  0x00000002
#define OPTION3  0x00000004
/* ... one bit per option ... */
#define OPTION12 0x00000800

void myFunction(unsigned int MyOptions);

int main(void)
{
    myFunction(OPTION1 | OPTION3); /* the | operator lets us combine the bits */
    return 0;
}

void myFunction(unsigned int MyOptions)
{
    /* the & operator lets us mask the bits to see whether each one is set */
    if (MyOptions & OPTION1) { /* do something */ }
    if (MyOptions & OPTION2) { /* do something else */ }
}
Here we can clearly see that
OPTION1 | OPTION12
will create 0x00000801, which shows at a glance that two bits have been set. This would be much harder to see in decimal.
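If you want to see the contrast for yourself, here is a minimal sketch (reusing the OPTION1 and OPTION12 defines from above, purely for illustration) that prints the combined value both ways:

#include <stdio.h>

#define OPTION1  0x00000001
#define OPTION12 0x00000800

int main(void)
{
    unsigned int combined = OPTION1 | OPTION12;
    /* Hex makes the two set bits obvious; the decimal form hides them. */
    printf("hex: 0x%08X  decimal: %u\n", combined, combined);
    /* prints: hex: 0x00000801  decimal: 2049 */
    return 0;
}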