So what is the use of hex, octal or binary over decimal, if you can always use decimal as a representation?

Just wondering, as I delve into ASM this question becomes ever more important. Obviously the computer works with binary, so in some sense using any of these other number systems must be efficient in some way -- but if I have the capability of storing the same values as decimals, and having them handled the same way by the computer, why wouldn't I?

It's not like writing out the same number in a different system will lower the amount of memory or processing power it takes to handle the number. I understand that if I were dealing with a value that corresponded to something more than just a number, then binary might be useful, since it could represent more than one thing at once.

But say I wanted to do a math operation -- why would I ever go through the painful experience of dealing with a number system I am not used to, even at the bit level? It's not like you can't use decimal numbers in 32-bit ASM or C/C++ or any other language.

I can see this being a bit more useful on, say, an 8-bit system, or even a 16-bit system, where memory is REALLY scarce, and the few bits of memory you could save by using powers of 2 represented in binary (and its shorthands, octal and hex) would be the difference between not being able to process the numbers at all and doing so with no problems.

Can I somehow get around the limitations of integers, and the memory problems that come with them, by using long strings of the computer's native binary? Like, if I keep pushing binary values onto the stack, then processing them (add, subtract, multiply, etc.), will it be more difficult to overflow the stack than it would be by throwing decimals at it? It doesn't seem likely, because the numbers, even in the computer's native system, should still take up the same amount of 'space' in the end -- unless working with decimal takes a harder toll on the system's memory.

Like, if I took existing C++ programs and converted all the math to binary, only feeding it back to the user as decimal output on the screen, would it somehow net me a massive efficiency improvement and allow me to work with larger tables of numbers before overflowing the stack? Is the same true of ASM -- or is it simply a MUST in ASM for some reason? (So far, it looks to me like even in x86 ASM you can handle decent-sized integers.)

I see people saying things like 'bitmasking', but I don't really understand why, or what it is -- especially since it's contemporary code, and most of it is in simple for statements --

why would I ever say something like for(i=0; i<=0xD6; i++) instead of for(i=0; i<=214; i++)?

Both represent the same thing -- at least, unless there is something specifically corresponding to the hex/binary value at the bit level that somehow makes it more than just a simple 214. 214 is easier for me, as a human, to handle. So why wouldn't I just say 214?

I'm curious because there has to be some specific situation where I would WANT to go out of my way to use the hex -- some case where using decimal numbers would either be impossible or highly inefficient.

Maybe if I was working with ASM, I would quickly run out of bits in the registers, and working with hex would take fewer bits and be more efficient somehow? Even so, large enough decimal integers seem possible to handle in 32-bit ASM or C/C++.

And that can't be all there is to it -- clearly C++ has support for hex and binary, and it can't just be a backwards-compatibility thing.

So far I've never used hex in programming. Ever. Never encountered a reason I had to. I kind of want to go out of my way to FIND a reason to have to use it, just so I understand its purpose and can properly use it as a tool, not just look at it as an arbitrary representation of decimal.
Can I somehow get around the limitations of integers, and the memory problems that come with them, by using long strings of the computer's native binary? Like, if I keep pushing binary values onto the stack, then processing them (add, subtract, multiply, etc.), will it be more difficult to overflow the stack than it would be by throwing decimals at it?
Nothing prevents you or anyone from writing an arithmetic library that performs its operations on human-readable strings. E.g. mul("12", "2") would return "24".
The constraint on such a design would be time, not space. Such a library would be too slow to be useful (if you use a base different from the hardware's, you can't use the hardware to perform the basic operations for you), and it would not be capable of anything that isn't already possible in base-2 arithmetic.
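To make the cost concrete, here is a minimal sketch of what one such string-based routine might look like (add is used instead of mul for brevity; the function is illustrative, not from any real library):

    #include <algorithm>
    #include <string>

    // Add two non-negative decimal numbers stored as text, one digit at a time.
    // Every digit costs several instructions (convert from ASCII, add, carry,
    // convert back) where the hardware could add two binary words in one.
    std::string add(const std::string& a, const std::string& b)
    {
        std::string result;
        int i = static_cast<int>(a.size()) - 1;
        int j = static_cast<int>(b.size()) - 1;
        int carry = 0;
        while (i >= 0 || j >= 0 || carry != 0)
        {
            int sum = carry;
            if (i >= 0) sum += a[i--] - '0';
            if (j >= 0) sum += b[j--] - '0';
            result += static_cast<char>('0' + sum % 10);
            carry = sum / 10;
        }
        std::reverse(result.begin(), result.end());
        return result;  // add("12", "2") == "14"
    }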

Like, if I took existing C++ programs and converted all the math to binary, only feeding it back to the user as decimal output on the screen, would it somehow net me a massive efficiency improvement and allow me to work with larger tables of numbers before overflowing the stack?
C++ compilers already do this. a + 100 gets translated to a + 1100100b. Computers are binary, so it doesn't make sense to do anything else.
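You can verify that the base of a literal affects only the source text, never the compiled value (binary literals require C++14):

    // All of these compile to exactly the same constant.
    static_assert(214 == 0xD6,       "hex");
    static_assert(214 == 0326,       "octal");
    static_assert(214 == 0b11010110, "binary, C++14");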

Is the same true of ASM -- or is it simply a MUST in ASM for some reason?
Assemblers are also capable of accepting integer literals in various bases and converting them to binary in the generated machine code.

why would I ever say something like for(i=0; i<=0xD6; i++) instead of for(i=0; i<=214; i++)?

Both represent the same thing -- at least, unless there is something specifically corresponding to the hex/binary value at the bit level that somehow makes it more than just a simple 214. 214 is easier for me, as a human, to handle. So why wouldn't I just say 214?
You can write numeric literals however you want. 0xD6 and 214 both mean exactly the same thing, and to me they look equally meaningless. I do think, however, that 0x80000001 is more understandable than 2147483649, although less so than (1u << 31) + 1.
Hex makes sense when you are more interested in the bit and byte values than in the numerical value.
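That is also what the 'bitmasking' you mentioned is about: each bit of an integer is treated as an independent on/off flag, and hex makes it obvious which bits a constant touches. A small sketch (the flag names here are made up for the example):

    #include <cstdio>

    // Each constant has exactly one bit set, so they can be combined freely.
    const unsigned READ  = 0x1;  // binary 0001
    const unsigned WRITE = 0x2;  // binary 0010
    const unsigned EXEC  = 0x4;  // binary 0100

    int main()
    {
        unsigned perms = READ | WRITE;  // set two flags:  0011
        if (perms & WRITE)              // mask off bit 1 to test it
            std::printf("writable\n");
        perms &= ~WRITE;                // clear bit 1, leave the rest alone
        std::printf("perms = 0x%X\n", perms);  // prints "perms = 0x1"
    }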

Translating between hex and binary is straightforward compared to decimal. When I see the value 0x83 I can say right away that this is a byte with the first, second and eighth bits set to 1. The corresponding decimal value, 131, doesn't give the same information.

If one of the bits changes, only one of the hexadecimal digits will change, but almost all of the digits in the decimal number will change:

11010001010110011110001001101    0x1A2B3C4D    439041101
11010101010110011110001001101    0x1AAB3C4D    447429709
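The same pair in code: flipping a single bit with XOR changes one hex digit but scrambles most of the decimal form.

    #include <cstdio>

    int main()
    {
        unsigned x = 0x1A2B3C4D;
        unsigned y = x ^ (1u << 23);      // flip bit 23 and nothing else
        std::printf("%08X  %u\n", x, x);  // 1A2B3C4D  439041101
        std::printf("%08X  %u\n", y, y);  // 1AAB3C4D  447429709
    }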

Two hexadecimal digits make up one byte, so if I have a 32-bit variable with the value 0x80402010 I know that the four bytes have the values 0x10, 0x20, 0x40 and 0x80. This fact is useful when dealing with colour codes: usually one byte is used for each of the colours red, green and blue. The colour code for yellow is 0xFFFF00, and you can see right away that it consists of red (0xFF) and green (0xFF) but no blue (0x00). If the colour value is written as the decimal 16776960 it becomes much harder to read, and you can't easily modify the number by hand if you want to change the colour slightly.
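Pulling the channels out is just a matter of shifting and masking whole bytes -- a sketch:

    #include <cstdio>

    int main()
    {
        unsigned colour = 0xFFFF00;               // yellow
        unsigned red    = (colour >> 16) & 0xFF;  // top byte
        unsigned green  = (colour >>  8) & 0xFF;  // middle byte
        unsigned blue   =  colour        & 0xFF;  // bottom byte
        std::printf("r=%02X g=%02X b=%02X\n", red, green, blue);  // r=FF g=FF b=00
    }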

It is more popular to write hex than binary. I think the reason is that binary numbers easily get very long and hard to read. Hex is more compact and easier to work with.