Why write for(i=0; i<=0xD6; i++) instead of for(i=0; i<=214; i++)?
Just wondering, as I delve into ASM this question becomes ever more important. Obviously the computer works in binary, so in some sense using one of these other number systems must be efficient in some way. But if I can store the same values as decimals, and they get handled the same way by the computer, why wouldn't I? Writing the same number in a different system doesn't lower the amount of memory or processing power it takes to handle it. I understand that if I were dealing with a value that corresponded to something more than just a number, binary might be useful because it could represent more than one thing at once. But say I want to do a math operation: why would I ever go through the painful experience of dealing with a number system I am not used to, even at the bit level? It's not like you can't use decimal numbers in 32-bit ASM, or in C/C++, or in any other language.

I can see this being a bit more useful on, say, an 8-bit or even a 16-bit system, where memory is really scarce and the few bits you could save by using powers of 2 written in binary (and its shorthands, octal and hex) might be the difference between not being able to process the numbers at all and handling them with no problems.

Can I somehow get around the limitations of integers, and the memory problems that come with them, by using large strings of the computer's native binary? For example, if I keep pushing binary values onto the stack and then processing them (add, subtract, multiply, etc.), will it be harder to overflow the stack than if I threw decimals at it? It doesn't seem likely, because the numbers, even in the computer's native system, should still take up the same amount of space in the end, unless working with decimal takes a harder toll on the system's memory.
Like, if I took an existing C++ program and converted all the math to binary, only feeding the results back to the user as decimal output on the screen, would that somehow net me a massive efficiency improvement and let me work with larger tables of numbers before overflowing the stack?
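To make the premise concrete, here is a small C++ sketch (just an illustration I wrote, not part of any real program; the variable names are made up) of what I mean by "the same value written in different bases":

    // Illustration only: the same value written three ways.
    // The base exists only in the source text; the compiled constant is identical.
    #include <cstdio>

    int main() {
        int dec = 214;          // decimal literal
        int hex = 0xD6;         // hexadecimal literal
        int bin = 0b11010110;   // binary literal (C++14)

        static_assert(214 == 0xD6 && 0xD6 == 0b11010110, "same constant");

        // All three occupy the same storage and hold the same bit pattern.
        std::printf("%zu %zu %zu\n", sizeof dec, sizeof hex, sizeof bin);
        std::printf("%d %d %d\n", dec, hex, bin);   // prints 214 214 214
        return 0;
    }

As far as I can tell, all three literals compile to exactly the same constant, so the notation doesn't change storage or speed at all.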
I've been told that a + 100 gets translated to a + 1100100b, because "computers are binary, so it doesn't make sense to do anything else." Is the same true of ASM, or is it simply a must in ASM for some reason?
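My (possibly naive) mental model is that the stored value has no base at all, and a base only appears when the number is converted to or from text. A quick C++ sketch of what I mean, again purely illustrative:

    // Sketch of the point above: the stored value has no base.
    // 100, 0x64 and 0b1100100 are the same bit pattern once compiled.
    #include <cstdio>

    int main() {
        int a = 7;
        int x = a + 100;            // identical to a + 0x64 and a + 0b1100100
        std::printf("%d\n", x);     // decimal text:     107
        std::printf("%#x\n", x);    // hexadecimal text: 0x6b
        return 0;
    }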
Why would I ever say something like for(i=0; i<=0xD6; i++) instead of for(i=0; i<=214; i++)? Both represent the same thing, at least unless something at the bit level specifically corresponds to the hex/binary value and somehow makes it more than just a simple 214. 214 is easier for me, as a human, to handle, so why wouldn't I just say 214?
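The only case I can think of where the hex spelling obviously buys something is masks and flags, where each hex digit lines up with exactly four bits. A rough sketch of what I mean (the flag names here are made up):

    // Made-up flag names, just to illustrate why hex seems to be preferred here:
    // each hex digit covers exactly four bits, so the mask is readable at a glance.
    #include <cstdio>

    const unsigned FLAG_READ  = 0x01;   // bit 0
    const unsigned FLAG_WRITE = 0x02;   // bit 1
    const unsigned FLAG_EXEC  = 0x04;   // bit 2
    const unsigned LOW_BYTE   = 0xFF;   // bits 0..7 (255 in decimal)

    int main() {
        unsigned perms = FLAG_READ | FLAG_WRITE;    // 0x03
        unsigned low   = 0x1A2B3C4D & LOW_BYTE;     // keeps bits 0..7 -> 0x4d
        std::printf("%#x %#x\n", perms, low);       // prints 0x3 0x4d
        return 0;
    }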
And here are two values written out in binary, hex, and decimal:

    Binary                          Hex         Decimal
    11010001010110011110001001101   0x1A2B3C4D  439041101
    11010101010110011110001001101   0x1AAB3C4D  447429709
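Taking the two values from that table, here is a small sketch of why the hex spelling seems to matter when comparing bit patterns: the XOR of the two shows a single-bit difference immediately in hex, while the decimal difference (8388608) tells me nothing at a glance.

    // The two constants from the table above differ in exactly one bit.
    // Printed in hex, the XOR makes that obvious; printed in decimal it does not.
    #include <cstdio>

    int main() {
        unsigned a = 0x1A2B3C4D;    // 439041101
        unsigned b = 0x1AAB3C4D;    // 447429709
        unsigned diff = a ^ b;      // bits where the two values differ

        std::printf("%#x\n", diff); // 0x800000 -> clearly a single bit (bit 23)
        std::printf("%u\n", diff);  // 8388608  -> not obvious at all
        return 0;
    }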