Correct me if I'm wrong: the smallest unit possible to allocate is 1 byte, so 002BF560 points to some byte, and 002BF561 points to that byte + 1.
Good question, simple answer :-) Try using multiple digits instead, and the answer will reveal itself.

char array[] = { 171, 205, 239, 186, 5, 6, 7 };

or

char array[] = { 0xab, 0xcd, 0xef, 0xba, 5, 6, 7 };
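To see the point being made, here is a minimal sketch (not the poster's original code) that prints the address and value of each element; unsigned char is used instead of plain char so the 0xab-style initializers don't trigger narrowing warnings. Consecutive addresses differ by exactly one, confirming that each element occupies one byte:

#include <iostream>

int main()
{
    unsigned char array[] = { 0xab, 0xcd, 0xef, 0xba, 5, 6, 7 };

    // Each element gets its own address; the addresses go up by exactly 1,
    // because the smallest addressable unit here is one byte.
    for (unsigned i = 0; i < sizeof array; ++i)
        std::cout << static_cast<const void*>(&array[i])
                  << " : 0x" << std::hex
                  << static_cast<unsigned>(array[i])
                  << std::dec << '\n';
}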
I think in the 21st century, a byte is always 8 bits. |
Because these processors have a 16-bit char size, there is some special code in bits.c to handle the packing.
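As an illustration only (this is not the actual bits.c code, just a sketch of the idea, written for an ordinary 8-bit-char host), packing two octets into one 16-bit storage unit might look like this:

#include <cstdint>
#include <iostream>

// Pack two octets into one 16-bit unit, low octet first.
std::uint16_t pack(std::uint8_t lo, std::uint8_t hi)
{
    return static_cast<std::uint16_t>(lo | (hi << 8));
}

int main()
{
    std::cout << std::hex << pack(0xcd, 0xab) << '\n';  // prints abcd
}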
Yes Oseri, if the "cout <<" had forced decimal output, the OP would never have had this question.
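For the record, the difference looks like this: with a plain char, operator<< prints the character, and a cast (or unary plus) forces numeric output.

#include <iostream>

int main()
{
    char c = 65;

    std::cout << c << '\n';                    // prints the character: A
    std::cout << static_cast<int>(c) << '\n';  // forces decimal output: 65
    std::cout << +c << '\n';                   // unary plus promotes to int: 65
}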
Just checked out your link; I didn't see any references to 32- or 64-bit bytes.
It knows nothing of 8-bit or 16-bit values, since each address is used to point to a whole 32-bit word, not just an octet. It is thus neither little-endian nor big-endian, though a compiler may use either convention if it implements 64-bit data and/or some way to pack multiple 8-bit or 16-bit values into a single 32-bit word. Analog Devices chose to avoid the issue by using a 32-bit char in their C compiler.
I've worked on a system (using a TI DSP processor) where char, short, int and long were ALL 32 bits; the CPU only natively handled 32-bit chunks, nothing shorter. |
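Anyone curious what their own implementation uses can check CHAR_BIT and the type sizes directly. This little sketch prints 8/2/4/4-or-8 on a typical desktop; on a 32-bit-char DSP like the ones mentioned above, CHAR_BIT would be 32 and all of the sizeof results would be 1:

#include <climits>
#include <iostream>

int main()
{
    // On a typical desktop: CHAR_BIT = 8, sizeof(short) = 2, sizeof(int) = 4.
    // On a 32-bit-char DSP: CHAR_BIT = 32 and every size below is 1.
    std::cout << "CHAR_BIT      = " << CHAR_BIT << '\n'
              << "sizeof(short) = " << sizeof(short) << '\n'
              << "sizeof(int)   = " << sizeof(int) << '\n'
              << "sizeof(long)  = " << sizeof(long) << '\n';
}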
However, a char may or may not be a byte |
Do you explicitly mean char, which could be different sizes on different architectures, or byte, which is actually defined as 8 bits in the dictionary?
Standard wrote:
1.7.1 The fundamental storage unit in the C++ memory model is the byte. A byte is at least large enough to contain any member of the basic execution character set (2.3) and the eight-bit code units of the Unicode UTF-8 encoding form and is composed of a contiguous sequence of bits, the number of which is implementation-defined.
5.3.3.1 The sizeof operator yields the number of bytes in the object representation of its operand. [...] sizeof(char), sizeof(signed char) and sizeof(unsigned char) are 1.
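Both halves of that quote can be checked at compile time (C++11 or later for static_assert): sizeof(char) == 1 is guaranteed everywhere, while the number of bits in that one byte is only required, via CHAR_BIT, to be at least 8.

#include <climits>

// Guaranteed by the standard on every implementation:
static_assert(sizeof(char) == 1, "sizeof(char) is always 1");
static_assert(sizeof(signed char) == 1, "sizeof(signed char) is always 1");
static_assert(sizeof(unsigned char) == 1, "sizeof(unsigned char) is always 1");

// Implementation-defined, but never less than 8:
static_assert(CHAR_BIT >= 8, "a byte has at least 8 bits");

int main() {}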
byte, which is actually defined as 8 bits in the dictionary?
The byte (/ˈbaɪt/) is a unit of digital information in computing and telecommunications that most commonly consists of eight bits. [...] The unit octet was defined to explicitly denote a sequence of 8 bits because of the ambiguity associated at the time with the byte. |
I've worked on a system (using a TI DSP processor) where char, short, int and long were ALL 32 bits; the CPU only natively handled 32-bit chunks, nothing shorter. |
There were things like 0.5 bits that were tried in the past, but I honestly haven't seen them used since leaving school.