Hi, I have some data. I want to make 1000 packets of it, and every packet has an ID. The packet ID is in hex and in two parts, hex_high & hex_low, and it needs to be stored at fixed array locations char ch[23] and char ch[24] respectively, in the form 0xAB (for example).
char ch[45];
for (int i = 0; i < 1000; i++)
{
    ch[23] = /* high byte here, e.g. 0xAB from 0xABCD */;
    ch[24] = /* low byte here,  e.g. 0xCD from 0xABCD */;
}
I tried to make functions char hexToHigh(int) and char hexToLow(int), such that ch[23] = decToHex_Low(i), with the results below:
1. If I provide 43981, it returns 'A' as the high nibble and 'D' as the low nibble.
The requirement is "AB" as the high part and "CD" as the low part, but with 0x attached to it so that it can be stored in a char as hex.
2. My compiler says, "no member for back/front".
3. If I "return stm.str()", I get "error: cannot convert 'std::basic_stringstream<char>::__string_type {aka std::basic_string<char>}' to 'char' in return".
43981 is hex ABCD.
Two octets; high octet is hex AB (decimal 171), and low octet is hex CD (decimal 205).
Four nibbles; high nibble is hex A (decimal 10) and low nibble is hex D (decimal 13).
ch[23] can hold one byte (one char); what do you want to store in it?
The value 171? The character 'A'? Something else?
ch[24] can hold one byte (one char); what do you want to store in it?
The value 205? The character 'D'? Something else?
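To make the difference concrete, here is a small sketch of both options (the names highByte/lowByte and the text buffer are just for illustration):

#include <iostream>

int main()
{
    unsigned int id = 43981;                    // 0xABCD

    // Option A: store the raw byte values 171 and 205
    unsigned char highByte = (id >> 8) & 0xFF;  // 0xAB
    unsigned char lowByte  = id & 0xFF;         // 0xCD

    // Option B: store ASCII characters, i.e. the text "ABCD"
    const char* digits = "0123456789ABCDEF";
    char text[5] = { digits[(id >> 12) & 0xF], digits[(id >> 8) & 0xF],
                     digits[(id >> 4) & 0xF],  digits[id & 0xF], '\0' };

    std::cout << int(highByte) << ' ' << int(lowByte) << ' ' << text << '\n';
    // prints: 171 205 ABCD
}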
There is no such thing as a HEX value or a decimal value. A value is a value. Three apples are three apples regardless of whether you use binary, decimal, octal or anything else.
The value 0xAB is indistinguishable from the value 171. Both are represented by the exact same bit pattern inside the computer. The representation is a different thing: it is generally larger than the number itself and requires more memory. The hex representation "0xAB" requires 4 bytes and cannot be stored in a single char.
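A tiny example of the same point (nothing here is specific to your code):

#include <cassert>

int main()
{
    unsigned char a = 0xAB;   // literal written in hex
    unsigned char b = 171;    // the exact same value written in decimal
    assert(a == b);           // same bit pattern 10101011 either way
    // The text "0xAB" is a different thing: four characters,
    // which is why it cannot fit in a single char.
}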
This is why you should use unsigned chars for storing data: 0xAB and 0xCD are negative as signed 8-bit numbers, so the resulting int is negative too.
You can do: cout << hex << (int(ch[0]) & 0xFF) << endl;
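For example, assuming plain char is signed on your platform, the mask is what stops the printed value from sign-extending:

#include <iostream>

int main()
{
    char ch[2] = { char(0xAB), char(0xCD) };

    std::cout << std::hex << int(ch[0]) << '\n';           // ffffffab (sign-extended)
    std::cout << std::hex << (int(ch[0]) & 0xFF) << '\n';  // ab
}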
Take a 16-bit unsigned type as the parameter even though a plain int is capable of storing 43981: it would not make sense to pass a negative value or a value larger than the maximum for a 16-bit unsigned int. This way you can clearly see that the function expects a 16-bit number and returns the high/low byte.
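A sketch of what such functions could look like (the names are made up; this assumes you want the raw byte values, not ASCII text):

#include <cstdint>
#include <iostream>

unsigned char idHigh(std::uint16_t id) { return id >> 8; }    // 0xAB from 0xABCD
unsigned char idLow (std::uint16_t id) { return id & 0xFF; }  // 0xCD from 0xABCD

int main()
{
    char ch[45] = {};
    ch[23] = idHigh(43981);   // stores the value 171 (0xAB)
    ch[24] = idLow(43981);    // stores the value 205 (0xCD)
    std::cout << std::hex << (int(ch[23]) & 0xFF) << ' '
              << (int(ch[24]) & 0xFF) << '\n';   // prints: ab cd
}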
I think you're confused about what "0x" does. There is no such thing as a char that holds a hex value (or octal value, or decimal value, or base 15 value). It just contains a value. When you print the value, you may choose to print it in hex, or octal, or base 18. The point is that it's just a value.
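For instance, the exact same value printed three ways:

#include <iostream>

int main()
{
    int value = 171;   // could just as well have been written 0xAB or 0253
    std::cout << std::dec << value << '\n';   // 171
    std::cout << std::hex << value << '\n';   // ab
    std::cout << std::oct << value << '\n';   // 253
}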
What you're doing is a common problem: the byte-order of the data in a packet may be different from the byte-order of data on a particular host. Most network data is stored in big-endian order and there are functions that convert to/from this order. You may be able to do this as simply as:
void storeId(short id, char *packet)
{
    *(short*)(packet + 23) = htons(id);   // htons() comes from <arpa/inet.h> (or <winsock2.h> on Windows)
}
htons() is "host to network short" and converts a 16-bit value from the host byte order to network byte order (big endian). The assignment stores this value in the two bytes at packet+23.
This code assumes that short is 16 bits and that you can store a 16-bit value at any address.
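If you don't want to rely on those assumptions, you can write the big-endian store byte by byte (same hypothetical storeId as above, just without the cast):

#include <cstdint>

void storeId(std::uint16_t id, char* packet)
{
    // Network (big-endian) order: high byte first. No sizeof(short),
    // alignment or aliasing assumptions needed.
    packet[23] = char((id >> 8) & 0xFF);
    packet[24] = char(id & 0xFF);
}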