Why does memcpy invert the byte order?

Hi all,

I'm trying to use memcpy to copy some data into a buffer. My problem is that the order of the copied bytes seems to be inverted relative to the original. Here is a simplified example of my code:

#include <stdio.h>
#include <string.h>

typedef unsigned char BITMAP;
typedef BITMAP DERIVED[4];

int main()
{
    DERIVED var;
    int i;

    BITMAP *pbitmap = (BITMAP *)var;

    unsigned int aux = 0x01001100;
    printf("%08x\n", aux);

    /* Copy the 4 bytes of aux into the byte array. */
    memcpy(pbitmap, &aux, 4);
    for (i = 0; i < 4; i++)
        printf("%02x", pbitmap[i]);
    printf("\n");

    /* Copy the string, including its terminating '\0'. */
    char *name = "mauricio";
    char n[20];
    memcpy(n, name, sizeof("mauricio"));
    printf("%s\n", n);

    return 0;
}

The output shows the following:

01001100 //original value, printed with %08x
00110001 //the same value's bytes, apparently inverted
mauricio //original string, printed correctly

As shown, memcpy has no inversion effect on a typical string. I would be grateful if anybody could tell me what I'm doing wrong and how to solve the problem.

Thanks in advance.


What you have doesn't represent the problem. You need to post the code giving you a problem.

However, I may be able to help you avoid that...

memcpy() does not modify byte order. However, by moving bytes into the wrong places you can change a word value.

For example, here are three 16-bit words of memory. Each word is composed of two bytes.

[03 A8] [91 54] [19 2B]
 _____       _______

I have used hexadecimal values. If we assume a little-endian machine (like an 80x86), then the words are: 0xA803, 0x5491, 0x2B19.

Now if I move the bytes above the left line onto the positions marked by the right line, memory becomes 03 A8 91 03 A8 2B, and reading the second word gives you the incorrect answer (0x0391).
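Here is that byte shuffle as a compilable snippet (my own illustration, assuming 16-bit shorts and a little-endian machine):

#include <stdio.h>
#include <string.h>

int main(void)
{
    /* The three 16-bit words from the diagram, as raw bytes. */
    unsigned char mem[6] = {0x03, 0xA8, 0x91, 0x54, 0x19, 0x2B};
    unsigned short word;   /* assumed to be 16 bits wide */

    /* Move the first word's bytes onto offsets 3-4 ("the right line"):
       memory becomes 03 A8 91 03 A8 2B. */
    memcpy(mem + 3, mem, 2);

    /* Re-read the second word (offsets 2-3). */
    memcpy(&word, mem + 2, 2);
    printf("0x%04X\n", word);   /* prints 0x0391 on a little-endian CPU */
    return 0;
}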

The problem with in-memory BITMAP structures is that the pixel data is very dependent on alignment. If you move things the wrong way, the image data gets mangled.

Did I understand the question properly?
Ah, I love this kind of question!
Endianness it is, then!

x86 is a little-endian CPU. That is, it stores multi-byte values in memory with their least significant byte (the "little end") first, so that 0x12345678 is actually stored as 78 56 34 12. This sounds less intuitive than it is. Say the same data were in a uchar array: uchar array[] = {0x78, 0x56, 0x34, 0x12}; This may look out of order, but array[0]*256^0 + ... + array[n-1]*256^(n-1) = 0x12345678.
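You can see this layout for yourself by copying an int into a byte array and printing each byte (a quick sketch of mine, not the OP's code):

#include <stdio.h>
#include <string.h>

int main(void)
{
    unsigned int value = 0x12345678;
    unsigned char bytes[sizeof value];
    size_t i;

    memcpy(bytes, &value, sizeof value);   /* copies the bytes exactly as stored */
    for (i = 0; i < sizeof value; i++)
        printf("%02x ", bytes[i]);         /* prints 78 56 34 12 on x86 */
    printf("\n");
    return 0;
}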

Endianness is fortunately only relevant when accessing multi-byte values as individual bytes (for example, when copying a 32-bit int into a char array).
If it's absolutely impossible to avoid, you can detect the CPU's endianness by casting pointers, and then convert the data to a platform-independent format.
typedef unsigned short ushort;
typedef unsigned char uchar;

enum { LITTLE_ENDIAN_CPU, BIG_ENDIAN_CPU };

/* Look at the first byte of a known 16-bit value: a big-endian CPU
   stores the most significant byte (0x12) first. */
int checkNativeEndianness()
{
	ushort a = 0x1234;
	if (*((uchar *)&a) == 0x12)
		return BIG_ENDIAN_CPU;
	else
		return LITTLE_ENDIAN_CPU;
}
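That said, you often don't need to detect anything: if you assemble multi-byte values with shifts, the code behaves the same on any CPU, because shifts operate on values rather than on memory. A minimal sketch, assuming a 32-bit unsigned int (read_be32 is just a name I made up):

#include <stdio.h>

/* Assemble a 32-bit value from four big-endian (network order) bytes. */
unsigned int read_be32(const unsigned char *p)
{
    return ((unsigned int)p[0] << 24) |
           ((unsigned int)p[1] << 16) |
           ((unsigned int)p[2] <<  8) |
            (unsigned int)p[3];
}

int main(void)
{
    unsigned char packet[4] = {0x01, 0x00, 0x11, 0x00};
    printf("%08x\n", read_be32(packet));   /* prints 01001100 on any CPU */
    return 0;
}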


http://en.wikipedia.org/wiki/Endianness
Thanks to you both for your explanations!

I guessed it was a question about endianness. I'm dealing with buffers in both main formats: big-endian, because my application gets packets from the network, and little-endian, because my system is x86-based. Now it's clearer to me, but I'll have to stay aware of it.

I'll be back... Sooner or later. Thanks again.