Mixed Endianness?

Does the C++ standard allow for different data types to have different endianness? For example, would an implementation with big endian integers and little endian longs be strictly conforming?
That's an interesting idea... but what would be the point?

Having an integral type that goes against system endianness would do nothing but destroy performance.

You could simulate it in a class easily enough. Actually, I guess it wouldn't be so easy, since you'd probably have to use inline assembly to get the same effect as the compiler doing it for built-in types... and it probably wouldn't be as efficient.
The point is that if it is allowed, I have to account for it since it could exist.

Whether or not it "goes against the system endianness" isn't the point here; for all we know, the system itself could have mixed endianness and work just as efficiently either way.

I don't need to simulate it in a class, I just need to know what I have to account for. If I have to send a long over the network in a different way from how I send an int over the network, that's important to know.


Why would they require different handling? Use an endian-independent method to marshal your bytes (e.g. bit shifting).
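
For illustration, a minimal sketch of that bit-shifting approach (the helper names write_u32_be and read_u32_be are made up for this example). Because the shifts operate on values rather than on the object's bytes in memory, the wire format comes out the same on any host, whatever the endianness of int or long:

#include <cassert>
#include <cstdint>

// The wire byte order is fixed by the shifts, independent of how the
// host lays out its integers in memory.
void write_u32_be(unsigned char* out, std::uint32_t v)
{
    out[0] = static_cast<unsigned char>(v >> 24);
    out[1] = static_cast<unsigned char>(v >> 16);
    out[2] = static_cast<unsigned char>(v >> 8);
    out[3] = static_cast<unsigned char>(v);
}

std::uint32_t read_u32_be(const unsigned char* in)
{
    return (std::uint32_t(in[0]) << 24) |
           (std::uint32_t(in[1]) << 16) |
           (std::uint32_t(in[2]) << 8)  |
            std::uint32_t(in[3]);
}

int main()
{
    unsigned char buf[4];
    write_u32_be(buf, 0xDEADBEEF);            // same bytes on every host
    assert(read_u32_be(buf) == 0xDEADBEEF);   // round-trips regardless of endianness
}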
> For example, would an implementation with big endian integers and little endian longs be strictly conforming?

Theoretically, yes.

But an implementation with big-endian int and little-endian unsigned int is not allowed: the standard requires corresponding signed and unsigned integer types to have the same value representation for the non-negative values they share, so their byte orders cannot differ.
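
A sketch of what that guarantee implies in practice (assuming, as on typical implementations, that int has no padding bits):

#include <cassert>
#include <cstring>

int main()
{
    // 0x1234 is non-negative, so it is a value common to int and unsigned int.
    int      s = 0x1234;
    unsigned u = 0x1234;

    // Corresponding signed and unsigned types must share the same value
    // representation for their common values, so (ignoring padding bits,
    // which typical implementations don't have) the bytes must match.
    assert(std::memcmp(&s, &u, sizeof s) == 0);
}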


> If I have to send a long over the network in a different way from how I send an int over the network

I'd say, send both as plain text. Send everything as plain text.
If data is large, and network bandwidth is at a premium, as compressed plain text.
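
For instance, a minimal round trip through text might look like this (no framing or delimiters here, just to keep the sketch small):

#include <cassert>
#include <sstream>
#include <string>

int main()
{
    long value = 123456789L;

    // Serialize: the textual form carries no trace of the host's byte
    // order, or even of sizeof(long).
    std::ostringstream out;
    out << value;
    std::string wire = out.str();   // this is what would go over the network

    // Deserialize on the receiving side, whatever its endianness.
    std::istringstream in(wire);
    long received = 0;
    in >> received;

    assert(received == value);
}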
JLBorges wrote:
I'd say, send both as plain text. Send everything as plain text.
If data is large, and network bandwidth is at a premium, as compressed plain text.
That's not an ideology I've heard before; in fact, I've been taught exactly the opposite. Is this just to avoid the endianness issue, or are there other reasons too?

@cire: good point, it's all too easy to want to write whole bytes at a time. Wait...
I'd say if endianness matters to a program at all, it has a bug - even binary serialization should deal with values, not snapshots of your process memory.
(but I, too, prefer plain text communication over binary almost always)
I guess it's a personal opinion then? HTTP horrifies me.
> HTTP horrifies me.

HTTP is not plain text; it is marked up hypertext, suitable for visual rendering by a user agent.

As a program-to-program communication format, it has two advantages over plain text. Web servers (Apache, IIS) can do all the heavy lifting for us - handle network transport in a scalable way, provide network security, authentication, robust error recovery and logging support. And HTTP messages can tunnel through standard firewalls.


> I guess it's a personal opinion then?

I guess you could call it that. The reasons for favouring plain text - modularity, loose coupling and encapsulation, transparency, extensibility - are personal opinions that are quite widely held.
http://www.faqs.org/docs/artu/ch05s01.html