Where is the big endian?

So it looks like the majority of systems use little-endian byte order these days (x86_64, Apple M1, ARM, RISC-V).

Does anyone have a story about where they have actually dealt with big endian byte order? What type of situations is it most likely to pop up in real life?
Motorola (68000/68020 etc) used to use big endian byte order. I loved those processors. Ah, the good old days...
https://en.wikipedia.org/wiki/Motorola_68000_series

I think big endian was used on old Mac computers.

I haven't actually dealt with big endian computers myself but I always do my best to write my code so that it should work regardless. When writing binary files I usually choose little endian and make sure to swap the byte order when writing/reading on big endian computers to not end up with two incompatible formats. Then I cross my fingers and hope it's correct because I'm unable to test it ;)
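
Something like this is what I have in mind, as a minimal sketch (the helper names write_u32_le/read_u32_le are made up): assembling bytes with shifts works on values rather than on memory, so the same code is correct on both big- and little-endian hosts with no explicit swap at all.

#include <cstdint>
#include <cstdio>

// Write a 32-bit value as little-endian bytes, whatever the host order.
// Shifts operate on the value, not its memory layout, so this is portable.
void write_u32_le(std::FILE* f, std::uint32_t v)
{
    unsigned char b[4] = {
        static_cast<unsigned char>(v & 0xFF),
        static_cast<unsigned char>((v >> 8) & 0xFF),
        static_cast<unsigned char>((v >> 16) & 0xFF),
        static_cast<unsigned char>((v >> 24) & 0xFF)
    };
    std::fwrite(b, 1, 4, f);
}

// Read it back the same way.
std::uint32_t read_u32_le(std::FILE* f)
{
    unsigned char b[4] = {};
    std::fread(b, 1, 4, f);
    return  static_cast<std::uint32_t>(b[0])
          | (static_cast<std::uint32_t>(b[1]) << 8)
          | (static_cast<std::uint32_t>(b[2]) << 16)
          | (static_cast<std::uint32_t>(b[3]) << 24);
}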
A lot of older Unix machines use it, and some hardware does too.

My first job, about a week into it... a co-worker hands me 20 CDs of binary files in big endian and a program my predecessor wrote to extract the data. It was taking half a day per disk with his program; with mine I had it all extracted almost as fast as the disks could be read, and got off to a good start at that job :)

Mostly these days I see it from hardware. I recently dealt with a sensor system that uses it, and a second one that used either order depending on which computer was talking to it when the file was captured. I had to figure out which was which, because the file didn't even record which order it used; they assumed the file would only ever be read back on the system that captured it!

3-D heightmaps use both; you never know which it will be with those.
A lot of older supercomputers were big-endian (CDC, Cray).
I cut my teeth working on them back in the day.
I remember the first time I saw little-endian (a DEC machine), I thought to myself why would anybody do that?
Today, many of these machines are bi-endian, i.e. they will operate in either mode.
Some operate with an object-code flag indicating which endianness the object file uses; others have a firmware switch allowing the entire machine to operate in one endianness or the other.
ARM can actually work in either big- or little-endian mode, IIRC, although I think almost everyone runs it little-endian, and conventionally network byte order is big-endian.
What type of situations is it most likely to pop up in real life?
The desktop computer (little-endian) sends a uint16_t to a microcontroller (big-endian). The code in the MCU has to swap the two bytes before using the value.
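
In code, the swap for a 16-bit value is just this (a sketch; swap16 is a made-up name):

#include <cstdint>

// Exchange the two bytes of a 16-bit value received from a machine
// with the opposite byte order, e.g. swap16(0x1234) == 0x3412.
std::uint16_t swap16(std::uint16_t v)
{
    return static_cast<std::uint16_t>((v << 8) | (v >> 8));
}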

Ah, the good old days...
New MCUs derived from the Motorola 68000 family are still in use today!

Also, some older game formats use BE, for whatever reason.
https://fallout.fandom.com/wiki/FRM_file
I thought to myself why would anybody do that?


I don't think it matters anymore, but I can't help thinking that, at one time, some sort of efficiency in the circuitry, real or imagined, led to it.

I know that if I did my own big int class now, today, I would probably use 32-bit int 'digits'** and go from container[0] as the low-order digit to container[n] as the high-order digit. But I don't see how that matters in circuitry; it's just easier in C++ with our containers and memory layouts to go reversed and append a digit to the high end if the number gets bigger, or chop it off if it gets smaller.

**32-bit digits let you multiply them together cleanly into a 64-bit result without any hoops, keeping it simple
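
To illustrate (just a sketch; mul_digit is an invented helper): with container[0] as the low-order digit, a final carry is nothing more than a push_back at the high end.

#include <cstdint>
#include <vector>

// Multiply a little-endian vector of 32-bit digits by a single digit.
// Widening each product to 64 bits keeps the carry without any hoops.
std::vector<std::uint32_t> mul_digit(const std::vector<std::uint32_t>& a,
                                     std::uint32_t d)
{
    std::vector<std::uint32_t> result;
    std::uint64_t carry = 0;
    for (std::uint32_t digit : a) {
        std::uint64_t p = static_cast<std::uint64_t>(digit) * d + carry;
        result.push_back(static_cast<std::uint32_t>(p)); // low 32 bits
        carry = p >> 32;                                 // high 32 bits
    }
    if (carry)
        result.push_back(static_cast<std::uint32_t>(carry)); // number grew: append at the high end
    return result;
}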
But I don't see how that matters in circuitry
It doesn't. Unless it has a cache, you can design the exact same processor with the same component layout and cross the wires right at the external interface of the memory controller to invert the endianness, whatever it was initially. The ALU works on bits; it doesn't care what order the data is stored in memory.

its just easier in c++ with our containers and memory layouts to go reversed & append a digit to the high end if it got bigger or chop it off if it got smaller.
To me little endian has always made the most sense mathematically, since it means that number = memory[0]*base^0 + memory[1]*base^1 + ... + memory[n - 1]*base^(n - 1). 1-based indexing is evil for this same reason.
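
That identity falls straight out in code, too (le_value is just an illustrative name): the loop index i is also the exponent of the base.

#include <cstddef>
#include <cstdint>

// Value of an n-byte little-endian number: the sum of memory[i] * 256^i.
std::uint64_t le_value(const unsigned char* memory, std::size_t n)
{
    std::uint64_t value = 0;
    for (std::size_t i = 0; i < n; ++i)
        value |= static_cast<std::uint64_t>(memory[i]) << (8 * i);
    return value;
}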
I think big endian was used on old mac computers.


They used Motorola processors. Macs used Motorola 68k chips from 1984 until the transition to PowerPC began in 1994. They were dropped by Apple as Motorola couldn't deliver the required performance at the time.

P.S. The Apple Lisa also used a Motorola 68000, from 1983.
> big endian byte order? What type of situations is it most likely to pop up in real life?

Network byte order (the order in which octets are transmitted over the wire) in the internet protocol suite is big-endian.
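
Which is why the sockets API provides htons/ntohs (and htonl/ntohl) to convert between host and network order; on a big-endian host they are no-ops. A quick sketch:

#include <cstdint>
#include <arpa/inet.h> // POSIX; Windows has the same functions in <winsock2.h>

int main()
{
    std::uint16_t port = 8080;
    std::uint16_t wire = htons(port); // host -> network (big-endian) order
    std::uint16_t back = ntohs(wire); // network -> host order
    return back == port ? 0 : 1;      // round-trips on any host
}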