Jul 2, 2022 at 12:58pm UTC
I'm trying to encode data into a file in binary mode.
It should be 4 bytes with an integer value.
Say the value is 5; when opened in a hex editor, the file should read:
0500 0000
= 5 (0x0005)
And for 7592:
a81d 0000
= 7592 (0x1da8)
Last edited on Jul 2, 2022 at 1:04pm UTC
Jul 2, 2022 at 8:03pm UTC
Thanks guys for the solutions.
@seeplus
I'll take note of this, it worked ok.
@Duthomhas
I'm getting a "no such file or directory" error for #include <bit>
Last edited on Jul 2, 2022 at 8:03pm UTC
Jul 2, 2022 at 8:38pm UTC
What is your compiler, ruzip?
The <bit> header was introduced in C++20.
https://en.cppreference.com/w/cpp/header/bit
Your compiler is likely not 100% C++20 compliant; currently the only fully C++20-compliant implementation is Visual Studio (2019 & 2022).
Weirder and weirder. GCC and Clang are supposed to have std::endian available; it's the first item listed in the C++20 library features.
https://en.cppreference.com/w/cpp/compiler_support/20
Maybe they have it in the <numeric> header instead?
It is just a simple enum with VS.
enum class endian { little = 0, big = 1, native = little };
[ETA:] Try changing the include in the code Duthomhas gave from <bit> to <type_traits>.
https://stackoverflow.com/a/38141476
Last edited on Jul 2, 2022 at 9:09pm UTC
Jul 2, 2022 at 10:19pm UTC
#include <fstream>
#include <iostream>

enum class endian { little, big };

// Runtime check of the machine's byte order: look at the first byte of a
// 32-bit 1. (The original one-liner tried to do this inside the enum as
// `native = (*(char *)&1U)^1`, but an enumerator must be a constant
// expression and you can't take the address of a literal, so that doesn't
// compile; a small function does the job instead.)
inline endian native_endian()
{
    unsigned x = 1;
    return *reinterpret_cast<char*>(&x) ? endian::little : endian::big;
}

std::ostream & write_word( std::ostream & outs, long long value, int nbytes, endian endianness )
{
    if (endianness == endian::little)
    {
        while (nbytes --> 0)
        {
            outs.put( static_cast<char>(value & 0xFF) );  // low byte first
            value >>= 8;
        }
    }
    else
    {
        while (nbytes --> 0)
        {
            outs.put( static_cast<char>((value >> (nbytes * 8)) & 0xFF) );  // high byte first
        }
    }
    return outs;
}
I think I got the endianness trick right...
[edit]
Detecting the native byte order is really only useful for telling you what endianness the machine the code runs on is; write_word itself never needs it.
Last edited on Jul 2, 2022 at 10:21pm UTC
Jul 3, 2022 at 1:48am UTC
It's not usually necessary to test your machine's byte order, because it's usually possible to write code in a way that lets the compiler take care of the differences.
First, decide the byte order of your data stream ahead of time. Then use the insert_XX_YY functions from this forum thread:
https://cplusplus.com/forum/lounge/279954/2/#msg1211823
Here is one of the functions from that post:
// u8 and u32 are aliases for std::uint8_t and std::uint32_t in the linked post
inline constexpr void insert_u32_le(u8* p, u32 n) noexcept
{
    p[0] = (n & 0x000000ff) >> 0;
    p[1] = (n & 0x0000ff00) >> 8;
    p[2] = (n & 0x00ff0000) >> 16;
    p[3] = (n & 0xff000000) >> 24;
}
It always puts the four bytes of n into the buffer pointed to by p in little-endian byte order. There's also extract_u32_le, which takes n back out again.
The interfaces of these functions are designed for cases where formatting and writing (or reading and parsing) are separate operations. But if you don't care about that, you can just modify the code:
std::ostream& insert_u32_le(std::ostream& s, u32 n)
{
    // put() takes a char, so cast each extracted byte explicitly
    s.put( static_cast<char>((n & 0x000000ff) >> 0) );
    s.put( static_cast<char>((n & 0x0000ff00) >> 8) );
    s.put( static_cast<char>((n & 0x00ff0000) >> 16) );
    s.put( static_cast<char>((n & 0xff000000) >> 24) );
    return s;
}
Last edited on Jul 3, 2022 at 5:44am UTC