Actually, to me it looks like you're reading it as a signed 16-bit value to get -21436. |
Do you have an explanation for why that would be happening? I am not using any 16-bit stuff in my example.
Why are you reading in bytes instead of just reading the number? |
That's how one of the tutorials on the site (Disch's, I think?) showed how to read in 32-bit little-endian numbers, and the WAV format happens to be little endian. If the file were in big-endian format, I don't think the reinterpret_cast would work.
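For reference, the byte-by-byte approach usually looks something like this: assemble the value with shifts instead of reinterpret_cast, so it gives the same answer on any host, big or little endian. A minimal sketch; the helper name read_u32_le is mine, not from the tutorial.

```cpp
#include <cstdint>
#include <istream>
#include <sstream>   // for the usage example below
#include <cassert>

// Assembles a 32-bit little-endian value byte by byte. Unlike
// reinterpret_cast on the raw buffer, this works regardless of
// the host machine's own byte order.
uint32_t read_u32_le(std::istream& file)
{
    unsigned char b[4];
    file.read(reinterpret_cast<char*>(b), 4);
    return  static_cast<uint32_t>(b[0])
          | static_cast<uint32_t>(b[1]) << 8
          | static_cast<uint32_t>(b[2]) << 16
          | static_cast<uint32_t>(b[3]) << 24;
}
```

For example, feeding it the bytes 0x44 0xAC 0x00 0x00 (the {68, 172, 0, 0} array below) should produce 44100.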
If I set up the byte array as uint8_t bytes[] = {68, 172, 0, 0}, they both give the correct value, signed or unsigned.
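Incidentally, that byte array also seems to explain the -21436 from earlier in the thread: the first two bytes are 0x44 0xAC, i.e. 0xAC44 = 44100, and since the top bit of a 16-bit value is set, the signed interpretation of the same bits is 44100 - 65536 = -21436. A small demo, assuming a little-endian host (e.g. x86); the helper names are mine:

```cpp
#include <cstdint>
#include <cstring>

// Copy the first two bytes into a 16-bit variable and return it.
// The memcpy assumes a little-endian host, like x86.
uint16_t first_two_as_u16(const uint8_t* bytes)
{
    uint16_t u;
    std::memcpy(&u, bytes, 2);
    return u;   // 0xAC44 == 44100 for bytes {68, 172}
}

int16_t first_two_as_s16(const uint8_t* bytes)
{
    int16_t s;
    std::memcpy(&s, bytes, 2);
    return s;   // same bits, signed interpretation: -21436
}
```

So reading only two of the four bytes, into a signed type, would produce exactly the -21436 mentioned above.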
I had a similar problem when reading 16-bit amplitude values from a WAV file that were supposed to be signed. When I changed the type from int16_t to uint16_t it started working correctly, but now everything is unsigned, so I'm relying on the way unsigned arithmetic wraps around on overflow. The file parsing and reading/writing is confusing me; I still don't understand what exactly the problem in my post is :/
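One way to keep the samples signed without depending on unsigned wrap-around: assemble the two bytes into a uint16_t (so the shifts stay well defined), then reinterpret the bit pattern as int16_t. A sketch under that approach; the helper name read_s16_le is mine:

```cpp
#include <cstdint>
#include <istream>
#include <sstream>   // for the usage example below
#include <cassert>

// Read two little-endian bytes as an unsigned 16-bit value, then
// reinterpret the bits as signed, so negative amplitudes come out negative.
int16_t read_s16_le(std::istream& file)
{
    unsigned char b[2];
    file.read(reinterpret_cast<char*>(b), 2);
    uint16_t u = static_cast<uint16_t>(b[0] | (b[1] << 8));
    // Out-of-range static_cast to a signed type is implementation-defined
    // before C++20, but wraps (two's complement) on mainstream compilers.
    return static_cast<int16_t>(u);
}
```

With the bytes 0x44 0xAC this returns -21436, which is the correct signed reading of that sample.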
Btw, is this the right way to write a 4-byte integer, whether signed or unsigned?
void write_s32(std::ostream& file, int32_t data)
{
    // Writes the bytes of 'data' in the host's native byte order,
    // which is little endian (as WAV expects) on x86.
    file.write(reinterpret_cast<const char*>(&data), sizeof data);
}
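That write works on any little-endian machine. If you want the output to be little endian no matter what the host's byte order is, you can mirror the byte-by-byte read on the write side too. A sketch; the name write_s32_le is mine:

```cpp
#include <cstdint>
#include <ostream>
#include <sstream>   // for the usage example below
#include <cassert>

// Writes the 32-bit value low byte first, so the file is little
// endian regardless of the host's native byte order.
void write_s32_le(std::ostream& file, int32_t data)
{
    uint32_t u = static_cast<uint32_t>(data);  // well-defined bit pattern
    char b[4] = {
        static_cast<char>( u        & 0xFF),
        static_cast<char>((u >> 8)  & 0xFF),
        static_cast<char>((u >> 16) & 0xFF),
        static_cast<char>((u >> 24) & 0xFF),
    };
    file.write(b, 4);
}
```

Writing 44100 through it should produce the bytes 0x44 0xAC 0x00 0x00, matching the {68, 172, 0, 0} array above.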