Which kind of seems like a mixture of big- and little-endian formats, with the alpha values ignored?
Can anyone tell me why this is? Do I really have to use this strange format in my macros when compiling for Windows? Will this always work? (Pixels are plotted directly with memcpy/memcmp.)
This is typical. Windows usually stores pixels as ARGB rather than RGBA. And output video is typically 24-bit RGB (padded to 32 bits) rather than 32-bit ARGB anyway... which is why there's no alpha.
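To see why this reads like a "mixture" of endiannesses: a packed ARGB pixel is a single 32-bit integer, and on a little-endian machine its bytes land in memory in reverse channel order, so a byte-by-byte memcpy sees B, G, R, A. A small self-contained illustration in plain C (nothing SDL-specific, values chosen arbitrarily):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint32_t argb = 0x00FF8040; /* A=0x00, R=0xFF, G=0x80, B=0x40 */
    const uint8_t *bytes = (const uint8_t *)&argb;

    /* On a little-endian machine (e.g. x86) this prints
     * "40 80 FF 00": the bytes appear in B,G,R,A order in memory
     * even though the value is logically ARGB. */
    printf("%02X %02X %02X %02X\n",
           bytes[0], bytes[1], bytes[2], bytes[3]);
    return 0;
}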
Will this always work?
Probably not. That's the whole reason SDL probes the video mode and exposes this information programmatically. Get the masks and shifts from the current video mode SDL gives you and work with those, rather than assuming the video is in a particular format. That way you know it will always work.
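For example, here is a minimal SDL2 sketch (the question doesn't say which SDL version is in use, and the window setup here is purely illustrative). SDL_MapRGB packs a color using whatever masks the surface actually reports, so you never have to hard-code ARGB vs. RGBA; the raw masks and shifts are also available on SDL_PixelFormat if you pack pixels yourself:

#include <SDL.h>

int main(int argc, char *argv[])
{
    SDL_Init(SDL_INIT_VIDEO);
    SDL_Window *win = SDL_CreateWindow("demo", SDL_WINDOWPOS_CENTERED,
                                       SDL_WINDOWPOS_CENTERED, 640, 480, 0);
    SDL_Surface *surf = SDL_GetWindowSurface(win);

    /* Let SDL pack the channels according to the surface's actual
     * format (ARGB, RGBA, padded 24-bit -- whatever the platform uses). */
    Uint32 red = SDL_MapRGB(surf->format, 0xFF, 0x00, 0x00);

    /* Inspect the masks if you need to pack/unpack pixels by hand. */
    SDL_Log("Rmask=%08x Gmask=%08x Bmask=%08x Amask=%08x",
            (unsigned)surf->format->Rmask, (unsigned)surf->format->Gmask,
            (unsigned)surf->format->Bmask, (unsigned)surf->format->Amask);

    SDL_FillRect(surf, NULL, red);
    SDL_UpdateWindowSurface(win);
    SDL_Delay(2000);

    SDL_DestroyWindow(win);
    SDL_Quit();
    return 0;
}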
(But why are you messing with individual pixels outside of a shader anyway? This is 2015, not 1998.)