Say you had a program which wrote a binary file, and in that file it wrote an integer, a null terminated string, then a double.
To read that file back, you would have to read the first sizeof(int) bytes into an int, then parse single bytes as characters until you hit a 0 (the null terminator), and then read sizeof(double) bytes into a double variable. That's what I mean when I say you have to reverse engineer the program that wrote the file. Writing programs that read/write binary data can be problematic, because data sizes for the various types vary across systems and implementations: the actual size of an int can differ widely depending on which system and compiler you use, and numeric types may be big-endian on one machine and little-endian on another.
I'm aware that most students will have to do binary file I/O for exercises and assignments, but in most real systems, if the data can reasonably be represented as characters, you're probably much better off using character data for file I/O.
Nobody can really tell you how to read that .dat file, as .dat is not a standardized format. If you look at standardized formats for anything, you'll usually find they're mind-numbingly pedantic about what all the bits & bytes mean. Take a look at this spec for .png files.
http://www.w3.org/TR/PNG/ You might get 4 pages in before all the bit diagrams and tables make you really consider coughing up hard cash for a decent library rather than writing all that bullshit yourself :)