I have been reading a bit about pointer casting on Google, but I mostly find articles about the syntax and how to safely cast classes and objects. I can't seem to find what I need, so I am posting here.
I have some sample code from someone, shown below. I have skipped the testing and validation parts, but if more info is needed, let me know.
WORD BinRec[22];          /* WORD is an unsigned 16-bit integer */
FILE *f;
double fieldval;
...
f = fopen(filename, "rb");
...
fread(&BinRec, sizeof(WORD), 4, f);   /* reads 4 WORDs (8 bytes) into BinRec */
...
fieldval = (double)BinRec[i] * 30 / 60000;
...
I am trying to understand the last statement. In the fread call, &BinRec gives the address where the 16-bit values are stored. When one of those values is converted to a double (64-bit?), what does the cast actually do? Does it simply take the unsigned int and pad it with zeros, or something else?
The actual aim of all this is to replicate the parsing code in VBA. However, VBA only has signed 16-bit (Integer) and 32-bit (Long) types. So I am trying to read the unsigned 16-bit integer (C++) as a signed Integer (VBA) and convert it into a signed Long (VBA) containing the actual value of the unsigned integer (C++). I am doing this using the formula:
value of unsigned 16-bit int = signed 16-bit int + 32768
However, since I am not getting the "right" answers from my parsing code in VBA, I suspect that I have either
- misunderstood the last line of the C++ code shown above, or
- used an inaccurate conversion formula between unsigned and signed ints.
I am not really familiar with programming, let alone binary representations and related concepts. So this is probably simple stuff, or I may have some conceptual mistake in my overall understanding; it would be great if you could point it out to me. Thanks!!