Question about pointer casting (Conceptual)

I have been reading a bit about pointer casting on Google, but I mostly find articles about the syntax and how to safely cast classes and objects. I can't seem to find what I need, so I'm posting here.

I have some sample code from someone, shown below. I have skipped the testing and validation parts, but if more info is needed, let me know.

WORD BinRec[22];
FILE *f;
...
f = fopen(filename, "rb");
...
fread(&BinRec, sizeof(WORD), 4, f);
...
fieldval = (double)BinRec[i] * 30 / 60000;
...


I am trying to understand the last statement... In the fread call, &BinRec, a pointer to the array of 16-bit WORDs, tells fread where to store the data. When BinRec[i] is converted to a double (64-bit?), what actually happens? Does it simply take the unsigned int and pad it with a couple of zeros at the end, or something else?

The actual aim of trying this out is to replicate the parsing code in VBA. However, VBA only has signed 16-bit and 32-bit integer types. Thus, I am trying to read the unsigned 16-bit integer (C++) as a signed Integer (VBA) and convert it into a signed Long (VBA) containing the actual value of the unsigned integer (C++). I am doing this using the formula:

value of unsigned 16-bit int = signed 16-bit int + 32768

However, since I am not getting the "right" answers from my parsing code in VBA, I suspect that I have either
- misunderstood the last line of the C++ shown above, or
- used an inaccurate conversion formula between unsigned and signed int.

I am not really familiar with programming, let alone binary concepts, so this is probably simple stuff, or I may have a conceptual mistake in my overall understanding; it would be great if you could point it out to me. Thanks!!

However, since I am not getting the "right" answers from my parsing code in VBA, I suspect that I have either
[snip]
- used an inaccurate conversion formula between unsigned and signed int.


That's your problem.

In two's complement (i.e., what virtually everything uses), there is no difference between unsigned and signed numbers that fit in n-1 bits. That is, with a 16-bit type, values that fit in 15 bits are represented the same way for both signed and unsigned. For signed numbers... the high bit has a "negative weight" and therefore determines the sign of the number... but for unsigned numbers, the high bit carries no sign.

Here's an example with 8-bit numbers:

bin        signed (dec)   unsigned (dec)
00000000         0             0
00000001         1             1
00000010         2             2
   ...
01111111       127           127
10000000      -128           128
10000001      -127           129
10000010      -126           130
   ...
11111111        -1           255
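
You can confirm those rows yourself by reinterpreting the same byte through a signed and an unsigned type. A minimal C++ sketch (mine, not from the original post; the exact signed result assumes a two's-complement machine, which is virtually everything today):

#include <cstdint>
#include <cstdio>

int main()
{
    uint8_t raw = 0x81;                // bit pattern 10000001
    int8_t  asSigned = (int8_t)raw;    // same bits, read as signed

    printf("unsigned: %u\n", (unsigned)raw);   // prints 129
    printf("signed  : %d\n", (int)asSigned);   // prints -127
    return 0;
}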


Bits have a weight of 2^n (where n is the bit number), so bit 0 has a weight of 2^0 (1), bit 1 has a weight of 2^1 (2), etc. The total represented number is the sum of all weights whose bit is set:

bits:
76543210
01001011  <--- 75 (dec)


In this example, bits 6, 3, 1, and 0 are set, therefore the represented number is:

2^6 + 2^3 + 2^1 + 2^0 =
64 + 8 + 2 + 1 = 75


With signed numbers, you follow the exact same pattern, only the high bit is negative (i.e., with 8 bits, bit 7 has a weight of -128, not +128):

76543210
10001000  <---  -120 (signed)
                 136 (unsigned)


signed: -128 + 8 = -120
unsigned: 128 + 8 = 136
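
If it helps, here is a small C++ sketch (my own illustration, not part of the original reply) that rebuilds a byte's value from its bit weights, once with bit 7 weighted -128 and once with it weighted +128:

#include <cstdint>
#include <cstdio>

// Signed reading: bits 0..6 have weight +2^n, bit 7 has weight -128.
int signedFromBits(uint8_t b)
{
    int value = 0;
    for (int n = 0; n < 7; ++n)
        if (b & (1u << n)) value += (1 << n);
    if (b & 0x80) value -= 128;
    return value;
}

// Unsigned reading: every bit, including bit 7, has weight +2^n.
int unsignedFromBits(uint8_t b)
{
    int value = 0;
    for (int n = 0; n < 8; ++n)
        if (b & (1u << n)) value += (1 << n);
    return value;
}

int main()
{
    uint8_t b = 0x88;  // 10001000
    printf("signed  : %d\n", signedFromBits(b));    // -128 + 8 = -120
    printf("unsigned: %d\n", unsignedFromBits(b));  //  128 + 8 =  136
    return 0;
}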


---------------------
EDIT:

For 16-bit numbers, the pattern is the same, only bit 15 carries the sign rather than bit 7... so numbers 0-32767 are represented the same way for both signed and unsigned.
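
Putting that into code for the original VBA problem (shown here in C++, as a sketch of my own that follows from the explanation above, not something posted in the thread): the fix is to add 65536 only when the signed reading is negative, rather than adding 32768 unconditionally:

#include <cstdint>
#include <cstdio>

// Recover the original unsigned 16-bit value when the same bit pattern
// was read through a signed 16-bit type. Values 0..32767 are already
// correct; only negative readings need adjusting, by +65536 (2^16).
long unsignedFromSigned16(int16_t s)
{
    long v = s;
    if (v < 0)
        v += 65536L;   // turns bit 15's -32768 weight back into +32768
    return v;
}

int main()
{
    printf("%ld\n", unsignedFromSigned16(1234));    // 1234  (unchanged)
    printf("%ld\n", unsignedFromSigned16(-1));      // 65535
    printf("%ld\n", unsignedFromSigned16(-32768));  // 32768
    return 0;
}

The same arithmetic should carry over to VBA by reading each record into an Integer and accumulating into a Long, since a Long can hold the full 0 to 65535 range.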