signed and unsigned numbers represented as bits

Hi, I'm reading B. Stroustrup's book Programming: Principles and Practice Using C++ and I'm on page 961. I'll quote some text from the book:
If the sign bit is 1, the number is negative. Almost universally, the two's complement representation is used. To save paper, we consider how we would represent signed numbers in a 4-bit integer:

Positive:
0000 ( 0) 0001 ( 1) 0010 ( 2) 0100 ( 4) 0111 ( 7)

Negative:
1111 (-1) 1110 (-2) 1101 (-3) 1011 (-5) 1000 (-8)

The bit pattern for -(x+1) can be described as the complement of the bits in x (also known as ~x)


I understand what ~x means (we change all the 0's to 1's and the 1's to 0's), but what I don't understand in this table is the decimal values.
I understand that the leftmost bit is used to represent the sign in a signed integer; for example, 0001 is 1.
But if we set the leftmost bit to 1 to try to represent -1, how come we don't get 1001 as the representation of -1 but instead 1111 (which I would read as -(2^0 + 2^1 + 2^2) = -7)?

Could anyone explain to me how we get, for example, -1 from 1111 or -2 from 1110 in a signed 4-bit integer?
This is the definition of two's complement:
-x = ~x + 1

-0001b = ~0001b + 1 = 1110b + 1 = 1111b
Backward:

-1111b = ~1111b + 1 = 0000b + 1 = 0001b
So -1111b is 1; therefore 1111b is -1.
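
If you want to check that identity on more than one value, here is a minimal C++ sketch (my own, not from the book); the & 0xF mask just keeps the result to 4 bits:

    #include <bitset>
    #include <iostream>

    int main()
    {
        // print x and (~x + 1) side by side as 4-bit patterns
        for (unsigned x = 1; x <= 7; ++x) {
            unsigned neg = (~x + 1) & 0xF;   // -x = ~x + 1, truncated to 4 bits
            std::cout << std::bitset<4>(x)   << " ( " << x << ")    "
                      << std::bitset<4>(neg) << " (-" << x << ")\n";
        }
    }

The output reproduces the book's table, e.g. 0001 ( 1)    1111 (-1).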
Thanks. What you wrote makes sense, but what if I wanted to calculate the decimal value of a given signed binary number, the way 0001 = 2^0 = 1?
How can I do that with 1111 to get -1? If the leftmost bit represents a minus sign, as I said before, I only get -7 from that calculation. That makes me think the equation -x = ~x + 1 can't be right. The problem is that it probably is right.

One more thing: if the leftmost bit just means the sign of the integer, then I can't figure out, without using this equation, how it can be that 1001 is not -1 but 1111 is.
Two's complement has a wraparound property:

-1 + 1 = 0 → 1111b + 1 = 0000b (the carry out of the highest bit is lost)

To calculate -x you can take 0, write a 1 in front of its highest bit (giving 10000b), and subtract x from that:

-1 = 0000b - 0001b = 10000b - 0001b = 1111b
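
Unsigned arithmetic in C++ wraps around the same way, so you can check the 10000b - x trick against ~x + 1 with a small sketch like this (just one way to do it; the masks keep everything to 4 bits):

    #include <bitset>
    #include <iostream>

    int main()
    {
        unsigned x = 1;                      // the pattern 0001b
        unsigned neg1 = (16 - x) & 0xF;      // 10000b - 0001b, kept to 4 bits
        unsigned neg2 = (~x + 1) & 0xF;      // the same thing via -x = ~x + 1
        std::cout << std::bitset<4>(neg1) << ' '
                  << std::bitset<4>(neg2) << '\n';   // both print 1111
    }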
So there is basically no elegant way to calculate the decimal value of a negative binary number, like there is for a positive binary number?
If, for example, I give you the 8-bit signed negative number 10010111, can you quickly calculate its decimal value on paper the way you easily could with an unsigned number (2^7 + 2^4 + 2^2 + 2^1 + 2^0)? You can't do that anymore if the number is negative, right?
Convert it to a positive number, calculate its value, and slap a - in front of it.
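
For your 10010111 example that recipe looks like this (a minimal sketch; 0x97 is just 10010111 written in hex, and the & 0xFF mask keeps the negation to 8 bits):

    #include <iostream>

    int main()
    {
        unsigned bits = 0x97;                      // 10010111b, read as unsigned: 151
        unsigned magnitude = (~bits + 1) & 0xFF;   // convert to the positive number: 01101001b = 105
        int value = -static_cast<int>(magnitude);  // slap a - in front of it
        std::cout << value << '\n';                // prints -105
    }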
Thanks, I think I finally understand it :) I suppose I have to use -x = ~x + 1. Looks like it works :)