Two's Complement

here is my situation.
I am pretty sure two's complement is used to store signed values in computers, and specifically in C++,
and that ranges for C++ data types are like -(n+1) to n.
for example a signed char would be -128 to 127.

The professors (University of Waterloo) teaching my CS C++ course disagree with me, and think that C++ uses a sign bit for signed numbers, with ranges like -n to n.
for example a signed char is -127 to 127

Am I missing something or am I actually right, and the professors are wrong?
C++ has nothing to do with it. It's all determined by the underlying architecture.

On x86 systems (i.e., modern PCs), it's 2's complement for integer types, so you're correct that signed char is -128 to 127.

For floating point types it's completely different.

This can be easily tested on any machine:

// on x86 systems:
std::cout << int( char(0x7F) );  // will output "127"
std::cout << int( char(0x80) );  // will output "-128" 


I don't know why your professor would disagree unless you're misunderstanding him. Maybe he's thinking about floating-point types and not integral types. Or maybe he just doesn't know what he's talking about (he wouldn't be the first professor without a clue).
You're both wrong. The type of signed arithmetic used is determined by the CPU, not the language.

EDIT: Damn.
Yes, I understand it is a low-level processor thing, but in general no computers use sign bits, right?
and in general when talking about a 1-byte signed integer you have -128 to 127, right?
and while maybe some really old computers used sign bits and 1-byte signed integers that are -127 to 127, you would never encounter any like that under normal circumstances?

and I am sure he was not talking about floats; I am starting to think he knows very little about computers.

you mention x86, I assume it is the same with amd64 (if that is the right way of referring to 64-bit processors). That is basically just the word size changing that a programmer would have to worry about, right?
but in general no computers use sign bits, right?


For floating-point types: virtually all of them do (at least those that support floating-point types).
For integer types: some do, but not ones that are commonly used.

really old computers used sign bits and 1-byte signed integers that are -127 to 127


It's not about old vs. new. Even many old processors use 2's complement.

you would never encounter any like that under normal circumstances?


It depends what architecture you're targeting. Such architectures do exist; they're just not widely used.
"In general", and "normal circumstances" are both meaningless. Either you know the underlying CPU or you don't. If you do, you can make assumptions about the type of integer arithmetic it uses. If you don't, then you can't.
Not even the meaning of "byte" can be known without knowing the CPU, although the language does have methods for finding out.
Thanks for the help everyone, I understand everything a lot more now.

"Not even the meaning of "byte" can be known without knowing the CPU, although the language does have methods for finding out."

I have never heard that before. I have taken a lot of courses that mentioned and taught low-level computer stuff and bytes, and I have never heard a definition other than "a byte is 8 bits" (it is surprising how much they leave out).
Well, the difference between byte and octet doesn't really come up a lot other than in communications. I don't know when the last CPU with a non-octet byte was built, but I'm guessing it's been a long time.
The C++ standard does not define how signed types are represented at the bit level. The compiler is free to decide how it will represent signed types. We are guaranteed that an 8-bit signed type will hold at least the values from -127 through 127; some implementations additionally allow -128 through 127.

So both are right in my point of view.

Regards
Compiler is free to decide how it will represent signed types.


While this is technically true, any compiler that doesn't use the machine's natural architecture for signed types is broken and shouldn't be used by anyone.

On x86 machines, you can be confident you'll have 2's complement, with the range -128 to 127.
Technically, an 8-bit signed type can hold values from -128 to 127; only -127 to 127 is guaranteed by the standard, but usually you will get -128 to 127.

Regards
Right, -128 is not guaranteed by C++ because C++ does not guarantee 2's complement.

On 2's complement architectures though, -128 pretty much is guaranteed, because it's what the underlying architecture uses.


Making assumptions about the underlying architecture can cripple portability, so in a sense you're right that it's "safer" to not assume that you can have -128.

On the other hand, pretty much every major architecture out there uses 2's complement, so that assumption is reasonably safe to make. If you're targeting some obscure machine that uses something else, you'll probably have bigger things to worry about than how signed integers are represented.
@Disch

Yeap, totally agree with you.

Thanx
Topic archived. No new replies allowed.