Int type questions

Well... based on the tutorial, other online sources, and what I already know, I'd like to ask some questions about reading integers (decimal numbers) as input and, to put it more precisely, about the ranges of the integer types.

From what I've read, a signed char holds values from -128 to 127. The question is: if we use a char to store an integer, does that mean the input has to stay within that -128 to 127 range? If the user, for instance, enters 130 when we read with cin, will that generate an error?

I'm a bit confused about what exactly the range of a signed type is, and about related details.

Also, would you mind explaining a little about wchar_t? I'll follow up with questions about parts of my textbook that I don't understand.
Actually, a char can really be anything, as long as it's at least 8 bits, and whether plain char is signed is up to the implementation (IIRC). Assuming an 8-bit signed char, then yes, -128 to 127 is the range. If you type in something too big, you get overflow... 130 would come out as -126 in this case... but you have to look at the binary to see why:

127: 01111111 (signed/unsigned)
-128: 10000000 (signed)
255: 11111111 (unsigned)
130: 10000010 (unsigned)
-126: 10000010 (signed)


This is because the most significant bit is used as the "sign bit."
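Here's a quick sketch of that in code (assuming an 8-bit, two's-complement signed char; strictly speaking, stuffing an out-of-range value into a signed type is implementation-defined, so this is just what you'll typically see):

#include <iostream>

int main()
{
    signed char c = 130;           // doesn't fit in -128..127
    unsigned char u = 130;         // fits fine in 0..255
    std::cout << (int)c << '\n';   // typically prints -126
    std::cout << (int)u << '\n';   // prints 130
}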

If you type in something like 65535 (unsigned, binary: 1111111111111111), I'm not sure exactly how it deals with that...

About wchar_t: http://en.wikipedia.org/wiki/Wide_character
A few more two's-complement examples:
-128: 10000000b
-1: 11111111b
-2: 11111110b
-126: 10000010b

Remember the rule for sign inversion: -n==~n+1
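For example, to negate 2 (assuming 8 bits): 2 is 00000010b, ~2 is 11111101b, and adding 1 gives 11111110b, which is -2.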

When you try to put something too big into something n bits wide, only the n least significant bits of the value are kept. It gets tricky, however, when you put an n-bit value into something wider: the value gets sign-extended, and that can lead to some pretty nasty bugs.
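A little sketch of both cases (assuming an 8-bit char and a 32-bit int, two's complement; the out-of-range conversion is technically implementation-defined, but this is the usual behaviour):

#include <iostream>

int main()
{
    int big = 65535;                  // ...11111111 11111111 in binary
    signed char small = big;          // only the low 8 bits survive: 11111111
    std::cout << (int)small << '\n';  // typically prints -1

    signed char c = -2;               // 11111110
    int wide = c;                     // sign-extended to ...11111111 11111110
    std::cout << wide << '\n';        // prints -2
}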
Well... thanks a lot for all that. I know that if you do something careless with a signed type, you can get nasty results, maybe something beyond your expectations, and unsigned doesn't really protect you either; you can still get surprising errors. So if we want to store large numbers from user input, which type is best? Should we use a signed long? But won't that take up a lot of memory?

Also, when you refer to bits being "significant", I don't really understand what you mean by that.
On a 32-bit Windows machine, a long is 4 bytes long, so 4 MiB (an insignificant amount, by today's standards) can hold 1,048,576 longs. With some exceptions, it's not worth wasting time thinking about which type to use. Just always use long or unsigned long and spend your energy on something more productive.
As for decimals, there are some choices. float gives you roughly six or seven significant decimal digits without using much memory, so it's fine for simple business-style calculations. Regular calculators (not scientific) use something with about the same precision as float. Then there's double, which is better for physics calculations. long double gives even more precision, but it's usually impractical.
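If you want to see what you actually get on your machine, you can ask the compiler; this is just a sketch, and the numbers in the comments are typical for a 32-bit platform:

#include <iostream>
#include <limits>

int main()
{
    std::cout << sizeof(long) << '\n';                               // often 4 (bytes)
    std::cout << std::numeric_limits<long>::max() << '\n';           // e.g. 2147483647
    std::cout << std::numeric_limits<unsigned long>::max() << '\n';  // e.g. 4294967295
    std::cout << std::numeric_limits<float>::digits10 << '\n';       // 6 decimal digits
    std::cout << std::numeric_limits<double>::digits10 << '\n';      // 15 decimal digits
}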

Consider the number 01101001b. 0110 is the four most significant bits, or high nibble, and 1001 is the four least significant bits, or low nibble.
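For instance, you can pull the two nibbles apart with a shift and a mask (just a sketch):

#include <iostream>

int main()
{
    unsigned char n = 0x69;                // 01101001b
    unsigned char high = (n >> 4) & 0x0F;  // 0110b = 6, the high nibble
    unsigned char low  = n & 0x0F;         // 1001b = 9, the low nibble
    std::cout << (int)high << ' ' << (int)low << '\n';  // prints 6 9
}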
Awesome, now I get it. After searching the wiki, I understand significant bits better, but COMPUTER TERMS are driving me nuts... I really don't like jargon.

Anyway, thanks so much!