Hey all. Uber beginner here. I'm curious what determines which data type one should use when writing a program. I understand that each data type (bool, int, short, long, etc.) has a different size in bytes, but how do you even know which one you need? I see that int is used very often (and I know that bool is just a true-or-false type). Any info or elaboration on this topic would be great! Thanks!
If you're dealing with most numbers, you might use "int" (which on many compilers has the same size as "long", though the standard doesn't guarantee that). If you're dealing with very large numbers, you might use "long long". If you're dealing with rather small numbers, a short might be best.
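To see what you actually get, you can print the sizes yourself. Just a quick sketch; the numbers it prints depend entirely on your compiler and platform:

#include <iostream>

int main()
{
    // The standard only guarantees minimum ranges and that
    // sizeof(short) <= sizeof(int) <= sizeof(long) <= sizeof(long long).
    std::cout << "short:     " << sizeof(short)     << " bytes\n";
    std::cout << "int:       " << sizeof(int)       << " bytes\n";
    std::cout << "long:      " << sizeof(long)      << " bytes\n";
    std::cout << "long long: " << sizeof(long long) << " bytes\n";
}

On a typical 64-bit Linux system that prints 2, 4, 8, 8; on 64-bit Windows it's 2, 4, 4, 8.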
If you're dealing with only POSITIVE numbers you might be better off declaring it as "unsigned", e.g. "unsigned int", or even just "unsigned", which is short for "unsigned int". "unsigned long" and "unsigned short" also work. These types can't hold negative values at all; if you subtract from them when they're 0, they just wrap around to the maximum value. This also has the effect of roughly doubling the positive range available, because the range that would otherwise represent negative numbers can now represent larger positive numbers. By default, integers are signed, not unsigned.
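Here's a small sketch of both the wrap-around and the doubled range (the exact numbers assume a typical 32-bit int; yours may differ):

#include <iostream>
#include <limits>

int main()
{
    unsigned int u = 0;
    --u; // unsigned arithmetic wraps modulo 2^N, so u is now the maximum value

    std::cout << u << '\n'; // e.g. 4294967295

    // Same-sized type, signed vs. unsigned range:
    std::cout << std::numeric_limits<int>::max() << '\n';          // e.g. 2147483647
    std::cout << std::numeric_limits<unsigned int>::max() << '\n'; // e.g. 4294967295
}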
A bool is for true and false as you know.
char is the equivalent of a 1-byte integer, and is usually used to store ASCII characters (which can be represented by, go figure, 1 byte).
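A quick illustration, assuming an ASCII system:

#include <iostream>

int main()
{
    char c = 'A'; // stored as its character code, 65 in ASCII

    std::cout << c << '\n';                        // prints: A
    std::cout << static_cast<int>(c) << '\n';      // prints: 65
    std::cout << static_cast<char>(c + 1) << '\n'; // prints: B -- it really is just a small integer
}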
Basically everything else is some variation or extension of these basic data types.
As Alok points out, the standard leaves the signedness of plain char up to the implementation.
For gcc, the default is signed, but you can modify that with -funsigned-char. You can also explicitly ask for signed characters with -fsigned-char.
On MSVC, the default is signed but you can modify that with /J.
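If you want to check what your own compiler (and flags) picked, something like this works; CHAR_MIN comes from <climits>:

#include <climits>
#include <iostream>

int main()
{
    // CHAR_MIN is 0 when plain char is unsigned, and negative when it's signed.
    if (CHAR_MIN < 0)
        std::cout << "plain char is signed\n";
    else
        std::cout << "plain char is unsigned\n";
}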
Andy
PS Unless I am concerned about memory space, or need to deal with very big values, for routine coding I use int and unsigned int for integers, and double for floating point (to keep things simple!). Plus bool, char, char*, etc. The other types turn up thanks to external API declarations.
Note that unsigned char is most commonly used to handle raw data (at least in the Windows world, where I spend most of my time, and where BYTE is a typedef of unsigned char).
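A tiny sketch of why (the buffer contents here are just made-up sample bytes):

#include <cstddef>
#include <cstdio>

int main()
{
    // With plain char, a byte like 0xFF could come out as a negative number on
    // a signed-char platform; unsigned char keeps raw bytes in the 0..255 range.
    unsigned char buffer[] = { 0xDE, 0xAD, 0xBE, 0xEF };

    for (std::size_t i = 0; i < sizeof(buffer); ++i)
        std::printf("%02X ", static_cast<unsigned>(buffer[i]));
    std::printf("\n"); // prints: DE AD BE EF
}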
Thanks very much for the addition, that was informative. Yeah, I wasn't positive about that; I think of char as unsigned, but it doesn't surprise me that that was incorrect. I've dropped the comment about it being unsigned by default.