I was wondering what standard programming conventions for basic integer data types were. For example, I have an entity class that contains a positive integer for an ID -- and knowing that I won't ever have more than 255 entities at one time, I didn't know if it was proper programming practice to keep the ID as an unsigned char (0-255). Should I go ahead and extend it to an unsigned short? Or should I even make it a plain old int? I know there are advantages to having signed integers, but I was just curious to see if it was good practice to save memory by using the smallest data type that I know will fit the data. If it makes a difference, I am learning to code for a handheld platform with very limited space. Thanks in advance for the info!!
It's not really a straightforward answer. As a general rule, especially with limited space, use the smallest data type that fits the data range. However, the compiler may optimise (or not), taking the choice out of your hands. If you use an unsigned char, for example, which is just one byte, some compilers won't pack it tightly and will allocate a full 4-byte word for it, using only one byte of it.
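For the ID itself, something like this is what I mean (a minimal sketch, assuming your toolchain supports C99's &lt;stdint.h&gt;; the entity_id and struct entity names are just placeholders for your own):

```c
#include <stdint.h>
#include <stdio.h>

/* Exact-width type makes the 0-255 constraint explicit. */
typedef uint8_t entity_id;

struct entity {
    entity_id id;
};

int main(void)
{
    struct entity e = { .id = 42 };
    printf("sizeof(entity_id)    = %zu\n", sizeof(entity_id)); /* 1 byte */
    printf("sizeof(struct entity) = %zu\n", sizeof e);          /* typically 1 here */
    return 0;
}
```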
Sometimes creating structs that adhere to 4-byte word boundaries will give you more control over memory allocation. For example, instead of using four separate char variables, create a struct with four chars in it; that will guarantee only 4 bytes are used. See the sketch below.
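Here's a rough illustration of the difference, assuming a typical 32-bit alignment; the exact sizes depend on your compiler and target ABI:

```c
#include <stdio.h>

struct mixed {
    char a;   /* 1 byte, but the int below usually forces 3 bytes of padding */
    int  b;   /* 4 bytes */
};

struct four_chars {
    char a, b, c, d;   /* four chars grouped together: typically exactly 4 bytes */
};

int main(void)
{
    printf("sizeof(struct mixed)      = %zu\n", sizeof(struct mixed));      /* often 8 */
    printf("sizeof(struct four_chars) = %zu\n", sizeof(struct four_chars)); /* often 4 */
    return 0;
}
```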