"therefore you should create typedefs to avoid the problem (int32, int16, etc.)"
What problem?
If you know there is an actual problem, that is, you absolutely positively must have an integer type large enough to handle a given range of values, you have reason to believe that the target compiler/hardware combination will not provide that with a plain "int", and you really want to make it clear to everyone who ever reads the code that you need 64 bits for a reason, then sure, maybe you do need an int64.
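For that genuine case, modern C already hands you the typedefs; a minimal sketch, assuming a C99 compiler with <stdint.h> and <inttypes.h> rather than hand-rolled int32/int64 typedefs:

#include <stdint.h>
#include <inttypes.h>
#include <stdio.h>

int main(void)
{
    /* A file offset that must be able to exceed 4 GiB, so a plain "int"
       (which may be only 16 or 32 bits) is not guaranteed to hold it. */
    int64_t offset = (int64_t)5 * 1024 * 1024 * 1024;  /* 5 GiB */

    printf("offset = %" PRId64 " bytes\n", offset);
    return 0;
}

The point is that the fixed-width type documents *why* the width matters, not just that someone picked a big number.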
There are some mathematical cases where I could imagine fixed-size types being helpful too; in practice I'd expect to see them mostly in graphics calculations. Again, that is a specific case where this is a useful technique.
But otherwise, why? As a general rule of thumb, the problems that occur through different-sized ints are really just exposing bugs in people's code. For example, they didn't think about the range of values that needed to be stored, got lucky on one machine because its int type happened to be large enough, and hit an overflow on another machine with a smaller int (see the sketch below); the real problem is that they wrote buggy code and didn't think about the datatypes they were using. I can see the argument that forcing the coder to pick a specific-size int forces them to think about whether it's the correct size; but the problem being fixed here is bad programmers! That said, bad programmers are far and away the most common problem, so I suppose fixing that isn't a bad thing.
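To make that concrete, here is a sketch of the kind of bug I mean; the function names and the 16-bit-sample scenario are invented purely for illustration:

#include <stdint.h>

/* Buggy version: on a platform where int is 16 bits, the running total
   overflows long before the loop finishes (and signed overflow is
   undefined behaviour). It "works" anywhere int happens to be 32 bits. */
int total_buggy(const int16_t *samples, int count)
{
    int total = 0;
    for (int i = 0; i < count; i++)
        total += samples[i];          /* may overflow a 16-bit int */
    return total;
}

/* Thinking about the range first: 1000 samples of at most 32767 each
   need roughly 25 bits, so a 32-bit accumulator is the honest choice. */
int32_t total_fixed(const int16_t *samples, int count)
{
    int32_t total = 0;
    for (int i = 0; i < count; i++)
        total += samples[i];
    return total;
}

The fix isn't really the typedef; it's the two minutes spent working out how big the numbers can get.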
On embedded hardware it does become more important, but when writing for an embedded target it is far more important to know the exact details and limits of the hardware in the first place.
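One way to make those assumptions about the hardware explicit is to have the build fail when the target doesn't match them; a small sketch, assuming a C11 compiler for _Static_assert (older compilers would need the #if/#error form throughout):

#include <limits.h>

/* Fail the build, rather than silently mis-sizing counters, if the
   target's "int" is narrower than this code assumes. */
_Static_assert(INT_MAX >= 2147483647,
               "this code assumes int is at least 32 bits");

#if CHAR_BIT != 8
#error "this code assumes 8-bit bytes"
#endif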