Portable Code - Type Size problem

Hey,

As far as I know, types (e.g. int) might have a different size on different platforms; therefore you should create typedefs to avoid the problem (int32, int16, etc.).

But what problems may actually occur if I just use int for every platform and the int has a different size?

Aren't there also problems with C++11 'auto' and a typedefed int (or any other typedefed built-in type)? If I have to use my typedef'd integer type, I should also avoid writing auto i = 10, right? (Because auto would make it a built-in int?)
I could only use auto i = int32{10}.
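Something like this is what I have in mind (just a sketch; I'm assuming the fixed-width typedefs come from <cstdint> here):

#include <cstdint>

auto a = 10;                 // deduced as plain int, whatever size that happens to be
auto b = std::int32_t{10};   // deduced as std::int32_t, exactly 32 bits
std::int32_t c = 10;         // no deduction, explicit fixed-width type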

I'm a game developer and I have to work with game engines, all of which typedef the built-in types, but I'm not aware of the issues that may occur if I don't use them.
As far as I know, types (e.g. int) might have a different size on different platforms
Yes.
therefore you should create typedefs to avoid the problem (int32, int16, etc.)
Why?

Aren't there also problems with C++11 'auto' and a typedefed int (or any other typedefed built-in type)?
You can be surprised by the deduced type, but if you care about the type, declare the type you want. Surely int32_t i {10}; is clearer than auto i = int32{10};

auto wasn't introduced to save you from saying int. It was meant to deal with the complicated temporary types that lambdas and so on can generate.

Use the features appropriately and you won't get into trouble.

Trivial uses of auto lead to unmaintainable code. Just think: you pick up the code of some guy who just left the company, and it's full of auto and lambdas everywhere. You are asked to fix a bug, work out why the code is slow, or add a feature; where do you begin? You probably have to run the thing in a debugger to see the types it's using. Does that sound right to you?
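A rough illustration of the distinction (the type and names here are made up):

struct Player { int score; };   // invented type, just for the example

// what auto was really for: the closure type below has no name you could write out
auto byScore = [](const Player& a, const Player& b) { return a.score < b.score; };

// trivial use: nothing is gained over spelling out the type
auto count = 5;   // just say: int count = 5;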
therefore you should create typedefs to avoid the problem (int32, int16, etc.)


What problem?

If you know there is an actual problem; that is, you absolutely positively must have an integer type large enough to handle a given range of integers, and you have reason to believe that the target compiler/hardware combination will not provide that with a simple "int", and you really want to make it very clear to everyone who ever reads the code that you need 64 bits for a reason, then sure, maybe you do need to use an int64.

There are some mathematical instances where I could imagine it becoming helpful too; in programming, I'd expect to see them mostly in graphical calculations. Again, that is a specific case where this is a useful technique.

But otherwise, why? As a general rule of thumb, the problems that occur through different-sized ints are actually just exposing bugs in people's code. For example, they didn't think about the range of values that needed to be stored, and they got lucky on one machine because the int type was large enough, while on another machine with a smaller int there is an overflow; the problem here is that they wrote buggy code and didn't think about the data types they were using. I can see the argument that forcing the coder to pick a specific-size int automatically forces them to think about whether it's the correct size; the problem being fixed here is bad programmers! That said, bad programmers are far and away the most common problem, so I suppose fixing that isn't a bad thing.
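To make the "got lucky on one machine" point concrete (a made-up example; the fixed-width type comes from <cstdint>):

#include <cstdint>

int samples = 48000;             // fine where int is 32 bits; broken where int is 16 bits (max 32767)
std::int32_t samples32 = 48000;  // guaranteed to hold the value on any conforming implementation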

On embedded hardware, it does become more important, but when writing for embedded hardware it is much more important to know the exact details and limits of the hardware.
@kbw

auto wasn't introduced to save you from saying int. It was meant to deal with the complicated temporary types that lambdas and so on can generate.

Exactly.

But still, I would use auto for nearly any type (there are some problems with a few types) for consistency reasons. Take the two lines from above:


int32_t i {10}; 
auto i = int32{10};


They are nearly always equivalent, but again, for consistency and other reasons I'm writing the 'new' C++11/14 style.

Trivial uses of auto lead to unmaintainable code. Just think: you pick up the code of some guy who just left the company, and it's full of auto and lambdas everywhere. You are asked to fix a bug, work out why the code is slow, or add a feature; where do you begin? You probably have to run the thing in a debugger to see the types it's using. Does that sound right to you?


This isn't actually true; as the example above shows, you can be explicit about your type and still use auto. And with an IDE you can easily see the type, by hovering or whatever.

http://herbsutter.com/2013/08/12/gotw-94-solution-aaa-style-almost-always-auto/


OK, so it's clear: using just auto without an explicit type for platform-independent code isn't a good idea.
(Maybe with user-defined literals? But I'm not sure exactly how they work.)
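Something like this, perhaps? (Just a sketch; the _i32 suffix is made up and doesn't come from any particular engine.)

#include <cstdint>

// hypothetical user-defined literal that yields a fixed-width type
constexpr std::int32_t operator"" _i32(unsigned long long value)
{
    return static_cast<std::int32_t>(value);
}

auto i = 10_i32;   // i is deduced as std::int32_t, not plain int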




I find code with this sort of thing in it:

auto x = checkDataValues();

I have no idea from reading this what type x is.

The solution to this is not to hope that an IDE can tell me. That's fixing the symptoms, not the problem.
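If the author had just written the type out, the reader wouldn't need any tooling at all (the type name here is invented, purely to make the point):

DataCheckResult x = checkDataValues();   // invented return type; now the reader can see what x is at a glance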
@Moschops
Thank you for the answer about the type size problem.
> what problems may actually occur if I just use int for every platform and the int has a different size?

As programmers, we should be primarily concerned with the properties of a type, not its size.
Let the implementation take care of the details.

auto i = 123'456'789'098'765 ; // a sufficiently large integer type which can hold the value (long long on most platforms)

decltype(999'999'999'999) j = 0 ;  // a sufficiently large integer type which can hold values with 12 decimal digits.  

There is nothing fundamentally wrong with letting the compiler deduce the type. And, as the CppCoreGuidelines observe, when concepts become available, we can (and should) be more specific about the type we are deducing:

// ...
ForwardIterator p = algo(x, y, z);