Give your opinion and practice

Which of the following do you agree with, and why?

Question 1: Signed or Unsigned?
A. Always make variables unsigned unless you need signed types
B. Always make variables signed unless you need unsigned types
C. Just write out types without specifying signedness unless you actually need signed or unsigned

Question 2: Pass Primitives by Reference or by Value?
A. Always pass primitive data types by value unless you need to pass by reference
B. Always pass primitive data types by reference

Question 3: Int or Long?
A. Always use 'long' rather than 'int'
B. Always use 'int' rather than 'long'
C. Always write bare 'signed' or 'unsigned' (with no base type) rather than 'int' or 'long'

Question 4: Specific names or normal names?
A. Always use normal primitive names such as 'char', 'short', 'int', 'long', 'long long'
B. Always use specific names (if your compiler supports them) such as '__int8', '__int16', '__int32', '__int64', '__int128'



Here are my answers and explanations:
1. A - I think it's a waste to make half of a variable's range invalid values when it will never hold negatives (see the sketch after this list).
2. B - It helps me keep the habit for when I pass larger objects.
3. A - 'int' could be any number of bits. 'Usually 32 bits' doesn't cut it for me.
4. A - simply because I haven't gotten used to B, and I'm not sure it matters much unless you're worried about 32-bit -> 64-bit portability.
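
A minimal sketch of the range point from answer 1 (the program is mine, not part of the original post):

#include <iostream>
#include <limits>

int main()
{
    // A signed int spends half its bit patterns on negative values.
    std::cout << "int max:      " << std::numeric_limits<int>::max() << '\n';
    // The same width used as unsigned roughly doubles the usable positive range.
    std::cout << "unsigned max: " << std::numeric_limits<unsigned>::max() << '\n';
}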

I am curious to see what you have to say and what has what downsides and/or upsides. :)
1. I use whatever the API uses. If there's no API (or I'm the one making the API) I use whatever makes the most logical sense. If it's for something quick and stupid like a loop counter, I usually just go with int because it's easy and familiar.

2. A. Passing primitives by const reference can actually yield worse performance, although in template functions I opt for const T& even if T will usually be a primitive. And passing by non-const reference when you're not changing the object is a bad idea anyway.
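
A rough sketch of that trade-off (the function names are invented for illustration): primitives are cheap to copy, while a const reference to one can force the argument to be addressable; in templates const T& remains a reasonable default because T may be large.

// Primitives are cheap to copy, so by-value is the usual choice.
double scale_by_value(double x) { return x * 2.0; }

// A const reference to a primitive may force the argument into memory,
// which can cost more than simply copying it.
double scale_by_ref(const double& x) { return x * 2.0; }

// In generic code, const T& is a common compromise: cheap for large types
// and usually harmless for primitives once the call is inlined.
template <typename T>
T twice(const T& value) { return value + value; }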

3. 99% of the time I go with int. They often work out to the same size anyway so it's moot. If I need a specific size I go with a typedef. Though I do say "unsigned" instead of "unsigned int".

4. Never anything compiler or platform specific unless it's wrapped in a typedef. int, char, unsigned, float, double as needed. For other types (long long, short) I usually put them in a typedef.
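
A small example of the typedef habit described above (the names and the platform check are my own illustration, not from the post):

// Hypothetical project header: keep compiler-specific or verbose type names
// behind typedefs so the rest of the code never spells them out directly.
#if defined(_MSC_VER)
typedef __int64 file_offset_t;   // compiler-specific name lives only here
#else
typedef long long file_offset_t; // standard spelling everywhere else
#endif

typedef unsigned short port_t;   // 'short' appears in this one place only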
D - It depends heavily on what the variable will hold. I can't say "always", but I also can't say "whatever". Unsigned types tend to be more useful in general, though.
A
D - int for maximum portable performance (generally the size of a register), long if I really need those extra digits (although I will usually use one of the ones below, in that case).
A - Avoid extensions if at all possible. Many compilers support [u]int#_t and C++0x will add them. Some libraries, including Boost, define their own specifically sized types.
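
For reference, a minimal sketch of those fixed-width names (assuming a C++0x-era compiler that ships <cstdint>; the variable names are made up):

#include <cstdint>

std::uint8_t flags = 0;     // exactly 8 bits, unsigned
std::int32_t balance = -42; // exactly 32 bits, signed

// Boost provides equivalent typedefs for older toolchains:
// #include <boost/cstdint.hpp>
// boost::uint64_t big = 0;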
1) Whichever makes the most sense (but I always specify the signedness). (A)

2) A

3) It depends on the expected value range.

4) A - it annoys me to no end that virtually every library (and compiler) adds their own typedefs for the primitive types.
1. A
2. A/B (I'm inconsistent)
3. Unless I need a larger data type I use int since I hold a secret belief that a 32-bit CPU works fastest with 32-bits.
4. A/B - I use int when I don't care about the size and char when I want ASCII, but otherwise I use typedefs. I usually redefine {ui,i}nt{8,16,32,64}_t to {u,i}{8,16,32,64} because I don't like that uint64_t is more letters than int64_t, and I don't like the _t suffix either.
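
A sketch of the shorter aliases described in answer 4 (the exact alias names are my guess at the scheme):

#include <cstdint>

typedef std::uint8_t  u8;
typedef std::uint16_t u16;
typedef std::uint32_t u32;
typedef std::uint64_t u64;
typedef std::int8_t   i8;
typedef std::int16_t  i16;
typedef std::int32_t  i32;
typedef std::int64_t  i64;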