My whole point is that syntax matters to
humans — and we see that regularly.
It would have been very easy to simply make the default standard error out when a narrowing conversion occurs, and require something explicit to tell the compiler you meant to do that. Older programs that would otherwise break could then be compiled without problem using a compiler flag.
Instead we introduce a new, weird syntax to try to fix the problem.
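To make the contrast concrete, here is a minimal sketch of the rule being discussed (the file layout and variable names are mine, not from the thread):

int main()
{
    double d = 3.7;

    int a = d;                   // legal C++: silently truncates to 3 (a warning at best)
    // int b { d };             // the braced form is ill-formed: narrowing conversion
    int c = static_cast<int>(d); // the explicit "I meant to do that" which, the argument
                                 // goes, should have been required in the first place
    return a + c;
}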
And, I contend, it is a problem with the way C++ people are conditioned to think about their programming languages.
I personally prefer languages with less arcane minutiae in syntax to do something simple.
The common problem with num1 / num2 has nothing to do with the narrowing conversion (MSVC will warn about it unless you tell it to shut up), but with the fact that / performs an integer division because its operands are integers. This fits with C++'s ability to overload functions based on argument type, but it confuses the snot out of people in the case of the division operator: people expect floating-point division and don't get it. Hence the plethora of threads everywhere about making sure at least one operand is non-integer to effect a floating-point division:
double x = (double)a / b;
That doesn't get the narrowing conversion warning/error simply because the cast makes the division a floating-point one: the result is already a double, so storing it in x narrows nothing.
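A minimal, self-contained demonstration of the trap (the variable names are illustrative, not from the thread):

#include <iostream>

int main()
{
    int a = 7, b = 2;

    double wrong = a / b;         // integer division happens first: 7 / 2 == 3, then 3 -> 3.0
    double right = (double)a / b; // one operand is a double, so 7.0 / 2 == 3.5

    std::cout << wrong << ' ' << right << '\n'; // prints: 3 3.5
}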
Many languages recognize that division is a special case and provide a distinct operator to signify that integer division is desired. For example, in Object Pascal I can write:
x := a / b;   // This is a floating-point division, regardless of operand type.
              // The result is real; storing it in an integer variable
              // requires an explicit Trunc or Round.
x := a div b; // This is an integer division. Operands must be integers.
              // If 'x' is real, the integer result is promoted to real.
That is straightforward and instantly easy to grasp, something we cannot say about any C-derived language.
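C++ offers no div keyword, but the same intent can be spelled out with named helpers. A hedged sketch (fdiv and idiv are hypothetical names I made up, not standard library functions):

#include <cstdio>

// Hypothetical helpers that name the kind of division wanted,
// roughly in the spirit of Object Pascal's '/' versus 'div'.
constexpr double fdiv(double a, double b) { return a / b; } // always floating-point
constexpr int    idiv(int a, int b)       { return a / b; } // always integer

int main()
{
    std::printf("%f\n", fdiv(7, 2)); // 3.500000
    std::printf("%d\n", idiv(7, 2)); // 3
}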
Furry Guy wrote:
    Don't like the syntax? You are free to not use it. It is MY choice.
No, the choice is foisted on everyone whether they like it or not. I can present code to neophytes, grandparents, PhDs in mathematics, ...
anyone, and they will nominally understand:
double x = a / b;
but be unduly confused by:
double x { a / b };
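For what it's worth, the two lines behave identically here, since int to double widens rather than narrows. A small sketch (variable names assumed) showing that the braces change nothing in this case:

int main()
{
    int a = 7, b = 2;

    double x = a / b;   // familiar form: integer division, then 3 widened to 3.0
    double y { a / b }; // braced form: exactly the same value; int -> double never narrows

    // int z { x };     // only here would the braces matter: ill-formed narrowing

    return (x == y) ? 0 : 1; // returns 0: identical results
}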
I’m not sure why you got so hot under the collar about this discussion though. C++ committee members obviously agree with you, not me.
But the fact that the language is at odds with what humans expect says something important, methinks.