JLBorges wrote:
The former is ("as-if") evaluated at run-time; the latter is evaluated at compile-time and is a constant expression.
Ok, I thought I understood that, but I didn't realise it produced different results :+( And it was wrong of me to suggest that might have been part of the problem.
However, what about this part?
TheIdeasMan wrote:
You had __int64 when it should have been double on line 8 (your code); the current numbers work with an integer type and the casts, but what if they change and you have a fraction?
Further, removing the casts to double would produce integer division. I know you weren't proposing that, but others were.
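For instance (just a throwaway sketch with made-up numbers):

#include <iostream>

int main()
{
    int num = 7, den = 2;                                // made-up values
    double no_cast   = num / den;                        // integer division happens first: 3.0
    double with_cast = static_cast<double>(num) / den;   // promoted to double: 3.5
    std::cout << no_cast << ' ' << with_cast << '\n';
}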
// with g++ / g++-compatible implementations, compile with
// -std=c++14 -pedantic-errors to force conformance with standard C++
I compile with this these days; qmake adds a bunch of other stuff too:
clang++ -std=c++14 -Wall -W -Wextra -pedantic-errors -Wswitch-default -Wswitch-enum -Wunused -Wfloat-equal -Wconversion -Wzero-as-null-pointer-constant -Weffc++
// note: the compile-time evaluation of a floating-point expression may yield a result
// that is different from the evaluation of the same operations on the same values at run-time
Well, that's a bit of a worry :+| How does that come about? Noted that you say may, but I wonder how big the difference could be. Would extending the type to long double, multi-precision or exact decimal make any difference? If a fix is needed, that is. It may not be so bad if a type change fixes it, or if one of const or constexpr proves to be better than the other.
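For what it's worth, here is a minimal sketch of the sort of comparison I have in mind; the expression is just the classic machine-epsilon trick, and whether the two results actually differ depends on the implementation and flags (extended intermediate precision, FMA contraction, -ffast-math and friends):

#include <iostream>
#include <iomanip>

// same expression, evaluated by the compiler: mathematically zero,
// but in double it comes out at about -2.2e-16
constexpr double compile_time = (4.0 / 3.0 - 1.0) * 3.0 - 1.0;

int main()
{
    volatile double x = 4.0 / 3.0;            // volatile blocks constant folding
    double run_time = (x - 1.0) * 3.0 - 1.0;  // the same arithmetic at run-time

    std::cout << std::setprecision(17)
              << "compile-time: " << compile_time << '\n'
              << "run-time:     " << run_time     << '\n';
    // on most x86-64 builds these agree; they can differ where the run-time
    // code uses extended intermediate precision, FMA contraction, or -ffast-math
}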
I have some code with a heap of constexpr in the initialisations of crucial values: for some of them up to 13 sf or possibly more are required to be accurate. If it's going to be an issue I might have to do some analysis to see just how precise these values have to be; probably good to know anyway. The first few of these initial values go on to participate in dozens of other expressions. So far, testing with one set of values seems to have worked, but I haven't got around to any serious & thorough testing yet. I have to investigate the best way of doing that.
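Maybe a rough check along these lines is a start (the GRS80-style flattening is only a placeholder here, not necessarily one of my values):

#include <cfloat>
#include <cstdio>

int main()
{
    // double guarantees DBL_DIG (15) decimal digits, so a value specified to
    // 13 sf is representable; the question is what survives the later expressions
    constexpr double      f  = 1.0  / 298.257222101;   // placeholder constant
    constexpr long double fl = 1.0L / 298.257222101L;  // same value at long double precision

    std::printf("DBL_DIG = %d, LDBL_DIG = %d\n", DBL_DIG, LDBL_DIG);
    std::printf("relative difference double vs long double: %Lg\n",
                (fl - static_cast<long double>(f)) / fl);
}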
IIRC, the reason I used constexpr was that I had just read an article by Herb Sutter saying to use auto, constexpr and brace initialisation as much as possible.
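So the style was roughly this (placeholder ellipsoid values, just to show the shape of it):

constexpr double a  { 6378137.0 };           // brace initialisation, placeholder semi-major axis
constexpr double f  { 1.0 / 298.257222101 }; // placeholder flattening
constexpr auto   e2 = f * (2.0 - f);         // auto: deduced as double; e² = 2f − f²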
If you have lots of spare time, here is the link to the manual:
http://www.icsm.gov.au/gda/gdatm/gdav2.3.pdf
I have worked on Chapter 4, page 20 of the pdf (page 15 of the document), the Vincenty Inverse.
Probably only have to look at that one page to see the sort of things that go on :+)
One quick example is the Lat & Long values. They are expressed to 5 dp of 1 arc-second (< 1 mm in position), which in radians is 4.848e-11. One probably wants 3 or 4 more dp if one is to do calcs with those, so that's about 15 sf.
Co-ordinates (ECEF Cartesian and Plane) are expressed to the millimetre with 10 sf.
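A quick back-of-the-envelope check of that arc-second figure (a throwaway sketch, assuming a mean Earth radius of about 6371 km):

#include <cstdio>

int main()
{
    constexpr double pi        = 3.141592653589793;
    constexpr double arcsec    = pi / (180.0 * 3600.0);  // one arc-second in radians, ~4.8481e-6
    constexpr double five_dp   = 1e-5 * arcsec;          // 0.00001 arc-second, ~4.8481e-11 rad
    constexpr double earth_r_m = 6.371e6;                // assumed mean Earth radius in metres

    std::printf("%.4g rad  ->  about %.2g mm on the ground\n",
                five_dp, five_dp * earth_r_m * 1000.0);
}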
Anyway, it's late my end. Thanks again for all your help.