I've got a good one for you

The following code works, unless I comment out line 9.
 1  #include <iostream>
 2  #include <cmath>
 3
 4  float root(float x,float min,float max){
 5      float mid=(max+min)/2;
 6      if (mid*mid==x || max==min){
 7          return mid;
 8      }else{
 9          std::cout <<min<<";"<<max<<std::endl;
10          if (mid*mid>x)
11              return root(x,min,mid);
12          else
13              return root(x,mid,max);
14      }
15  }
16
17  int main(){
18      std::cout <<root(2,0,2)<<std::endl;
19      return 0;
20  }

Anyone?
Actually it seg faults either way, just takes longer if you have the print in there.

edit: btw, your values for min and max do eventually become equal to each other (and should thus return mid), but they're floats, so it's another case of that beautiful float comparison crap.
if (mid*mid==x || max==min)

Should be something like

if ( (std::fabs(mid*mid - x) < epsilon) || ((max - min) < epsilon) )
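For reference, here's a minimal sketch of the whole function rewritten that way; the epsilon value (1e-6f) and the choice of an absolute rather than relative tolerance are my own assumptions, not something from the original code:

#include <iostream>
#include <cmath>

// Bisection square root that terminates on a tolerance instead of
// exact float equality.
float root(float x, float min, float max){
    const float epsilon = 1e-6f;                 // arbitrary tolerance, adjust to taste
    float mid = (max + min) / 2;
    if (std::fabs(mid*mid - x) < epsilon || (max - min) < epsilon)
        return mid;
    if (mid*mid > x)
        return root(x, min, mid);                // answer is in the lower half
    else
        return root(x, mid, max);                // answer is in the upper half
}

int main(){
    std::cout << root(2, 0, 2) << std::endl;     // prints something close to 1.41421
    return 0;
}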
Compiling with MinGW, here.

Yes, I was well aware of the problems with floating point comparisons before writing that code, and I knew that it would produce an error. Anyone who has spent more than a few hours working with floats knows this.
However, that code did finish successfully after exactly 26 recursions when compiled and run under Win32, and I have absolutely no idea why.

Here's the output (printing 10 decimal places):

0.0000000000 2.0000000000
1.0000000000 2.0000000000
1.0000000000 1.5000000000
1.2500000000 1.5000000000
1.3750000000 1.5000000000
1.3750000000 1.4375000000
1.4062500000 1.4375000000
1.4062500000 1.4218750000
1.4140625000 1.4218750000
1.4140625000 1.4179687500
1.4140625000 1.4160156250
1.4140625000 1.4150390625
1.4140625000 1.4145507813
1.4140625000 1.4143066406
1.4141845703 1.4143066406
1.4141845703 1.4142456055
1.4141845703 1.4142150879
1.4141998291 1.4142150879
1.4142074585 1.4142150879
1.4142112732 1.4142150879
1.4142131805 1.4142150879
1.4142131805 1.4142141342
1.4142131805 1.4142136574
1.4142134190 1.4142136574
1.4142135382 1.4142136574
1.414214
26

We can clearly see here that max and min are definitely not equal, and yet it finishes anyway.
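If you want to see exactly how far apart those last two bounds really are, printing them with enough digits (or as hex floats) makes the difference visible. This is just a quick sketch with the values re-typed from the trace above, not part of the original program:

#include <cstdio>

int main(){
    // Last min/max pair from the trace above, re-typed by hand.
    float min = 1.4142135382f;
    float max = 1.4142136574f;
    // %.9g gives enough significant digits to distinguish any two floats;
    // %a shows the exact value as a hexadecimal fraction.
    std::printf("min = %.9g = %a\n", min, min);
    std::printf("max = %.9g = %a\n", max, max);
    std::printf("max - min = %a\n", max - min);
    return 0;
}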

Here's a link to the compiled exe and source if anyone wants to try it.
http://www.fileden.com/files/2008/6/22/1971683/test.7z
Well, I can't try your exe and source right now (I don't have my stuff set up at home), but when I compile with g++ and run it under X-Win32, it seg faults.
Running it in Visual Studio, the crash (with or without the cout statement) is a stack overflow.
I looked at the call stack window and started counting how many times the function had recursed, but gave up at 100 with reams still to go; I reckon it had called itself at least 900 times before the crash.

As implied by previous posters - testing for equality with float types is very very hit and miss (mostly miss).
It seems MinGW is the only one screwing up the comparison, for whatever inscrutable reason that might be.
And the value of the recursion counter was 4322 for me, by the way.
The call stack window seems to have given up after a while, since the oldest call it shows is
float.exe!root(float x=2.0000000, float min=1.4142135, float max=1.4142137) Line 15 + 0x1a bytes
It has nothing to do with the compiler beyond where the floats are stored each iteration (maybe). Computer hardware will cause you more problems than choice of compiler or programming language.

Zaita already answered the question as to why it is failing.

http://en.wikipedia.org/wiki/Methods_of_computing_square_roots

Hope this helps.
My question was more why it is working, rather than why it is failing.
I already know why it shouldn't work. What I'd like to know is why it sometimes works.

I continued testing and found that if I print the value before the comparison, it causes infinite recursion. So, the "rule" seems to be: if, in the same call, there's a print after the comparison, then the recursion is finite, unless there's a print before the comparison, in which case the recursion is infinite.

Now, I already know that this is not the way it's supposed to be done. What I'm interested in is what on earth is causing this arbitrary rule. If anyone has any clue why this could be, I'd like to hear it.
Ah, sorry for the brainless response.

Would you believe that the answer is (drumroll):

-- It depends on where you put your statements --
Yes, you read that right.

The FPU (whether integrated with the CPU or not) has its own registers, much like the CPU registers you learn about in assembly programming. On the Intel x86 architecture they are ordered as a "stack" and they are all IEEE 80-bit double-extended-precision values.

On a PC, it is typical for the float to be an IEEE 32-bit single-precision value and a double to be an IEEE 64-bit double-precision value. This is not stipulated by the language, though, and so it is not guaranteed to be so by every PC compiler. On other computer architectures float and double are the appropriate FPU types for that machine. On really old PCs, they might even be emulated. :-@
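If you want to check what your own compiler and hardware actually use, a quick sketch like this will tell you (on an x86 PC with the x87 FPU you'd typically see 24, 53, and 64 mantissa bits, the last corresponding to the 80-bit double-extended format):

#include <iostream>
#include <limits>

int main(){
    std::cout << "sizeof(float)       = " << sizeof(float)       << '\n';
    std::cout << "sizeof(double)      = " << sizeof(double)      << '\n';
    std::cout << "sizeof(long double) = " << sizeof(long double) << '\n';
    // Number of mantissa (significand) bits each type carries.
    std::cout << "float digits:       " << std::numeric_limits<float>::digits       << '\n';
    std::cout << "double digits:      " << std::numeric_limits<double>::digits      << '\n';
    std::cout << "long double digits: " << std::numeric_limits<long double>::digits << '\n';
    return 0;
}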

NB:
long float  --> IEEE 48-bit single-extended-precision
long double --> IEEE 80-bit double-extended-precision
...but only if the compiler agrees that those are valid types :-)

If you make a call to a function, the state of the FPU registers cannot be guaranteed/known after the function returns (at least not in C and C++), so the float operands are re-loaded from memory, where their precision is exactly identical, before comparing/multiplying/etc.

However, if you don't make a call to the function, the contents of the FPU registers are known based upon the last time they were used. Hence, you may be comparing a float (IEEE single-precision on x86 PCs) in main memory to an IEEE double-extended precision value still on the top of the FPU register stack. Hence, automatic errors.

Worse yet, even without the function calls, simply using the floats normally causes the precision to be converted back and forth and back and forth... each time compounding precision errors.

It gets worse, but I'll leave it at that. :-D
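Here's a tiny sketch of the kind of comparison that goes wrong; whether it actually prints "not equal" depends on your compiler, optimization level, and whether the build targets the x87 FPU or SSE, so treat it as an illustration rather than a guaranteed repro:

#include <iostream>

int main(){
    float a = 0.1f, b = 3.0f;
    float x = a * b;            // rounded to 32 bits when stored into x
    // On an x87 build, the right-hand side may be (re)computed and held
    // at 80-bit register precision, so the two sides can disagree even
    // though they are textually the same expression.
    if (x == a * b)
        std::cout << "equal" << std::endl;
    else
        std::cout << "not equal" << std::endl;
    return 0;
}

With gcc/MinGW, flags like -ffloat-store (or -mfpmath=sse on hardware that has SSE) force intermediates back to their declared width and usually make the two sides agree again.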

Here are some good links:

David Goldberg's famous
What Every Computer Scientist Should Know About Floating-Point Arithmetic
http://docs.sun.com/source/806-3568/ncg_goldberg.html

Comparing floating point numbers
by Bruce Dawson
http://www.cygnus-software.com/papers/comparingfloats/comparingfloats.htm

Understanding and Using Floating Point Numbers
by Jeff Bezanson
http://www.cprogramming.com/tutorial/floating_point/understanding_floating_point.html

Whew. Have fun now!
Now that's the kind of reply I was looking for.
Thank you very much!

<rhet>So this is what happens when you put programmers and engineers in the same room?</rhet>
Topic archived. No new replies allowed.