Return value of atof is creating confusion.

Aug 6, 2008 at 11:17am
Hi,
When I use the atof function, the return value gets changed.
I am not able to find out why this is happening
or how to resolve it.

Below is my piece of code, with three different results for three different float values.

#include <iostream>
#include <string>
#include <cstdio>   // printf, sprintf
#include <cstdlib>  // atof
#include <cstring>  // memset
using namespace std;

int main()
{
        float a = 0.0f;
        string b = "20.99998f";
///     string b = "20.786f";
///     string b = "20.786456f";

        printf("The String value : %s\n", b.c_str());

        a = atof(b.c_str());

        printf("Value after atof : %f\n", a);

        char temp_ptr[100];
        memset(temp_ptr, 0, sizeof(temp_ptr));

        sprintf(temp_ptr, "%f", a);

        string str = temp_ptr;

        printf("Value after ftoa : %s\n", str.c_str());

        return 0;
}


Case I For
string b = "20.786456f";
Result
The String value : 20.786456f
Value after atof : 20.786455
Value after ftoa : 20.786455

Case II For
string b = "20.786f";
Result
The String value : 20.786f
Value after atof : 20.785999
Value after ftoa : 20.785999

Case III For
string b = "20.99998f";
Result
The String value : 20.99998f
Value after atof : 20.999981
Value after ftoa : 20.999981


N.B : My compiler version is gcc version 3.2.2 20030222 (Red Hat Linux 3.2.2-5)

Can anyone please let me know exactly how memory is laid out when the datatype is float?

Thanking you in advance,
Diptendu
Last edited on Aug 6, 2008 at 11:17am
Aug 6, 2008 at 11:46am
This happens because the number of significant (decimal) digits in IEEE 754 single precision (aka "float") is about 7. Note that the first 7 non-zero digits of your numbers are correct (or become correct if you round to nearest). Why is this? Well, a single has a 23-bit significand (aka mantissa), plus one implicit leading bit that is always one; the rest is sign and exponent. So you basically have 23 bits for the "exactness" of your number; in other words, for a given exponent you can have 8388608 different values (i.e. about 7 correct decimal digits, neglecting that implicit leading bit).

N.B : My compiler version is gcc version 3.2.2 20030222 (Red Hat Linux 3.2.2-5)

I am not completely sure, but I think there is a bug in that version, so it does not obey the IEEE 754 standard (it uses double rounding, meaning it computes floats with 80-bit accuracy, performing 80-bit rounding, and then rounds the result to the 32-bit single format). This is a common performance tweak (your FPU uses 80 bits internally anyway), but I think in that particular compiler version it was not disabled even with optimization turned off. So if you have other trouble with floating-point arithmetic, try a different compiler (there are newer gcc versions...), use -O2 as the maximal optimization level, and see also -funsafe-math-optimizations.

If you need "a little bit" better results, use "double"s; they have a 52-bit significand.
If you want to do exact computations, have a look at libgmpxx, the GNU Multiple Precision library (C++ wrapper classes). It can perform exact rational arithmetic. If that isn't enough, have a look at LEDA::real, which is exact for real algebraic numbers over Q (and if that isn't enough, simplify your formulas or do it by hand ;-) )
You could also use floating-point expansions. See e.g. Knuth's "Seminumerical Algorithms" book.

Edit: Err, Knuth doesn't write about expansions, he just shows the formula to add *two* numbers exactly, using the machine instructions. Expansions were discovered later, I think. "Adaptive Precision Floating-Point Arithmetic and Fast Robust Geometric Predicates" by Shewchuk might be the paper you want.
Last edited on Aug 6, 2008 at 12:10pm
Topic archived. No new replies allowed.