Why does float have such strange behaviour?

Hi!
I have a piece of code which takes 'n' float or double numbers, calculates their average, and then adds to or subtracts from the numbers so that all of them reach the average.
I'm writing this code for Programming Challenges (challenge "The Trip", page 17).
This is the simplified code:

//NOT COMPLETE

#include <iostream>
#include <iomanip>
#include <vector>

using std::cout;
using std::cin;
using std::endl;
using std::setprecision;
using std::fixed;

int main()
{
    int n;
    float average, taken, given;
    cin >> n;
    while (n > 0)
    {
        average = 0;
        taken = 0;
        given = 0;
        std::vector<float> exs(n);      // a VLA (float exs[n]) is not standard C++
        for (int i = 0; i < n; i++)
            cin >> exs[i];
        for (int i = 0; i < n; i++)     // sum the expenses, then divide for the average
            average += exs[i];
        average /= n;
        for (int i = 0; i < n; i++)     // money handed over vs. money received
            if (exs[i] - average > 0)
                taken += exs[i] - average;
            else
                given -= exs[i] - average;
        cout << fixed << setprecision(2) << (given > taken ? given : taken) << endl;
        cin >> n;
    }
    return 0;
}


The reason I'm saying float has strange behaviour is this special case (the same example the book gives).
We want to run the program on these 4 numbers:
15.01
15.00
3.01
3.00

When I debug the program (in Code::Blocks with GCC), I watch how the values "average" and "exs [i]" change during debugging.
What I see is:
exs [0] = 15.0100002 (But I entered 15.01)
exs [1] = 15
exs [2] = 3.00999999 (But I entered 3.01)
exs [3] = 3

So the sum I get becomes 36.0200005 (instead of 36.02)
and the average equals 9.00500011 (instead of 9.005).

And the result I get becomes 12.00 (instead of 11.99).

And then all the calculations go wrong.
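The same numbers show up without the debugger if the intermediate values are printed with more digits than cout's default six; a minimal sketch using the same four inputs:

#include <iostream>
#include <iomanip>

int main()
{
    // the four expenses from the example, stored as floats
    float exs[4] = { 15.01f, 15.00f, 3.01f, 3.00f };
    float sum = 0;
    for (int i = 0; i < 4; i++)
        sum += exs[i];

    std::cout << std::setprecision(9);
    std::cout << exs[0]  << '\n';    // 15.0100002
    std::cout << exs[2]  << '\n';    // 3.00999999
    std::cout << sum     << '\n';    // 36.0200005
    std::cout << sum / 4 << '\n';    // 9.00500011
    return 0;
}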
Isn't there a way to fix it?
I'm not looking for a different algorithm (for example, handling the integer and decimal parts separately); I just need a way to make float calculate correctly.
THANK YOU ALL!
http://www.cprogramming.com/tutorial/floating_point/understanding_floating_point_representation.html
http://en.wikipedia.org/wiki/Floating_point

A floating point value is stored as a binary value with limited precision. Binary numbers are discrete, so one cannot store every possible real number exactly. Floats are not merely strange; they are deeply strange, and they do anything but what the math teacher told you in school.
A float, like any other floating point type, stores the closest possible representation of the given value. You have to see this as a limitation of binary floating point representation; that's why this is happening.

Try to write down on paper a binary representation of a given decimal value and you will reach the same conclusion as your compiler/program: it's simply not possible to represent ALL decimal values exactly in a finite number of binary digits, even when you are only considering a predefined range of values (as with float).
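For instance, here is a rough sketch of doing that on the computer instead of on paper (my own example, using repeated doubling to extract binary digits): the fractional part 0.01 never terminates in binary, so the 23 bits of a float (or the 52 bits of a double) have to cut it off and round.

#include <iostream>

int main()
{
    double frac = 0.01;                   // the fractional part of 15.01 or 3.01
    std::cout << "0.01 in binary ~ 0.";
    for (int bit = 0; bit < 40; ++bit)    // 40 digits is plenty to see the repetition
    {
        frac *= 2.0;                      // shift the next binary digit left of the point
        if (frac >= 1.0)
        {
            std::cout << '1';
            frac -= 1.0;
        }
        else
            std::cout << '0';
    }
    // output starts 0.0000001010001111010111...
    // (0.01 is itself already rounded, but the first 40 digits printed here
    //  still match the true binary expansion of 1/100)
    std::cout << "...\n";
    return 0;
}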
I think it's our familiarity with the decimal system which causes false expectations - though it shouldn't.

For example, we know that certain values such as 1/3 or 1/7 cannot be represented exactly in decimal notation; instead we get an unending stream of digits such as 0.333333333333... or 0.142857142857...

But other values can be represented exactly in decimal, such as 1/10 = 0.1 or 1/1000 = 0.001.
It is these values which raise false expectations, because in a binary representation they give the same issue as 1/3 does in decimal.

The only fractions which work well in both systems are those whose denominators are powers of 2, such as 1/2 or 1/8, which can be represented exactly in both decimal and binary.

Useful online converter - try it:
http://www.mathsisfun.com/binary-decimal-hexadecimal-converter.html
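A small sketch of the same point (my own example, assuming ordinary IEEE-754 doubles): 1/8 has a power-of-two denominator, so eight additions of 0.125 land exactly on 1, while ten additions of 0.1 do not.

#include <iostream>
#include <iomanip>

int main()
{
    double sum1 = 0.0, sum2 = 0.0;
    for (int i = 0; i < 10; i++) sum1 += 0.1;    // 1/10 is not exact in binary
    for (int i = 0; i < 8;  i++) sum2 += 0.125;  // 1/8 = 2^-3 is exact

    std::cout << std::setprecision(20);
    std::cout << sum1 << '\n';                   // something like 0.99999999999999988898
    std::cout << sum2 << '\n';                   // exactly 1
    std::cout << (sum1 == 1.0) << ' '
              << (sum2 == 1.0) << '\n';          // 0 1
    return 0;
}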