Floating-Point Question.

Dec 31, 2014 at 4:21am
The following is taken from one of my textbooks:

C++ requirements for significant digits amount to float being at
least 32 bits, double being at least 48 bits and certainly no smaller than float, and long double being at least as big as double.


and the following is an example section of a float.h header file:

// the following are the minimum number of significant digits
#define DBL_DIG 15 // double
#define FLT_DIG 6 // float
#define LDBL_DIG 18 // long double


I do not understand this. I believe I understand what a significant digit is. If your floating-point number is 847,000, then there are 3 significant digits (847). But what I don't understand is how a floating-point type has a minimum number of significant digits. If float has its minimum number of significant digits set to 6, then how could it ever represent a number like 1.5, where there are only two significant digits? What am I missing here?
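
For reference, here is a small program (just a sketch, assuming the usual <cfloat>, <iomanip>, and <iostream> headers) that prints these limits and then prints 1.5 with six significant digits:

#include <cfloat>
#include <iomanip>
#include <iostream>

int main()
{
    // minimum significant decimal digits guaranteed by this implementation
    std::cout << "FLT_DIG  = " << FLT_DIG  << '\n';
    std::cout << "DBL_DIG  = " << DBL_DIG  << '\n';
    std::cout << "LDBL_DIG = " << LDBL_DIG << '\n';

    // 1.5 stored in a float and printed with up to FLT_DIG significant digits
    float x = 1.5f;
    std::cout << std::setprecision(FLT_DIG) << x << '\n';   // prints 1.5
}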
Dec 31, 2014 at 4:46am
Significant digits are like:
precision - significant digits - significant decimals
5           12.345               12.34500
4           12.34                12.3450
3           12.3                 12.345
2           12                   12.34
1           10                   12.3
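
A quick way to see the same idea in code (a sketch, assuming <iomanip> and <iostream>) is to print one value at several precisions; note that the stream rounds and may switch to scientific notation at very low precision, so the output will not match the table exactly:

#include <iomanip>
#include <iostream>

int main()
{
    double value = 12.345;
    // print the same value with 5 down to 1 significant digits
    for (int digits = 5; digits >= 1; --digits)
        std::cout << digits << ": " << std::setprecision(digits) << value << '\n';
}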
Dec 31, 2014 at 5:54am
The incomplete answer (in reverse) is:

For single precision, the mantissa is 23 bits:
http://en.wikipedia.org/wiki/Single-precision_floating-point_format

2**23 = 8388608 = 8.388608 * 10**6

So for single precision, the 23 mantissa bits give at least 6 significant decimal digits.

For double precision, the mantissa is 52 bits:
http://en.wikipedia.org/wiki/Double-precision_floating-point_format

2**52 = 4503599627370496 ≈ 4.5 * 10**15

So for double precision, the 52 mantissa bits give at least 15 significant decimal digits.
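
One way to check those figures on your own machine is with std::numeric_limits (a sketch, assuming <limits> and <iostream>; note that digits counts the implicit leading bit, so it reports 24 and 53 on a typical IEEE-754 platform):

#include <iostream>
#include <limits>

int main()
{
    // mantissa width in bits (including the implicit leading bit) and the
    // number of decimal digits guaranteed to survive a round trip
    std::cout << "float:  " << std::numeric_limits<float>::digits  << " bits -> "
              << std::numeric_limits<float>::digits10  << " decimal digits\n";
    std::cout << "double: " << std::numeric_limits<double>::digits << " bits -> "
              << std::numeric_limits<double>::digits10 << " decimal digits\n";
}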
Last edited on Dec 31, 2014 at 5:58am