sizeof(3.14)

Apr 28, 2011 at 7:02pm
#include <iostream>
using namespace std;

int main()
{
	float	pi_float_ver  = 3.14;   // the double literal is narrowed to float here
	double	pi_double_ver = 3.14;

	cout << sizeof(pi_float_ver)  << " bytes" << endl;
	cout << sizeof(pi_double_ver) << " bytes" << endl;
	cout << sizeof(3.14)          << " bytes" << endl;

	cin.get();  // portable replacement for the non-standard getch()
	return 0;
}


4 bytes
8 bytes
8 bytes



I have stored the value of pi in two different variables, one as a float and the other as a double. The code displays the sizes as 4 bytes and 8 bytes for the float and double versions respectively.

The question is: when I try to display the size of 3.14 itself (in the third cout statement), which I suppose is a floating-point number, it displays a size of 8 bytes instead of 4 bytes.

What does this mean? Does C++ treat a decimal number as a double by default?

Please help, thanks in advance :)
Apr 28, 2011 at 7:10pm
http://www.cplusplus.com/doc/tutorial/variables/

sizeof returns the size of the type of its operand. The output is correct because a double is 8 bytes while a float is 4 bytes (on typical platforms). It has nothing to do with the current value of the variable.
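For instance, a minimal sketch:

#include <iostream>

int main()
{
	double d = 3.14;
	std::cout << sizeof(d) << '\n';  // typically 8

	d = 123456789.123456789;         // a very different value
	std::cout << sizeof(d) << '\n';  // still 8: size depends on the type, not the value
}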

http://www.cplusplus.com/forum/general/28539/
Apr 28, 2011 at 7:10pm
You can use the 'f' suffix to make a decimal literal a float instead. Without the suffix, the literal is considered a double.
float f = 3.14f; // suffix not necessarily needed in this case
sizeof(3.14f); // will equal sizeof(float)
sizeof(3.14); // will equal sizeof(double) 


Literals: http://cpp.comsci.us/etymology/literals.html
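Here is a complete program to check the literal sizes (a minimal sketch; the long double line is an extra illustration, and the exact sizes can vary by platform):

#include <iostream>

int main()
{
	std::cout << sizeof(3.14f) << '\n';  // float literal       (typically 4 bytes)
	std::cout << sizeof(3.14)  << '\n';  // double literal      (typically 8 bytes)
	std::cout << sizeof(3.14L) << '\n';  // long double literal (size varies by platform)
}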
Last edited on Apr 28, 2011 at 7:20pm
Apr 28, 2011 at 7:14pm
Ignore my previous comment, I misunderstood the question.
Apr 28, 2011 at 7:55pm
OK, no problem GodPyro, thanks for replying!

Hi Branflakes91093, Thanks for the reply.

Yeah, I knew about the suffixes and typecasting, but what I wanted to confirm is that by default C++ treats decimal literals as double instead of float.
Apr 28, 2011 at 8:27pm
Unless you have an 'f' suffix, then yes, C++ will treat it as a double.
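You can confirm it with typeid if you like (a minimal sketch; needs <typeinfo>):

#include <iostream>
#include <typeinfo>

int main()
{
	std::cout << std::boolalpha;
	std::cout << (typeid(3.14)  == typeid(double)) << '\n';  // true: no suffix means double
	std::cout << (typeid(3.14f) == typeid(float))  << '\n';  // true: 'f' suffix means float
}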
Apr 28, 2011 at 8:44pm
Thanks Disch.
Apr 28, 2011 at 8:47pm
Is there any specific reason why the type double is the default ?
Apr 28, 2011 at 9:53pm
Because floats suck...?

No, probably for the same reason that an undecorated integer literal defaults to int:
sizeof(10) == sizeof(int) && sizeof(10) != sizeof(short)
Apr 28, 2011 at 9:53pm
Why shouldn't it be? float, with its 32 bits, is horribly inaccurate and hardly suitable for generic use. float is used mainly in 3D programming, where size is far more important than precision.
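To see the precision gap in action (a rough sketch; exact output varies by compiler and platform), sum 0.1 a million times in both types:

#include <iostream>
#include <iomanip>

int main()
{
	float  fsum = 0.0f;
	double dsum = 0.0;
	for (int i = 0; i < 1000000; ++i)
	{
		fsum += 0.1f;
		dsum += 0.1;
	}
	// Both sums should ideally be 100000, but the float drifts noticeably.
	std::cout << std::setprecision(10) << fsum << '\n';  // noticeably off from 100000
	std::cout << std::setprecision(10) << dsum << '\n';  // very close to 100000
}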