Int - Float not casting correctly

I made this code to explain a bit about pointers for a YouTube tutorial and came across some interesting results that I need help explaining myself...
Why, when casting between two pointer types, does one pointer end up holding a different memory address than the other? And why, when re-running the program, does the cast come out differently again?

int a = 1234567890987654321; //Don't whine, just a demo number
int* b = &a;
float* c = (float*)&a;

std::cout << "a = " << a << std::endl;
std::cout << "b = " << b << std::endl;
std::cout << "*b = " << *b << std::endl;
std::cout << "c = " << c << std::endl;
std::cout << "*c = " << *c << std::endl;


All the values below are made up, but they're chosen to mimic what I saw in my results:
a = 1234567890987654321
b = 0FB1290
*b = 1234567890987654321
c = 0FB1284
*c = 8.4237e-030

RERUN

a = 1234567890987654321
b = 0EBA242
*b = 1234567890987654321
c = 0EBA246
*c = 2.5918449e+049


Now I can see that the obvious reason the float values are different is that they're being read from a different place than where the value is actually stored (as an int). But why is the cast doing this to the float pointer?
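
For reference, here's a minimal check I threw together (separate from the tutorial code above) to see whether the two pointers really do hold different addresses; int* and float* can't be compared directly, so I go through void*:

#include <iostream>

int main() {
	int a = 1234;
	int* b = &a;
	float* c = (float*)&a;

	// Compare the raw addresses rather than the typed pointers.
	std::cout << (static_cast<void*>(b) == static_cast<void*>(c)) << '\n'; // prints 1 if the cast kept the same address
	return 0;
}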

Thanks
When you tested this, did you use classes?
Looks like you're taking the actual memory address of a each time and casting it to a float, not the value that is stored in a.
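To illustrate the distinction I mean (just a quick sketch with a made-up value, not your code):

#include <iostream>

int main() {
	int a = 1234;

	float  f = (float)a;     // converts the VALUE: f is 1234.0f
	float* c = (float*)&a;   // reinterprets the ADDRESS: *c reads a's bytes as float bits

	std::cout << f << '\n';  // 1234
	std::cout << *c << '\n'; // some unrelated-looking number (strictly speaking this read is
	                         // undefined behaviour, but in practice it shows the reinterpreted bits)
	return 0;
}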
This worked perfectly fine for me. Maybe it would help if you specified your cast explicitly:
#include <iostream>
using namespace std;

int main() {
	int a = 1234;
	int* b = &a;
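	// reinterpret_cast relabels the same address as a float*; *c then reads a's bytes as a float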
	float* c = reinterpret_cast<float*>(&a);
	
	std::cout <<  a << '\n';
	std::cout <<  b << '\n';
	std::cout << *b << '\n';
	std::cout <<  c << '\n';
	std::cout << *c << '\n';
	
	return 0;
}
1234
0xbfa5dcec
1234
0xbfa5dcec
1.7292e-42

http://ideone.com/Yj46CY
Well, I expect that reinterpret_cast does what I expected in the first place, since it just reuses the same bits. But I wanted to know why the plain C-style cast can "mess up" like this... I say "mess up" in air quotes because there may be a good reason for it that I don't know, and that's the answer I'm looking for. Does anyone know why the C-style cast gave a different memory address than the one where the variable is actually stored?
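
To double-check the difference, I put together this little sketch (not my original program): a genuinely implicit conversion from int* to float* doesn't even compile, and for unrelated pointer types the C-style cast ends up doing the same thing as reinterpret_cast:

#include <iostream>

int main() {
	int a = 1234;

	// float* bad = &a;                      // doesn't compile: no implicit int* -> float* conversion
	float* c1 = (float*)&a;                   // C-style cast: for unrelated object pointer types
	                                          // this falls through to a reinterpret_cast
	float* c2 = reinterpret_cast<float*>(&a);

	std::cout << (c1 == c2) << '\n';          // prints 1 if both casts produced the same pointer
	return 0;
}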

booradley60 wrote:
Looks like you're taking the actual memory address of a each time and casting it to a float, not the value that is stored in a.

That's the idea. I know the floating-point type should read completely differently from an int because of how its binary value is used in its numerical interpretation... My question was about the pointers and the casting.
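
Just to show what I mean about the interpretation (a throwaway sketch, assuming int and float are both 4 bytes, which they are on my setup): the same four bytes decode to completely different numbers depending on whether they're read as a two's-complement int or as IEEE-754 float bits:

#include <cstdint>
#include <cstring>
#include <iostream>

int main() {
	int a = 1234;

	// memcpy the object representation instead of dereferencing a reinterpreted
	// pointer, so the read stays well-defined.
	std::uint32_t bits;
	float f;
	std::memcpy(&bits, &a, sizeof bits);
	std::memcpy(&f, &a, sizeof f);

	std::cout << "raw bits: 0x" << std::hex << bits << std::dec << '\n'; // 0x4d2
	std::cout << "as int:   " << a << '\n';                              // 1234
	std::cout << "as float: " << f << '\n';                              // ~1.7292e-42 (a denormal)
	return 0;
}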

EDIT: I went to ideone and replaced your cast with a C-style one, but it appears to work fine on there... Maybe it's something in how the compiler implements the cast?
I'm using MS-VSE12 (Visual Studio Express 2012) for Windows Desktop on 64-bit Windows 7, if anyone thinks that helps.