How can the computer tell the difference between signed and unsigned numbers, since they all look the same? For example:
* if it is an unsigned number: 1000b(inary) = 8d(ecimal)
* if it is a signed number: 1000b = -8d
They look the same in memory, so how can the computer tell those two numbers apart? My first guess was that it was the CPU's sign flag that made the difference between them, but after a long search, that theory went down.
It seems the sign flag is only used by instructions, and not stored with each number, which now seems logical.
My second guess was that the operating system, when allocating these numbers in memory at runtime, knows how to interpret each number by its location in memory or something, and knows how to read the bits at that address (and in which order).
So what is the answer? I have asked lots of teachers at college and no one has answered me yet. I have searched the web and asked around on the Internet a lot, and I still don't know exactly how it works.
Also, how does the computer know if it's an int or a float? Obviously my guess is that the compiler can tell because you use int or float declarations, and you can manipulate the way the computer accesses memory by type casting. So my second question is: what is type casting? By that I mean, how does the compiler allocate memory (at a low level), and how does it recognize that memory at runtime?
For example:
int a=5;
If you print the variable a, you see 5 on the console. Why doesn't it show rubbish? Somehow, at runtime, the program knows where to access the memory and how; it knows it holds a signed integer.
It doesn't. You tell it how to interpret it by the types of the variables you use.
int x = 5;
char c = 'A';
double p = 3.14;
const char* s = "Hello World";
std::cout << x << c << p << s;
It all just works because the compiler knows that x is an int, c is a char, p is a double, and s is a C-string, from the declarations of those variables.
To be slightly simplistic (and not entirely accurate) in order to illustrate the point: std::cout has several functions, all named operator<<, each of which takes a different type of variable. Each of those functions knows how to interpret the bits because it knows the type.
It is possible to implement a form of dynamic typing, but that works either through type casts driven, as you guessed, by a flag stored somewhere, or through the dynamic binding that is central to OO.