Check this out:
https://en.wikipedia.org/wiki/IEEE_754-1985
There is a pretty good example here.
Let's take another example:
-0.00527 in decimal can also be represented as
-1 * 527 * 10^-5.
Now we have some interesting parts here:
-1 is the sign. It's always either -1 or 1, so it takes 1 bit to represent.
527 is the number itself. We could choose to represent it with a finite number of bits.
-5 is an exponent. We could also choose to represent it with a specified number of bits.
The 10 is an arbitrary base which humans use to make things easy to read: 10^9 is easier to read than 1000000000. Floating point numbers use base 2, so we can rewrite our example in binary.
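That decimal decomposition is easy to sanity-check. Here's a minimal Python sketch (Python is just my choice here; the variable names are mine, not part of any standard):

```python
# Decompose -0.00527 into sign, number (significand), and base-10 exponent.
sign = -1
number = 527
exponent = -5  # -0.00527 = -1 * 527 * 10^-5

# Divide by 10^5 rather than multiplying by 10**-5 so the float
# result is correctly rounded and matches the literal exactly.
value = sign * number / 10**(-exponent)
print(value)  # -0.00527
```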
Let's take
-1010000b, which is
-1 * 101b * 2^100b.
We have a negative number, so let's set the sign bit to 1.
We have
101b as our number.
We have 4 trailing 0s, which is
100b for an exponent.
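The same check works for the binary version. A quick Python sketch (again, the names are mine):

```python
# -1010000b = -1 * 101b * 2^(100b)
sign = -1
number = 0b101     # 5 in decimal
exponent = 0b100   # 4 trailing zeros

value = sign * number * 2**exponent
print(value)            # -80
print(value == -0b1010000)  # True
```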
If we defined our own convention, we might choose to make an 8-bit number: the first bit is the sign, then three bits for the exponent, then four bits for the number. Packing all of this together, we would have
1 100 0101b or
11000101b.
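This toy 8-bit packing (our made-up convention, not any real standard) is a couple of shifts and ORs in Python:

```python
# Toy format: [1 sign bit][3 exponent bits][4 number bits]
sign_bit = 1       # negative
exponent = 0b100   # 4, the count of trailing zeros
number = 0b0101    # 101b, padded to 4 bits

packed = (sign_bit << 7) | (exponent << 4) | number
print(f"{packed:08b}")  # 11000101
```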
IEEE 754-1985 defines how many bits are used for the exponent and how many are used for the number:
1 bit for the sign, 8 bits for the exponent, and 23 bits for the number (the significand) in single precision.
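You can look at those three fields of a real single-precision float from Python using the standard `struct` module. One caveat our toy format glossed over: IEEE 754 stores the exponent with a bias of 127 and leaves the significand's leading 1 implicit.

```python
import struct

def float_bits(x):
    # Round-trip through the 4-byte big-endian single-precision encoding,
    # then slice the 32 bits into sign, exponent, and fraction fields.
    (n,) = struct.unpack(">I", struct.pack(">f", x))
    bits = f"{n:032b}"
    return bits[0], bits[1:9], bits[9:]

sign, exponent, fraction = float_bits(-6.25)
# -6.25 = -1.1001b * 2^2: sign 1, exponent 2 + 127 = 129 (10000001),
# fraction 1001 followed by zeros (the leading 1 is implicit).
print(sign, exponent, fraction)  # 1 10000001 10010000000000000000000
```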