Suppose I have a uniformly distributed random number N from 0 to 2^62-1, but it has been converted to a double R=N/(2^62) and I don't have access to the original N. Now I want a uniformly distributed random number from 0 to 2^52-1, so I use M=(long)(R*(2^52)). Ideally, if the original N occurred exactly once for each value from 0 to 2^62-1, each value of M from 0 to 2^52-1 would occur 1024 times. Unfortunately, a double has only 53 bits of precision, and some of the largest values of N convert to exactly R=1. This skews the distribution of M: most numbers from 0 to 2^52-1 appear 1024 times each, but a few of them (it must be 256 of them) appear only 1023 times each, and the value 2^52 appears 256 times (due to R=1). I don't want M=2^52 at all. If I instead use M=(long)(R*(2^52-1)), I get M distributed from 0 to 2^52-1, but 2^52-1 appears only 256 times, while the rest of the numbers appear 1023, 1024, or 1025 times. Is there any way, given only R, to get uniformly distributed integers from 0 to 2^52-1?
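To make the setup concrete, here is roughly what the conversion looks like (a sketch only; the name to_M is just for illustration, and it assumes 64-bit integers and IEEE doubles):

    #include <cmath>
    #include <cstdint>

    // N is uniform on [0, 2^62-1], R = N/2^62 is all I actually have,
    // and M = (long)(R * 2^52) is what produces the skewed counts.
    std::uint64_t to_M(std::uint64_t N) {
        double R = static_cast<double>(N) / std::ldexp(1.0, 62);    // rounded to 53-bit precision
        return static_cast<std::uint64_t>(R * std::ldexp(1.0, 52)); // yields 2^52 when R == 1
    }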
Well, you could get a uniformly distributed integer from 0 to 2^52-1 by generating each of the 52 binary digits as an independent Bernoulli trial (each one being 0 or 1 with equal probability).
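Spelled out, that would look something like this (a sketch only; the name random52 and the choice of std::mt19937_64 as the bit source are mine):

    #include <cstdint>
    #include <random>

    // Assemble 52 independent fair coin flips into one 52-bit integer.
    std::uint64_t random52(std::mt19937_64& gen) {
        std::bernoulli_distribution coin(0.5);
        std::uint64_t result = 0;
        for (int i = 0; i < 52; ++i)
            result = (result << 1) | (coin(gen) ? 1 : 0);
        return result;
    }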
However, that's crazy. Why don't you just use the uniform_int_distribution tools in <random>?
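Something along these lines (a sketch; the engine and seeding are arbitrary choices):

    #include <cstdint>
    #include <iostream>
    #include <random>

    int main() {
        // Draw uniform integers on [0, 2^52-1] directly; no double is ever involved.
        std::mt19937_64 gen{std::random_device{}()};
        std::uniform_int_distribution<std::uint64_t> dist(0, (std::uint64_t{1} << 52) - 1);
        std::cout << dist(gen) << '\n';
    }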
Alternatively, if you do have a random number uniform on 0 to 2^62-1, then just take the lowest 52 bits of it (e.g. by using % or a bit mask).
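For example (low52 is just an illustrative name):

    #include <cstdint>

    // Keep only the low 52 bits of the original 62-bit value; the result is
    // uniform on [0, 2^52-1]. Equivalent to n % (1ULL << 52).
    std::uint64_t low52(std::uint64_t n) {
        return n & ((std::uint64_t{1} << 52) - 1);
    }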
If you do not have access to the original N then you are SOL. Sorry.
The problem is that IEEE floating-point values are not uniformly spaced, and that conversion step, which is outside your control, introduces a bias that you cannot fix with a simple equation. You would need a special function that "undoes" the conversion by mapping the normalized IEEE value back into a uniform range.
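To see what such a function would be up against, here is a sketch that just inspects the binary64 encoding of R (my own illustration, not that function; it assumes the platform uses IEEE 754 doubles, but see the caveat at the end):

    #include <cstdint>
    #include <cstdio>
    #include <cstring>

    // Dump the exponent and 52-bit mantissa of an IEEE 754 binary64 value, to
    // show that the representable values of R are not uniformly spaced.
    void dump_bits(double r) {
        std::uint64_t bits;
        std::memcpy(&bits, &r, sizeof bits);                           // reinterpret the double's bytes
        int exponent = static_cast<int>((bits >> 52) & 0x7FF) - 1023;  // unbiased exponent (normal values)
        std::uint64_t mantissa = bits & ((std::uint64_t{1} << 52) - 1);
        std::printf("R=%.17g exponent=%d mantissa=0x%013llx\n",
                    r, exponent, static_cast<unsigned long long>(mantissa));
    }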
I am certain that code exists to do that kind of undoing, but it has been ages since I have seen it, and I couldn't even guess a good Google search for it these days.
Remember that not all processors use IEEE floating-point types, which is something you will need to consider as well.