#include <chrono>
#include <cmath>
#include <iomanip>
#include <iostream>
#include <limits>
#include <random>

// Tune the number of iterations to your PC's performance: ------------------//
using mytype = int;                              // int, long, long long... <---//
constexpr mytype Maxval {
    std::numeric_limits<mytype>::max() / 100     // <---//
};
int main()
{
    std::mt19937 gen {
        static_cast<unsigned>(
            std::chrono::high_resolution_clock::now().time_since_epoch().count()
        )
    };
    std::uniform_real_distribution<> dst (
        std::numeric_limits<double>::lowest(),
        std::numeric_limits<double>::max()
    );
    for (mytype m {}; m < Maxval; ++m) {
        auto d { dst(gen) };
        if ( d != std::numeric_limits<double>::infinity() ) {
            std::cout << "Generated number: " << std::fixed << d << '\n';
        }
    }
    std::cout << '\n';
}
What I get (after a fairly long time) is... nothing!
It seems that, with those parameters, std::uniform_real_distribution only generates 'inf', whereas I expected it to generate a variety of doubles (13.666, -12548795364.5248, ...).
What’s wrong in my code?
Wow, thank you, Peter87, that’s very tricky.
So, since std::numeric_limits<double>::lowest() is negative, max() - lowest() is actually 2 * max(), which is far larger than anything a double can represent...
I admit I had read that page, but in my mind I was doing the 'simple' (= wrong) math and treating max() - lowest() as 0!
Thanks again.
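Just to convince myself, I also checked it with a minimal standalone snippet (my own quick test, nothing more): the subtraction really does overflow to infinity.

#include <iostream>
#include <limits>

int main()
{
    constexpr double a { std::numeric_limits<double>::lowest() };
    constexpr double b { std::numeric_limits<double>::max() };
    // b - a is mathematically 2 * max(), which no double can represent,
    // so the subtraction overflows to +infinity:
    std::cout << std::boolalpha
              << ( (b - a) == std::numeric_limits<double>::infinity() )   // prints: true
              << '\n';
}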
May I ask your opinion about this? Doesn't it sound like a little bug?
Wouldn't it be better if the requirement were b - |a| < max()? Or would there be another downside?
No, I don't think it's a bug, but I agree that it's "tricky" and easy to get wrong.
b - a is the size of the range. I don't know how uniform_real_distribution is normally implemented, but I'm guessing that allowing b - a > max() would make it harder to implement, or at least harder to implement efficiently, because you couldn't even compute b - a without getting infinity.
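If you really need values spread over (almost) the whole range of double, one possible workaround (just a sketch of the idea, not the one true way) is to draw from a half-width range, so that b - a equals exactly max() and the precondition holds, and then double the result. Multiplying a finite double by 2 only bumps its exponent, so it stays exact and cannot overflow here.

#include <iostream>
#include <limits>
#include <random>

int main()
{
    std::mt19937 gen { std::random_device{}() };   // seeded differently from the original code, just for brevity

    // Half-width bounds: b - a == max(), so the b - a <= max() precondition is satisfied.
    std::uniform_real_distribution<double> dst {
        std::numeric_limits<double>::lowest() / 2.0,
        std::numeric_limits<double>::max()    / 2.0
    };

    for (int i {}; i < 5; ++i) {
        // Scale back up to roughly [lowest(), max()); the multiplication is exact.
        std::cout << dst(gen) * 2.0 << '\n';
    }
}

Note that the doubled values only land on a subset of the representable doubles, but they are still uniformly spread over the range.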