Hey guys, it's me again. I have written a Random class that wraps the mt19937_64 class so it can be used in a simpler way.
Everything works as intended, with the exception of uniform_real_distribution.
I have the following two methods:
template <typename Real,
          typename = typename std::enable_if<std::is_floating_point<Real>::value, Real>::type>
Real NextReal() {
    // Default: use the full range of the type.
    auto range = std::numeric_limits<Real>();
    return NextReal(range.lowest(), range.max());
}

template <typename Real,
          typename = typename std::enable_if<std::is_floating_point<Real>::value, Real>::type>
Real NextReal(Real iMin, Real iMax) {
    return std::uniform_real_distribution<Real>(iMin, iMax)(_generator);
}
So if I now use it like this:
Random rng;
rng.NextReal<float>();
// Or rng.NextReal<float>(-FLT_MAX, FLT_MAX); which is what rng.NextReal is doing internally
I get an error window from my VS debugger saying that the min and max values for uniform_real_distribution are invalid, and it points to this line of code in the header <limits> (Line 2605):
_STL_ASSERT(_Min0 <= _Max0 && (0 <= _Min0 || _Max0 <= _Min0 + (numeric_limits<_Ty>::max)()),
"invalid min and max arguments for uniform_real");
That confuses me because the only restriction I could find for min and max is that min must always be less than or equal to max, which should be the case here. Am I missing something?
Requires that a ≤ b and b-a ≤ std::numeric_limits<RealType>::max()
b-a <= std::numeric_limits<RealType>::max()
// is equivalent to
b <= a + std::numeric_limits<RealType>::max()
The _STL_ASSERT has: 0 <= _Min0 || _Max0 <= _Min0 + (numeric_limits<_Ty>::max)()
IF 0 <= a THEN automatically b <= a + std::numeric_limits<RealType>::max(), because b can be at most max() and a + max() >= max() whenever a >= 0.
The real restriction is thus b-a <= std::numeric_limits<RealType>::max()
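To see that concretely, here is a tiny standalone check (just an illustration, not code from the posts above): for a = -FLT_MAX and b = FLT_MAX the difference already overflows to infinity in float arithmetic, so the precondition cannot hold.

#include <cfloat>
#include <cstdio>

int main() {
    float a = -FLT_MAX, b = FLT_MAX;
    std::printf("%g\n", b - a); // prints inf: b - a exceeds FLT_MAX, so the precondition fails
}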
Ah, I overlooked the b-a <= std::numeric_limits<RealType>::max() part.
So I guess there is no simple solution to generate random floats between -FLT_MAX and FLT_MAX.
Gonna figure that out somehow though, thanks :)
What is it that you are trying to generate this for, anyway? I didn't work out the details, but something about using <float>::max() as the bounds of your distribution just seems kinda messy. Once you start heading towards large floating-point numbers, the gaps between adjacent representable numbers can become significant. The gaps between floating-point numbers themselves are not uniform. I assume that uniform_real_distribution normalizes for this somehow, but you will still always have those gaps. https://www.exploringbinary.com/the-spacing-of-binary-floating-point-numbers/
The gaps won't be that huge in terms of sig-figs, but they can be huge in terms of integers. E.g. the gap between two 32-bit floats in the magnitude of 90% of float max (10^38) is around 10^31.
This is just me conjecturing possible issues. It may very well not be an issue for your purposes.
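A quick way to see that gap size for yourself, using std::nextafter from <cmath> (a small illustration; the printed value is my own check, not a number quoted from the thread):

#include <cfloat>
#include <cmath>
#include <cstdio>

int main() {
    float x = 0.9f * FLT_MAX;
    // distance from x to the next representable float above it: roughly 2e31
    std::printf("%g\n", std::nextafter(x, FLT_MAX) - x);
}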
It does not, in my case. And my implementation can handle small ranges just fine (like using 0.0f and 1.0f for percentages and such), so there is no real issue there.
This is just meant to be a somewhat simple mt19937 random class wrapper, because I don't like throwing long templates and typenames like "uniform_real_distribution<TYPE>" into my own code.
It is just bugging me that I have to do a workaround implementation for the default NextReal method, or change its behaviour to work with FLT_MIN and FLT_MAX instead of what I originally did.
Using double or long double will result in the same error message because of the b-a <= MAX check. And my Random::NextReal can also take double and long double as the type parameter; I just used float as an example.
So I guess there is no simple solution to generate random floats between -FLT_MAX and FLT_MAX.
You could generate random bit patterns until you hit a value that's not NaN. That seems easy enough. The only downside is it won't be uniform; values near zero will be more likely.
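A minimal sketch of that idea for float, assuming IEEE-754 and rejecting infinities as well as NaNs so the result stays within [-FLT_MAX, FLT_MAX] (the function name is made up for illustration). As noted, this is uniform over bit patterns, not over the real line.

#include <cmath>
#include <cstdint>
#include <cstring>
#include <random>

float RandomBitPatternFloat(std::mt19937_64& gen) {
    std::uniform_int_distribution<std::uint32_t> bits; // full 32-bit range by default
    for (;;) {
        std::uint32_t u = bits(gen);
        float f;
        std::memcpy(&f, &u, sizeof f); // reinterpret the bits as a float
        if (std::isfinite(f))          // reject NaN and +/- infinity
            return f;
    }
}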
So I guess there is no simple solution to generate random floats between -FLT_MAX and FLT_MAX.
What about generating a random number between 0 and FLT_MAX? Then generate an int random number between, say, -10 and 10. If it is negative, then negate the generated number.
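For illustration, roughly what that suggestion could look like for the default full-range case (the function name NextRealFullRange is made up; 10 of the 21 values in [-10, 10] are negative, so the sign flip is close to, but not exactly, a coin flip):

#include <cfloat>
#include <random>

float NextRealFullRange(std::mt19937_64& gen) {
    float v = std::uniform_real_distribution<float>(0.0f, FLT_MAX)(gen);
    int sign = std::uniform_int_distribution<int>(-10, 10)(gen);
    return sign < 0 ? -v : v; // negate when the int came out negative
}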
@seeplus I thought about something like that, but that would only work for the default method and not for the method where you can pass the min and max range. Because min is not always -max, there might be a strong bias depending on what your min and max are.
@keskiverto That is actually pretty neat and would solve the problem I had with seeplus' approach, because with that I could first determine whether the result will be positive or negative and then just generate a positive number in the range I need and negate it if necessary.
It is really interesting though: the hardest part of learning a programming language is often learning what the std library has to offer and which external libraries could be of use for what you want to accomplish.
Thank you guys for your thoughts, I will mark this as solved when I have finished my implementation and share my final code.
that would only work for the default method and not for the method where you can pass the min and max range. Because min is not always -max, there might be a strong bias depending on what your min and max are.
What do you mean by "bias"? If I select min=-1 and max=10 for a uniform generator, I expect positives to be 10 times more likely than negatives, and I do not want an equal proportion of positives and negatives. If I select min=1 and max=2, I don't want any negatives whatsoever.
If a generator does something different from this it's by definition non-uniform.
If I select min=-1 and max=10 for a uniform generator, I expect positives to be 10 times more likely than negatives, and I do not want an equal proportion of positives and negatives
Yes, that is indeed correct and how it currently works.
What do you mean by "bias"?
Imagine I want to generate a number between -FLT_MAX + 200 and FLT_MAX.
If I then use seeplus' approach with the random int between -10 and 10, I create bias, because -10 and 10 don't split positive and negative in the same proportion as -FLT_MAX + 200 and FLT_MAX.
That's where bernoulli_distribution comes in handy.
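For what it's worth, a rough sketch of that idea for float (the helper name and the use of double for the width are my assumptions, not code from this thread; for double and long double the width computation would need more care):

#include <cfloat>
#include <random>

float NextRealSplit(std::mt19937_64& gen, float iMin, float iMax) {
    if (iMin >= 0.0f || iMax <= 0.0f) // range does not straddle zero: width fits in a float
        return std::uniform_real_distribution<float>(iMin, iMax)(gen);
    // Probability that the result lands in the non-negative half [0, iMax];
    // computed in double so iMax - iMin cannot overflow for float inputs.
    const double p = double(iMax) / (double(iMax) - double(iMin));
    if (std::bernoulli_distribution(p)(gen))
        return std::uniform_real_distribution<float>(0.0f, iMax)(gen);
    return std::uniform_real_distribution<float>(iMin, 0.0f)(gen);
}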
@seeplus Tbh it's hard to know everything the std library has to offer.
Just a quick side question:
I have a method like this:
If you're selecting a uniform interval that's on the order of 2*FLT_MAX, you're selecting an interval where the distance between adjacent representable values near the ends is, if I'm not mistaken, roughly 20 nonillion (2^(128 - 24); 8-bit exponent and 24-bit mantissa). A bias of 200 over 2*FLT_MAX is not merely irrelevant, but actually non-existent.
Are you trying to solve a real problem, or is this merely a learning exercise?
@helios Both: this class is for learning, but it will be used. And yes, that bias is not really relevant, but it changes with the min and max you pass.
I don't know how bad it can actually get, but I think using bernoulli_distribution fits the situation better than the -10 to 10 approach. Or am I overthinking it?
For practical purposes, I've never seen anyone complain that generating [0; 1) and extrapolating that to the desired range generated an unreasonable amount of bias.
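For reference, one way that extrapolation can be written so it does not overflow even for a = -FLT_MAX, b = FLT_MAX is to interpolate as a*(1-u) + b*u, which keeps both terms within the representable range (a sketch; the function name is mine):

#include <random>

template <typename Real>
Real NextRealLerp(std::mt19937_64& gen, Real a, Real b) {
    // u is uniform in [0, 1); neither term below can exceed the type's max in magnitude
    Real u = std::uniform_real_distribution<Real>(Real(0), Real(1))(gen);
    return a * (Real(1) - u) + b * u;
}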
If you really want to handle min and max values that may be anywhere in [-FLT_MAX; FLT_MAX], I'd suggest just manipulating the float bits directly. A float is just mantissa*2^exponent. You can generate a mantissa in some range (it works pretty much the same as generating a normal integer) and set the exponent according to the parameters. The only tricky part is handling when the min and max are in entirely different magnitudes. E.g. min=-1e-9, max=1.1e+10
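This is not exactly the mantissa/exponent construction described above, but a related bit-level trick, sketched here for float under the assumption of IEEE-754: map floats to an order-preserving unsigned integer, draw a uniform integer between the mapped bounds, and map back. Note that the result is uniform over the representable floats in [iMin, iMax], not over the real interval (values near zero are more finely spaced, so per unit length they come up more often).

#include <cstdint>
#include <cstring>
#include <random>

std::uint32_t ToOrdered(float f) {
    std::uint32_t u;
    std::memcpy(&u, &f, sizeof u);
    return (u & 0x80000000u) ? ~u : (u | 0x80000000u); // monotone float -> uint mapping
}

float FromOrdered(std::uint32_t o) {
    std::uint32_t u = (o & 0x80000000u) ? (o & 0x7FFFFFFFu) : ~o; // inverse of ToOrdered
    float f;
    std::memcpy(&f, &u, sizeof f);
    return f;
}

float NextRepresentable(std::mt19937_64& gen, float iMin, float iMax) {
    std::uniform_int_distribution<std::uint32_t> dist(ToOrdered(iMin), ToOrdered(iMax));
    return FromOrdered(dist(gen)); // never NaN or infinity for finite bounds
}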
@helios I might try that in the future, but for now I will implement the bernoulli distribution; I want to finish the Random class so I can work on the next thing I need.