Setting limits for float

Jan 14, 2019 at 3:07pm
A simple one... but what is an efficient way to set the min and max limits you want for a float, double, int, etc., and enforce them on update?

float min = 0.0f;   // lower limit
float max = 1.0f;   // upper limit

float someValue = 0.5f;
void addToSomeValue(float increment)
{
	someValue += increment;
	if (someValue > max)        // clip to the upper limit
		someValue = max;
	else if (someValue < min)   // clip to the lower limit
		someValue = min;
}


This code works, but it seems like there should be a better way...
Last edited on Jan 14, 2019 at 3:08pm
Jan 14, 2019 at 3:09pm
Jan 14, 2019 at 3:18pm
Thanks lastchance, and sorry, I didn't search enough; I should have found that myself.
Jan 14, 2019 at 3:44pm
To be honest, @Grunalin, your solution is fine ... but you could put it in a function to clip ... or clamp ... the value between limits. std::clamp(), as in @JLBorges' post in that link, will do the job if you have a C++17-compliant compiler.

FWIW, I tend to use max-min, e.g.
max( low, min( high, value ) )
simply because that is how it tends to be written in scientific papers, not computer code.

(Since max and min are defined in the standard library in <algorithm>, you may want to choose alternative variable names.)
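
For illustration, here is a minimal sketch of both forms wrapped in a helper (the function name clampValue and its parameter names are placeholders, not anything from the thread; std::clamp needs C++17):

#include <algorithm>

// Placeholder helper: clip value into [low, high].
float clampValue( float value, float low, float high )
{
    return std::clamp( value, low, high );              // C++17
    // pre-C++17 equivalent, in the max-min style:
    // return std::max( low, std::min( high, value ) );
}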
Jan 14, 2019 at 3:54pm
I've got loads to read through to decide the approach I want. I will be calling this function a lot, and the way I wrote it seems quite inefficient. Will read up on / test each way, I guess. Thx again, guys.
Jan 14, 2019 at 5:20pm
There's really not much else you can do [outside of external refactoring]. The max() and min() functions still boil down to the same comparisons and branches. Using clamp instead of min()+max() just saves you one branch in the best case.

One thing you could do, depending on the logic of the program, would be some sort of lazy evaluation, where you don't clamp the value until it's actually needed.
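
As a hypothetical sketch of that idea (the struct and all names below are invented for illustration): accumulate increments freely and clamp only when the value is read. Note this is not equivalent to clamping on every update if the running total overshoots a limit and then swings back.

#include <algorithm>

// Hypothetical sketch: defer clamping until the value is read.
struct LazyClamped
{
    float raw = 0.5f;   // may drift outside [lo, hi] between reads
    float lo  = 0.0f;
    float hi  = 1.0f;

    void add( float increment ) { raw += increment; }   // no branching here

    float get()                                         // clamp on demand
    {
        raw = std::max( lo, std::min( hi, raw ) );
        return raw;
    }
};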

"seems quite inefficient"
I would do actual, repeatable performance testing before drawing that conclusion. It could very well be the process/subroutine as a whole that's slow, and not just this one bit of it.
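
For example, a bare-bones timing sketch with <chrono> (the iteration count and increment are arbitrary; the clamping function is adapted from the first post, with the limits renamed to avoid min/max name clashes):

#include <chrono>
#include <iostream>

float lo = 0.0f, hi = 1.0f, someValue = 0.5f;

void addToSomeValue( float increment )
{
    someValue += increment;
    if (someValue > hi)       someValue = hi;
    else if (someValue < lo)  someValue = lo;
}

int main()
{
    const auto start = std::chrono::steady_clock::now();
    for (int i = 0; i < 10'000'000; ++i)     // arbitrary workload size
        addToSomeValue( 0.001f );
    const auto stop = std::chrono::steady_clock::now();

    const std::chrono::duration<double, std::milli> ms = stop - start;
    // print the result so the loop isn't optimized away
    std::cout << ms.count() << " ms, someValue = " << someValue << '\n';
}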

If, after doing real testing and not just guessing, you determine that this part of your code is the bottleneck, look into other "preprocessing" you can do to the data.
Interesting read: https://stackoverflow.com/questions/11227809/why-is-it-faster-to-process-a-sorted-array-than-an-unsorted-array
Last edited on Jan 14, 2019 at 5:34pm
Topic archived. No new replies allowed.