Precision

I've been given a problem in C to compute the precision for a given reference value. Given this prototype...
double PrecisionVersusBase ( double base)

precision = base
while( base + precision > base)
precision = precision / 2
precision = precision * 2;
I was hoping for a better explanation of what this prototype is doing.
Ok, why does it divide by 2 and then multiply by 2? You end up with the same number.

Or does it keep dividing until precision is equal to zero?

Either way, you gave yourself an infinite loop. The first way guarantees that the while condition is always true when base is not zero, and the second way is the messy thing Calc students know as a limit: you can make precision approach zero as closely as you like, but in exact arithmetic base + precision will ALWAYS be greater than base the way you set it up.
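As a quick sanity check (just a sketch; the choice of base = 1.0 and the iteration counter are my own additions), the loop does actually stop when you run it with real doubles, because precision eventually becomes too small to change base when it is added:

#include <stdio.h>

int main(void)
{
  double base = 1.0;
  double precision = base;
  int halvings = 0;

  /* same loop as the pseudocode above, with a counter added */
  while (base + precision > base)
  {
    precision /= 2;
    ++halvings;
  }

  /* on a typical IEEE-754 system this reports 53 halvings,
     matching the 53-bit significand of a double */
  printf("loop stopped after %d halvings\n", halvings);
  return 0;
}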

What is "precision" in this context? Is it the number of digits you can trust at the end of a float, like significant figures in physics? Or is it a tolerance range, like in engineering?

I'm sorry, but the infinite loop has nothing to do with precision in either case, and I still have no idea what you are trying to do. Post a bit more clearly (use the code button and proper syntax) and I might be able to help. But I can't help when I can't fathom what the problem is.

//Does this code do what it is supposed to do when you compile it?
The syntax that makes sense to me is this:
double PrecisionVersusBase(double base)
{
  double precision = base;
  while (base + precision > base)
  {
    precision /= 2;  /* halve until adding it no longer changes base */
  }
  precision *= 2;
  return precision;
}


In order for the loop to break, precision has to become zero or so small that adding it to base cannot be represented; the sum just rounds back to base. That is how you get the precision of base (or of a double in general). The > sign is confusing, though; why not != ? The last part is also confusing: you've figured out precision, so why double it? Last I knew, precision was represented by +/-, not by 2x what it actually is.
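To illustrate that, here is a minimal sketch (it repeats the corrected function from above and compares the result for base = 1.0 against DBL_EPSILON from <float.h>; the comparison value of 1.0 is just an example):

#include <float.h>
#include <stdio.h>

/* same as the corrected version above */
double PrecisionVersusBase(double base)
{
  double precision = base;
  while (base + precision > base)
  {
    precision /= 2;
  }
  precision *= 2;
  return precision;
}

int main(void)
{
  /* on a typical IEEE-754 system both lines print the same value,
     i.e. for base = 1.0 the function recovers DBL_EPSILON */
  printf("PrecisionVersusBase(1.0) = %g\n", PrecisionVersusBase(1.0));
  printf("DBL_EPSILON              = %g\n", DBL_EPSILON);
  return 0;
}

As for the doubling: the loop only stops after one halving too many (the one that made precision too small to change base), so multiplying by 2 undoes that last step and returns the smallest value that still does change base when added.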