trig library functions

Hi all,

I've been writing a program using the <cmath> library functions sin, cos, and tan, and was wondering why entering increasingly larger angles into these functions produces inaccurate results. For example:

sin ( (PI/180.0) * 1.000) = 1.745240643728351e-002 correct
sin ( (PI/180.0) * 179.0) = 1.745240643728344e-002 (Too low by 7e-17)
sin ( (PI/180.0) * 361.0) = 1.745240643728307e-002 (Too low by 46e-17)
sin ( (PI/180.0) * 539.0) = 1.745240643728279e-002 (Too low by 72e-17)

I realise that these are only small errors, but I'm using formulae with very large angles and many trig functions, so I'm concerned the errors might accumulate.

What would be the best approach? To reduce all
sine angles to -90 to +90,
cosine angles to 0 to +180,
and tangent angles to -90 to +90?

Thanks
What did you #define your PI as? Also, how are you storing these variables?
Floating point values are stored as a sign bit, a mantissa, and an exponent. The number of bits allocated to the mantissa is fixed. As the whole part of the number increases, it takes more bits to store, which leaves fewer bits for the fractional part. Fewer bits for the fractional part means more roundoff error, which is what you are seeing.
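
You can actually watch this happen. Here's a quick sketch (assuming ordinary IEEE 754 doubles; the sample values are just ones I picked) that prints the gap between adjacent representable doubles at a few magnitudes, i.e. the size of one roundoff step:

#include <cmath>
#include <cstdio>
#include <limits>

int main()
{
    const double values[] = { 0.0174524, 1.0, 1000.0, 1000000.0 };
    for (int i = 0; i < 4; ++i)
    {
        double v = values[i];
        // nextafter gives the next representable double above v, so the
        // difference is the size of one roundoff step at that magnitude.
        double gap = std::nextafter(v, std::numeric_limits<double>::infinity()) - v;
        std::printf("near %g the spacing between doubles is %g\n", v, gap);
    }
    return 0;
}

The gap grows as the magnitude grows, which is exactly the "fewer bits for the fraction" effect described above.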

The error you are seeing is caused by the multiplication of the angle by the conversion factor. Therefore, yes, if you were to reduce the angles using the various trigonometric identities, then you would reduce your error.
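
Something along these lines is what I mean, as a rough sketch only (sin_deg is just a name made up for this example, and it only folds out whole multiples of 360; reducing further into the ranges you listed with the usual identities follows the same pattern):

#include <cmath>
#include <cstdio>

const double PI = 3.14159265358979323846;

// Hypothetical helper for this example: reduce in degrees first, then convert.
double sin_deg(double degrees)
{
    // fmod is exact here: 360.0 is exactly representable in binary, so
    // folding the angle back into (-360, 360) introduces no new error.
    double reduced = std::fmod(degrees, 360.0);
    // Only now multiply by the (inexact) conversion factor.
    return std::sin((PI / 180.0) * reduced);
}

int main()
{
    std::printf("%.17g\n", sin_deg(1.0));
    std::printf("%.17g\n", sin_deg(361.0));  // now identical to sin_deg(1.0)
    std::printf("%.17g\n", sin_deg(721.0));
    return 0;
}

The point is that the reduction happens in degrees, where the arithmetic is exact, before the angle ever meets PI/180.0.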
floating point values are stored as a sign bit, a mantissa, and an exponent.


There's no sign bit. The exponent is excess-n and the mantissa is 2's comp.

Does anyone know why they're not both 2's comp? What's the advantage of excess-n for the exponent?

EDIT: Wrong! See my correction below.
Thanks for your replies.

I can see exactly what you mean now, jsmith. I was wondering if it was to do with the conversion to binary. So the larger the whole-number part of a floating point value, the less space is available for the fractional part. What about values between 0 and 1, though? Would they all have the same amount of space available for storage and the same roundoff error?
Yes. Every double gets the same number of mantissa bits, so values between 0 and 1 all carry the same relative precision; only the absolute size of the roundoff step changes with the exponent.

Here's a useful link:
http://babbage.cs.qc.edu/IEEE-754/Decimal.html

Type in a floating point number and you can see its IEEE representation in binary.
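
And if you'd rather see those fields from inside a program, here's a little sketch (assuming a 64-bit IEEE 754 double, which is what that page shows) that pulls out the sign bit, the excess-1023 exponent, and the 52-bit fraction:

#include <cstdint>
#include <cstdio>
#include <cstring>

int main()
{
    double d = 0.017452406437283512;   // roughly sin(1 degree)

    std::uint64_t bits;
    std::memcpy(&bits, &d, sizeof bits);             // same 8 bytes, reinterpreted

    std::uint64_t sign     = bits >> 63;             // 1 sign bit
    std::uint64_t exponent = (bits >> 52) & 0x7FF;   // 11 exponent bits, excess-1023
    std::uint64_t fraction = bits & 0xFFFFFFFFFFFFFULL; // 52 fraction bits (implicit leading 1)

    std::printf("sign = %d, exponent = %d (unbiased %d), fraction = 0x%013llx\n",
                (int)sign, (int)exponent, (int)exponent - 1023,
                (unsigned long long)fraction);
    return 0;
}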
There's no sign bit. The exponent is excess-n and the mantissa is 2's comp.

Oops! jsmith was right, as usual. There is a separate sign bit, and the mantissa is stored as a plain magnitude with an implicit leading 1 rather than 2's complement; only the exponent is excess-n. I don't know what I was thinking of.