It's not in the standard, so you need to provide it yourself (unless your compiler gives it to you as an extension). It was, however, what was causing the error message in your second attempt.
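If the constant in question is M_PI, for instance, here is a sketch of providing it portably (the _USE_MATH_DEFINES define is only meaningful to MSVC's headers; elsewhere it is harmlessly ignored):

#define _USE_MATH_DEFINES   // asks MSVC's <cmath> to expose M_PI and friends
#include <cmath>
#include <iostream>

#ifndef M_PI                // fallback for compilers that never define it
#define M_PI 3.14159265358979323846
#endif

int main()
{
    std::cout << M_PI << '\n' ;
}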
You don't need #define Pi 3.14 (and that's not very accurate anyway). Just replace it with const double Pi = 3.14159265358979; http://www.piday.org/million/
The Windows calculator gives a decent number of digits:
3.1415926535897932384626433832795
which I think exceeds a normal double's capacity. It is probably the 80-bit extended floating point value, if you want to go that far with it.
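A quick way to check those limits (a sketch; the exact values are platform-dependent, and the 21 assumes an x86 80-bit long double):

#include <iostream>
#include <iomanip>
#include <limits>

int main()
{
    // max_digits10 is the number of digits needed to round-trip the type:
    // typically 17 for double, 21 for an x86 80-bit long double
    std::cout << std::numeric_limits<double>::max_digits10 << '\n' ;
    std::cout << std::numeric_limits<long double>::max_digits10 << '\n' ;

    // printing the (double) literal at high precision shows where it runs out
    std::cout << std::setprecision(32) << 3.1415926535897932384626433832795 << '\n' ;
}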
pow for small integer powers is also clunky. Better to just say r*r*r
Also, it's <cmath> for C++; math.h is a C header.
Macros can do a small number of things you can't do anywhere else; e.g. Microsoft at least provides some debug macros along the lines of "what function am I in right now" and so on. For the most part, though, macros are best avoided as error-prone and difficult to debug; they can also cause type-safety problems.
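A sketch of that kind of macro (TRACE is a made-up name here, not Microsoft's): the predefined __FILE__, __LINE__ and __func__ expand at the point of use, which no ordinary function could replicate:

#include <iostream>

#define TRACE(msg) \
    std::cerr << __FILE__ << ':' << __LINE__ << " (" << __func__ << "): " << (msg) << '\n'

void compute()
{
    TRACE( "entering compute" ) ; // prints this file, this line and "compute"
}

int main()
{
    compute() ;
}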
> pow for small integer powers is also clunky. Better to just say r*r*r
But the exponent might not be an integer.
More importantly, if you must write a macro, always always ALWAYS enclose the definition in parentheses and enclose any use of the argument in parentheses too:
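For instance (SQUARE and SQUARE_BAD are illustrative names, not from the original code):

#include <iostream>

#define SQUARE_BAD(x)  x * x             // no parentheses
#define SQUARE(x)      ( (x) * (x) )     // definition and argument parenthesised

int main()
{
    int a = 3 ;
    std::cout << SQUARE_BAD( a + 1 ) << '\n' ; // a + 1 * a + 1  ->  7
    std::cout << SQUARE( a + 1 ) << '\n' ;     // ((a + 1) * (a + 1))  ->  16
}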
pow does too much work: it is relatively slow because it goes through a convoluted process to handle potentially fractional *powers* (the base does not matter). When the power is an integer, like the very common squares and cubes in so many formulae, it is usually better to just multiply it out yourself. If you need to compute r to the 3.76th power, use pow. If you need r cubed, cube it yourself. I think pow can also introduce minor rounding problems in a few cases.
There is nothing 'wrong' with what you did; it's just something to be aware of, as pow can become an annoying time-waster deep in the inner loops of math code. It also exposes another issue with macros: performance tuning is hard when you can't see the function doing the slow work, because the macro masks it...
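As a sketch of both points (cube is an illustrative name): an ordinary inline function multiplies the integer power out, avoids the macro pitfalls, and still shows up by name in a profiler:

#include <cmath>
#include <iostream>

// integer power: just multiply it out
inline double cube( double r ) { return r * r * r ; }

int main()
{
    double r = 2.5 ;
    std::cout << cube(r) << '\n' ;             // r cubed, no pow
    std::cout << std::pow( r, 3.76 ) << '\n' ; // genuinely fractional exponent: use pow
}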
> Is the concern here that r might be an expression with side effects, such as a function call?
That is one of the concerns.
The use of unparenthesised macro arguments as part of a bigger expression can lead to general insanity even when there are no side effects.
#include <cmath>
#include <iostream>
#define PP_MATH_PI double(3.1415926535897932384626433)
#define PP_VOLUME_BAD(r) ( (4.0/3) * PP_MATH_PI * std::pow( (r), 3.0 ) )
#define PP_VOLUME_WORSE(r) ( (4.0/3) * PP_MATH_PI * (r) * (r) * (r) )
#define PP_VOLUME_WORST(r) ( (4.0/3) * PP_MATH_PI * r*r*r )
int main()
{
    int i ;

    // bad because it is a macro
    i = 5 ;
    std::cout << PP_VOLUME_BAD( --i ) << '\n' ;

    // worse because it engenders undefined/unspecified behaviour (macro argument has side effects);
    // the saving grace is that at least in this case, the programmer would know that there
    // is a serious problem (UB); the compiler can and would generate a warning
    i = 5 ;
    std::cout << PP_VOLUME_WORSE( --i ) << '\n' ;

    // worst because it gives absurd (unintuitive, unintended) results (even though there are no side effects);
    // and the compiler is helpless: it can't warn us (it hasn't even seen the macro).
    i = 3 ;
    std::cout << PP_VOLUME_WORST( i+1 ) << '\n' ;
}
No, the function calls are picked up by your performance tuner by name.
The macros are expanded, so the profiler tells you that pow is the problem, but finding where it was called from might be a challenge. It could be that some profilers are smarter now, but the last one I used could not name the macro behind the issue.