Concurrent programming is a complex subject, and not all programs can be parallelized effectively. In your case, it seems the extra code OpenMP adds costs measurably more than the computation itself, which usually points to a misuse of the directives. At worst, a correctly parallelized program should break even with the serial version. If it runs slower, either you're doing something wrong, or your computer doesn't have more than one core, in which case threading offers no performance advantage at all. The single-core case is still a coding error, because properly written code checks how many processors are available before spinning up threads.
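As a rough illustration of that kind of check, here is a minimal sketch using OpenMP's runtime API (omp_get_num_procs and omp_set_num_threads); the threshold and the decision to fall back to one thread are assumptions, not something from your code:

```c
#include <omp.h>
#include <stdio.h>

int main(void)
{
    /* Ask the OpenMP runtime how many logical processors exist. */
    int procs = omp_get_num_procs();

    if (procs < 2) {
        /* Only one core: force a single thread so any parallel
           regions below add as little overhead as possible. */
        omp_set_num_threads(1);
    }

    printf("logical processors available: %d\n", procs);
    return 0;
}
```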
I'm new, but I know I have a quad core ... not such a newbie. One parallelized program computes a factorial and the other computes a power. I can post the code, but I think it's the same one everybody uses.
I don't know what "parallel for reduction" does, but neither power() nor fattoriale() is parallelizable. A computation can only be parallelized if some of its steps are independent. For example, matrix multiplication is easy to parallelize because the value of each element is independent of the other elements in the result matrix.
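To make that concrete, here is a sketch of a parallel matrix multiplication (the fixed size N and the function name are just illustrative):

```c
#include <omp.h>

#define N 256

/* Each C[i][j] depends only on row i of A and column j of B, never on
   other elements of C, so the iterations of the outer loop are
   independent and can be handed to different threads. */
void matmul(double A[N][N], double B[N][N], double C[N][N])
{
    #pragma omp parallel for
    for (int i = 0; i < N; ++i) {
        for (int j = 0; j < N; ++j) {
            double sum = 0.0;
            for (int k = 0; k < N; ++k)
                sum += A[i][k] * B[k][j];
            C[i][j] = sum;
        }
    }
}
```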
Your implementation of factorial is not parallelizable because each step requires the result of the previous step. It wouldn't have made much sense to parallelize it anyway: with 32-bit ints, the result variable overflows after only a few iterations.
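For reference, this is the general shape of such a factorial (the names are assumed, since you haven't posted the code); it shows both the loop-carried dependence and how quickly a 32-bit int overflows:

```c
#include <stdio.h>

static int fattoriale(int n)
{
    int result = 1;
    for (int i = 2; i <= n; ++i)
        result *= i;   /* each step needs the previous result */
    return result;
}

int main(void)
{
    /* 12! = 479,001,600 still fits in a signed 32-bit int, but
       13! = 6,227,020,800 exceeds INT_MAX (2,147,483,647).
       Signed overflow is undefined behavior; on typical hardware
       the value simply wraps and the printed result is garbage. */
    printf("12! = %d\n", fattoriale(12));
    printf("13! = %d  (overflowed)\n", fattoriale(13));
    return 0;
}
```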
power() is not parallelizable for the same reason.