1) Define a symbolic constant for the maximum number of points.
2) Ask the user for the values of min and max.
3) Using the three values above (SIZE, min, and max), compute the values for x.
4) Use Equation 1 to compute the values of f(x).
5) Display a table with the following format:
So those are the requirements. What I'm having a problem with is the user input for min and max. My program does produce the output shown in the example, but only if the user enters -2 as min and 2 as max. Any other range, for example -3 and 3, goes beyond the range of the output. The constant SIZE, which is 21 (the total number of values), works only for the example, which I think is the problem here. I don't know if I'm going about it the right way or not. Any thoughts?
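A quick bit of arithmetic on why SIZE = 21 only fits that one example, assuming the increment is hardcoded at 0.2 as the later posts suggest: 21 points means 21 - 1 = 20 steps, and 20 * 0.2 = 4.0, which is exactly the width of the range from -2 to 2. A wider range like -3 to 3 has width 6.0, so 21 points spaced 0.2 apart cannot cover it.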
2.) You just declared that count equals 0. Now you are multiplying by count, which is 0, and then adding userMin. So x is effectively userMin right from the start.
double x = inc*count + userMin;
So when you code
x = userMin + inc*count;
you are actually making x = userMin + userMin + inc*count.
A little more explanation as to what the intent of this project is would be appreciated.
Wow, sorry about that. It's for an array. The only instructions stated are the ones mentioned at the top and the one below; also, f(x) is the formula 2 * x*x*x + 4 * x + 5.
Instructions: a) evaluate an equation f(x) over a range of x values (xmin, xmax) and make a table like the one shown below <-- this is the sample output I posted; b) the user will enter the values for xmin and xmax. Everything besides what was stated in the instructions I did on my own to match the sample output.
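For reference, Equation 1 as a small C++ function. This is just a sketch; the function name f and the test values are mine, not from the assignment:

#include <iostream>
using namespace std;

// Equation 1: f(x) = 2x^3 + 4x + 5
double f(double x)
{
    return 2 * x * x * x + 4 * x + 5;
}

int main()
{
    cout << f(-2.0) << endl;  // prints -19
    cout << f(0.0) << endl;   // prints 5
    cout << f(2.0) << endl;   // prints 29
    return 0;
}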
This is the code with the array in it that I left out earlier:
while (x <= userMax)                       // outer loop repeats until x passes userMax
{
    for (int e = 0; e < SIZE; e++)         // fills and prints all SIZE entries
    {
        arr[e] = x;                        // current x value
        arr2[e] = 2 * x*x*x + 4 * x + 5;   // Equation 1: f(x)
        cout << fixed << setw(15) << setprecision(2) << arr[e]
             << setw(15) << setprecision(2) << arr2[e] << endl;
        count++;
        x = userMin + inc*count;           // next x: userMin plus count steps
    }
}
2.) You just declared that count equals 0. Now you are multiplying by count, which is 0, and then adding userMin. So x is effectively userMin right from the start.
It increments count, so it would be, for example:
x = -2 + 0.2*0; this would be -2
x = -2 + 0.2*1; this would be -1.8
and so on until it reaches 2.
Also, the number of increments between any two numbers is going to change.
For a .2 inc between -2 and 2, there are 20 increments.
For a .2 inc between -4 and 4, there are 40 increments.
Do you see the problem here? If you try to print out all the increments between -4 and 4 but are limited to printing only 20 of them, you are going to miss half of them.
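To illustrate that point with code: if the step were kept fixed at 0.2 (an assumption; the assignment fixes the point count via SIZE instead), the number of points would have to grow with the range. A minimal sketch:

#include <iostream>
#include <cmath>
using namespace std;

int main()
{
    double userMin = -4.0, userMax = 4.0;  // example range
    const double inc = 0.2;                // fixed step
    // points = steps + 1; round() guards against floating-point error in the division
    int points = static_cast<int>(round((userMax - userMin) / inc)) + 1;
    cout << points << " points" << endl;   // prints 41; for -2..2 it would print 21
    return 0;
}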
Yeah, I figured that out but didn't know how to go about solving that issue. The constant for the total number of values is, I'm guessing, part of it.
So I solved the issue with the constant. I just keep it at 21 and switch the variable inc to float inc = (userMax - userMin)/SIZE; BUT now I get a new problem with the output. If userMin is -2 and userMax is 2, it's supposed to add 0.2 to -2 and so on until it reaches 2. It starts doing that until it reaches -1.0, then it goes to -0.9, -0.7, ... etc. It also goes beyond 2. Any ideas?
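A minimal sketch of one possible fix, keeping SIZE = 21 and the variable names from the code above (this is my reading, not a confirmed solution): with SIZE points there are only SIZE - 1 gaps between them, so the increment should be (userMax - userMin) / (SIZE - 1), not divided by SIZE. Dividing by SIZE makes the step 4/21, roughly 0.19 instead of 0.2, which is likely why the printed values drift off the 0.2 grid. The outer while loop is also likely what pushes x past userMax: when the for loop finishes, x has only reached about userMax, so the while condition can still pass and the table prints again. Looping on count alone removes both problems and avoids comparing doubles for the end condition.

#include <iostream>
#include <iomanip>
using namespace std;

int main()
{
    const int SIZE = 21;                    // total number of points in the table
    double userMin, userMax;
    cout << "Enter min and max: ";
    cin >> userMin >> userMax;

    double arr[SIZE], arr2[SIZE];
    double inc = (userMax - userMin) / (SIZE - 1);   // SIZE points span SIZE - 1 steps

    for (int count = 0; count < SIZE; count++)       // no outer while loop needed
    {
        double x = userMin + inc * count;            // count steps from userMin
        arr[count] = x;                              // store x
        arr2[count] = 2 * x * x * x + 4 * x + 5;     // Equation 1: f(x)
        cout << fixed << setw(15) << setprecision(2) << arr[count]
             << setw(15) << setprecision(2) << arr2[count] << endl;
    }
    return 0;
}

With -2 and 2 this prints x from -2.00 up to exactly 2.00 in 0.2 steps; with -4 and 4 it still prints 21 rows, just spaced 0.4 apart, which matches the fixed-SIZE reading of the assignment.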