> In MATLAB, you could define a function and evaluate it at a point x by feval(@func, x(i)); is there something similar in C++?
Yes, the equivalent would be func(x[i]) (note that C++ uses square brackets for indexing), and the function definition would be:
#include <cmath>

double func(double x)
{
    return std::cos(x) / (x * x);
}
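If you also want the feval-style indirection, where the function itself is a value you can pass around, function pointers or std::function (C++11) cover that. A minimal sketch, with evaluate as a made-up helper name:

#include <cmath>
#include <functional>
#include <iostream>

double func(double x)
{
    return std::cos(x) / (x * x);
}

// Hypothetical helper: takes any callable, mirroring MATLAB's feval(@func, x)
double evaluate(const std::function<double(double)>& f, double x)
{
    return f(x);
}

int main()
{
    std::cout << evaluate(func, 2.0) << '\n'; // same result as func(2.0)
}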
> 1) Is there any way to get the length of a vector/array?
Well, there are two different things in C++: regular arrays, whose length is fixed and must be known at compile time, and vectors, which have a variable size and can be resized. A vector reports its size when you call size() on it.
Example:
#include <vector>
[...]
std::vector<double> h = { 0.001, 0.01, 0.1, 1 }; // requires C++11
// h.size() would return 4
> Currently I'm just defining the length as N = sizeof(h)/sizeof(h[0]); however, I read online somewhere that there is an issue with this definition that can lead to problems later on.
It does not work when you only have a pointer to the beginning of the array, which is always the case for dynamically allocated arrays and for arrays passed to functions, since those decay to pointers.
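A short sketch of that failure mode: inside a function the array parameter has already decayed to a pointer, so the sizeof trick silently returns the wrong length (badLength is a hypothetical name):

#include <cstddef>
#include <iostream>

// h has decayed to double* here, so sizeof(h) is the size of a pointer,
// not of the whole array -- the computed length is wrong.
std::size_t badLength(double h[])
{
    return sizeof(h) / sizeof(h[0]); // e.g. 8/8 = 1 on a typical 64-bit system
}

int main()
{
    double h[] = { 0.001, 0.01, 0.1, 1 };
    std::cout << sizeof(h) / sizeof(h[0]) << '\n'; // 4: works, h is a real array here
    std::cout << badLength(h) << '\n';             // wrong: sees only a pointer
}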
> Is there a better way to define the vector length?
Well, for fixed sizes like this you should use a named constant:
const int N = 4;
[...]
double fp1[N];
double fp2[N];
// etc.
> 3) Finally, I'm outputting my results into a text file, but I'm forced to use a for loop, which is probably not the most efficient method. Is there a better way to handle this?
Normally you'd bundle those values (fp1, fp2, etc.) into a class, which reduces the several parallel arrays to a single array of objects.
You can then give the class an operator<< for stream output, which at least reduces the file output loop to:
for (auto& v : data) outputdata << v;
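A minimal sketch of what that could look like; the struct name and its fields are made-up placeholders, since the original arrays aren't shown:

#include <fstream>
#include <ostream>
#include <vector>

// Hypothetical record bundling the parallel arrays (fp1, fp2, ...) into one struct
struct Result
{
    double h;
    double fp1;
    double fp2;
};

std::ostream& operator<<(std::ostream& os, const Result& r)
{
    return os << r.h << ' ' << r.fp1 << ' ' << r.fp2 << '\n';
}

int main()
{
    std::vector<Result> data = { { 0.001, 0.0, 0.0 }, { 0.01, 0.0, 0.0 } };
    std::ofstream outputdata("results.txt");
    for (const auto& v : data)
        outputdata << v;
}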
If you care about performance, you should make sure to build the Release target, change the project settings for that target to use optimization level -O3 (instead of the default -O2), and add the compiler switch -ffast-math. -ffast-math trades strict standard conformance for speed: it allows the compiler to perform more aggressive optimizations when floating-point numbers are involved. For example, it may rewrite (x*1000.0)/500.0 as x*2.0 (or x+x), or x/3.0 as x*0.333... (multiplication is much faster than division), none of which would normally be allowed.
If you are compiling for 32-bit, you should tune for an architecture that at least supports SSE2 (like the Pentium 4), or for a generic SSE2 CPU, with the switch -msse2, and additionally pass -mfpmath=sse, which forces regular floating-point computations to use SSE instead of the old x87 FPU stack.
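For reference, a GCC invocation combining these switches could look like this (the file names are just placeholders):

g++ -O3 -ffast-math -msse2 -mfpmath=sse main.cpp -o program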
But of course, all of that only matters for programs that perform more than just a few dozen computations on 4 sets.