After using integrate_times with a Runge-Kutta stepper, the result is good and fairly fast.
Is it guaranteed that putting the above system definition in a for loop will slow down the solution dramatically? A system definition like this:
void system( const state_type &x , state_type &dxdt , double t )
{
    for (int i = 0; i < 3; i++)
    {
        for (int j = 0; j < 3; j++)
        {
            dxdt[i] += matrix[i][j]*x[j];
        }
    }
}
is very slow indeed. The matrix is not symmetric, in which case one competing method is the Schur decomposition of the matrix; the exponential of the resulting upper (or lower) triangular matrix then leads to the solution.
My question is whether there is a better way of defining a system of ODEs in odeint C++ than hard-coding the equations themselves.
No, it is not. I doubt that it will take much longer than hard-coding, as you are doing exactly the same number of multiplications (unless the hard-coding omits a lot of zeroes). You can turn optimisations on as well.
Your code is wrong: you should initialise dxdt[i] to 0 before accumulating into it in the j loop.
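A corrected sketch of the system function with the missing initialisation; the matrix values are hypothetical stand-ins, and the function is renamed system_func only to avoid clashing with std::system:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

typedef std::vector<double> state_type;

// Hypothetical 3x3 coefficient matrix, for illustration only.
const double matrix[3][3] = {
    { -1.0,  0.5,  0.0 },
    {  0.0, -2.0,  1.0 },
    {  0.0,  0.0, -3.0 }
};

void system_func(const state_type &x, state_type &dxdt, double /*t*/)
{
    for (int i = 0; i < 3; ++i)
    {
        dxdt[i] = 0.0;   // reset before accumulating in the j loop
        for (int j = 0; j < 3; ++j)
            dxdt[i] += matrix[i][j] * x[j];
    }
}
```

Without the reset, dxdt keeps whatever values it held from the previous step and the accumulated garbage makes the stepper take tiny steps, which is what caused the apparent slowness.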
If the elements of matrix[i][j] are constants - i.e. the system is linear - then you could solve this system without resorting to an ODE solver at all.
Thanks for the answer; you are absolutely right, the above code was wrong, and after making the suggested changes the result is awesome. It is just as fast as the hard-coded one. This is great. Thanks again.
The matrix elements are constants, but the matrix itself is NOT symmetric (and may not be diagonalizable), so there is a danger that the eigenvalues become complex. The Schur decomposition with QR would work, but I do not want to bother with the exponential of an upper triangular matrix at the moment. The Taylor expansion I do not even want to touch. The Schur-Parlett algorithm is good, but coding it takes longer than using odeint.
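For a sense of why the triangular exponential is tractable, here is a sketch of the smallest case: the exponential of a 2x2 upper-triangular matrix, which is the base case of the Parlett recurrence mentioned above (the diagonal exponentiates entry-wise and the off-diagonal is a divided difference). The formula assumes distinct diagonal entries:

```cpp
#include <cassert>
#include <cmath>

// exp of the 2x2 upper-triangular matrix [[a, b], [0, d]], for a != d:
//   exp(T) = [[e^a, b*(e^a - e^d)/(a - d)],
//             [0,   e^d]]
void exp_upper_2x2(double a, double b, double d, double out[2][2])
{
    out[0][0] = std::exp(a);
    out[1][1] = std::exp(d);
    out[1][0] = 0.0;
    out[0][1] = b * (std::exp(a) - std::exp(d)) / (a - d);
}
```

The full Schur-Parlett algorithm generalises exactly this divided-difference step to n x n triangular matrices, which is where the coding effort goes.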
Do you have any better solution than the mentioned ones?
If you can diagonalise the system then you can write down the solution as exponentials in the basis in which the matrix is diagonal. If you can't diagonalise it then standard 4th-order Runge-Kutta should be OK.
The issue is more whether you can diagonalise the matrix.
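As a sketch of the diagonalisation route, here is the closed-form solution of dx/dt = A x for a hypothetical 2x2 matrix A = [[-2, 1], [1, -2]], chosen symmetric for simplicity so it certainly diagonalises, with eigenvalues -1 and -3 and eigenvectors (1, 1) and (1, -1); the recipe is the same for any diagonalisable matrix, though a real problem would need a numerical eigensolver:

```cpp
#include <cassert>
#include <cmath>

// x(t) = V * diag(e^{lambda_i t}) * V^{-1} * x0, written out by hand
// for A = [[-2, 1], [1, -2]] with V = [[1, 1], [1, -1]].
void solve_2x2(const double x0[2], double t, double x[2])
{
    // Coordinates of x0 in the eigenbasis (rows of V^{-1}).
    double c1 = 0.5 * (x0[0] + x0[1]);    // coefficient of (1,  1)
    double c2 = 0.5 * (x0[0] - x0[1]);    // coefficient of (1, -1)

    double e1 = c1 * std::exp(-1.0 * t);  // eigenvalue -1
    double e2 = c2 * std::exp(-3.0 * t);  // eigenvalue -3

    x[0] = e1 + e2;
    x[1] = e1 - e2;
}
```

This evaluates the exact solution at any t directly, with no time stepping at all, which is the point of the "solve it without an ODE solver" remark above.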
I'm afraid that I'm stubborn about (not) using libraries: I like to find out how the algorithms work and code my own. I guess the most likely linear-algebra libraries would be BLAS and LAPACK (free). If you are working at a university or research institute you might have access to the NAG libraries (definitely not free).
For most of these algorithms my favourite book references are
"Numerical Recipes: The Art of Scientific Computing", 3rd Edition, 2007 - Press, Teukolsky, Vetterling and Flannery.
"Matrix Computations", 4th Edition 2013 - Golub and Van Loan.