Normalizing a matrix row-wise (Eigen)

Hello everyone!

I have matrix:
#include <Eigen/Dense>

typedef float F_TYPE; // F_TYPE is float in my code

typedef Eigen::Matrix<F_TYPE, Eigen::Dynamic, Eigen::Dynamic, Eigen::RowMajor> matrix_t;

matrix_t m(2, 3);
m(0, 0) = 0.5;
m(0, 1) = 0.2;
m(0, 2) = 0.01;

m(1, 0) = 0.6;
m(1, 1) = 0.8;
m(1, 2) = 0.79;


matrix m:
[0.5, 0.2, 0.01]
[0.6, 0.8, 0.79]

Now I want to normalize each row to the range 0.0-1.0 using Eigen's functions, so the matrix would look something like this:

[ 1.0, 0.4, 0.02]
[0.75, 1.0, 0.98]

 
matrix_t m2 = m.cwiseAbs().rowwise().maxCoeff();


matrix m2:
[0.5]
[0.8]

Now all I need to do is somehow divide the columns of m by m2, row by row, using Eigen.
I'm new to matrices and I couldn't figure out how to do this with that library.

any ideas?
Thanks!
for (int i = 0; i < 2; i++){
    for (int j = 0; j < 3; j++){
        m(i, j) /= m2(i);
    }
}
?
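If you'd rather let Eigen do it in one expression, broadcasting should work: dividing an array colwise() by a column vector divides entry (i, j) by the i-th entry of the vector. A rough sketch (normalize_rows and vector_t are just names I made up for this example, and I haven't timed it against the plain loop):

#include <Eigen/Dense>

typedef float F_TYPE;
typedef Eigen::Matrix<F_TYPE, Eigen::Dynamic, Eigen::Dynamic, Eigen::RowMajor> matrix_t;
typedef Eigen::Matrix<F_TYPE, Eigen::Dynamic, 1> vector_t; // column vector, one entry per row

// Divide each row of m by that row's maximum absolute value, in place.
// Assumes no row is all zeros (that would divide by zero).
void normalize_rows(matrix_t &m)
{
    vector_t row_max = m.cwiseAbs().rowwise().maxCoeff();
    // Broadcasting: entry (i, j) becomes m(i, j) / row_max(i).
    m = (m.array().colwise() / row_max.array()).matrix();
}

On the 2x3 example above this should give roughly [1, 0.4, 0.02] and [0.75, 1, 0.9875].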
Hi, I was hoping that Eigen has a magic function for that.
I don't think that would be the fastest way to do it.

auto *something = m.data(); // data() returns a pointer to F_TYPE, which is defined as float
// is there any way to get the typename or element type out of the matrix so I can declare this without using auto?

for (int i = 0; i < m.rows(); i++) {
    something = &m(i, 0);          // with RowMajor storage this points at the start of row i
    for (int j = 0; j < m.cols(); j++) {
        something[j] /= m2(i, 0);  // divide each entry of row i by that row's maximum
    }
}
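About the typename question in that comment: Eigen matrices carry their element type as a nested Scalar typedef, so (reusing the matrix_t and m from the first post) something like this should avoid auto:

matrix_t::Scalar *something = m.data(); // matrix_t::Scalar is F_TYPE, i.e. float here

// or give it a name if that reads better:
typedef matrix_t::Scalar scalar_t;
scalar_t *p = m.data();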


A good thing about the Eigen lib is that you can make those matrix calculations run in multiple threads with just one function call in the main function.

I don't think I'm advanced enough in C++ to understand how they do it, and I do need to run the same matrix calculation we see here in multiple threads, because it will be really, really big, and I'm afraid that my own way of making it run in multiple threads with std::thread would be slower.
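If the "one function call" means Eigen::setNbThreads() (plus Eigen::initParallel() when you also start threads of your own), note that as far as I know it only takes effect when Eigen is built with OpenMP, and the built-in multithreading mainly pays off for large matrix-matrix products rather than coefficient-wise work like the normalization above. A rough sketch:

#include <Eigen/Dense>
#include <iostream>

int main()
{
    // Only has an effect if Eigen was compiled with OpenMP (e.g. -fopenmp);
    // otherwise these calls are harmless no-ops.
    Eigen::initParallel();   // recommended before mixing Eigen with your own threads
    Eigen::setNbThreads(4);  // upper bound on the threads Eigen's parallel paths use

    std::cout << "Eigen is using " << Eigen::nbThreads() << " thread(s)\n";

    // Large matrix products are where the internal threading actually helps.
    Eigen::MatrixXf a = Eigen::MatrixXf::Random(1024, 1024);
    Eigen::MatrixXf b = Eigen::MatrixXf::Random(1024, 1024);
    Eigen::MatrixXf c = a * b;

    std::cout << c(0, 0) << "\n"; // use the result so the product isn't optimized away
    return 0;
}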
"A good thing about the Eigen lib is that you can make those matrix calculations run in multiple threads with just one function call in the main function."
I'm not sure if that's true. Certainly not for such small matrices, but it is true that Eigen can do automatic vectorization (i.e. SIMD) through template metaprogramming.
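If you want to see what that vectorization amounts to on a given build, I believe Eigen can report the SIMD instruction sets it was compiled with:

#include <Eigen/Core>
#include <iostream>

int main()
{
    // Prints something like "SSE, SSE2" or "AVX, SSE4.2, ..." depending on compiler flags.
    std::cout << Eigen::SimdInstructionSetsInUse() << "\n";
    return 0;
}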
Topic archived. No new replies allowed.