This is about a C++ problem.
I have an object tracking program that processes images 0,...,n in a loop. The computations at the current frame are based on previous frames, so I need to hold those variables, matrices, etc. for later use. This program now has to be integrated into another system, which will provide an image and expects me to return the tracking output. The system runs other processes afterwards, so my program has to become a function distributed as a DLL.
I need to store my variables and matrices from previous images in order to use them again. I don't know whether the best practice is to write them to the hard drive and read them back on the next call. If that is the case, what is the best way and what data type/file format should I use to write/read? The system aims to be real-time.
I will try to use deques; however, my problem is that once I obtain the location of the blob I have to return from the function (in the DLL) and store the matrices somewhere for the next image.
What about storing everything in a single structure which could be the return value of my tracking function? Any suggestions for learning about structures?
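To make it concrete, here is a minimal sketch of what I mean (all type and function names are made up for illustration, not my real code): the host system owns one state object and hands it back on every call, so nothing has to go to disk between frames.

#include <vector>

// Stand-in matrix type for illustration only; the real code has its own Matrix class.
struct Matrix
{
    int rows, cols;
    std::vector<double> data;
};

// Hypothetical bundle of everything the tracker must remember between frames.
struct TrackerState
{
    Matrix probability;   // probability distributions
    Matrix model;         // appearance/shape models
    Matrix gradients;     // gradient matrices
    // ... the rest of the ~50 matrices
};

// Hypothetical result type and DLL entry point: the caller keeps 'state' alive
// and passes it back in on every frame.
struct BlobLocation { double x; double y; };

BlobLocation track_frame(const unsigned char* pixels, int width, int height,
                         TrackerState& state)
{
    BlobLocation loc;
    loc.x = 0.0;
    loc.y = 0.0;
    // ... tracking computations that read 'pixels' and read/update 'state' ...
    return loc;
}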
You could just hold the matrix outside the function and pass it in each time:
#include <deque>

void process_image(IMAGE& image, Matrix& m)
{
    // ... awfully clever stuff..
}

Matrix m; // The scope of the Matrix is outside the function

for(std::deque<IMAGE>::iterator i = images.begin(); i != images.end(); ++i)
{
    process_image(*i, m); // each call sees the matrix as the previous call left it
}
Thanks Galik, but I am talking about 50 matrices, which are not required by the other process, only by my function when it is called again, 0.04 seconds later :)
I know that in Matlab you can pack variables onto the hard drive to free up RAM. Can we do something similar in C++?
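The closest thing I can think of would be dumping each matrix to a raw binary file and reading it back on the next call; a rough sketch of my own (nothing standard, and per-frame disk I/O may already be too slow for 0.04 s):

#include <cstddef>
#include <fstream>
#include <vector>

// Write rows, cols and the raw double data of one matrix to a binary file.
void save_matrix(const char* path, int rows, int cols,
                 const std::vector<double>& data)
{
    std::ofstream out(path, std::ios::binary);
    out.write(reinterpret_cast<const char*>(&rows), sizeof rows);
    out.write(reinterpret_cast<const char*>(&cols), sizeof cols);
    out.write(reinterpret_cast<const char*>(&data[0]),
              static_cast<std::streamsize>(data.size() * sizeof(double)));
}

// Read the same layout back; rows and cols are returned through the parameters.
std::vector<double> load_matrix(const char* path, int& rows, int& cols)
{
    std::ifstream in(path, std::ios::binary);
    in.read(reinterpret_cast<char*>(&rows), sizeof rows);
    in.read(reinterpret_cast<char*>(&cols), sizeof cols);
    std::vector<double> data(static_cast<std::size_t>(rows) * cols);
    in.read(reinterpret_cast<char*>(&data[0]),
            static_cast<std::streamsize>(data.size() * sizeof(double)));
    return data;
}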
About 15 MB. They are probability distributions, models and gradient matrices.
My function will be part of a bigger system containing behaviour analysis based on image and video, plus avatar animation. By real-time we mean being able to process 25 frames per second, i.e. one image in 0.04 seconds at most.
Therefore I have to be as stingy as possible with resources, otherwise the other guys will complain. We actually aim to work with a webcam, so we can cheat the frame rate a little, but not too much.
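To make the budget concrete, this is roughly how I would time one call on our side (track_frame is a made-up stand-in for my function):

#include <chrono>
#include <iostream>

// Time one tracking call and warn if it blows the 25 fps (0.04 s) budget.
void process_one_frame()
{
    std::chrono::steady_clock::time_point start = std::chrono::steady_clock::now();

    // track_frame(...);  // the actual tracking call goes here

    double elapsed = std::chrono::duration<double>(
        std::chrono::steady_clock::now() - start).count();
    if (elapsed > 0.04)
        std::cerr << "Frame took " << elapsed << " s, over the 0.04 s budget\n";
}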
I think you are going to struggle if you read and write your matrices to and from disk for every image. That much disk access may disrupt the rest of the system more than simply consuming RAM would. And 15 MB is not all that huge. For image processing, I'd want to keep them hanging around in memory. How big are the image frames?
Well, at least I can be certain that a deque was the better choice.
Maybe one could combine the two to save memory without sacrificing performance? Read from disk into a buffer that is emptied as each image is consumed and refilled over time, so that the read-out from memory reaches the end of the buffer at the same time as the read from disk completes?
Nah, that is fairly difficult to get right. Assuming you have around 1 GB of free memory, I see no reason why just using a deque wouldn't work.
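A minimal sketch of what I mean, with Frame standing in for whatever per-frame data you keep:

#include <cstddef>
#include <deque>

// Stand-in for whatever per-frame data the tracker keeps.
struct Frame { /* image data, per-frame matrices, ... */ };

// Keep only the most recent max_frames entries; drop the oldest as new ones arrive.
void push_frame(std::deque<Frame>& history, const Frame& f, std::size_t max_frames)
{
    history.push_back(f);
    if (history.size() > max_frames)
        history.pop_front();
}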
Guys, my program uses 15 MB of RAM in total when running. This may change a bit depending on the size of the images/frames, but I only have to deal with full-size images at the beginning and end of the tracking, since I use a fixed normalization of the images for the mathematical modelling and tracking, let's say 80x80 (32-bit doubles).
There is no problem if my program runs alone, but now it will be integrated with other modules processing sound, voice, 3D animation and classification, all underneath a chat interface.
It is actually an avatar that chats with the user, either voice to voice or by typing and voice. So we need to run all modules at the same time.