runtime: writing/reading a data file vs. allocating storage for huge matrices

Dear all,
I am going to write a program that has to do calculations with huge matrices (sometimes sparse, sometimes not) of size larger than 10,000 x 10,000.
While planning the program, it is therefore important for me to know how efficient it would be to store only the nonzero entries of such a matrix in a data file and work with that file, instead of keeping the whole matrix directly in memory, for example as a vector<vector<double>>.
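
To make the comparison concrete, here is a minimal sketch of the two layouts I am considering. The type aliases and the save() helper are just placeholder names for illustration, not existing code:

#include <cstddef>
#include <fstream>
#include <map>
#include <string>
#include <utility>
#include <vector>

// Dense layout: every entry lives in memory. For a 10,000 x 10,000
// matrix of double that is 10,000 * 10,000 * 8 bytes = ~800 MB,
// before any per-row overhead of the nested vectors.
using DenseMatrix = std::vector<std::vector<double>>;

// Sparse layout: only the nonzero entries are kept, keyed by
// their (row, column) position.
using SparseMatrix = std::map<std::pair<std::size_t, std::size_t>, double>;

// Write only the nonzero triplets (row, column, value) to a text
// file, one per line, so the matrix can be reloaded later.
void save(const SparseMatrix& m, const std::string& path)
{
    std::ofstream out(path);
    for (const auto& entry : m)
        out << entry.first.first << ' ' << entry.first.second
            << ' ' << entry.second << '\n';
}

int main()
{
    SparseMatrix a;
    a[{3, 7}] = 1.5;          // store a nonzero entry
    a[{9999, 42}] = -2.0;

    // find() avoids inserting a zero, which operator[] would do
    // when reading an entry that was never set.
    auto it = a.find({0, 0});
    double value = (it != a.end()) ? it->second : 0.0;
    (void)value;              // silence unused-variable warning

    save(a, "matrix.dat");
}

With a layout like this, a matrix with, say, 1% nonzero entries would need storage for only about a million entries instead of the full ~800 MB of the dense version.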

Maybe you see another, more efficient way to handle such a matrix (if possible, exploiting its sparsity)?

Perhaps some people here already have experience with this kind of efficiency question in C++?

Thanks in advance for your help.