I have a 1024 by 1024 dataset, and I've written a piece of code that tokenizes it line by line and stores the elements one by one, which is extremely slow.
The dataset is available in .txt and .dat format. Could you please advise me on how to import the data faster? Thanks a million in advance.
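(JLBorges' reply is not reproduced in this thread, but a common way to speed this up is to read the whole file into memory in one call and then parse the numbers out of the in-memory buffer, rather than tokenizing line by line. A minimal sketch of that idea, assuming a whitespace-separated .txt file of 1024 by 1024 numeric values; the file name data.txt and the choice of double are placeholders.)

#include <fstream>
#include <iostream>
#include <sstream>
#include <string>
#include <vector>

int main()
{
    // Pull the entire file into one in-memory buffer with a single bulk read.
    std::ifstream file("data.txt");   // placeholder file name
    std::ostringstream buffer;
    buffer << file.rdbuf();
    std::istringstream in(buffer.str());

    std::vector<double> data;
    data.reserve(1024 * 1024);        // reserve up front to avoid repeated reallocation

    // Parse every whitespace-separated value from the buffer; no per-element output.
    double value;
    while (in >> value) data.push_back(value);

    std::cout << "read " << data.size() << " values\n";
}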
Thanks a lot for the efficient code, JLBorges, and thanks all! If anyone else is facing this problem, remember not to display each element as it is retrieved; skipping that output saves a huge amount of time. =D Have a nice day.
One more comment. If you had full control over the format of the data file, it could be more efficient to use a binary format rather than ordinary text.
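(As a rough sketch of that idea, not the poster's code: write the values once as raw doubles with ostream::write, then load them back with a single istream::read. The file name data.bin is a placeholder, and raw binary like this assumes the file is written and read on the same platform, since endianness and type sizes are not portable.)

#include <fstream>
#include <iostream>
#include <vector>

int main()
{
    const std::size_t n = 1024 * 1024;

    // Write: dump the whole buffer as raw bytes in one call.
    {
        std::vector<double> data(n, 1.5);   // stand-in for the real values
        std::ofstream out("data.bin", std::ios::binary);
        out.write(reinterpret_cast<const char*>(data.data()),
                  data.size() * sizeof(double));
    }

    // Read: one bulk read straight into the vector's storage.
    std::vector<double> data(n);
    std::ifstream in("data.bin", std::ios::binary);
    in.read(reinterpret_cast<char*>(data.data()),
            data.size() * sizeof(double));

    std::cout << "read " << (in.gcount() / sizeof(double)) << " values\n";
}

A single bulk read of the raw doubles skips per-element text parsing entirely, which is where the text version spends most of its time.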