I have a question. I would like to get a rough measure of how long an fwrite to a drive takes.
I do the following:
clock_t begin = clock();
size_t written = fwrite(send, 1, transfer_size * sizeof(unsigned long long), wpFile);
clock_t end = clock();
double elapsed_secs = (double)(end - begin) / CLOCKS_PER_SEC;
Unfortunately, I don't get any different result for different transfer sizes!
My guess is that clock() somehow stops measuring once the fwrite command is issued, and only resumes once the fwrite is already done. I get almost the same measurement whether my transfer size is 32 KB or 16 MB, where I was expecting to see a huge difference. I don't really need the exact timing (though of course it would be nice to know); all I care about is seeing some difference in time between KB-sized and MB-sized transfers.
Does anyone know of another function that will give me a rough measurement of the actual elapsed time of the fwrite call?
I am using Visual Studio 2010, and the application is a Windows Forms application.
My code so far works fine, but once I add
#include <chrono>
it complains: error C1083: Cannot open include file: 'chrono'
I tried this, but still no success.
I put the code above and my operation is this:
size_t written = fwrite(send, 1, transfer_size * sizeof(unsigned long long), wpFile);
Whether I write 4 KB of data or 512 KB of data, the answer is still 1 ms.
I really think there is no way for Windows to give a resolution lower than 1 ms.
I know I can put a for loop around my operation and run it, say, 1000 times.
But the problem is that since EACH single operation (whether 4 KB or 512 KB) takes less than 1 ms and is rounded up to 1 ms, looping also gives the same time for both 4 KB and 512 KB.
Is there any way that I can measure anything lower than 1 ms?
Assuming you're doing this to profile... you're approaching this problem the wrong way.
Is there anyway that I can measure anything lower than 1ms?
Even if there is, the results will be unreliable.
If you're doing this for profiling purposes, any time under 1 second (read: a full second) shouldn't be taken very seriously. Modern computers are multitasking monsters, and there are too many outside factors that can have subtle impacts on performance. At these fast speeds, those subtle impacts get exaggerated.
For example, your code might take 1 ms longer to run due to outside process interference, disk usage, or <insert other issue here>. When your entire run time is 1 second, that outside interference accounts for only 0.1% of the total execution time. When the entire run time is 2 ms, it accounts for 50%.
Rather than looking for a higher-resolution timer... you probably should just increase your iteration count and/or read/write size.
Thanks a lot,
The QueryPerformanceCounter and QueryPerformanceFrequency functions actually worked fine.
In fact I am getting very different measurements for writing 4 KB versus 32 KB, which is really nice.
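For anyone landing here later, a minimal sketch of that QueryPerformanceCounter approach (Windows-only; the buffer, transfer size, and file name are placeholders, not from the original code):

```cpp
#include <windows.h>
#include <cstdio>

int main() {
    LARGE_INTEGER freq, start, stop;
    QueryPerformanceFrequency(&freq);     // counter ticks per second

    const size_t transfer_size = 4096;    // placeholder size
    static unsigned char buffer[4096] = { 0 };

    FILE* wpFile = std::fopen("test.bin", "wb");  // placeholder file name
    if (!wpFile) return 1;

    QueryPerformanceCounter(&start);
    size_t written = std::fwrite(buffer, 1, transfer_size, wpFile);
    QueryPerformanceCounter(&stop);

    // Elapsed microseconds = tick delta / ticks-per-second, scaled to us.
    double elapsed_us =
        1e6 * (double)(stop.QuadPart - start.QuadPart) / (double)freq.QuadPart;
    std::printf("wrote %zu bytes in %.2f us\n", written, elapsed_us);

    std::fclose(wpFile);
    return 0;
}
```

The performance counter typically ticks well above 1 MHz, so unlike clock() it can resolve single sub-millisecond writes.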