Code Performance

How can I measure my code's performance?

For example:
I write some code, but is it good code or not?

I want to measure the time.
How much time does my code need?

Before my code starts I record the system time, and at the end of the code I record the system time again, then subtract: last time minus first time is my code's running time, isn't it?
But this time is in seconds, so I can't see the real nanosecond time.

Please help with this situation.

How can I find out whether my code's performance is good or not?
Finer granularity time is typically OS-dependent.

On Unix, for example, the way profiling is done is via setitimer().

It's not quite as simple as doing a time delta as you are doing because in a multi-process environment, your program isn't necessarily executing the whole time -- other processes might grab some CPU, so what you would really be measuring is an upper bound.

But empirical time measurements are often not all that useful, since they measure only the scenarios tested. A better approach is to look at your algorithm and determine its runtime efficiency using big-O notation.
Thanks for your suggestion, but I'm just learning big-O now, so I don't know it very well. I also don't know which algorithm is good for my code, so I want to measure the time.
Also, you are right that my way does not show the real time.

How and where can I learn to write efficient code?
Big O notation is very important because it can give you a good estimate about how good your algorithm is. If you don't understand this, it is worth spending time learning. For example, if you find that your algorithm is O( n^2 ) when it could be done as O( log(n) ), then changing to the latter algorithm will help immensely in performance. Your first step before writing an algorithm should always be to make sure the big O notation is acceptable.

To learn about writing efficient algorithms, try and pick up an algorithms book or look up some different common algorithms and problems like: backtracking, dynamic programming, greedy algorithms, n-queens, and breadth first search. Studying data structures will also help immensely so that you know which data structure to use in certain circumstances. For example, sometimes a normal array may end up causing your algorithm to be painfully slow, but using a balanced tree would make it very fast. For data structures look up info about arrays, linked lists, binary search trees, stacks, queues, and hash tables for a start.
About the original question, clock() gives finer granularity than system time, but it's also system-dependent. For example, on Windows it's 1000 ticks/s, while on Linux it's 1000000 ticks/s. The exact value can be obtained from the CLOCKS_PER_SEC macro. clock() returns the number of ticks since the start of the program.
Thanks a lot for your answers and suggestions.
These answers are very, very good!
And actually, newer Linux kernels changed the jiffy from 10 ms to 1 ms, so did that affect the resolution of clock()? (The units it reports are still fixed by CLOCKS_PER_SEC either way.)

@ShyRain:

Typically when you figure out the "big-O notation" for an algorithm, it is with respect to the number of comparisons or the number of iterations, or the number of memory writes or memory reads or whatever else you want. Typically you choose the basis on whichever of the above will dominate your execution time.

Sorts, for example, are often compared in terms of the number of item comparisons. Bubble sort, for example, runs in O(n^2) time, which means that the number of comparisons it needs grows no faster than a constant multiple of n^2.

You just have to look at your algorithm. In many cases the looping structure makes the "big-O notation" obvious. As a hypothetical example:

vector<int> v; // Assume this has some entries in it
for( int i = 0; i < 10; ++i )
    for( size_t j = 0; j < v.size(); ++j )
        for( size_t k = 0; k < j; ++k )
            cout << ( v[k] * v[j] ) << endl;


OK, a contrived example, but it illustrates the point. Write, in big-O notation, the number of multiplications.

So j walks every element in the vector. For every j, k walks all the elements prior to it. So when j = 0, the k-loop does nothing. When j = 1, the k-loop does one multiplication. When j = 2, it does 2.

So the total is 0 + 1 + 2 + ... + (n - 1) multiplications for each pass of the outer loop.
You can use the formula for an arithmetic series to reduce this finite sum to a simple expression in terms of n: it equals n(n - 1)/2.

Then you'll find it is O(n^2).

Lastly, the outer for loop runs exactly 10 times, which makes the whole algorithm O(10n^2). And you know that O(kn^2) = O(n^2) for any constant k > 0.
Oh my God :))

You are wonderful, jsmith.

I am a student and we have math and discrete math courses, but I didn't know what they were for.
Now I understand, thanks to your example. Thanks a lot!
Please enable your private messages; I want to meet wonderful, helpful people like you :))