And actually, newer Linux kernels changed the jiffy from 10ms to 1ms, so that probably affected the resolution of clock()?
@ShyRain:
Typically when you figure out the "big-O notation" for an algorithm, it is with respect to the number of comparisons or the number of iterations, or the number of memory writes or memory reads or whatever else you want. Typically you choose the basis on whichever of the above will dominate your execution time.
Sorts, for example, are often compared in terms of the number of item comparisons. Bubble sort runs in O(n^2) time, which means that it takes no more than some constant times n^2 comparisons to sort an array of n items.
You just have to look at your algorithm. In many cases the looping structure makes the "big-O notation" obvious. As a hypothetical example:
vector<int> v; // Assume this has some entries in it
for( int i = 0; i < 10; ++i )
    for( size_t j = 0; j < v.size(); ++j )
        for( size_t k = 0; k < j; ++k )
            cout << ( v[k] * v[j] ) << endl;
Ok, stupid example, but to illustrate the point. Write, in big-O notation, the number of multiplications.
So j walks every element in the vector. For every j, k walks all the elements prior to it. So when j = 0, the k-loop does nothing. When j = 1, the k-loop does one multiplication. When j = 2, it does two.
So the total for one pass of the outer loop is 0 + 1 + 2 + ... + (n-1), where n is the size of the vector.
You can use simple algebra (the arithmetic-series formula) to reduce this finite sum to a closed form: n(n-1)/2.
Then you'll find it is O(n^2).
Lastly, the outer for loop runs exactly 10 times, which makes the whole algorithm O(10n^2). And you know that O(kn^2) = O(n^2) for any constant k > 0.