#include <iostream>
#include <cstdlib>
#include <ctime>
#include <chrono>
using namespace std;

int main()
{
    const int n = 5;
    int vet[n];
    int tmp;

    // Fill the array with pseudo-random values in [0, 99].
    srand(time(NULL));
    for (int i = 0; i < n; i++)
        vet[i] = rand() % 100;
    cout << endl;

    // Bubble sort: keep passing over the array until a pass makes no swaps.
    bool flag = true;
    int stop = n - 1;
    while (flag)
    {
        flag = false;
        for (int i = 0; i < stop; i++)   // i < stop, otherwise vet[i + 1] reads past the end
            if (vet[i] > vet[i + 1])
            {
                tmp = vet[i];
                vet[i] = vet[i + 1];
                vet[i + 1] = tmp;
                flag = true;
            }
        stop = stop - 1;                 // the largest element is now in place
    }

    // Print the sorted array.
    for (int i = 0; i < n; i++)
    {
        cout << vet[i] << endl;
    }
}
If I use

auto inizio = high_resolution_clock::now();
code....
auto fine = high_resolution_clock::now();
cout << duration_cast<duration<double>>(fine - inizio).count() << endl;

(with using namespace std::chrono; at the top), the sort time is always 0, or something odd like 0-20004555, never something like 0.0856 sec.
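For context, here is the complete pattern as a self-contained sketch; the busy loop is only a placeholder for the sort, and the variable names are just illustrative:

#include <iostream>
#include <chrono>

int main()
{
    using namespace std::chrono;

    auto inizio = high_resolution_clock::now();

    // Stand-in for the code being measured: a busy loop.
    volatile long sum = 0;   // volatile keeps the compiler from removing the loop
    for (long i = 0; i < 10000000; i++)
        sum = sum + i;

    auto fine = high_resolution_clock::now();

    // duration<double> stores seconds as a floating-point number,
    // so small times print as fractions instead of truncating to 0.
    std::cout << duration_cast<duration<double>>(fine - inizio).count()
              << " sec" << std::endl;
}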
Perhaps it is better to ask here, since I am a beginner; could you close the other post?
In any case, you told me:
"Your n=10. The amount of instructions to sort 10 values is minuscule even in the worst case. I bet your CPU can do much more instructions within one (high resolution) timestep."
I don't understand that well... but if n = 100, it is not possible that the sort time is still always 0...
MPG,
Let me try to explain.
If you run a piece of code that takes 0.00001 seconds to execute, and you measure that time in whole seconds, you will get 0 seconds, even though your common sense tells you (correctly) that the code MUST take some amount of time (however tiny) to execute.
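You can see that truncation directly; in this little test (the 50-microsecond sleep is just a stand-in for some fast piece of code), the same interval prints as 0 in whole seconds but clearly non-zero in nanoseconds:

#include <iostream>
#include <chrono>
#include <thread>

int main()
{
    using namespace std::chrono;

    auto t0 = high_resolution_clock::now();
    std::this_thread::sleep_for(microseconds(50));   // "work" far shorter than a second
    auto t1 = high_resolution_clock::now();

    // Whole seconds: integer truncation turns the tiny interval into 0.
    std::cout << duration_cast<seconds>(t1 - t0).count() << " s\n";
    // Nanoseconds: the same interval, now visibly non-zero.
    std::cout << duration_cast<nanoseconds>(t1 - t0).count() << " ns\n";
}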
That is what is happening. When I was a nerdy little kid learning to code, 100 elements in a bubble sort would have taken enough time to display a value for the time taken, sure. But fast forward to 2018: your CPU's specs have some sort of value on them like '3.0 GHz'. This roughly (there are complexities, but roughly) tells you how many things the CPU can do per second. So what is a GHz? Giga, in metric, is a billion. Your computer can probably do between 2 and 10 billion instructions per second depending on a number of factors, but let's just take it at face value and say 3 billion per second.
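As a side note on the "high resolution timestep" remark quoted above, you can ask std::chrono itself how long one tick of the clock is; a quick sketch (period::num / period::den is the tick length in seconds):

#include <iostream>
#include <chrono>

int main()
{
    using clock = std::chrono::high_resolution_clock;

    // e.g. prints 1e-09 when the clock counts in nanoseconds.
    std::cout << "one tick = "
              << static_cast<double>(clock::period::num) / clock::period::den
              << " seconds\n";
}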
Let's be generous with your bubble sort and say it does N*N iterations, and each iteration has to manage a loop variable, move some data, make a comparison, and so on. The whole body might be, I dunno, 500 CPU instructions or so (being generous again; it may be less).
So you want to do 500 things 100*100 times (an N-squared sort), which is 5,000,000 instructions. Divided by 3 billion, that is about 0.00167 seconds, under 2 ms. If your timer output cannot show milliseconds, it will just spew out zero. And I was being nice and giving your program a lot of instructions per loop iteration; most likely it's far fewer than 500, and who knows what optimizations were done. It's probably running in a fraction of a millisecond, and that is just too fast to register with your timer.

The easy way to see some numbers is to sort a larger array. The last time I tested my old sort routine, I had to do 10 billion doubles to get it to register enough time to tell whether changes to the code were making it better (I wanted to see a few trusted significant digits on the seconds). Computers are just that fast.
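Something along these lines will show non-zero times; a minimal sketch, where the size 50000 is just an illustrative value big enough for an N*N sort to take a visible fraction of a second:

#include <iostream>
#include <vector>
#include <cstdlib>
#include <ctime>
#include <utility>
#include <chrono>

int main()
{
    using namespace std::chrono;

    const int n = 50000;                 // big enough for bubble sort to take visible time
    std::vector<int> vet(n);
    srand(time(NULL));
    for (int i = 0; i < n; i++)
        vet[i] = rand() % 100;

    auto inizio = high_resolution_clock::now();

    // The same bubble sort as above, just on a much bigger array.
    bool flag = true;
    int stop = n - 1;
    while (flag)
    {
        flag = false;
        for (int i = 0; i < stop; i++)
            if (vet[i] > vet[i + 1])
            {
                std::swap(vet[i], vet[i + 1]);
                flag = true;
            }
        stop--;
    }

    auto fine = high_resolution_clock::now();
    std::cout << duration_cast<duration<double>>(fine - inizio).count()
              << " sec" << std::endl;
}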