### Selection sort - complexity

```cpp
#include <iostream>
using namespace std;

int main()
{
    int a[10] = {5, 8, 0, 1, 4, 2, 4, 3, 9, 7};
    int i, j, min, temp;
    for (i = 0; i < 9; i++)
    {
        min = i;
        for (j = i+1; j < 10; j++)
        {
            if (a[j] < a[min])
                min = j;
        }
        temp = a[min];
        a[min] = a[i];
        a[i] = temp;
    }
    for (i = 0; i < 10; i++)
        cout << a[i] << ' ';
    cout << '\n';
}
```

I think the basic step count is:
1 + n + (n-1)[1 + 1 + n*(n-1)(1+1+1+1+1+1+1)]

```
i = 0                 1
i < 9                 n
i++                   n-1
min = i               1
j = i+1               1
j < 10                n
j++                   n-1
if (a[j] < a[min])    n-1
    min = j;          n-1
}
temp = a[min];        n-1
a[min] = a[i];        n-1
a[i] = temp;          n-1
}
```

Is it right?

Why are Ω (best), Θ (average) and O (worst) all n^2?

Answers in big-O notation look like O(N) or O(N^3) or O(lg(n)) and so on.

Are you expected to express your result in standard big-O notation as above?

I am honestly not sure what to make of [stuff] in your result.

If you put a counter inside the inner loop, you get 45 increments for your sample data.
That isn't too useful, but it gives you a ballpark.
@jonnin
Big Omega (Ω) records the lower bound on resource growth (best case scenario).
Big O (O) records the upper bound on resource growth (worst case scenario).
Big Theta (Θ — mpg used the wrong symbol [the lowercase theta]) only exists when Ω ≡ O.

So selection sort has Θ(n²) behavior, which means that it is also Ω(n²) and O(n²).

@mpg
It is because the behavior of selection sort does not depend on its input. It will perform the same number of loops no matter what you give it.

And since it is a nested loop-in-a-loop over all elements algorithm, it has n² behavior.

(It has Θ(1) memory usage.)

I made an animated GIF you can watch here:
http://www.cplusplus.com/faq/sequences/sequencing/sort-algorithms/selection-sort/

Hope this helps.
> http://www.cplusplus.com/faq/sequences/sequencing/sort-algorithms/selection-sort/
> Fast for small data where writes are slow
Not sure I understand what you mean; could you provide an example?
It's saying the swap only happens one time for each item. If the swap is the bottleneck due to big, fat objects, that can be useful, as it avoids the excess copies found in some of its peers.
So I guess it shouldn't be small data, then.
Well, the fatness of the object may have less to do with its size than with how much effort it takes the hardware to make the write.

And "small data" means "few items", not necessarily small items, though in embedded systems that is often true as well...
Excuse me, can you write a table of the basic steps of my code like this one?
https://www.dropbox.com/s/8a77ho6vvwpn7zu/Passi%20base.jpg?dl=0
In some notes given by the professor I read: "The calculation of asymptotic complexity is defined by the block of major complexity."
So, for example, 2n^2 + 2n + 2 has asymptotic complexity n^2. Do you agree?
And I don't understand well why
"Θ(n²) behavior, which means that it is also Ω(n²) and O(n²)."
Why n^2 for all of them? I really don't understand...

Here I read this about what happens in selection sort:

```
8 5 7 1 9 3   // (n-1) first smallest
1 5 7 8 9 3   // (n-2) second smallest
1 3 7 8 9 5   // (n-3) third smallest
1 3 5 8 9 7   // 2
1 3 5 7 9 8   // 1
1 3 5 7 8 9   // 0
```

total comparisons = n(n-1)/2 ~ O(n^2)
But I don't understand this either: why n(n-1)/2, and why ~ O(n^2)?

"
In selection sort the outer for loop executes n-1 times.
It invokes the swap function once at each iteration.
Total swaps = n-1
Total moves = 3*(n-1) (each swap has three moves)
The inner loop executes the size of the unsorted part minus 1 (from 1 to n-1), and in each iteration we make one comparison.
Number of key comparisons: 1 + 2 + ... + (n-1) = n*(n-1)/2
"
Why n*(n-1)/2?
And why are the best, average and worst case the same, all O(n^2)?
> And why the best, average and worst case are the same and are all O(n^2)?

You are off in the weeds and out of the code; that is why this is not making sense.

Just look at the code for a minute, and forget entirely everything you are doing about complexity. LOOK at the loops. Do you see anything in there that reduces or increases or changes the number of loop iterations based on what is in the array? (The answer is: no.) If the work increased when the data is in some special order, the worst case would be different, but nothing like that happens. If the work were reduced by the array already being sorted, the best case would change. That also does not happen!

0,1,2,3
How does sel-sort deal with the above? It looks at 0: is it the smallest? Then it looks at 1 to make sure 1 isn't smaller, then 2, then 3. 0 was the smallest, but it still looked at all of them. Then it executes the swap; never mind that it does not change anything, the swap statements still happen regardless!

One of these N*N algorithms does no work (it looks at everything once, is all) on a sorted array (insertion sort, I think, but my memory is not trustworthy anymore). Maybe looking at that example will help you see the ideas here.

```cpp
for(K=0; K<n; ++K)
{
   for(L=K+1; L<n; ++L)
   {
      // things in this body will be executed n-(K+1) times
   }
   // ...
}
```

unroll the outer loop
```cpp
for(L=1; L<n; ++L)   // n-1 iterations
   // ...
for(L=2; L<n; ++L)   // n-2 iterations
   // ...
// ...
for(L=n-1; L<n; ++L) // n-(n-1) = 1 iteration
   // ...
```

so you have
(n-1) + (n-2) + (n-3) + ... + 1 = \sum_{K=1}^{n-1} (n-K) = \sum_{K=1}^{n-1} K = (n-1) n / 2
https://en.wikipedia.org/wiki/1_%2B_2_%2B_3_%2B_4_%2B_%E2%8B%AF

> And why the best , average and worst case are the same and are all O(n^2)
look at your code, the only part that depends on the values of the array is
```cpp
if (a[j] < a[min])
   min = j;
```
one comparison and, at most, one assignment of an index (an integer)
the best case would be for the assignment to never execute, that is, for a[j] < a[min] to always be false (an already sorted array),
and the worst case is for the assignment to always execute.
What's the complexity in each case?

suppose that you have
```cpp
for(K=0; K<n; ++K)
   foo();

for(K=0; K<n; ++K)
   for(L=0; L<n; ++L)
      bar();
```
Say that foo() and bar() are both O(1); what's the complexity of each loop?
I don't understand:

1)
```cpp
for(L=K+1; L<n; L++)
{
   // things in this body will be executed n-(K+1) times
}
// ...
```

2)
```cpp
for(L=n-1; L<n; L++)
   // ... n-(n-1) = 1 times
```

3) I never studied this:
(n-1) + (n-2) + (n-3) + ... + 1 = \sum_{K=1}^{n-1} (n-K) = \sum_{K=1}^{n-1} K = (n-1)n/2
so my problem is to understand why it is n(n-1)/2.
4) About the worst, average and best case:
Is the time complexity of selection sort O(n^2) in the worst, best and average case because the number of comparisons is the same in all of them?