#include <iostream>
#include <algorithm> // for std::swap

void bubbleSort( int arr[] , int index )
{
    for( int i = 0 ; i < index - 1 ; i++ ) {
        // bubble the smallest remaining element down toward position i
        for( int j = index - 1 ; j > i ; j-- ) {
            if( arr[j] < arr[j - 1] ) {
                std::swap( arr[j] , arr[j - 1] );
            }
        }
    }
}

int main()
{
    int arr[7] = { 2 , 1 , 5 , 4 , 7 , 8 , 3 };
    bubbleSort( arr , 7 );
    for( int i = 0 ; i < 7 ; i++ ) {
        std::cout << arr[i] << " ";
    }
}
I have a difficult time analyzing the running time of this particular bubble sort algorithm. I understand exactly how it sorts the items, but I don't have the knowledge to determine its exact running time.
I'd say this is very close to O(n^2) for both the best and worst case, where n is the input size.
Best case: the items are already sorted. Yet both for loops will still iterate.
Worst case: the given list is sorted from largest to smallest.
Here is the problem: O(n^2) usually involves an algorithm where two nested for loops each iterate n times. But if you look at the code above, the second for loop does not always iterate n times. As the value of i increases, the number of times the second loop iterates decreases, because it always runs from the end (index) down to i.
This is close to O(n^2), but in my opinion it's slightly faster than O(n^2). So what Big O notation would describe this behavior? What kind of mathematics can you use to analyze this algorithm?
Thank you for your help in advance and please forgive my lack of experience and knowledge in this area.
Hmm, well, the best case for bubble sort is O(n) (linear time), the average case is O(n^2) (quadratic time), and the worst case is also O(n^2).
Now in your problem, you cannot just multiply the number of iterations of the outer loop by the number of iterations of the inner loop, because the inner loop does a different number of iterations each time through. Instead, think about how many iterations the inner loop does in total.
You should be able to see that the total number of times the inner comparison executes is (N-1) + (N-2) + ... + 2 + 1. The closed form of this sum is N(N-1)/2, which is O(N^2).
The best case for an optimized bubble sort is O(n); that requires a flag to determine if any swaps were made, and if not to exit (because the list is already sorted).
The given algorithm does no such thing; in fact, the only difference between data sets is the number of swaps.
The algorithm is still O(n^2). The thing about Big O notation is that it doesn't care about details; it describes the overall growth of the program's running time. In your example, the number of iterations is the sum 1 + 2 + ... + (n-1), which equals (n^2 - n) / 2. We ignore everything but the largest term, the n^2, since as n grows the other terms become increasingly irrelevant.
Remember: O(n^2) is just a measure of growth, not execution speed. For small data sets, it is entirely possible for an O(n) algorithm to be slower than an O(2^n) algorithm, but as the data sets grow the second algorithm quickly becomes infeasible, since any coefficients (no matter how large) become irrelevant.