yeah, i also feel like putting this in the lounge.. my two choices are quick sort and merge sort.. anyway do you have any examples to show? or prove rather
It depends on the number of elements/nodes in the array/list, doesn't it?
O(n) - As the data doubles in size, the algorithm takes twice as long to run.
O(n²) - As the data doubles in size, the algorithm takes four times as long to run.
O(n³) - As the data doubles in size, the algorithm takes eight times as long to run.
O(log n) - As the data size grows, the running time grows too, but by smaller and smaller amounts the larger the data gets.
O(eⁿ) - As the data size increases, the running time grows exponentially, becoming very long very quickly.
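One quick way to see the difference between these growth rates is to count operations directly rather than timing anything (a throwaway Python sketch, not part of the original post):

```python
import math

def count_pairs(data):
    """O(n²) example: a nested loop touches every (i, j) pair,
    so doubling n quadruples the number of operations."""
    data = list(data)
    ops = 0
    for i in range(len(data)):
        for j in range(len(data)):
            ops += 1
    return ops

def binary_search_steps(n):
    """O(log n) example: each step of a binary search halves the
    remaining range, so n items need about log2(n) steps."""
    return math.ceil(math.log2(n)) if n > 1 else 1

print(count_pairs(range(100)))   # 10000
print(count_pairs(range(200)))   # 40000 -> double the data, 4x the work
print(binary_search_steps(1024)) # 10    -> huge data, tiny step count
```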
Quicksort is generally the best algorithm. Heapsort has the advantage of a worst-case O(nlogn) runtime, but it is, in general, slower than quicksort (whose worst case is O(n^2)).
In general, Quicksort is the best algorithm. That's why it's called Quicksort.
With Mergesort a close second. Mergesort is nice because it always runs in O(nlogn) time, but its individual operations tend to be slower than Quicksort's, which is why Quicksort is usually preferred. When you're dealing with data on disk, however, Mergesort is definitely the way to go.
Is it? I didn't know. What about insertion sort? I don't know much about sorting algorithms; it's something I've only recently become interested in. I'm gonna go practice...
uhm.. are you saying that merge sort is slower because it is recursive?
When you're dealing with data on disk however, Mergesort is definitely the way to go.
why?
Mergesort is slower because the operations involved (splitting the data and merging it back together) are expensive, whereas quicksort is generally done in-place using simple pointer arithmetic and swap operations. Both algorithms are best-case O(nlogn), but because Mergesort's main operations are slower, it is a slower algorithm in practice. When measuring algorithms in Orders/Thetas/Omegas, it's always in terms of the number of main operations involved, not the time it takes to complete those operations. Quicksort's operations are much faster, so it is generally a faster algorithm even though both are O(nlogn).
Mergesort is preferred for on-disk data because you don't need to hold the entire dataset in memory to perform the sort. You can have it split up across a disk or even multiple disks. I remember doing a lab at university where we had to sort data across a dozen magnetic tapes. Mergesort was the only way to do it efficiently.
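A rough sketch of the external-sort idea, assuming the data is a stream of integers and each sorted "run" fits in memory (the run/temp-file layout here is invented for illustration; real external sorts look similar but are more careful about buffering):

```python
import heapq
import os
import tempfile

def sort_run_to_file(run):
    """Sort one in-memory chunk and spill it to a temp file, one int per line."""
    f = tempfile.NamedTemporaryFile("w", delete=False, suffix=".run")
    for x in sorted(run):
        f.write(f"{x}\n")
    f.close()
    return f.name

def read_run(path):
    """Stream a sorted run back from disk without loading it all at once."""
    with open(path) as f:
        for line in f:
            yield int(line)

def external_sort(data, run_size=1000):
    """Split data into sorted runs on disk, then k-way merge the runs.
    Only run_size elements (plus one head per run) are in memory at a time."""
    data = list(data)
    paths = [sort_run_to_file(data[i:i + run_size])
             for i in range(0, len(data), run_size)]
    try:
        yield from heapq.merge(*(read_run(p) for p in paths))
    finally:
        for p in paths:
            os.remove(p)
```

`heapq.merge` does the same job the tape drives did: it only ever looks at the current front element of each run, so the runs themselves can live anywhere.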
The most popular version of Merge Sort is out-of-place. Of course, there are a million and one ways to implement any algorithm, so in-place methods do exist. But the most popular implementation is out of place.
The out of place implementation is also the only way to sort data across a dozen magnetic tapes ;).