Short Question

If two algorithms have the same order of complexity, does it matter which one you use? I'm leaning towards yes, but I can't really explain why; they would both solve the problem in the same amount of time, right?
What situation are you thinking of? I'm pretty sure your answer would vary based on the algo in question.
How so?
Well for example, take the difference between quicksort and heapsort. Both have an average runtime of O(n log n) and, in general, quicksort is faster. However, heapsort's worst-case runtime is still O(n log n), vs quicksort's O(n²). So although in general they have the same complexity, there are times when it is preferable to use one over the other. I believe introsort switches to heapsort if its quicksort recursion depth exceeds a certain threshold (on the order of log n).
So what algorithms are you thinking of?
EDIT: Disclaimer - I'm no sorting-algorithm expert and I simply find this data interesting. I personally prefer quicksort over heapsort in most cases, but I don't claim to understand either of them, or any other algorithm, in any great depth.
I wasn't thinking of any algorithm in particular, but it's a question our teacher gave us to think about. So basically it all depends on the data?
And what if they both have the same complexity and you know quicksort won't hit its worst case? Besides personal preference, would it still matter which one you used?
Well, about my comment on quicksort: you know how quicksort works, right? It's a recursive sort. It divides the container into two parts, one greater than and one less than a chosen pivot element. (The method by which the pivot is chosen can be critical, but with the usual strategies you can still end up with O(n²) behavior on a bad input.) Once it's divided the data up, it returns the concatenation of the lesser part, sorted by the same procedure, the pivot, and the greater part, likewise sorted.
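To make that concrete, here's a rough sketch in C++ (just an illustration, not a tuned implementation; the middle-element pivot is an arbitrary choice):

#include <utility>
#include <vector>

// Illustrative quicksort: partition around a pivot value, then
// recurse on the two halves.
void quicksort(std::vector<int>& v, int lo, int hi)
{
    if (lo >= hi) return;
    int pivot = v[lo + (hi - lo) / 2];   // middle element as pivot
    int i = lo, j = hi;
    while (i <= j)                       // Hoare-style partition
    {
        while (v[i] < pivot) ++i;
        while (v[j] > pivot) --j;
        if (i <= j) std::swap(v[i++], v[j--]);
    }
    quicksort(v, lo, j);                 // sort the "less than" part
    quicksort(v, i, hi);                 // sort the "greater than" part
}

You'd call it as quicksort(v, 0, (int)v.size() - 1).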
So if it recurses beyond a certain point, it is no longer more efficient than heapsort, which is generally slower but has a worst-case time of O(n log n). Heapsort works by heapifying the data, i.e. arranging it into a heap (a tree-shaped structure where each parent compares no less than its children; check Wikipedia for more on that), and then repeatedly pulling the largest element off the top.
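The standard library actually exposes heapsort's two building blocks, so you can sketch it in a couple of lines:

#include <algorithm>
#include <vector>

// Heapsort via the standard heap primitives: build a max-heap in
// O(n), then repeatedly move the largest remaining element to the
// back of the range (n pops of O(log n) each).
void heapsort(std::vector<int>& v)
{
    std::make_heap(v.begin(), v.end());
    std::sort_heap(v.begin(), v.end());
}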
However, I'm not sure how all algorithms work. Sorting algorithms are probably the most hotly debated because there are so many of them, which is why I presume that's what your question relates to. As I said, introsort is a hybrid algorithm that uses quicksort until the recursion passes a depth beyond which heapsort is simply the safer choice.
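A bare-bones version of that switching logic might look like this (my own sketch; real introsorts, such as the ones behind std::sort, typically also fall back to insertion sort for small ranges):

#include <algorithm>
#include <cmath>
#include <utility>
#include <vector>

// If quicksort recursion exceeds the depth limit, heapsort the
// remaining range so the worst case stays O(n log n).
void introsort_impl(std::vector<int>& v, int lo, int hi, int depth)
{
    if (lo >= hi) return;
    if (depth == 0)
    {
        std::make_heap(v.begin() + lo, v.begin() + hi + 1);
        std::sort_heap(v.begin() + lo, v.begin() + hi + 1);
        return;
    }
    int pivot = v[lo + (hi - lo) / 2];
    int i = lo, j = hi;
    while (i <= j)                       // same partition as before
    {
        while (v[i] < pivot) ++i;
        while (v[j] > pivot) --j;
        if (i <= j) std::swap(v[i++], v[j--]);
    }
    introsort_impl(v, lo, j, depth - 1);
    introsort_impl(v, i, hi, depth - 1);
}

void introsort(std::vector<int>& v)
{
    if (v.empty()) return;
    int depth = 2 * static_cast<int>(std::log2(v.size()));  // common limit
    introsort_impl(v, 0, static_cast<int>(v.size()) - 1, depth);
}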
It depends on the data, yeah. In sorting algorithms, the ordering of the input and, in quicksort, the choice of pivot are critical to how quickly the data gets sorted. At the same time it also depends on how the algorithm itself works.
In practice quicksort is usually the fastest of the common comparison sorts; it's not common for heapsort to beat it on the same data. The method by which the pivot is determined is still debated, but there are preferred strategies for that as well (median-of-three is a popular one).
EDIT: If you are really interested in sorting algorithms, someone on this site pointed me to this:
http://www.sorting-algorithms.com/
It's great and it has pseudocode plus animations for tons of common sort algos on various sets (reverse, nearly sorted, random, few unique).
Thanks for your answer, also for the website. I think I'm beginning to get it.
Depends. An algorithm has both time complexity and space complexity. Rarely do algorithms for the same purpose have the same time and space complexities. But even if we assume they are the same in this hypothetical case, there may be other factors affecting the decision of which one to use.
For example, among sorting algorithms there's the property of stability, where items that compare equal retain their relative order from the input.
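You can see stability directly in C++ by comparing std::stable_sort (guaranteed stable) with std::sort (not guaranteed stable); this little test is just my own illustration:

#include <algorithm>
#include <iostream>
#include <string>
#include <utility>
#include <vector>

int main()
{
    // Ann and Bob share age 30; a stable sort must keep Ann before
    // Bob because that's their order in the input.
    std::vector<std::pair<int, std::string>> people =
        { {30, "Ann"}, {25, "Cid"}, {30, "Bob"} };

    std::stable_sort(people.begin(), people.end(),
        [](const auto& a, const auto& b) { return a.first < b.first; });

    for (const auto& p : people)
        std::cout << p.first << ' ' << p.second << '\n';
    // Prints 25 Cid, 30 Ann, 30 Bob. With std::sort instead, Ann and
    // Bob could legally come out in either order.
}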
If we're only concerned with performance, one algorithm can be faster than another even if they both have the same time complexity. This is because complexity measures the order of magnitude of the rate of growth of the number of operations relative to the size of the input. Even if two algorithms have the same complexity (say, O(n)), one could require two operations per input element and the other four. They're both O(n), but one will always take twice as many operations to complete.
That's why quicksort is faster than heapsort in most cases even though both take O(n log n) time on average: quicksort's constant factors are smaller.
And if you happen to have a comparison-based sorting algorithm with complexity O(n), I've got a million dollars waiting for you. (Comparison sorts provably can't beat O(n log n) in the general case.)
One algorithm might loop through n elements once (n operations) and another might loop through them twice (2n operations); both would be called linear and written as O(n).
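As a concrete illustration (my own made-up example), both functions below are O(n), but the second makes two full passes over the data where the first makes one:

#include <algorithm>
#include <utility>
#include <vector>

// Both are O(n) and assume a non-empty vector; only the constant
// factor differs.
std::pair<int, int> minmax_one_pass(const std::vector<int>& v)
{
    auto [lo, hi] = std::minmax_element(v.begin(), v.end());  // one pass
    return { *lo, *hi };
}

std::pair<int, int> minmax_two_pass(const std::vector<int>& v)
{
    int lo = *std::min_element(v.begin(), v.end());  // pass 1
    int hi = *std::max_element(v.begin(), v.end());  // pass 2
    return { lo, hi };
}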