Questions relating to algorithms, sorting, linked lists, and functions

Should an algorithm with a better worst-case runtime always be used over an algorithm with a worse worst-case runtime?

If the user runs a linear search on a linked list, would the runtime be O(n)?

Does binary search run in O(n) time when the list is unsorted?

When running insertion sort and merge sort on a small list, would the real (wall-clock) time be about the same even though merge sort has a better worst-case asymptotic runtime?

Is linear search a better choice than binary search if the list is unsorted?

Let's say we have a head pointer that points to the first node in a doubly linked list. What would the runtime be to perform a head insert, assuming n is the number of nodes in the list? I think it would be O(log n), but I have also heard it could be O(n log n), O(n), or O(1).
1) No. Most questions with an "always" (or any absolute) are false; that's test-taking skills 101. That aside, for very small data sets (like 3 items, for an extreme example), N*N is small enough that writing a 20-page complex thing instead of a 2-liner is pointless.

2) Yes.

3) Binary search does not work on unsorted data. The exact result is undetermined, but it will usually yield the wrong answer, except by accident (e.g., a busted clock being right twice a day).

4) Define "about the same"? In human terms, the answer is yes: 0.0001 and 0.01 seconds are both a blink to a human, but one is a lot better, see?

5) See above.

6) Is the list sorted? It's O(1) to insert at the head of an unsorted list. It's O(n) to insert into a list you want to keep sorted. Why do you think log n? You have to find where the element goes; what if it goes in the last position, how do you get there so fast? See the sketch below for the O(1) head-insert case.
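In case it helps, here is a minimal sketch of the O(1) head insert on a doubly linked list (the Node layout and headInsert name are just made up for illustration): a fixed number of pointer updates, no traversal.

#include <iostream>

// Minimal doubly linked list node, just enough to show the head insert.
struct Node {
    int data;
    Node* prev;
    Node* next;
};

// O(1): a fixed number of pointer updates, no matter how long the list is.
void headInsert(Node*& head, int value) {
    Node* node = new Node{value, nullptr, head};
    if (head) head->prev = node;
    head = node;
}

int main() {
    Node* head = nullptr;
    for (int v : {3, 2, 1}) headInsert(head, v);   // list becomes 1 2 3
    for (Node* p = head; p; p = p->next) std::cout << p->data << ' ';
    std::cout << '\n';
    while (head) { Node* next = head->next; delete head; head = next; }  // cleanup
}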

These are very basic questions ... especially the binary search ones. Do you understand HOW binary search WORKS? Do you have a conceptual question you need help with?
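For reference, here is a bare-bones binary search sketch (iterative, on a sorted vector; not from any particular textbook). The whole trick is that each comparison throws away half of the remaining range, which only works because the data is sorted.

#include <vector>

// Returns the index of target in the sorted vector v, or -1 if absent.
// Each iteration halves the search range, so it is O(log n) comparisons,
// but only because "everything left of mid is smaller" holds on sorted data.
// usage: int i = binarySearch(sortedVec, 42);
int binarySearch(const std::vector<int>& v, int target) {
    int lo = 0, hi = static_cast<int>(v.size()) - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;
        if (v[mid] == target) return mid;
        if (v[mid] < target)  lo = mid + 1;   // target can only be to the right
        else                  hi = mid - 1;   // target can only be to the left
    }
    return -1;
}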
Thank you for clarifying those. I have a few more questions relating to these, all conceptual still:

If I have two functions f(n) = 2^(n + 1) and g(n) = 2^n, what are the valid statements?
f(n) = Θ(g(n)), g(n) = O(f(n)), g(n) = Ω(f(n)), and f(n) = Ω(g(n)); more than one may be valid, but so far I have g(n) = O(f(n)) as a valid one.

Give the asymptotic runtime for each code snippet; the answers use Θ instead of big O just so there can only be one answer for each code snippet:
for (int i = 0; i < n; i += 3)
  for (int j = 0; j < n * n; j++)
    cout << "Yes" << endl;


I have Θ(n^2), but the other choices are Θ(n log n), Θ(n), and Θ(n^3).

Same question, except the code now is:
for (int i = 0; i < n; i += 3)
  for (int j = 0; j < 10; j++)
    cout << "^_^" << endl;


This time around I have Θ(n^2) as well, but the other choices are Θ(n), Θ(log n), and Θ(n log n).

Reorder the following functions from smaller asymptotic growth rate to larger asymptotic growth rate, using L'Hôpital's rule:
1.5^n, n^1.5 + n ln n, 2^n, n ln n - sqrt(n), (sqrt(n))^2 + n ln n, n ln n + n^2
I have this order:
n ln n - sqrt(n), n^1.5 + n ln n, (sqrt(n))^2 + n ln n, n ln n + n^2, 1.5^n, 2^n. Is that right?

If the inner loop of insertion sort ran in O(log n) time on every iteration for some input, insertion sort would actually run faster than O(n^2): it would run in at most O(n log n) time?

If the list assigned to insertion sort is already sorted, how many iterations will the inner loop run each time? I have O(n), but there is also O(1), O(log n), and O(n^2).

Merge sort can only work if the list size is a power of 2, because the merge operation cannot merge two lists of unequal sizes? I think it's a no.

When running merge sort, if the sublist size eventually becomes 5 or smaller and, instead of breaking the list down further, we run insertion sort on that size-5-or-smaller sublist, what would the runtime be? I have O(n^2), but there is also O(n log n), "this modification to merge sort would not actually sort the list," or "no conclusive answer exists."

This one is really tripping me out: "When the median is found in O(n) and chosen as the pivot for the partition phase of quicksort, the runtime can still be worse than O(n log n)?"

For a balanced binary tree, the number of leaves and the total number of nodes are asymptotically the same? I think no.

If we wish to search for an element in a perfectly balanced binary tree, the worst case runtime will not exceed: ___
I have O(n), but there is also O(n^2), O(log n), O(n log n)

Lastly, for any binary search tree, the element with the largest search value will always be the rightmost leaf node; this I have as true. Does that sound correct?
N^3. Why? The inner loop is n*n, the outer is n/3; ignore the constant.

For the second one, the inner loop is constant and the outer is N/3; ignore the constants... N.
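If you want to see that empirically rather than trust the counting argument, here is a throwaway sketch (illustrative only) that counts iterations of both snippets for a few values of n. The first count grows roughly like n^3/3 and the second roughly like 10n/3.

#include <iostream>

int main() {
    for (long long n : {10, 100, 1000}) {
        long long count1 = 0, count2 = 0;
        // first snippet: outer runs n/3 times, inner runs n*n times, about n^3/3 total
        for (long long i = 0; i < n; i += 3)
            for (long long j = 0; j < n * n; j++)
                ++count1;
        // second snippet: outer runs n/3 times, inner runs 10 times, about 10n/3 total
        for (long long i = 0; i < n; i += 3)
            for (long long j = 0; j < 10; j++)
                ++count2;
        std::cout << "n=" << n << "  first=" << count1
                  << "  second=" << count2 << '\n';
    }
}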

L'Hôpital may have been a genius. But screw that. Just put a big number into each one (the same big number for every one of them, big enough to separate the growth rates but small enough that the exponentials don't overflow) and sort the answers. This isn't worth doing calculus or anything fancy if you own a calculator... I don't know if you got it right, but just do a quick check this way and see for yourself. I had a whole class on the calculus of convergence and divergence, and there was not a single problem they gave us that could not be resolved with a calculator and 1/3 of a brain.
-- I am also not sure how to read it, so I took it as comma-separated? In that case I got (by eyeballing; again, you really should check it with a calculator):
n ln n - sqrt(n) /// n lg n minus something small, whatever
(sqrt(n))^2 + n ln n /// n lg n + n
n^1.5 + n ln n /// this is n*sqrt(n) + smaller junk; they are trying to trip you up with the notation
n ln n + n^2 /// n lg n + n^2, which beats n^1.5
1.5^n
2^n
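If you would rather do that plug-in-a-number check programmatically, here is a throwaway sketch. The choice of n = 100 is arbitrary; it is just big enough to separate the growth rates while keeping every value inside a double. One sample point is a sanity check, not a proof.

#include <algorithm>
#include <cmath>
#include <iostream>
#include <string>
#include <vector>

int main() {
    double n = 100.0;   // arbitrary sample point, small enough that 2^n fits in a double
    std::vector<std::pair<double, std::string>> v = {
        {std::pow(1.5, n),                   "1.5^n"},
        {std::pow(n, 1.5) + n * std::log(n), "n^1.5 + n ln n"},
        {std::pow(2.0, n),                   "2^n"},
        {n * std::log(n) - std::sqrt(n),     "n ln n - sqrt(n)"},
        {n + n * std::log(n),                "(sqrt(n))^2 + n ln n"},
        {n * std::log(n) + n * n,            "n ln n + n^2"}
    };
    std::sort(v.begin(), v.end());           // smallest value first
    for (const auto& p : v)
        std::cout << p.second << " = " << p.first << '\n';
}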



I think the insertion thing is true. I will be honest: I have long ignored all the N*N sorts, they serve no purpose, and I don't recall exactly what loop is what because I have not needed to look at it in decades. The only sorts I really care about are counting (O(N)), shell (varies, N^(z+1/z)-ish), merge (files, big data, stuff), and intro (modified quick, the best known today). Most of the rest of them are just classic programming classroom stuff.

Yes, insertion sort on an already-sorted list is O(N) overall; the inner loop bails out immediately every time, so it does O(1) work per pass.
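Spelling that out with a standard textbook insertion sort (nothing specific to this thread): on already-sorted input the while condition fails immediately, so the inner loop never executes its body and each outer pass costs O(1).

#include <vector>

// usage: insertionSort(v);
void insertionSort(std::vector<int>& a) {
    for (std::size_t i = 1; i < a.size(); ++i) {
        int key = a[i];
        std::size_t j = i;
        // Inner loop: shift larger elements right. On sorted input,
        // a[j - 1] <= key already holds, so this body never runs and
        // each outer iteration costs O(1), giving O(n) total.
        while (j > 0 && a[j - 1] > key) {
            a[j] = a[j - 1];
            --j;
        }
        a[j] = key;
    }
}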


Algorithm stuff, things that you may not have known:
- Merge is perfectly capable of merging lists of unequal sizes if written properly. If you have a specific example where it is written poorly, perhaps that would be an issue.

- Insertion sort is N*N, but it's only 5 (or fewer) elements. Again, if there were 128 elements, what happens? You cut in half a bunch of times... 64, 32, 16, 8, then size-4 sublists, where you stop and do insertion sort. How many size-4 lists are there? 32 of them. So 32 times, you do insertion sort for N = 4. After that you merge back out: 16 merges producing size-8 lists, then 8 merges producing size-16 lists, ... What is doing MOST of the work here? (Add up all the merges...) The majority of the work falls under merge sort's own cost, so the whole thing stays O(n log n). A rough sketch of the hybrid is below.
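Roughly what that hybrid looks like in code, as a sketch only: the cutoff of 5 comes from the question, and std::inplace_merge stands in for the usual hand-written merge step.

#include <algorithm>
#include <vector>

// Sort a[lo..hi) with insertion sort: cheap for tiny ranges.
void insertionSortRange(std::vector<int>& a, std::size_t lo, std::size_t hi) {
    for (std::size_t i = lo + 1; i < hi; ++i)
        for (std::size_t j = i; j > lo && a[j - 1] > a[j]; --j)
            std::swap(a[j - 1], a[j]);
}

// Merge sort that stops recursing at CUTOFF elements and hands the
// small pieces to insertion sort. The merges still dominate the work,
// so the overall cost stays O(n log n).
// usage: hybridSort(v, 0, v.size());
void hybridSort(std::vector<int>& a, std::size_t lo, std::size_t hi) {
    const std::size_t CUTOFF = 5;                 // size taken from the question
    if (hi - lo <= CUTOFF) {
        insertionSortRange(a, lo, hi);
        return;
    }
    std::size_t mid = lo + (hi - lo) / 2;
    hybridSort(a, lo, mid);
    hybridSort(a, mid, hi);
    std::inplace_merge(a.begin() + lo, a.begin() + mid, a.begin() + hi);
}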

For the quicksort one: if the pivot really is the true median, found in O(n) at each partition (median-of-medians style), the splits are always even and the runtime cannot degenerate past O(n log n). The N*N degenerate cases come from the cheaper pivot strategies people actually use (first element, random, median-of-three), where there exists one or more arrays that force lopsided splits; it's just very unlikely to get the bad data with a well-chosen pivot strategy.

What is a balanced tree? The answer to the leaves-versus-total-nodes question lies in knowing that.

A binary search tree works exactly like a binary search. Why? What is the algorithm used to search it? This answers the next question too.
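To make the analogy concrete, here is a bare-bones BST lookup (the TreeNode layout is assumed, not from the thread). Every step discards one whole subtree, the same way binary search discards half the array, so the cost is the tree height: O(log n) when balanced, O(n) when the tree degenerates into a chain.

struct TreeNode {
    int value;
    TreeNode* left;
    TreeNode* right;
};

// Returns true if target is in the BST rooted at node.
// Each step descends into exactly one subtree, so the number of
// comparisons is at most the height of the tree.
// usage: bool found = contains(root, 42);
bool contains(const TreeNode* node, int target) {
    while (node) {
        if (target == node->value) return true;
        node = (target < node->value) ? node->left : node->right;
    }
    return false;
}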

And now a practical question :)
Which sort do you use if you want to multi-thread sorting a large amount of data on a multi-core computer? Assume you have a merge (not merge sort, just the merge part) that can reassemble whatever you split out (also threaded). Does this change the answer from when you do it single-threaded? What is the limited resource?