Thank you for clarifying those. I have a few more questions related to these, all still conceptual:
If I have two functions f(n) = 2^(n + 1) and g(n) = 2^n, which of these statements are valid?
f(n) = THETA(g(n)), g(n) = O(f(n)), g(n) = OMEGA(f(n)), and f(n) = OMEGA(g(n)). More than one may be valid, but so far I have g(n) = O(f(n)) as a valid one.
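For intuition on this pair, one quick numeric check (a sketch, not a proof; the function name is mine) is the ratio f(n)/g(n). Since 2^(n + 1) = 2 * 2^n, the ratio is the constant 2 at every n, which is the kind of bounded-both-ways behavior the THETA definition is about:

```cpp
#include <cmath>

// Ratio of f(n) = 2^(n+1) to g(n) = 2^n from the question above.
// Algebraically, 2^(n+1) / 2^n = 2 for every n, a constant.
double ratio(int n) {
    return std::pow(2.0, n + 1) / std::pow(2.0, n);
}
```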
Give the asymptotic runtime for each code snippet; the answers use Theta instead of big O just so there can only be 1 answer for each snippet:
for (int i = 0; i < n; i += 3)
    for (int j = 0; j < n * n; j++)
        cout << "Yes" << endl;
I have THETA(n^2), but the other options are THETA(n log n), THETA(n), and THETA(n^3).
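One way to sanity-check an answer for a snippet like this is to replace the print with a counter and count iterations directly (the function name is mine). The outer loop runs about n/3 times and the inner loop runs n * n times per outer pass, so the count grows like n^3 / 3:

```cpp
// Same loop structure as the snippet above, with the cout statement
// replaced by a counter so the total work can be measured exactly.
long long countYesPrints(long long n) {
    long long count = 0;
    for (long long i = 0; i < n; i += 3)        // about n/3 passes
        for (long long j = 0; j < n * n; j++)   // n^2 passes each
            count++;                             // stands in for the print
    return count;
}
```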
Same question, except the code is now:
for (int i = 0; i < n; i += 3)
    for (int j = 0; j < 10; j++)
        cout << "^_^" << endl;
This time around I have THETA(n^2) as well, but the other options are THETA(n), THETA(log n), and THETA(n log n).
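The same counting trick applies here (again, the function name is mine). The key difference from the previous snippet is that the inner loop's bound is the constant 10, not something that grows with n, so the count grows like 10 * n / 3:

```cpp
// Same loop structure as the second snippet, with the print replaced
// by a counter; the inner loop bound is a constant, independent of n.
long long countSmileyPrints(long long n) {
    long long count = 0;
    for (long long i = 0; i < n; i += 3)   // about n/3 passes
        for (long long j = 0; j < 10; j++) // always exactly 10 passes
            count++;                        // stands in for the print
    return count;
}
```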
Reorder the following functions from smaller asymptotic growth rate to larger asymptotic growth rate, using L'Hopital's rule:
1.5^n, n^1.5 + n ln n, 2^n, n ln n - sqrt(n), (sqrt(n))^2 + n ln n, n ln n + n^2
I have this order:
n ln n - sqrt(n), n^1.5 + n ln n, (sqrt(n))^2 + n ln n, n ln n + n^2, 1.5^n, 2^n. Is that right?
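As a rough cross-check (a heuristic only; L'Hopital on the ratio of each pair is what actually justifies an ordering), the six functions can be evaluated at a single large n and compared. Note that (sqrt(n))^2 simplifies to n, so the first two entries below are both dominated by their n ln n term; the ordering returned here is the one the dominant terms suggest, which is worth comparing against your own:

```cpp
#include <cmath>
#include <vector>

// Evaluates the six functions from the question at one point n, listed
// in order of their dominant terms: n ln n, n ln n, n^1.5, n^2, 1.5^n, 2^n.
// A single-point comparison is only a sanity check, not a proof.
std::vector<double> evaluateAll(double n) {
    double ln = std::log(n);
    return {
        n * ln - std::sqrt(n),                 // dominant term n ln n
        std::sqrt(n) * std::sqrt(n) + n * ln,  // = n + n ln n, dominant term n ln n
        std::pow(n, 1.5) + n * ln,             // dominant term n^1.5
        n * ln + n * n,                        // dominant term n^2
        std::pow(1.5, n),                      // exponential, base 1.5
        std::pow(2.0, n)                       // exponential, base 2
    };
}
```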
If every time the inner loop of insertion sort ran in O(log n) time for some input, insertion sort would actually run faster than O(n^2); it would run in at most O(n log n) time?
If the list assigned to insertion sort is already sorted, how many iterations will the inner loop run each time? I have O(n), but the other options are O(1), O(log n), and O(n^2).
Mergesort can only work if the list size is a power of 2, because the merge operation cannot merge two lists of unequal sizes? I think it's a no.
When running merge sort, if a sublist's size eventually becomes 5 or smaller, and instead of breaking the list down further we run insertion sort on that size-5-or-smaller sublist, what would the runtime be? I have O(n^2), but the other options are O(n log n), "this modification to merge sort would not actually sort the list," and "no conclusive answer exists."
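Here is a sketch of the modification the question describes (function names are mine; std::inplace_merge stands in for a hand-written merge). The thing to notice when reasoning about its cost is that each insertion-sort call operates on at most 5 elements, a constant:

```cpp
#include <vector>
#include <algorithm>

// Insertion sort on the inclusive range a[lo..hi].
void insertionSortRange(std::vector<int>& a, int lo, int hi) {
    for (int i = lo + 1; i <= hi; i++) {
        int key = a[i], j = i - 1;
        while (j >= lo && a[j] > key) { a[j + 1] = a[j]; j--; }
        a[j + 1] = key;
    }
}

// Merge sort that stops recursing once a sublist has 5 or fewer
// elements and hands that sublist to insertion sort instead.
void hybridMergeSort(std::vector<int>& a, int lo, int hi) {
    if (hi - lo + 1 <= 5) {            // small sublist: no further splitting
        insertionSortRange(a, lo, hi);
        return;
    }
    int mid = lo + (hi - lo) / 2;
    hybridMergeSort(a, lo, mid);
    hybridMergeSort(a, mid + 1, hi);
    std::inplace_merge(a.begin() + lo, a.begin() + mid + 1, a.begin() + hi + 1);
}
```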
This one is really tripping me up: "when the median is found in O(n) during the partition phase and chosen as the pivot for the partition phase of quicksort, the runtime can still be worse than O(n log n)?"
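It may help to write down the recurrence that the median-pivot setup produces: a perfect split gives T(n) = 2 T(n/2) + c*n, where the c*n term covers both the O(n) median-finding and the partitioning. This sketch (names and the constant c are mine) just evaluates that recurrence so its growth can be inspected:

```cpp
// Recurrence for quicksort when the pivot is always the true median:
// each call does linear work (c * n) and recurses on two halves.
double T(long long n, double c) {
    if (n <= 1) return 1.0;        // base case: constant work
    return 2 * T(n / 2, c) + c * n;
}
```

This is the same recurrence shape that merge sort satisfies; whether anything can still push the total above O(n log n) is exactly what the question is asking you to decide.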
For a balanced binary tree, the number of leaves and the total number of nodes are asymptotically the same? I think no.
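For the extreme case of a perfect binary tree the two counts can be written down exactly, which gives something concrete to test your intuition against: height h gives 2^h leaves and 2^(h+1) - 1 total nodes, so the total is always less than twice the leaf count (function names are mine):

```cpp
// Exact counts for a perfect binary tree of height h.
long long leavesAtHeight(int h)     { return 1LL << h; }         // 2^h
long long totalNodesAtHeight(int h) { return (1LL << (h + 1)) - 1; }  // 2^(h+1) - 1
```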
If we wish to search for an element in a perfectly balanced binary tree, the worst-case runtime will not exceed: ___.
I have O(n), but the other options are O(n^2), O(log n), and O(n log n).
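A search in a balanced tree can only visit one node per level, so the worst case is tied to the tree's height. This sketch (the function name is mine) computes the height of the smallest perfect tree that can hold n nodes, plus one comparison per level visited:

```cpp
// Worst-case comparisons for a search in a perfectly balanced BST of
// n nodes: one comparison per level, i.e., height + 1.
int worstCaseComparisons(long long n) {
    int h = 0;
    // Grow h until a perfect tree of height h (with 2^(h+1)-1 nodes) fits n nodes.
    while ((1LL << (h + 1)) - 1 < n) h++;
    return h + 1;
}
```

Comparing how this count grows against n itself is a good way to pick among the four answer choices.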
Lastly, for any binary search tree, the element with the largest search value will always be the rightmost leaf node; this I have as true. Does that sound correct?
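For reference, here is a minimal sketch of finding the largest value in a BST (the Node struct and function name are mine): follow right children until there is no right child. One detail worth checking against the wording of the statement: the node this arrives at is guaranteed to have no right child, but nothing in the BST property stops it from having a left child.

```cpp
// Minimal BST node; left subtree holds smaller values, right holds larger.
struct Node {
    int value;
    Node* left = nullptr;
    Node* right = nullptr;
};

// The maximum of a non-empty BST is reached by walking right from the root.
int maxValue(const Node* root) {
    while (root->right != nullptr) root = root->right;
    return root->value;
}
```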