Algorithms - Big O

Lately I've been trying to learn about algorithms, particularly sorting and searching algorithms. Generally I have found it quite difficult to grasp - not the concept of an algorithm, which by my understanding is simply a clearly defined set of instructions needed to accomplish a task - but rather how these algorithms are devised and implemented in code, and how the behind-the-scenes processes work. This is especially true when multiple recursive calls and stacks, with constant pushing and popping, are involved, such as in merge sort.

One of the key terms I've come across is Big O. By my understanding, Big O is used to describe the worst-case run-time performance of an algorithm. As I'm not very confident in my understanding, I'd like to describe what I know and hopefully get some feedback to see if/where my understanding falls short.

Big O(n) means that the amount of work grows in proportion to n, n being the total number of items in the data structure - typically each item is processed once. This is usually the case with a single iterative loop through a data structure, such as an array.
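
For example, a linear search is the kind of thing I have in mind - if my understanding is right, the worst case (the target isn't there) examines every element once, so it is O(n):

#include <cstddef>
#include <vector>

// Linear search: in the worst case (target absent) every one of the
// n elements is examined once, so the running time grows with n -> O(n).
int linearSearch(const std::vector<int>& v, int target)
{
    for (std::size_t i = 0; i < v.size(); ++i)
        if (v[i] == target)
            return static_cast<int>(i); // found at index i
    return -1; // not found after n comparisons
}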

Big O(n^2) means that the data must be processed on the order of n^2 times: an outer loop runs n times, and for each of those iterations an inner loop runs n times, for n * n = n^2 operations in total. Since the nested loops together perform a maximum of n^2 iterations, the worst-case performance must be Big O(n^2).
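
A duplicate check with two nested loops is my mental picture of this (a sketch of my own, not taken from anywhere in particular):

#include <cstddef>
#include <vector>

// Returns true if any value appears twice. The outer loop runs n times
// and the inner loop up to n times for each of those, so the worst case
// (no duplicates) performs on the order of n * n comparisons -> O(n^2).
bool hasDuplicate(const std::vector<int>& v)
{
    for (std::size_t i = 0; i < v.size(); ++i)
        for (std::size_t j = i + 1; j < v.size(); ++j)
            if (v[i] == v[j])
                return true;
    return false;
}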

Big O(log n) is something I'm not very clear on; I only know it usually applies to divide and conquer algorithms.
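
The closest I've come to an example is binary search. If my reading is right, each comparison discards half of the remaining range, so only about log2(n) steps are ever needed - something like this sketch:

#include <cstddef>
#include <vector>

// Binary search over sorted data: every comparison halves the remaining
// range [lo, hi), so the worst case takes about log2(n) steps -> O(log n).
int binarySearch(const std::vector<int>& v, int target)
{
    std::size_t lo = 0, hi = v.size();
    while (lo < hi)
    {
        std::size_t mid = lo + (hi - lo) / 2;
        if (v[mid] == target)
            return static_cast<int>(mid); // found
        else if (v[mid] < target)
            lo = mid + 1; // target can only be in the upper half
        else
            hi = mid;     // target can only be in the lower half
    }
    return -1; // not found
}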

Big O(n log n) is also something I don't understand, though I've seen it attached to merge sort.

Those are a few examples of Big O that I have seen. Would anyone be willing to let me know if my descriptions are accurate, as well as give a description of what Big O(log n) and Big O(n log n) mean?
Thanks LB.

It's probably me being dense, but it felt like the answers on that page were overly complicated, unnecessarily verbose and full of obfuscating technical jargon. That generally seems to be the case whenever I go onto Stack Overflow to find answers. Of course, the reason I think that is most probably my own limited comprehension, not necessarily other people's inability to explain matters clearly and simply.

The reason I enjoy asking questions on this site is that the people here are friendly and reasonable, and they genuinely do their best to help you.

This is in no way a criticism of your efforts to help by providing the link - from your post count alone I can only imagine the number of difficulties you've helped people through - but I've gone onto Stack Overflow many times and have generally been left feeling more confused about a given matter than before I looked. That was mostly true of the thread you provided as well.

Now, this is not me telling you not to give me links to Stack Overflow or not to help; I would not presume to refuse anyone's help, especially from someone as knowledgeable as you. I'm simply letting you know my difficulties with Stack Overflow in particular. In general, I'm better able to process information when working in contact with people; it helps to keep me focused, as I know that people may be waiting to hear of my progress.

On that note, my apologies for not getting back to you sooner, I have a few deadlines coming up and, you know, stress!

Anyway, to sum up, I appreciate your and everyone else's efforts to help. It makes things much easier knowing that you've got the support of a great community when you need it.

------------------------------------------------------------------------------------------------------

Oh right, I also have a question. This wasn't addressed in the link you provided and I haven't been able to find a decent explanation elsewhere.

Say I have two algorithms, one with Big O(n log n) and another with Big O(n^2). Would the overall time complexity be Big O(n^2), the worse of the two cases, or would it be a combination of the two? If it is a combination of the two, how would you calculate that?
If you have two algorithms, you have two algorithms. But, in general, the Big-O time complexity is the one that makes more of a difference - the fastest-growing term. You could say the algorithm is O(n^2), and then decide whether to mention that below a certain value of n the behaviour is closer to O(n log n) - assuming that's actually the case.
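
To put rough numbers on it: for n = 1,000, n log2 n is only about 10,000 operations, while n^2 is 1,000,000 - the n^2 term quickly dominates as n grows.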

I would try to explain better but quite honestly, this is one of those things I never got around to fully learning. I only know the basics. Algorithms aren't really my thing; I focus more on best practices and high-level class design.
No worries LB, thanks for the feedback.
Just to add to what LB said: suppose you have a function that makes use of both algorithms - that is, it calls one function with complexity n log n and then calls another with complexity n^2. Then you would say that the function as a whole has complexity O(n^2); that is, you take the fastest-growing term and call that the complexity of your function.
Just to see if I understand you correctly, does that mean that if I have several functions with varying time complexities, the overall algorithm will have the time complexity of the function with the worst time complexity?

E.g.:
Func1 = O(n log n)
Func2 = O(n^2)
Overall = O(n^2)
Yes. So if you have a function Main that calls Func1 and Func2 and then returns, the big-O complexity of function Main is O(n^2).
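
A bare sketch of that shape, using stand-ins for Func1 and Func2 (the bodies below are just illustrations of the two costs, not fixed implementations):

#include <algorithm>
#include <cstddef>
#include <vector>

// Stand-in for Func1: std::sort is typically O(n log n).
void Func1(std::vector<int>& v) { std::sort(v.begin(), v.end()); }

// Stand-in for Func2: comparing every pair of elements is O(n^2).
void Func2(const std::vector<int>& v)
{
    long matches = 0;
    for (std::size_t i = 0; i < v.size(); ++i)
        for (std::size_t j = i + 1; j < v.size(); ++j)
            if (v[i] == v[j]) ++matches;
    (void)matches; // result unused; the point is the cost of the loops
}

// Main does O(n log n) work followed by O(n^2) work. For large n the
// n^2 term dominates the sum, so Main as a whole is O(n^2).
void Main(std::vector<int>& v)
{
    Func1(v);
    Func2(v);
}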
Thank you, I understand now.