Ok, I have been in CMPT classes for a few years now and I still do not understand this. I get the concept of what Big O "is" and what it is used for, but I do not understand how to determine the Big O of algorithms. For example, on a test quiz we had a question asking us to find the Big O of several expressions.
I don't mind posting this since it was a test/assignment question, but I don't get it. It was the only part I could not figure out. I have read up on it about 100 times on 1000 different pages that use code as an example, but I still don't quite get how to determine it.
Sorry to ask something that is probably so simple to most of you. I don't really need the answers to these questions, but I would not be hurt if you used them as examples for me.
It is as simple as discarding every monomial in the polynomial except the one with the highest order. In other words, you don't care about anything except the term with the fastest growth, like evaluating a limit where n tends to infinity.
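For instance (using a polynomial I'm making up here, not one from your quiz): for f(n) = 3n^3 + 2n^2 + 7, the ratio (2n^2 + 7) / (3n^3) tends to 0 as n tends to infinity, so the lower-order terms vanish in the limit and f(n) = O(n^3).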
The purpose is to give people an idea of how costly the algorithm is. But that is usually not all. What if one algorithm is O(n^2) and another is O(5n^2)? Once the exact cost formula has been simplified down to the fastest-growing monomial, does the 5 really matter? It doesn't: constant factors don't change the growth rate, so Big-O removes those as well, meaning O(5n^2) should simply be written O(n^2).
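To see why the constant disappears, the standard textbook definition helps: f(n) = O(g(n)) if there are constants c > 0 and n0 such that f(n) <= c*g(n) for all n >= n0. With f(n) = 5n^2 and g(n) = n^2, the inequality 5n^2 <= c*n^2 holds with c = 5 and n0 = 1, so 5n^2 = O(n^2).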
So, to take a few more examples (just to ensure I understand):
If I take, for example,
3^n/n + n^3: this would = O(3^n/n), because the exponential term 3^n/n eventually grows faster than n^3. (n^3 is larger for small n, but 3^n/n overtakes it around n = 8.)
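A quick sanity check I like (just an illustrative script of my own, not part of the quiz) is to print both terms and watch which one takes over:

```python
# Compare the growth of 3^n / n against n^3 for increasing n.
for n in (1, 5, 8, 10, 20, 30):
    exp_term = 3 ** n / n   # exponential term divided by n
    poly_term = n ** 3      # polynomial term
    print(f"n={n:>2}  3^n/n={exp_term:>16.1f}  n^3={poly_term:>6}")
```

By n = 30 the exponential term is already in the trillions while n^3 is only 27000.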
Adding in a factorial, such as
100n + n! + n^2500 (arbitrary huge exponent of 2500): this would be O(n!), because n! eventually grows faster than n^2500 or 100n, even though n^2500 dominates for a very long time.
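One way to convince yourself (a standard ratio argument, not something from the quiz): stepping from n to n+1 multiplies n! by n+1, but multiplies n^2500 by only (1 + 1/n)^2500, which heads toward 1 as n grows. So the ratio n! / n^2500 eventually climbs without bound, and n! beats any fixed power of n.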
The log ones might be my downfall here, such as:
n^2/1000 + log(n)·log(n): here log(n)·log(n) = (log n)^2, but this time the polynomial wins: n^2/1000 grows far faster than (log n)^2 (the 1/1000 is just a constant factor and gets dropped), so this would = O(n^2). (I was trying to find an example using more than one log to make sure I get this right.)
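Same sanity-check idea here (again just my own illustrative script): the 1/1000 keeps the polynomial term small at first, so (log n)^2 actually wins for small n, but the polynomial takes over quickly:

```python
import math

# Compare n^2 / 1000 against (log n)^2 for increasing n.
for n in (10, 100, 1_000, 10_000):
    poly_term = n ** 2 / 1000      # polynomial term with constant factor 1/1000
    log_term = math.log(n) ** 2    # (log n)^2, using the natural log
    print(f"n={n:>6}  n^2/1000={poly_term:>10.1f}  (log n)^2={log_term:>6.2f}")
```

The crossover is exactly why the constant gets dropped: it shifts where the polynomial overtakes the log, but never changes that it does.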