I've never really put any thought into analyzing my own algorithms for their scalability, but I'd think being able to do that is a good skill to have. As I won't be formally taking any computer science classes for a year or so, I was hoping there were some good online resources for learning these things on my own. I know of Big-O notation, but I think I read that there are others used to be more precise in different situations, something like Big-Theta? I've done a few searches on Google, but most things I've found have been forum posts where people suggest approximation. Anyways, can anyone help me out here? Maybe a primer on how to calculate this stuff?
Big-O, Big-Theta and Big-Omega are all used for measuring the same thing - asymptotic complexity. The exact meanings are:
f(n) in O(g(n)) - function f(n) is bounded from above asymptotically by function g
f(n) in Ω(g(n)) - function f(n) is bounded from below asymptotically by function g
f(n) in Θ(g(n)) - function f(n) is both in O(g(n)) and Ω(g(n))
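To make those three definitions concrete, here's a small sketch (my own example, not anything standard) that numerically checks that f(n) = 3n² + 5n is Θ(n²): it's sandwiched between 3n² from below and 4n² from above once n ≥ 5.

```python
# Sketch: f(n) = 3n^2 + 5n is Theta(n^2), since for n >= 5
# we have 3*n^2 <= f(n) <= 4*n^2 (lower bound gives Omega,
# upper bound gives O, and both together give Theta).

def f(n):
    return 3 * n**2 + 5 * n

def g(n):
    return n**2

n0 = 5  # the bounds only need to hold from some point onward
for n in range(n0, 1000):
    assert 3 * g(n) <= f(n)      # f(n) in Omega(n^2)
    assert f(n) <= 4 * g(n)      # f(n) in O(n^2)

print("f(n) is Theta(n^2) over the tested range")
```

Note the constant factors (3 and 4 here) and the cutoff n₀ = 5 - that's exactly what "asymptotically" means: the bound only has to hold eventually, up to a constant multiple.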
Umm, shouldn't you add a constant in there? Like, f(n) is O(g(n)) if there exists a constant C (not depending on the input data) such that f(n) ≤ C·g(n) for all sufficiently large n?
Oh! Alright, this clears a lot of things up. I'd never heard of asymptotic analysis before this so I've got some work ahead of me. Thanks guys! ^^;
f(n) is O(g(n)) if there exist real constants a > 0 and b, and a value x0, such that f(x) ≤ a·g(x) + b for all x > x0. (The additive b is optional - as long as g is bounded away from zero, it can be absorbed into a larger a, which gives the usual f(x) ≤ C·g(x) form.)
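That definition is constructive enough to brute-force: given f and g, you can search for witness constants. The helper below is hypothetical (my own toy, not a library function), and it only checks n up to a finite limit, so it suggests rather than proves a bound - a sketch of the definition, nothing more.

```python
# Hypothetical helper: search for witnesses C and n0 certifying
# f(n) <= C * g(n) for all n > n0, verified only up to a finite
# limit. Finding a pair suggests f is O(g); it is not a proof.

def find_witnesses(f, g, max_c=10, max_n0=100, check_up_to=10_000):
    for c in range(1, max_c + 1):
        for n0 in range(max_n0 + 1):
            if all(f(n) <= c * g(n) for n in range(n0 + 1, check_up_to)):
                return c, n0
    return None  # no witnesses found within the search bounds

# Example: f(n) = 2n + 10 is O(n); e.g. C = 3 works once n >= 10,
# since 2n + 10 <= 3n exactly when n >= 10.
print(find_witnesses(lambda n: 2 * n + 10, lambda n: n))
```

The key point it illustrates: the constants depend on f and g but never on the input n, and the inequality only needs to hold past the threshold n0.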
I don't fully understand Big-O notation yet, but I practiced reading math-ish a while ago.