N(log) and stuff...

Hello there!

Every time algorithms are discussed on this forum, people state their complexity with things like N*N and N(log) and stuff...
I read the article "algorithm complexity" but it didn't go deep into the subject...
So, could someone be so kind as to explain to me how to rate an algorithm like that, and what is the fastest?

thanks!

PS: sorry if there are spelling mistakes, I wrote this on a QWERTY keyboard while I'm used to AZERTY...
The time complexity expresses the order of magnitude of an algorithm's execution time, using as reference the size of the input given to the algorithm ( n ).
Space complexity does the same for the memory used.

Here is a simple algorithm ( pseudo code )
print_vector ( v )
    for each element x of v
        print x


It prints all the elements of the vector v.
The time complexity of an algorithm is mostly determined by its loops.
If v has n elements, the above algorithm has a complexity of O(n), since its only loop iterates n times.

print_vector_twice ( v )
    for each element x of v
        print x
    for each element x of v
        print x

The above algorithm will take about twice as long as the previous one, since the same loop is repeated twice.
But if you look at the definition of the Big-O notation, you'll see that it has the same complexity as the algorithm above:
( f(x) = O(g(x)) <=> f(x) < k*g(x) ∀ x > x0 )

The following algorithm has O(n²) complexity, as the innermost statement is executed about n*n times:
do_something ( v )
    for each element x of v // iterates n times
        for each other element y of v // iterates n-1 times
            do something with y // executed n * (n-1) ≈ n² times
Forgive my ignorance, but just what kind of math is this: ( f(x) = O(g(x)) <=> f(x) < k*g(x) ∀ x > x0 ) ????

I'm just starting programming as a kind of serious hobby, and I'm learning a lot of stuff. I took general algebra and calculus in college, but I never saw anything like this.
I have no idea what k means in that context, but it reads as:

f(x) = O(g(x)) means f(x) < k*g(x) for each x above the starting x.
f(x) = O(g(x)) <=> f(x) < k*g(x) ∀ x > x0

reads: f(x) is in O(g(x)) if and only if f(x) is less than some fixed constant k times g(x), for every x greater than a fixed x0.
That is, for large values of x, f(x) will not grow faster than g(x).

It's standard mathematical notation ( using <=> instead of ⇔ )
Ok thanks guys !
hey Bazzy, I think we should try to explain this simply so they will understand it... I mean, I can imagine people thinking "what the hell is that...!!" when they see those propositions... ( if someone is interested in that long expression, try learning logic, propositional algebra, etc.; e.g. http://en.wikipedia.org/wiki/Propositional_calculus )

For algorithm complexity,
read the book I always recommend, my favourite book on algorithms:
Data Structures and Algorithms in C++ by Drozdek
http://www.amazon.com/Data-Structures-Algorithms-Adam-Drozdek/dp/0534491820

For a simple reference, look at this and see if you understand it:
http://en.wikipedia.org/wiki/Big_O_notation

A basic part of analysing an algorithm is figuring out its time complexity... BUT we have to consider both time and space... the most efficient algorithm will use the least time and space to do the job.
To keep it simple for you, I'll give a little example:
imagine an array of N elements, and assume that all your program does is read and print each element of the array...
so if N varies from time to time, you can say that the program reads the array N times, which means the complexity of the algorithm relative to N is N.

Again, imagine another program that reads each number, adds it to the next number in the array, and prints the result (print "array[2] + array[3] = 5").
That program would have to read each element 2 times...
so for this program the complexity of the algorithm relative to N is 2N, which we can simplify to N, because 2 is a fixed constant ( you will learn the reason why we do this if you read the resources I have shown you ).

If another program reads the array N*N times ( N to the power of 2 ),
we say the complexity of the program is N*N,
which means its big-O is O(n*n)...

Normally we write these time complexities as O(...);
this is called the big-O notation...
Thanks a lot, chatura666. I'll look into it.