hey Bazzy, I think we should try to explain this simply so they will understand it... I mean, I can imagine people thinking "what the hell is that...!!" when they see those propositions... (if someone is interested in that long expression, try learning logic, propositional algebra, etc... e.g.:
http://en.wikipedia.org/wiki/Propositional_calculus)
For the algorithm complexity, read the book I always recommend and my favourite book on algorithms:
Data Structures and Algorithms in C++ by Drozdek
http://www.amazon.com/Data-Structures-Algorithms-Adam-Drozdek/dp/0534491820
For a simpler reference, look at this and see if you understand it:
http://en.wikipedia.org/wiki/Big_O_notation
The basic part of analysing an algorithm is figuring out its time complexity, BUT we have to consider both
time and space... the most efficient algorithm will use the least time and space to do the job.
To keep it simple, I'll give a little example:
imagine an array of N elements, and assume that all your program does is read and print each element of the array.
Since N can vary from run to run, you can say that the program reads the array N times, which means the complexity of the algorithm relative to N is N.
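Just to make that concrete, here is a little sketch in C++ (the array values and N = 5 are made up purely for the example):

#include <iostream>

int main()
{
    const int N = 5;                    // made-up size, imagine it can vary
    int array[N] = {1, 2, 3, 4, 5};     // made-up values

    // one pass over the array: N reads and N prints -> the work grows like N
    for (int i = 0; i < N; ++i)
        std::cout << array[i] << '\n';

    return 0;
}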
Again, imagine another program that reads each number, adds it to the next number of the array, and prints the result (e.g. print "array[2] + array[3] = 5").
That program would have to read each element 2 times,
so for this program the complexity of the algorithm relative to N is 2N, which we can again simplify to N because 2 is a fixed constant (you will learn the reason why we do that if you read the resources I have shown you).
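Here is a rough sketch of that second program (again, the values are made up): each element gets read about twice, once as array[i] and once as array[i + 1], so the work is roughly 2N, which we still call O(N).

#include <iostream>

int main()
{
    const int N = 5;
    int array[N] = {1, 2, 3, 2, 3};     // made-up values

    // each element is read about twice (as array[i] and as array[i + 1]),
    // so the total work is roughly 2N -> still O(N)
    for (int i = 0; i + 1 < N; ++i)
        std::cout << "array[" << i << "] + array[" << i + 1 << "] = "
                  << array[i] + array[i + 1] << '\n';

    return 0;
}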
If yet another program reads the array N*N times (N to the power of 2),
we say the complexity of the program is N*N,
which means its big-O is O(N*N).
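The typical shape of an N*N program is a loop inside a loop. This sketch (comparing every element with every other element, which is just an example I picked, not a program from above) reads the array N*N times:

#include <iostream>

int main()
{
    const int N = 5;
    int array[N] = {4, 1, 3, 5, 2};     // made-up values

    // for each of the N elements we scan the whole array again:
    // N * N reads in total -> O(N*N)
    for (int i = 0; i < N; ++i)
        for (int j = 0; j < N; ++j)
            if (array[j] < array[i])
                std::cout << array[j] << " < " << array[i] << '\n';

    return 0;
}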
Normally we write these time complexities as O(...);
that's called the big-O notation...