I'm having a very difficult time grasping the concept of runtime analysis/complexities and how it relates to Big-Oh.
For example, I need to find the (tight) asymptotic running time for the function below as a function of n
int foo(int a[], int n, int y)
{
    int i, j, x = 0;
    for (i = 0; i < n; i++) {
        for (j = 0; j < n && x < y; j++) {
            x += a[j];
        }
    }
    return x;  /* the function is declared int, so return the accumulated sum */
}
So, the first thing to do is to determine the atomic operations: the assignment statements each run in O(1). The outer for-loop gives a runtime of O(N), and the inner for-loop also gives O(N), which multiplies out to O(N^2). But why?
My textbook hardly provides any details about runtime complexities, and I've been searching for good sources online but couldn't find anything I understand. Could anyone please direct me to a good source for understanding Big-Oh, Big-Omega, Big-Theta, and runtime complexities? Or maybe walk me through this example so I can understand a little better?
The outer loop is linear (it iterates over the dataset size): O(N) * O(inner loop).
The inner loop is linear too: O(N) * O(loop body).
The loop body is a constant-time operation: O(1).
O(N) * O(N) * O(1) = O(N^2). This is the worst-case time complexity of the algorithm.
It also has O(1) memory complexity, since it uses a constant amount of memory regardless of n (the variables i, j, and x).
If we specified constraints on the array values, the expected complexity could change, because the value of y would come into play: x persists across outer iterations, so once x reaches y the inner loop stops doing any work.