I need help with these, and don't quite understand. Can someone please shed some light?
1)
Assume an algorithm is O(N^2) and it takes 40 seconds to process 100 elements. How long would it take to process just 50 elements?
2)
Assume an algorithm is O(N^2) and it takes 10 seconds to process a given dataset. Can we assume that another algorithm that is also O(N^2) will also take 10 seconds to process the same dataset?
For 1, if it's the same algorithm running on the same machine, then you could probably assume that the time just needs to be cut in half.
2) I'm not going to directly answer this but I will say that
O(N^2) is just notation saying that certain kinds of functions belong to the set O(N^2): every function in that set grows no faster than some constant multiple of N^2, because big O describes an upper bound on growth.
So an algorithm whose running time is 0.015n^2 + 2n + 4 belongs to the set O(n^2), and at the same time an algorithm whose running time is 1000000n^2 + 90n + 15 also belongs to the set O(n^2). Both are O(n^2), yet their actual running times are very different.
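To make that concrete, here's a quick sketch in Python using the two made-up cost functions from above. It just evaluates them at a few sizes to show that two functions in O(n^2) can still differ enormously in actual cost:

# Two hypothetical cost functions, both in O(n^2), taken from the examples above.
def cost_a(n):
    return 0.015 * n**2 + 2 * n + 4

def cost_b(n):
    return 1000000 * n**2 + 90 * n + 15

for n in (10, 100, 1000):
    # Same quadratic growth, very different constants, so the
    # costs differ by orders of magnitude at every size.
    print(n, cost_a(n), cost_b(n), cost_b(n) / cost_a(n))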
Look at this: how can I represent the number 1573 in base 10?
Well it's
1 * 10^3 + 5 * 10^2 + 7 * 10^1 + 3 * 10^0
1 * 1000 + 5 * 100 + 7 * 10 + 3 * 1
That works for any base
136 in base 7 is
1 * 7^2 + 3 * 7^1 + 6 * 7^0
If you do that math, it will tell you the value of 136 (base 7) in decimal, or base 10 (it comes out to 76).
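If you want to automate that, here's a small sketch in Python (the function name is just made up for the example) that turns a digit string in any base into its decimal value using those positional weights:

def to_decimal(digits, base):
    # Horner-style evaluation: shift the running value one place left
    # (multiply by the base) and add the next digit.
    value = 0
    for d in digits:
        value = value * base + int(d)
    return value

print(to_decimal("136", 7))    # 76
print(to_decimal("1573", 10))  # 1573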
If I want to find the base 7 representation of 1573, then I can use the division algorithm:
1573 = 7 * 224 + 5
224 = 7 * 32 + 0
32 = 7 * 4 + 4
4 = 7 * 0 + 4
Now you start at the bottom and go up using the remainders, so that becomes
4405 (base 7)
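The same repeated-division idea is easy to code up. Here's a rough Python sketch (again, the name is just for illustration):

def from_decimal(n, base):
    # Repeatedly divide by the base; each remainder is the next digit,
    # least significant first, so reverse at the end ("bottom up").
    if n == 0:
        return "0"
    digits = []
    while n > 0:
        digits.append(str(n % base))
        n //= base
    return "".join(reversed(digits))

print(from_decimal(1573, 7))  # 4405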
For 1, I don't think it would take half the time; it would take a quarter of the time, since it's O(N^2), not O(N).
n = 100, s = 40 seconds
n * n = 10,000 units of work
10,000 / 40 = 250 units of work per second
n = 50, s = ?
50 * 50 = 2,500 units of work
s = 2,500 / 250 units per second = 10
so it would take 10 seconds.
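Another way to see the same result: if the running time really is dominated by the N^2 term, the time just scales with the square of the size ratio. A one-line Python sketch under that assumption:

def scaled_time(old_time, old_n, new_n):
    # For an O(N^2) algorithm, time scales with (new_n / old_n)^2.
    return old_time * (new_n / old_n) ** 2

print(scaled_time(40, 100, 50))  # 10.0 seconds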
as for converting binary to decimal, it's really simple. Binary is read starting from the right and moving left: if a bit is 0 you ignore it, and if it's 1 then the 'value' of that 1 is 2^(x - 1), where x is its position counting from 1 at the right end of the number. All you have to do is add up the values of all the 1's.
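In code, that's just summing a power of two for each 1 bit. A quick Python sketch:

def binary_to_decimal(bits):
    total = 0
    for i, bit in enumerate(reversed(bits)):
        # A 1 bit at distance i from the right end is worth 2^i.
        if bit == '1':
            total += 2 ** i
    return total

print(binary_to_decimal("1011"))  # 8 + 2 + 1 = 11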