It will be O(sqrt(n)). The size of the input (the entire matrix) grows quadratically in the side length, while the number of operations (how many cells are printed) grows only linearly in it, i.e. with the square root of the total input size.
Arj0001, I think what Helios means is that it's more common to measure the complexity as a function of the total size of the input, i.e., the total number of items in the matrix. So if there are X items in the matrix (where X = n*n), then the complexity of printing one row is O(sqrt(X)).
This assumes that it takes constant time to access an element in the array, which is probably true.
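A minimal sketch of the point (Python, not from the original thread; the matrix and print_row names are just illustrative, and the matrix is assumed to be a list of lists already in memory):

    def print_row(matrix, i):
        # matrix is an n*n list of lists, so it holds X = n*n items in total.
        # Printing one row touches exactly n = sqrt(X) of them.
        for value in matrix[i]:   # n iterations, each O(1) given constant-time indexing
            print(value, end=" ")
        print()

    # Example: a 4x4 matrix has X = 16 items; printing one row does 4 = sqrt(16) prints.
    matrix = [[i * 4 + j for j in range(4)] for i in range(4)]
    print_row(matrix, 2)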
...
If the array is N*N, the effort to print one row is N, of course.
However, to clarify, big-O is expressed in terms of the total input size, so let's reword this.
Let's say the size of the array is Q*Q.
Let's say the N in the big-O notation is Q*Q.
Then the complexity is O(sqrt(N)), since the work per row is Q = sqrt(N), which is what most people expect to see.
You are mixing up the N in the code and the N in the notation; they are not the same!
The analysis always assumes that the data is already loaded into memory. That's why binary search is O(log(N)) instead of O(N) (the time required to read the data) or O(N*log(N)) (the time required to sort the data). Still, your point is valid if printing out a row is the only thing the program will do.
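For illustration, a standard binary search sketch (not from this thread) that is O(log(N)) precisely because the sorted data is assumed to already be in memory:

    def binary_search(sorted_data, target):
        # Assumes sorted_data is already sorted and resident in memory;
        # reading or sorting it first would add O(N) or O(N*log(N)) on top.
        lo, hi = 0, len(sorted_data) - 1
        while lo <= hi:
            mid = (lo + hi) // 2      # halve the search range: O(log(N)) iterations
            if sorted_data[mid] == target:
                return mid
            elif sorted_data[mid] < target:
                lo = mid + 1
            else:
                hi = mid - 1
        return -1                     # target not present

    print(binary_search([1, 3, 5, 7, 9, 11], 7))  # prints 3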