Looks like you've either combined several different methods of computing standard deviation, or confused individual arithmetic operations with summations.
The popular standard deviation formula is: squareroot of (Σ(x - average)²)/(count)
with x being each individual data point.
So, for each datapoint x, subtract the average from it, square that difference, and sum up all those squared differences. Divide that sum by the total number of datapoints, take the squareroot of that, and you have your std dev.
You can also divide by count-1 instead of count to get the sample standard deviation. General rule of thumb: if you're calculating the standard deviation of a sample drawn from some larger pool of data, use count-1. If you're calculating it for the complete set of data, rather than just a sampling of it, use count.
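If it helps, here's a minimal two-pass version in C (the function and variable names are mine, not from your code):

```c
#include <stdio.h>
#include <math.h>

/* Two-pass standard deviation: the first pass computes the average,
   the second sums the squared differences from it.
   Pass sample = 1 to divide by count-1, or 0 to divide by count. */
double std_dev(const double *data, int count, int sample)
{
    double sum = 0.0;
    for (int i = 0; i < count; i++)
        sum += data[i];
    double average = sum / count;

    double sq_diff_sum = 0.0;
    for (int i = 0; i < count; i++) {
        double diff = data[i] - average;
        sq_diff_sum += diff * diff;   /* square each difference, THEN add it */
    }
    return sqrt(sq_diff_sum / (sample ? count - 1 : count));
}

int main(void)
{
    double data[] = {2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0};
    printf("population std dev: %f\n", std_dev(data, 8, 0)); /* prints 2.0 */
    printf("sample std dev:     %f\n", std_dev(data, 8, 1));
    return 0;
}
```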
Alternatively, for easier programmatic calculation, the following formula can be used (it avoids having to retain all the datapoints):
squareroot of [ (count * Σx² - (Σx)²) / (count * (count-1)) ]
Again, don't mix the individual operations with the summations. For the first sum, you square the datapoint, then add that to the sum. For the second sum, you sum up all the individual datapoints, then square that sum.
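Here's what that looks like as a running calculation in C; again, a sketch with my own names, not your actual code:

```c
#include <math.h>

/* Single-pass sample standard deviation: only the running sums
   Σx and Σx² are kept, so the datapoints never need to be stored. */
double std_dev_running(const double *data, int count)
{
    double sum    = 0.0;  /* Σx  : add each datapoint              */
    double sum_sq = 0.0;  /* Σx² : square each datapoint, then add */
    for (int i = 0; i < count; i++) {
        sum    += data[i];
        sum_sq += data[i] * data[i];
    }
    /* sqrt( (count*Σx² - (Σx)²) / (count*(count-1)) ) */
    return sqrt((count * sum_sq - sum * sum)
                / ((double)count * (count - 1)));
}
```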
Then there's the fact that your current equation is sqrt((( count * pow(sum,2.0)) -(( 1.0/count) * (count *pow(sum,2.0)))) / (count)), but you never actually update sum. You declare it as sum=0 and then never touch it again until the equation, so it's still zero when you use it.
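Assuming your values come in through a loop somewhere (I can't see the rest of your code, so the names below are guesses), the fix is to accumulate inside that loop before the final equation; here aimed at the divide-by-count (population) version your equation seems to be going for:

```c
/* Hypothetical sketch of the fix -- variable names are illustrative.
   sum and sum_sq get updated once per datapoint, then plugged into
   the population (divide-by-count) formula. */
double sum = 0.0, sum_sq = 0.0;
for (int i = 0; i < count; i++) {
    sum    += data[i];            /* Σx  -- this was never happening */
    sum_sq += data[i] * data[i];  /* Σx² */
}
double std_dev = sqrt((sum_sq - (sum * sum) / count) / count);
```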