My problem is as follows:
A researcher has been given a grayscale image as a 2D array of floating-point numbers.
In a grayscale image, each pixel value represents the brightness of that pixel.
The image is encoded as the following C++ array:
double image[8][8] = {
{0.7, 0.8, 1.0, 0.5, 0.5, 0.0, 1.0, 0.5},
{0.7, 0.5, 1.0, 0.5, 0.6, 1.0, 0.8, 0.6},
{0.5, 0.1, 1.0, 0.1, 0.1, 1.0, 0.3, 0.7},
{0.2, 0.0, 1.0, 0.0, 0.7, 1.0, 0.0, 0.4},
{0.2, 1.0, 0.0, 0.2, 1.0, 0.2, 0.2, 0.0},
{0.6, 1.0, 0.1, 0.9, 0.4, 0.6, 1.0, 0.3},
{0.9, 1.0, 1.0, 1.0, 1.0, 1.0, 0.2, 1.0},
{0.9, 0.0, 1.0, 0.1, 0.3, 0.3, 0.6, 0.7}
};
The researcher wishes to reduce this data to a single, simpler measurement, such as the average value of the pixels.
The Problem:
Write a program that prints the average pixel brightness of this array. Use two decimal places of precision.
_____________________________________________________________________________
The code I have created so far is below. It compiles and runs, but I know the average should be 0.57, and my program currently prints 0.01.
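(For reference, the 64 pixel values in the array sum to 36.5, so 36.5 / 64 ≈ 0.57 is the expected output.)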
#include <iostream>
#include <iomanip>
using namespace std;

int main() {
    int i = 0;
    int j = 0;
    double sum = 0;
    double image[8][8] = {
        { 0.7, 0.8, 1.0, 0.5, 0.5, 0.0, 1.0, 0.5 },
        { 0.7, 0.5, 1.0, 0.5, 0.6, 1.0, 0.8, 0.6 },
        { 0.5, 0.1, 1.0, 0.1, 0.1, 1.0, 0.3, 0.7 },
        { 0.2, 0.0, 1.0, 0.0, 0.7, 1.0, 0.0, 0.4 },
        { 0.2, 1.0, 0.0, 0.2, 1.0, 0.2, 0.2, 0.0 },
        { 0.6, 1.0, 0.1, 0.9, 0.4, 0.6, 1.0, 0.3 },
        { 0.9, 1.0, 1.0, 1.0, 1.0, 1.0, 0.2, 1.0 },
        { 0.9, 0.0, 1.0, 0.1, 0.3, 0.3, 0.6, 0.7 }
    };

    for (i = 0; i < image[i][j]; i++) {
        for (j = 0; j < image[i][j]; j++) {
            sum += image[i][j];
        }
    }

    cout << fixed << setprecision(2) << (sum / 64) << endl;
    system("pause");
    return 0;
}
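Update: after staring at this longer, I suspect the loop conditions are the bug. The condition i < image[i][j] compares the loop index against a pixel's brightness (never more than 1.0) rather than against the array dimension 8, so each loop runs only once and only image[0][0] is added to sum (0.7 / 64 ≈ 0.01). Below is a minimal corrected sketch; it keeps the structure of my program, names the 8×8 dimensions from the problem statement as constants, and drops system("pause"), which is just a Windows trick to keep the console window open. Please tell me if this is not the idiomatic fix.

#include <iostream>
#include <iomanip>
using namespace std;

int main() {
    const int ROWS = 8;  // image dimensions taken from the problem statement
    const int COLS = 8;
    double image[ROWS][COLS] = {
        { 0.7, 0.8, 1.0, 0.5, 0.5, 0.0, 1.0, 0.5 },
        { 0.7, 0.5, 1.0, 0.5, 0.6, 1.0, 0.8, 0.6 },
        { 0.5, 0.1, 1.0, 0.1, 0.1, 1.0, 0.3, 0.7 },
        { 0.2, 0.0, 1.0, 0.0, 0.7, 1.0, 0.0, 0.4 },
        { 0.2, 1.0, 0.0, 0.2, 1.0, 0.2, 0.2, 0.0 },
        { 0.6, 1.0, 0.1, 0.9, 0.4, 0.6, 1.0, 0.3 },
        { 0.9, 1.0, 1.0, 1.0, 1.0, 1.0, 0.2, 1.0 },
        { 0.9, 0.0, 1.0, 0.1, 0.3, 0.3, 0.6, 0.7 }
    };

    double sum = 0.0;
    for (int i = 0; i < ROWS; i++) {      // loop bound is the row count,
        for (int j = 0; j < COLS; j++) {  // not a pixel's brightness value
            sum += image[i][j];           // accumulate all 64 pixel values
        }
    }

    // Average brightness = total brightness / number of pixels,
    // printed with two decimal places.
    cout << fixed << setprecision(2) << (sum / (ROWS * COLS)) << endl;
    return 0;
}

With these bounds the sum covers all 64 pixels and the program prints the expected 0.57.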