Since c and 100 are both integers, you are doing integer math, meaning there is no fractional part in the result. To force floating-point math, at least one of the operands must be a floating-point value. In this case you could either cast c to a float or use the float constant 100.0f.
You are doing integer division, and integer division truncates toward zero. When you divide c by 100, the exact quotient is 0.27, but because "c" is an integer, the fractional part is discarded and you're left with 0. Change "c" to a float or double and you should get your decimals. You can also divide by "100.0" instead of 100, and you should get your decimals.
6.5.5 Multiplicative operators
6. When integers are divided, the result of the / operator is the algebraic quotient with any fractional part discarded.88) If the quotient a/b is representable, the expression (a/b)*b + a%b shall equal a.
In other words, integer division simply removes the fractional part (a.k.a. truncation toward zero).
So to get around this in your code:
cast either c or 100 to a float/double type before the division,
change the type of c to float/double, or
change 100 to 100.0, which is a double constant instead of an integer constant.