The trivial solution is to use std::oct. See http://www.cplusplus.com/reference/ios/oct/
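For the sake of completeness, a minimal sketch of that manipulator in use (the value 256 here is just an arbitrary sample of mine):

#include <iostream>

int main()
{
    int value = 256;
    std::cout << std::oct << value << '\n';  // prints 400, the octal representation
    std::cout << std::dec << value << '\n';  // switch back to decimal: prints 256
    return 0;
}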
That, however, is not a solution for you. You don't actually want to see any octal values or whatnot; you want to practice logical thinking.
Why do you get the output in reverse? Because you extract the least significant bits first.
What do you need to get the most significant bits first? The number of bits in the input.
How does one get that? With a loop. However, in your case you do know that the input is in the range [1..256], so you know, or can calculate, how many bits it has at most.
Or let's not say "bits" but "digits". Dec 256 = Oct 400. Three digits.
To get the first digit, drop the other two with division.
To get the second, drop the last digit with division and use modulo to drop the first.
To get the third, use modulo alone.
#include <iostream>

int main()
{
    for ( int input = 1; input <= 256; ++input ) {
        std::cout << std::dec << input << " = ";
        std::cout << input / 64;         // first octal digit: drop the last two digits
        std::cout << ( input / 8 ) % 8;  // second digit: drop the last, keep only the lowest remaining
        std::cout << input % 8;          // third digit
        std::cout << " (" << std::oct << input << ")\n";  // the library's answer, for comparison
    }
    return 0;
}
That is not a generic solution; it would not work for 765'234'987. For unknown input you would first increase the divisor in a loop until it is "big enough", and then decrease the divisor in the actual loop that prints one digit per iteration.
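Roughly, a sketch of that generic divisor idea could look like the following; the variable names and the unsigned long long type are my own choices here, not anything given above:

#include <iostream>

int main()
{
    unsigned long long value = 765'234'987ULL;  // arbitrary test value
    unsigned long long divisor = 1;

    // Grow the divisor until it matches the most significant octal digit.
    while ( divisor <= value / 8 ) {
        divisor *= 8;
    }

    // Shrink the divisor, printing one digit per iteration.
    std::cout << value << " = ";
    while ( divisor > 0 ) {
        std::cout << ( value / divisor ) % 8;
        divisor /= 8;
    }
    std::cout << " (" << std::oct << value << ")\n";  // library's answer, for comparison

    return 0;
}

The first loop effectively measures how many digits the value has; the second walks back down, one digit per step.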
What about those leading zeros in small numbers? I leave that for you to think about.