Yes, the conversion is legal. However, I only said that chars are integers. I never said anything about what character any specific value represents or what ranges of values a char can hold.
Regarding the former, it's possible for a platform to map 'a' to 13 and 'b' to 83. Making assumptions in this regard is not theoretically safe. In practice, the ASCII mapping is nearly universal: 'A' maps to 65, 'a' to 97, '0' to 48, and the digits, uppercase letters, and lowercase letters are each contiguous and in ascending order.
If one does not want to make assumptions about the mapping, character literals (e.g. 'a', '#', '\\', and so on) can be used; the compiler translates them to the appropriate integer values for the platform.
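For illustration, here's a minimal C++ sketch of that approach. The standard does guarantee that the digits '0' through '9' are contiguous and ascending, so digit-to-number arithmetic is portable; no such guarantee exists for letters:

#include <iostream>

int main()
{
    char d = '7';                // character literal; the compiler picks the platform's value
    int value = d - '0';         // portable: digits are guaranteed contiguous
    std::cout << value << '\n';  // prints 7

    char c = 'a';                // no need to know whether 'a' is 97 on this platform
    std::cout << static_cast<int>(c) << '\n';
}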
As a mostly historical note, the main competing encoding is EBCDIC, a horrible abomination by IBM.
Regarding the latter, by definition a char is exactly one byte, so it has to hold exactly as many distinct values as a byte can hold. That doesn't say much by itself, because a byte can technically be of any size; some computers used to have 12-bit bytes. However, this too is mostly a theoretical concern, since I haven't heard of any modern computer with non-8-bit bytes. Meaning, a char can be reasonably safely assumed to be able to hold 256 values.
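If you'd rather check than assume, the standard <climits> header exposes these numbers directly (a small sketch, nothing platform-specific assumed):

#include <climits>
#include <iostream>

int main()
{
    std::cout << "bits per byte: " << CHAR_BIT << '\n';              // 8 on mainstream platforms
    std::cout << "distinct char values: " << UCHAR_MAX + 1 << '\n';  // 256 when CHAR_BIT is 8
}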
There's one other concern which isn't theoretical: the signedness of char. The standard allows the implementation to decide whether char means signed char or unsigned char, so a char could hold a non-negative value below 256, one in the range [-128; 127], or even [-127; 127] plus a negative zero.
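Again, this can be queried rather than guessed. A quick sketch using the standard <limits> and <climits> headers:

#include <climits>
#include <iostream>
#include <limits>

int main()
{
    std::cout << std::boolalpha
              << "char is signed: " << std::numeric_limits<char>::is_signed << '\n'
              << "range: [" << CHAR_MIN << "; " << CHAR_MAX << "]\n";
}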
In other words, char(10) != 'j' with a high degree of certainty.
EDIT: On the other hand, casting from char to int is rarely necessary. If you need to perform operations on it, char admits them directly, since it is an integer type:
char x = 10;
x *= 10;  // x is now 100
x -= 11;  // x is now 89
x /= 2;   // x is now 44 (integer division)
x %= 4;   // x is now 0