char can be implicitly converted to an int. It could be because the system automatically promotes the operands to int for the subtraction and so on, so they may be saving a conversion or two (I don't know exactly how your system implements it, though).
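Something like this is the kind of promotion I mean (just a sketch, the variable names are made up):

    #include <stdio.h>

    int main(void)
    {
        char digit = '7';

        /* digit is promoted to int before the subtraction (character
           constants like '0' already have type int in C), so the
           result is an int holding the value 7 */
        int value = digit - '0';

        printf("%d\n", value);   /* prints 7 */
        return 0;
    }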
Yes, though unless the original data is in an int, the conversion from char to int will require additional instructions
to sign extend the value, so how much benefit there is isn't clear to me.
Not necessarily. An implementation is free to define char as signed or unsigned; as a matter of good practice, you should never write code that relies on it being one or the other. Also, some architectures handle the widening automatically when pushing small values.
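If you're curious what your own implementation does, a quick check with CHAR_MIN from <limits.h> will tell you (this is just for poking around, not something to depend on in real code):

    #include <limits.h>
    #include <stdio.h>

    int main(void)
    {
        /* Whether plain char is signed or unsigned is implementation-defined.
           CHAR_MIN is 0 when char is unsigned, negative when it is signed. */
        if (CHAR_MIN < 0)
            printf("plain char is signed here\n");
        else
            printf("plain char is unsigned here\n");

        return 0;
    }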
I thought it was because a char is a byte and an int is two bytes. It said something like that in the K&R book anyway. I just bought it, got it today :)
It was something to do with the sizes, I'll take a look...
Edit:
Page 16, line 15 (excluding code, blank lines and the title "A Tutorial Introduction"):
"We must declare c to be a type big enough to hold any value that getchar returns. We can't use char since c must be big enough to hold EOF in addition to any possible char."
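So that's the reason: getchar has to be able to return every possible char value plus the distinct value EOF, and only an int can hold all of those. The copy loop ends up looking something like this (written from memory, so treat it as a sketch rather than the exact K&R listing):

    #include <stdio.h>

    int main(void)
    {
        int c;   /* int, not char, so it can hold EOF as well as every char value */

        while ((c = getchar()) != EOF)
            putchar(c);

        return 0;
    }

If c were a char instead, the comparison with EOF would either never succeed (when char is unsigned) or would wrongly succeed on a legitimate 0xFF byte (when char is signed), so the loop would misbehave either way.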