In the C Reference Manual, Dennis Ritchie says, "...a character constant of length 1 has the code for the given character in the low-order byte and 0 in the high-order byte..."
What does this mean?
Not when talking about single characters. When talking about multibyte values, the low-order byte is the least significant byte. As an analogy with the decimal system: in the number "1024", the "4" is the least significant digit.
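A minimal sketch of what the quoted sentence describes (assuming ASCII, where 'a' has code 97, i.e. 0x61):

```c
#include <stdio.h>

int main(void) {
    int c = 'a';  /* ASCII code 97 = 0x61 */

    /* Per the quote: the character code sits in the low-order byte,
       and the byte(s) above it hold 0. */
    printf("low-order byte: 0x%02X\n", (unsigned)(c & 0xFF));        /* 0x61 */
    printf("next byte up:   0x%02X\n", (unsigned)((c >> 8) & 0xFF)); /* 0x00 */
    return 0;
}
```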
For example, character literals such as 'a' are of type int in C but of type char in C++, which means that sizeof 'a' generally gives different results in the two languages: in C++ it is 1; in C it is sizeof(int), which on architectures with an 8-bit char is at least 2.
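You can check this yourself; the same source compiled as C and as C++ prints different values for the first line (a sketch, assuming a C99-or-later compiler for %zu):

```c
#include <stdio.h>

int main(void) {
    /* In C, 'a' has type int, so this prints sizeof(int) (commonly 4).
       Compiled as C++, 'a' has type char, so it prints 1. */
    printf("sizeof 'a'   = %zu\n", sizeof 'a');
    printf("sizeof(char) = %zu\n", sizeof(char)); /* always 1, by definition */
    return 0;
}
```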
Characters (declared, and hereinafter called, char) are chosen from the ASCII set; they occupy the rightmost seven bits of an 8-bit byte. It is also possible to interpret chars as signed, 2’s complement 8-bit numbers.
It's talking about the ASCII character set, not about characters themselves. ASCII needs only 7 bits.
the right most seven bytes of any 8-bit byte
Don't confuse bytes and bits. They're very different things.
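A quick way to see both points in code (a sketch; CHAR_BIT comes from <limits.h> and is 8 on virtually all modern machines):

```c
#include <stdio.h>
#include <limits.h>

int main(void) {
    /* A byte is CHAR_BIT bits wide; ASCII codes run from 0 to 127,
       so every ASCII character fits in the low 7 bits of a byte. */
    printf("bits per byte: %d\n", CHAR_BIT);
    printf("'a' = %d (max ASCII code is 127 = 0x7F)\n", 'a');
    printf("8th bit of 'a': %d\n", ('a' >> 7) & 1); /* 0: 'a' fits in 7 bits */
    return 0;
}
```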
To understand high/low-order bytes and bits, consider a 4-byte int as an example. The least significant of its 4 bytes is the low-order byte, and the most significant is the high-order byte. The same terminology applies to bits: in that 4-byte int, the least significant bit (bit 0) is the low-order bit, and the most significant bit (bit 31) is the high-order bit. (Which of those bytes comes first in memory is a separate question: endianness.)
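As a sketch of how you isolate those bytes in practice (using a fixed-width uint32_t rather than int so the 4-byte size is guaranteed):

```c
#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint32_t x = 0x12345678;  /* a 4-byte value */

    /* Shift and mask to isolate a byte by its significance. */
    uint8_t low  = x & 0xFF;          /* low-order byte:  0x78 */
    uint8_t high = (x >> 24) & 0xFF;  /* high-order byte: 0x12 */

    printf("low-order byte:  0x%02X\n", (unsigned)low);
    printf("high-order byte: 0x%02X\n", (unsigned)high);
    return 0;
}
```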
Note that a nibble (or nybble) is 4 bits in length; 2 nibbles make a byte. The low-order nibble is the 4 least significant bits, and the high-order nibble is the 4 most significant bits.
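The same shift-and-mask idea applies at the nibble level (a sketch, again using a fixed-width type):

```c
#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint8_t b = 0xA5;  /* high-order nibble 0xA, low-order nibble 0x5 */

    uint8_t low_nibble  = b & 0x0F;         /* 0x5 */
    uint8_t high_nibble = (b >> 4) & 0x0F;  /* 0xA */

    printf("high nibble: 0x%X, low nibble: 0x%X\n",
           (unsigned)high_nibble, (unsigned)low_nibble);
    return 0;
}
```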