What page and what edition?
What Stroustrup describes is in fact what happens, but in my opinion the wording is careless, because he writes, "If Int is a signed integer (e.g. signed char), the numbers will suddenly turn negative ..." without clarifying that this behavior is undefined.
https://en.cppreference.com/w/cpp/language/operator_arithmetic
When a signed integer arithmetic operation overflows (the result does not fit in the result type), the behavior is undefined. Possible manifestations of such an operation include:
it wraps around according to the rules of the representation (typically 2's complement),
it traps — on some platforms or due to compiler options (e.g. -ftrapv in GCC and Clang),
it saturates to minimal or maximal value (on many DSPs),
it is completely optimized out by the compiler.
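Because any of those outcomes is permitted, the only safe approach is to prevent the overflow from ever being evaluated. As an illustrative sketch (my own code, not from Stroustrup; the function name `can_increment` is made up), a portable pre-check looks like this:

```cpp
#include <cassert>
#include <limits>

// Pre-check before incrementing: ch + 1 is never evaluated
// when ch is already at the maximum, so no overflow (and no UB)
// can occur.
bool can_increment(signed char ch)
{
    return ch < std::numeric_limits<signed char>::max();
}
```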
The last possibility is the most interesting one. Look at this example (assembly from Godbolt) to see why:
bool is_max_value(signed char ch)
{
return (ch + 1 < ch);
}
https://godbolt.org/z/8xTb36
If signed char were guaranteed to wrap back around to -128, then for an input of 127 the expression ch + 1 would evaluate to -128, -128 < 127 would be true, and the function would return true.
However, with optimization turned on, this function becomes:
is_max_value(signed char):
xor eax, eax
ret
(This just means "return false, always")
So the compiler optimized the comparison away entirely, on the assumption that signed integer overflow can never happen.
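If you actually want the check the original code was aiming for, write it without any arithmetic on ch. A UB-free version (my rewrite, not from the question) is:

```cpp
#include <cassert>
#include <limits>

// No arithmetic is performed on ch, so nothing can overflow;
// the comparison is well defined for every input value.
bool is_max_value(signed char ch)
{
    return ch == std::numeric_limits<signed char>::max();
}
```

With this version the compiler cannot legally optimize the check away, because there is no overflow for it to assume impossible.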
PS: In your quote, the first sentence starts with, "If Int [some user-defined type] is unsigned (e.g., unsigned char, unsigned int, or unsigned long long), the ++ is modulo arithmetic, ...". (The way you had it unqualified made it sound like the ++ operator always performs modulo-2^n arithmetic.)
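That qualification matters: for unsigned types the wraparound really is guaranteed modulo-2^n arithmetic, not UB. A small sketch (the helper name `next_unsigned` is my own) illustrating the well-defined case:

```cpp
#include <cassert>

// Unsigned overflow is well defined: the result is reduced
// modulo 2^n (here n = 8 for an 8-bit unsigned char).
// Note: c + 1 promotes to int, so the conversion back to
// unsigned char is where the modular reduction happens.
unsigned char next_unsigned(unsigned char c)
{
    return static_cast<unsigned char>(c + 1);
}
```

So `next_unsigned(255)` wraps to 0 on any conforming implementation with 8-bit chars, unlike the signed case above.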