Overflows normally go undetected because inserting a check after every arithmetic operation would be too costly.
For a simple case like this the compiler should be able to spot the problem. My compiler says "warning: overflow in implicit constant conversion" when I compile your code. If you're using GCC or Clang you should at least use the -Wall -pedantic compiler flags, which turn on this and many other useful warnings.
In many situations it is impossible to detect overflows at compile time, but there are tools that can help. If you're using GCC or Clang you can compile with -fsanitize=undefined, which inserts runtime checks for overflows and other things that cause undefined behaviour. The code will run a bit slower, though, so you probably don't want to use it in the final release build.
#include <iostream>
#include <limits>

int main()
{
    int a = std::numeric_limits<int>::max();
    int b = a + 1;
    std::cout << b << '\n';
}
Output from the above program when compiled with GCC and the -fsanitize=undefined flag:
runtime error: signed integer overflow: 2147483647 + 1 cannot be represented in type 'int'
-2147483648
> In line 2, why not have overflow messages, while 2147483648 > 2147483647?
There are two mechanisms to avoid this:
a. Use auto to let the compiler choose the appropriate integer type.
b. Place the initialiser within braces to generate an error if there is a narrowing conversion.
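A minimal sketch of both approaches (the literal 2147483648 is only for illustration; with (a) the compiler deduces long or long long depending on the platform, and with (b) GCC and Clang reject, or at least warn about, the narrowing conversion):

#include <iostream>

int main()
{
    auto a = 2147483648;      // (a) the compiler picks an integer type large enough for the literal
    std::cout << a << '\n';

    // int b {2147483648};    // (b) ill-formed: narrowing conversion inside braces, the compiler complains
    int c {42};               // fine: the value fits in an int
    std::cout << c << '\n';
}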
Unsigned integers don't really overflow in the same sense that signed integers do.
Signed integer overflow has undefined behaviour in C++.
For unsigned integers the behaviour is well-defined: the value simply wraps around. -1 converted to an unsigned integer type is guaranteed to produce the largest possible value for that unsigned type, and there is a lot of code out there that relies on this behaviour.
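For example (a small illustration; the second line prints 4294967295 assuming a 32-bit unsigned int):

#include <iostream>
#include <limits>

int main()
{
    unsigned int u = std::numeric_limits<unsigned int>::max();
    std::cout << u + 1 << '\n';                           // wraps around to 0, well-defined
    std::cout << static_cast<unsigned int>(-1) << '\n';   // the largest unsigned int value
}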