The result of converting a negative number string into an unsigned integer was specified to produce zero until C++17, although some implementations followed the protocol of std::strtoull, which negates in the target type, giving ULLONG_MAX for "-1", and so produce the largest value of the target type instead. As of C++17, strictly following std::strtoull is the correct behavior.
If the minus sign was part of the input sequence, the numeric value calculated from the sequence of digits is negated as if by unary minus in the result type, which applies unsigned integer wraparound rules.
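For concreteness, here's a minimal sketch (not from the original discussion) of what that strtoull-style rule means in practice: the digit sequence is converted and then negated by unary minus in the unsigned result type, which wraps "-1" around to the largest representable value.

```cpp
#include <cstdlib>
#include <iostream>
#include <limits>

int main()
{
    // std::strtoull negates in the target type, so "-1" wraps to ULLONG_MAX.
    unsigned long long v = std::strtoull("-1", nullptr, 10);
    std::cout << "strtoull(\"-1\") = " << v << '\n';

    // The same rule spelled out: the digit value 1, negated by unary minus
    // in the unsigned result type, wraps around to the maximum value.
    unsigned long long digits = 1;
    unsigned long long negated = -digits;
    std::cout << "-digits          = " << negated
              << " (max = " << std::numeric_limits<unsigned long long>::max() << ")\n";
}
```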
Seems to be a bit murky. If I'm reading cppreference right, then Clang is doing what it was supposed to do pre-C++17 and not doing what it's supposed to do post-C++17! But I haven't got a copy of Clang and I could well be misinterpreting cppreference.
Well, no. It clearly says that pre-C++17 n should be zero. But Clang is not setting n to zero (or maybe it is, I didn't check); it's making the conversion fail.
I ran my test program on Clang 3.8.0 on rextester.com. It set n to 0 AND set the failbit (i.e. screwed the stream). Direct assignment to N gives the largest unsigned value, as before.
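For reference, here's a rough reconstruction of the kind of test described above; the input "-1", the use of std::istringstream, and the variable names n and N are assumptions, not the original program:

```cpp
#include <iostream>
#include <limits>
#include <sstream>

int main()
{
    std::istringstream iss("-1");
    unsigned int n = 42;              // sentinel value, so we can see what extraction wrote
    iss >> n;
    std::cout << "n = " << n
              << ", failbit = " << std::boolalpha << iss.fail() << '\n';

    // "Direct assignment": plain integer conversion wraps around,
    // giving the largest unsigned value.
    unsigned int N = -1;
    std::cout << "N = " << N
              << " (max = " << std::numeric_limits<unsigned int>::max() << ")\n";
}
```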
So I don't think Clang is working as per the standard on either side of C++17.
But feeding a negative number into an unsigned int isn't something I do by choice!