No insults for ignorance needed. magicalblender is correct that the binary representation has some significance. The easy answer is that when you have an operation between a signed int and an unsigned int, the signed value is converted to unsigned. (It's one of those 'implicit conversions' you'll need to watch out for.) So what happens when the mathematical result of your subtraction is negative? That part is a little harder to understand. Unsigned arithmetic in C is defined to wrap around modulo 2^N, where N is the number of bits in the type, so the value you get back has the same bit pattern as the two's complement representation of the negative result. Try multiplying b by negative one and converting that to unsigned, and you'll see that it produces the same value. Two's complement, in short, is the scheme nearly all modern computers use to represent negative numbers. Because the standard defines unsigned arithmetic to wrap rather than overflow, the compiler treats it as perfectly legal, though it would make more sense to me if it at least warned you. If you want to read more about two's complement, Wikipedia has an article on it here:
The best advice I can give is to be very careful about when you use unsigned int. If an expression involving unsigned values can mathematically go negative, the wrapped-around results will be very hard to predict, and the bugs they cause can be subtle.