The type of an integer literal depends on its value and notation. By default, decimal literals have signed types, whereas octal and hexadecimal literals can have either signed or unsigned types.
Like, what about 20? Isn't that unsigned? I know it can be signed, but what if I make it unsigned?
A literal has a type, just like a variable. What that statement is saying is that the type of 20 is int, whereas the type of 0x14 could be either int or unsigned int; presumably it depends on whether the value fits in an int on a given implementation.
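If you want to check this yourself, decltype on a literal gives you its type. A minimal sketch; the last assertion assumes a platform where int is 32 bits, which is common but not guaranteed:

```cpp
#include <type_traits>

static_assert(std::is_same<decltype(20),   int>::value, "decimal 20 is int");
static_assert(std::is_same<decltype(0x14), int>::value,
              "0x14 is 20, which fits in int, so it is int as well");

// Assumes 32-bit int: 0xFFFFFFFF does not fit in int, so the hex
// literal falls through to the next type in its list, unsigned int.
static_assert(std::is_same<decltype(0xFFFFFFFF), unsigned int>::value,
              "a hex literal becomes unsigned when it does not fit in int");

int main() {}
```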
By default, unless otherwise specified with a suffix, a literal with a decimal value, e.g. your example of 20, is signed. Signed just means that the value can be a positive or negative number. Unsigned makes it only able to hold non-negative numbers (if you want to think of it this way, a negative number carries a sign "-", so unsigned means it cannot carry a sign...).
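To make the suffix and the sign behaviour concrete, here is a small sketch (the exact value printed for w depends on how wide unsigned int is on your platform):

```cpp
#include <iostream>

int main() {
    int          s = -20;  // signed: may hold negative values
    unsigned int u = 20u;  // the 'u' suffix makes the literal's type unsigned int

    // Storing a negative value in an unsigned type wraps around modulo 2^N,
    // so w ends up as the largest unsigned int value, not -1.
    unsigned int w = -1;

    std::cout << s << ' ' << u << ' ' << w << '\n';
}
```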
I think this comes from the way numbers are represented internally: the most significant bit in binary is generally reserved for the sign, but I could be wrong. If you're interested in that stuff, look up Two's Complement.
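For the curious, a quick sketch that prints the two's-complement bit pattern of -1 (assuming the usual two's-complement representation):

```cpp
#include <bitset>
#include <cstdint>
#include <iostream>

int main() {
    std::int8_t x = -1;
    // In two's complement, -1 is all bits set: 11111111 for 8 bits.
    std::cout << std::bitset<8>(static_cast<std::uint8_t>(x)) << '\n';
}
```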
EDIT: Now I think I'm mixing up signed/unsigned variables vs. literals. Might wanna ignore me... :P
The relevant section in the C++11 standard is §2.14.2/2:
The type of an integer literal is the first of the corresponding list in Table 6 in which its value can be represented.
Table 6 — Types of integer constants

Suffix   Decimal constants    Octal or hexadecimal constant
none     int                  int
         long int             unsigned int
         long long int        long int
                              unsigned long int
                              long long int
                              unsigned long long int
...
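One practical consequence of the two columns: an unsuffixed decimal constant never becomes unsigned, while an octal or hex constant of the same value can. A sketch assuming the common case of a 32-bit int; the exact types depend on your platform's integer widths:

```cpp
#include <type_traits>

// Assumes 32-bit int. 2147483648 does not fit in int, and the decimal
// list skips unsigned types, so it becomes long int or long long int.
static_assert(std::is_same<decltype(2147483648), long int>::value ||
              std::is_same<decltype(2147483648), long long int>::value,
              "an unsuffixed decimal constant never becomes unsigned");

// The hex constant with the same value takes unsigned int, the second
// entry in its list, because unsigned types are allowed for hex.
static_assert(std::is_same<decltype(0x80000000), unsigned int>::value,
              "the equivalent hex constant becomes unsigned int");

int main() {}
```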