Historically, long int was supposed to be longer than int.
"Longer" refers to how much memory is used.
The more memory is used, the higher the maximum values that can be stored.
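To put numbers on that: with n bits, a signed integer on the usual two's-complement machines can hold values from -2^(n-1) to 2^(n-1) - 1, so 16 bits gives -32768 to 32767, for instance.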
Nowadays, to my knowledge, on Windows (both 32-bit and 64-bit) int and long int use the same amount of memory: 32 bits each.
On Linux 32-bit (aka x86), they both use 32 bits.
On Linux 64-bit (aka x86-64), int uses 32 bits while long int uses 64 bits.
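If you want to check what your own compiler does, here's a minimal sketch in C (it assumes nothing beyond the standard library; on a typical 64-bit Linux build you'd expect it to print 32 and 64):

```c
#include <limits.h>   /* CHAR_BIT: bits per byte, almost always 8 */
#include <stdio.h>

int main(void)
{
    printf("int      : %zu bits\n", sizeof(int) * CHAR_BIT);
    printf("long int : %zu bits\n", sizeof(long int) * CHAR_BIT);
    return 0;
}
```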
Different integer types can hold different ranges of numbers.
For example, a signed char can hold values from -128 to 127, whereas a 32-bit int can hold values from -2147483648 to 2147483647.
A long int is supposed to be bigger.
Here's the rub -- it might not be. C and C++ are notoriously vague about the actual sizes of their integer data types. The Standard only imposes a minimum range of values on each of them.
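You can see what ranges your particular implementation actually gives you through the macros in <limits.h>; a small sketch (the Standard only guarantees minimums, e.g. INT_MAX must be at least 32767):

```c
#include <limits.h>
#include <stdio.h>

int main(void)
{
    /* Actual ranges for this implementation; the Standard only fixes lower bounds. */
    printf("signed char : %d to %d\n",   SCHAR_MIN, SCHAR_MAX);
    printf("int         : %d to %d\n",   INT_MIN,   INT_MAX);
    printf("long int    : %ld to %ld\n", LONG_MIN,  LONG_MAX);
    return 0;
}
```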
Currently, a lot of systems make both int and long int 32-bit values. Alas.
If you fall back to old compilers like Borland C++ 3.1 or Turbo C++ for DOS, you'll find that int is 16-bit and long is 32-bit.
Having programmed in C for 10 years, I always felt the whole matter was somewhat misleading and made programs less portable than they could be. Later I switched to Java, where these things are stricter. :)
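(If you do need exact widths in portable C, the C99 fixed-width types in <stdint.h> sidestep the int/long guesswork; a minimal sketch, assuming a C99 compiler that provides these optional types:)

```c
#include <inttypes.h>  /* PRId32 / PRId64 printf macros; also pulls in <stdint.h> */
#include <stdio.h>

int main(void)
{
    int32_t a = 2147483647;             /* exactly 32 bits on any platform that has it */
    int64_t b = 9223372036854775807;    /* exactly 64 bits */
    printf("a = %" PRId32 ", b = %" PRId64 "\n", a, b);
    return 0;
}
```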