(double), or any other type name between parentheses, is a type cast. It tells the compiler that the expression following it is to be converted to that type (here: a double).
This matters because functions/operators can behave differently depending on the types of their operands. In this case, integer division is something else entirely than floating-point division. If RAND_MAX had not been cast to a double, the result would almost always be 0*.
RAND_MAX is a macro defined in &lt;stdlib.h&gt;. It is the upper limit of the rand() function: rand() generates a number in the range [0, RAND_MAX], inclusive on both ends.
*: 0, because rand() generates a number in [0, RAND_MAX]. Unless the result is exactly RAND_MAX, dividing it by RAND_MAX gives a value &lt; 1, which under integer division truncates to 0.
The problem deals with integer arithmetic. Let's assume that you have two integer numbers:
int a = 3;
int b = 4;
What will be the result of the operation a / b? As both operands are integers, the result will be 0. So how do we get a floating-point result, which in this example would be equal to 0.75? We need floating-point arithmetic to be used. To achieve that, we tell the compiler to treat one of the operands as a floating-point number; the compiler will then perform the division in floating point. The expression ( double ) RAND_MAX tells the compiler that it shall treat RAND_MAX as a floating-point number.
RAND_MAX is an integer constant that holds the maximum value that can be returned by the function rand().