Hello, first post.
I have a problem that I want to try to solve with a genetic algorithm.
I decided to try to learn to do this using C++, though I have no significant recent experience with it.
I am running Ubuntu 10.10, with gcc version 4.4.3, and codeblocks 10.05svn7671-1.
Hm, why are you using a #define macro for this? Anyway,
try changing it to ((float)rand()/(float)(RAND_MAX+1)).
RAND_MAX is an int macro holding the maximum value rand() can return, so you need to cast it to float as well.
EDIT:
I'd also recommend seeding the generator, otherwise you'll get the same series of numbers on every run.
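Something like this near the top of main(), for example (just a sketch, not your actual program; the point is that srand() runs once before the first call to rand()):

#include <cstdlib>   // rand(), srand()
#include <ctime>     // time()

int main()
{
    srand((unsigned)time(NULL));   // seed the generator once, at program start

    // ... rest of the program calls rand() / RANDOM_NUM as before ...
    return 0;
}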
The problem is that RAND_MAX == INT_MAX with your compiler, so when you add 1 you get an integer overflow, and what you actually have is (float)rand()/(INT_MIN).
Casting before adding avoids the overflow: rand()/((float)RAND_MAX + 1). But float's precision is so limited that adding one makes no difference in this case, so you can just write (float)rand()/RAND_MAX.
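A quick standalone test of that last version (this is just a throwaway test program, not the GA code itself):

#include <cstdio>
#include <cstdlib>
#include <ctime>

// Cast before dividing: the division happens in float, so nothing overflows.
#define RANDOM_NUM ((float)rand() / (float)RAND_MAX)

int main()
{
    srand((unsigned)time(NULL));
    for (int i = 0; i < 5; ++i)
        printf("%f\n", RANDOM_NUM);   // roughly uniform values in [0, 1]
    return 0;
}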
It's not really good practice to define functions this way, but I'll move on.
I think the problem is that RAND_MAX is defined as the biggest number rand() can return, and adding 1 to it wraps it around to a negative value (at least in my implementation). So basically every float is negative, and of course less than 0.5, so bits always adds 0s. Delete the +1 in the code above and it should produce 1s as well.
Thank you for the replies. Again, this is not my code, but something I found on the net to help me learn.
@eypros: What would be best practice for this? My understanding is that #define is a preprocessor directive, so it isn't really a function. I can't just make RANDOM_NUM an ordinary variable, or the number would always be the same.
Also, why would a compiler accept RAND_MAX + 1 rather than flag that it turns into a negative value?
Define macros are merely text-to-text substitution. It's similar to using a const. Better practice would be either to make it a function, or to write the expression out inline if you only use it once.
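For example, a rough sketch of the function version (the name random_num is just an example):

#include <cstdio>
#include <cstdlib>
#include <ctime>

// Same result as the RANDOM_NUM macro, but a real function: it has a
// return type, obeys scoping rules, and shows up in a debugger.
inline float random_num()
{
    return (float)rand() / (float)RAND_MAX;
}

int main()
{
    srand((unsigned)time(NULL));
    printf("%f\n", random_num());   // used just like RANDOM_NUM
    return 0;
}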