I am writing a big data program that may or may not grow very large. All of the data is stored as a very large number of "Codon" objects, each of which holds a number that I know will never need double precision. Is it good practice to use double until a profiler tells me to switch to float, or should I just use float from the start? The data will never exceed available physical memory, so what do I actually gain from using float over double?
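
For context, here is a minimal sketch of the kind of comparison I'm wondering about. I'm assuming Java here (the question above doesn't depend on the language), and the class layout, the field name `value`, and the rough heap-delta measurement are all just illustrative, not my actual code:

```java
// Two hypothetical Codon variants, one float-backed and one double-backed.
// Note: per-object headers and 8-byte alignment padding on a typical JVM
// can shrink or quantize the per-object saving, so measuring is worthwhile.
class CodonFloat  { float  value; CodonFloat(float v)   { value = v; } }
class CodonDouble { double value; CodonDouble(double v) { value = v; } }

public class CodonFootprint {
    private static long usedHeap() {
        Runtime rt = Runtime.getRuntime();
        System.gc(); // best-effort hint; the numbers are approximate
        return rt.totalMemory() - rt.freeMemory();
    }

    public static void main(String[] args) {
        // Run with enough heap for both populations, e.g. -Xmx1g.
        final int n = 5_000_000;

        long before = usedHeap();
        CodonFloat[] floats = new CodonFloat[n];
        for (int i = 0; i < n; i++) floats[i] = new CodonFloat(i);
        long floatBytes = usedHeap() - before;

        before = usedHeap();
        CodonDouble[] doubles = new CodonDouble[n];
        for (int i = 0; i < n; i++) doubles[i] = new CodonDouble(i);
        long doubleBytes = usedHeap() - before;

        System.out.printf("float-backed:  ~%d bytes/object%n", floatBytes / n);
        System.out.printf("double-backed: ~%d bytes/object%n", doubleBytes / n);

        // Keep both arrays reachable so the GC can't reclaim them mid-run.
        System.out.println(floats.length + doubles.length);
    }
}
```

My understanding is that with plain primitive arrays (`float[]` vs. `double[]`) the saving would be exactly half, whereas with one object per datum the object header dominates, so the relative saving is smaller. Is that the whole story, or are there other effects (cache behavior, GC pressure) that should push me one way or the other?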