I'm trying to get benchmarking results for two versions of a program I've made. A heuristic method is executed over a large number of data sets, but I've noticed that the first set always takes a lot longer than the others. I've heard of something called Dynamic Frequency Scaling (or something like it), and googling it gave me ways to disable it, but that's not really something I want to do. I'd much rather find a way to warm up the processor before the first data set is handled.
Are you performing disk I/O to obtain these data sets? If so, the system is probably caching the file (or part of it) in memory, which would explain the speed-up on subsequent runs.
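Whether the cause is the file cache or CPU frequency ramp-up, a common workaround is an untimed warm-up pass before measurement begins. Here is a minimal sketch of such a harness (the `benchmark` function and `warmup_runs` parameter are hypothetical, not from your program):

```python
import time

def benchmark(fn, data_sets, warmup_runs=1):
    """Time fn over each data set, after untimed warm-up passes."""
    # Untimed warm-up: exercise the code path first, so the OS file
    # cache is primed and the CPU has ramped up before any timing.
    for _ in range(warmup_runs):
        fn(data_sets[0])

    timings = []
    for ds in data_sets:
        start = time.perf_counter()
        fn(ds)
        timings.append(time.perf_counter() - start)
    return timings
```

With this approach the first timed run no longer pays the one-off cost, so you can compare your two program versions on equal footing without disabling frequency scaling system-wide.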