What I mean is that each run of the same program will see a different outcome scattered around the mean. As you know, the variance (or the standard deviation, if you prefer) measures the spread of the actual results around that mean. Comparing the mean and variance of different versions of the same program will tell you whether one version is really faster: if the spread within each version is larger than the spread between the versions' means, then the differences between the mean run times are not really significant.
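A minimal Python sketch of that idea, using two hypothetical versions of the same computation (the function names and run counts are illustrative, not from the original post): collect repeated timings per version, then compare the gap between the means against the spread within each version.

```python
import statistics
import timeit

def version_a():
    # hypothetical "version A": generator expression
    return sum(i * i for i in range(1000))

def version_b():
    # hypothetical "version B": list comprehension
    return sum([i * i for i in range(1000)])

def benchmark(fn, runs=30, number=100):
    # One timing sample per run; each sample executes fn `number` times,
    # so every element of the returned list is a wall-clock duration.
    return [timeit.timeit(fn, number=number) for _ in range(runs)]

a_times = benchmark(version_a)
b_times = benchmark(version_b)

mean_a, sd_a = statistics.mean(a_times), statistics.stdev(a_times)
mean_b, sd_b = statistics.mean(b_times), statistics.stdev(b_times)

# Crude rule of thumb: if the gap between the means is small compared
# with the spread within each version, the difference is probably noise.
gap = abs(mean_a - mean_b)
spread = max(sd_a, sd_b)

print(f"A: {mean_a:.6f}s +/- {sd_a:.6f}")
print(f"B: {mean_b:.6f}s +/- {sd_b:.6f}")
print("looks significant" if gap > 2 * spread else "probably noise")
```

The two-standard-deviations threshold here is only a quick heuristic; for a proper answer you would run a significance test (for example a two-sample t-test) on the two sets of timings.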
CountZero
"A program should be light and agile, its subroutines connected like a string of pearls. The spirit and intent of the program should be retained throughout. There should be neither too little nor too much, neither needless loops nor useless variables, neither lack of structure nor overwhelming rigidity." - The Tao of Programming, 4.1 - Geoffrey James