There's no need to align the prefetch pointer.
Whoops, yes, you're right. I made a silly mistake in my original test; with that blunder fixed, your version runs at the same speed as mine (both take 38 seconds).
Curiously, my version runs in 37 seconds with:
_mm_prefetch(&bytevecM[(unsigned int)m7 & 0xffffff80], _MM_HINT_T0);
_mm_prefetch(&bytevecM[64+((unsigned int)m7 & 0xffffff80)], _MM_HINT_T0);
versus 40 seconds with:
_mm_prefetch(&bytevecM[(unsigned int)m7], _MM_HINT_T0);
_mm_prefetch(&bytevecM[(unsigned int)m7 ^ 64], _MM_HINT_T0);
I have no explanation for that, unless perhaps the hardware prefetcher prefers to see accesses in ascending order (?).
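For what it's worth, the faster variant can be packaged as a small helper. This is just a sketch of the pattern from the snippets above, assuming 64-byte cache lines; `bytevecM` and `m7` are the names from my code, and the `0xffffff80` mask rounds the index down to a 128-byte boundary so the two prefetches land on consecutive cache lines in ascending order:

```c
#include <assert.h>

#ifdef __SSE__
#include <xmmintrin.h>
#endif

/* Prefetch the two 64-byte cache lines covering the 128-byte-aligned
   block that contains index m7, in ascending address order. */
static void prefetch_pair(const unsigned char *bytevecM, unsigned int m7)
{
    unsigned int base = m7 & 0xffffff80u;   /* round down to 128 bytes */
#ifdef __SSE__
    _mm_prefetch((const char *)&bytevecM[base],      _MM_HINT_T0);
    _mm_prefetch((const char *)&bytevecM[base + 64], _MM_HINT_T0);
#else
    (void)bytevecM; (void)base;             /* no-op without SSE */
#endif
}
```

Since prefetch is only a hint, the helper is safe to call even if the second line runs slightly past the data you actually touch, as long as the addresses themselves stay in bounds.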
Thanks for the other tips. Optimizing for the prefetcher seems
to be something of a dark art -- if you know of any cool links
on that, please let me know.