PerlMonks
Re^3: The 10**21 Problem (Part 3)

by oiskuu (Friar)
on May 16, 2014 at 06:38 UTC (#1086252)


in reply to Re^2: The 10**21 Problem (Part 3)
in thread The 10**21 Problem (Part 3)

Prefetching is tricky. I wouldn't issue so many of them together: the number of outstanding requests the hardware can track is limited. How about this:

// candidates for vectorization; let's break them apart
static inline void q7_to_m7(int m7[], int m6)
{
    int q, m6x = m6 ^ 1;
    for (q = 0; q < 128; q += 2) {      // unroll by 2
        m7[q]   = (m6  ^ q) * H_PRIME;
        m7[q+1] = (m6x ^ q) * H_PRIME;
    }
}

static inline void prefetch_m(unsigned int i)
{
    _mm_prefetch(&bytevecM[i],      _MM_HINT_T0);
    _mm_prefetch(&bytevecM[i ^ 64], _MM_HINT_T0);
}

...
prefetch_m((m6 ^ 1) * H_PRIME);
int m7arr[130];
q7_to_m7(m7arr, m6);
// fixups for prefetching two iterations ahead:
// slots 128/129 (past the end) just repeat the last value;
// q7 = 10 and 13 are skipped, so their (unused) slots are redirected
// to cover m7arr[12] and m7arr[15], whose prefetch turns would
// otherwise fall on the skipped iterations
m7arr[129] = m7arr[128] = m7arr[127];
m7arr[13] = m7arr[15];
m7arr[10] = m7arr[12];
prefetch_m(m7arr[2]);
for (q7 = 1; q7 < 128; ++q7) {
    if (q7 == 10 || q7 == 13)
        continue;
    prefetch_m(m7arr[q7 + 2]);
    m7 = m7arr[q7];
    ...
}


Re^4: The 10**21 Problem (Part 3)
by eyepopslikeamosquito (Canon) on May 17, 2014 at 02:58 UTC

    I ran your code as is and it took 47 seconds, ten seconds slower. I then changed:

    _mm_prefetch(&bytevecM[i], _MM_HINT_T0);
    _mm_prefetch(&bytevecM[i^64], _MM_HINT_T0);
    back to my original:
    _mm_prefetch(&bytevecM[(unsigned int)(i) & 0xffffff80], _MM_HINT_T0);
    _mm_prefetch(&bytevecM[64+((unsigned int)(i) & 0xffffff80)], _MM_HINT_T0);
    and it ran in 38 seconds, only one second slower. Note that the & 0xffffff80 rounds the address down to a 128-byte boundary (so it is also 64-byte cache-line aligned), ensuring the pair of prefetches covers the two 64-byte cache lines the inner loop requires.
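    Concretely, the mask maps any offset into bytevecM to the base of a 128-byte window spanning exactly two cache lines. A tiny standalone check of that arithmetic (illustrative values only, not part of the solver):

    #include <assert.h>
    #include <stdio.h>

    int main(void)
    {
        unsigned int offsets[] = { 0x1234u, 0xdeadbeefu, 127u, 128u };
        for (int k = 0; k < 4; ++k) {
            unsigned int i = offsets[k];
            unsigned int base = i & 0xffffff80u;  /* round down to 128 */
            assert(base % 128 == 0);              /* 128-byte aligned  */
            assert(i - base < 128);               /* i is in window    */
            printf("%#010x -> lines at %#010x and %#010x\n",
                   i, base, base + 64);
        }
        return 0;
    }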

    I profiled with VTune and both my (37 second) and your (38 second) solution showed up as having two (seven second) hotspots -- presumably due to memory latency -- in the same places, namely here:

    ; 100 :             UNROLL(q8)
    0x1400028e0 Block 178:
    0x1400028e0   mov eax, r9d                       7.217s
    0x1400028e3   xor rax, rdi                       0.060s
    0x1400028e6   movzx r10d, byte ptr [rax+rsi*1]   0.100s
    0x1400028eb   test r10d, r10d                    2.508s
    0x1400028ee   jz 0x140002a0b <Block 192>
    and here:
    ; 99 :      for (q8 = 14; q8 < 128; ++q8) {
    0x140002a0b Block 192:
    0x140002a0b   inc r9d                            7.008s
    0x140002a0e   cmp r9d, 0x80                      0.690s
    0x140002a15   jl 0x1400028e0 <Block 178>

      Well, this is curious. The Intel reference says this about the prefetchtx instructions:

      Fetches the line of data from memory that contains the byte specified with the source operand to a location in the cache hierarchy specified by locality hint.
      There's no need to align the prefetch pointer. Be sure to align the data records themselves, of course.
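      For example, a minimal sketch of keeping the records themselves aligned (my illustration, assuming C11's aligned_alloc; the thread doesn't show the real allocation code):

      #include <stdlib.h>

      #define RECORD 128   /* each record spans two 64-byte cache lines */

      /* Allocate so every record starts on a cache-line boundary.
       * aligned_alloc requires size to be a multiple of the alignment,
       * which nrecords * 128 satisfies. */
      unsigned char *alloc_bytevec(size_t nrecords)
      {
          return aligned_alloc(64, nrecords * RECORD);
      }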

      I ran a little pointer-chasing benchmark (on Nehalem). The optimum appeared to be fetching ~16 links ahead, but that is just one empirical data point. You could try increasing the prefetch distance.
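      For concreteness, this is the shape of such a pointer-chasing benchmark (a reconstruction, not the code actually run; names and sizes are arbitrary): build one random cycle with Sattolo's algorithm, then walk it while prefetching DIST links ahead, and time the walk for various DIST.

      #include <stdio.h>
      #include <stdlib.h>
      #include <xmmintrin.h>

      #define N    (1u << 24)     /* 16M nodes, much larger than LLC */
      #define DIST 16             /* links to prefetch ahead         */

      int main(void)
      {
          unsigned *next = malloc(N * sizeof *next);
          unsigned i, j, cur, ahead;

          /* Sattolo's algorithm: a single random cycle over N nodes */
          for (i = 0; i < N; ++i) next[i] = i;
          for (i = N - 1; i > 0; --i) {
              j = rand() % i;                     /* 0 <= j < i */
              unsigned t = next[i]; next[i] = next[j]; next[j] = t;
          }

          /* start the prefetch cursor DIST hops ahead of the walker */
          cur = ahead = 0;
          for (i = 0; i < DIST; ++i) ahead = next[ahead];

          for (i = 0; i < N; ++i) {   /* time this loop externally */
              _mm_prefetch((const char *)&next[ahead], _MM_HINT_T0);
              cur   = next[cur];
              ahead = next[ahead];
          }
          printf("%u\n", cur);        /* defeat dead-code elimination */
          free(next);
          return 0;
      }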

      There's a LOAD_HIT_PRE event that flags too-late prefetches; you might try that. It also helps to look at clocks together with UOPS_RETIRED (or INST_RETIRED), to see whether the loop is doing a lot of work or a lot of stalling. Branch mispredictions may also show up there.
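      On Linux the same work-versus-stall ratio can be read without VTune via the perf_event_open syscall; a minimal sketch using the generic cycle and instruction counters (LOAD_HIT_PRE itself is model-specific and would need a raw event code):

      #include <linux/perf_event.h>
      #include <sys/ioctl.h>
      #include <sys/syscall.h>
      #include <unistd.h>
      #include <string.h>
      #include <stdio.h>
      #include <stdint.h>

      static int open_counter(uint64_t config)
      {
          struct perf_event_attr attr;
          memset(&attr, 0, sizeof attr);
          attr.type = PERF_TYPE_HARDWARE;
          attr.size = sizeof attr;
          attr.config = config;
          attr.disabled = 1;
          attr.exclude_kernel = 1;
          /* pid 0 = this process, cpu -1 = any, no group, no flags */
          return (int) syscall(SYS_perf_event_open, &attr, 0, -1, -1, 0);
      }

      int main(void)
      {
          int cyc = open_counter(PERF_COUNT_HW_CPU_CYCLES);
          int ins = open_counter(PERF_COUNT_HW_INSTRUCTIONS);

          ioctl(cyc, PERF_EVENT_IOC_ENABLE, 0);
          ioctl(ins, PERF_EVENT_IOC_ENABLE, 0);

          /* ... the loop under test goes here ... */

          ioctl(cyc, PERF_EVENT_IOC_DISABLE, 0);
          ioctl(ins, PERF_EVENT_IOC_DISABLE, 0);

          uint64_t cycles = 0, insns = 0;
          read(cyc, &cycles, sizeof cycles);
          read(ins, &insns, sizeof insns);
          printf("cycles=%llu insns=%llu insns/cycle=%.2f\n",
                 (unsigned long long)cycles, (unsigned long long)insns,
                 cycles ? (double)insns / (double)cycles : 0.0);
          return 0;
      }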

      Update.

      One article gives these figures for Haswell: 10 line-fill buffers and 16 outstanding L2 misses. Prefetch hints that can't be tracked are simply dropped. There are also hardware prefetcher units that detect "streams" of consecutive requests in the same direction. So yes, the order of memory accesses (prefetches too?) can make a difference.

      Intel has some docs on tuning. Your loop could be improved in many ways, but don't get carried away. Figure out how you can look up q8 and q9 together, eliminating two inner loops; a hypothetical sketch of what that could look like follows.
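      The kind of transformation meant, in hypothetical form (the solver's real tables and loop bodies are not shown in this thread, so all names here are illustrative): if the two inner loops only combine independent per-byte table probes, the cross product can be precomputed once and probed with a single index.

      /* Hypothetical illustration only. Suppose the inner loops test
       * ok8[q8] and ok9[q9] independently; a combined table turns the
       * two nested 128-wide scans into one scan of 16384 entries.
       * Worth it only if the table is built once and reused across
       * many outer iterations. */
      static unsigned char ok89[128 * 128];

      static void build_ok89(const unsigned char ok8[128],
                             const unsigned char ok9[128])
      {
          for (int q8 = 0; q8 < 128; ++q8)
              for (int q9 = 0; q9 < 128; ++q9)
                  ok89[(q8 << 7) | q9] = ok8[q8] & ok9[q9];
      }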

        There's no need to align the prefetch pointer.
        Whoops, yes, you are right. I made a silly mistake in my original test; with that blunder fixed, your version runs at the same speed as mine (both take 38 seconds).

        Curiously, my version runs in 37 seconds with:

        _mm_prefetch(&bytevecM[(unsigned int)m7 & 0xffffff80], _MM_HINT_T0);
        _mm_prefetch(&bytevecM[64+((unsigned int)m7 & 0xffffff80)], _MM_HINT_T0);
        versus 40 seconds with:
        _mm_prefetch(&bytevecM[(unsigned int)m7], _MM_HINT_T0);
        _mm_prefetch(&bytevecM[(unsigned int)m7 ^ 64], _MM_HINT_T0);
        I have no explanation for that, unless perhaps the prefetcher likes to prefetch in order (?).

        Thanks for the other tips. Optimizing for the prefetcher seems to be something of a dark art -- if you know of any cool links on that, please let me know.
