http://www.perlmonks.org?node_id=1053025


in reply to Data structures in Perl. A C programmer's perspective.

This node falls below the community's threshold of quality.

Replies are listed 'Best First'.
Re^2: Data structures in Perl. A C programmer's perspective.
by BrowserUk (Patriarch) on Sep 09, 2013 at 15:55 UTC
    When people talk about linked lists “storing data all over the place,” what they are really concerned about is locality of reference, i.e. minimizing the probability of a virtual-memory page fault ... which involves stopping the process in its tracks, doing a physical I/O to retrieve the page from a spinning disk ...

    Downvoted: Locality of reference has exactly zero to do with page faults; zero to do with physical IO; and zero to do with spinning disk.

    Your whole post is total misinformation.


    With the rise and rise of 'Social' network sites: 'Computers are making people easier to use everyday'
    Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
    "Science is about questioning the status quo. Questioning authority".
    In the absence of evidence, opinion is indistinguishable from prejudice.
Re^2: Data structures in Perl. A C programmer's perspective.
by eyepopslikeamosquito (Archbishop) on Sep 09, 2013 at 18:59 UTC

    When people talk about linked lists “storing data all over the place,” what they are really concerned about is locality of reference, i.e. minimizing the probability of a virtual-memory page fault
    No. The linked list performance problem discussed in this thread was not caused by virtual-memory page faults, but by CPU cache misses.
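
    To make the distinction concrete, here is a minimal C sketch (not from this thread; N and the names are illustrative). It sums the same values twice: once through a contiguous array, and once by chasing ->next pointers through deliberately shuffled nodes. Both walks touch memory that is already resident, so no page faults are involved; the pointer chase loses purely because almost every node dereference misses the CPU caches:

        #include <stdio.h>
        #include <stdlib.h>
        #include <time.h>

        #define N 10000000L   /* ~320 MB of heap in total; shrink on small machines */

        struct node { long value; struct node *next; };

        int main(void)
        {
            long *arr         = malloc(N * sizeof *arr);
            struct node *pool = malloc(N * sizeof *pool);
            long *perm        = malloc(N * sizeof *perm);
            if (!arr || !pool || !perm) return 1;

            /* Crude Fisher-Yates shuffle of the node order, so following
               ->next jumps around memory the way a long-lived list's
               nodes do after the heap has churned for a while. */
            srand(42);
            for (long i = 0; i < N; i++) { arr[i] = i; perm[i] = i; }
            for (long i = N - 1; i > 0; i--) {
                long j = rand() % (i + 1);
                long t = perm[i]; perm[i] = perm[j]; perm[j] = t;
            }
            for (long i = 0; i < N; i++) {
                pool[perm[i]].value = i;
                pool[perm[i]].next  = (i + 1 < N) ? &pool[perm[i + 1]] : NULL;
            }

            long sum = 0;
            clock_t t0 = clock();
            for (long i = 0; i < N; i++)          /* contiguous: prefetch-friendly */
                sum += arr[i];
            clock_t t1 = clock();
            for (struct node *p = &pool[perm[0]]; p != NULL; p = p->next)
                sum += p->value;                  /* pointer chase: a cache miss per node */
            clock_t t2 = clock();

            printf("array %.3fs  list %.3fs  (sum=%ld)\n",
                   (double)(t1 - t0) / CLOCKS_PER_SEC,
                   (double)(t2 - t1) / CLOCKS_PER_SEC, sum);
            return 0;
        }

    Compiled with gcc -O2, the list pass comes out several times slower than the array pass, despite doing the same arithmetic on the same data.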

Re^2: Data structures in Perl. A C programmer's perspective.
by code-ninja (Scribe) on Sep 09, 2013 at 17:33 UTC
      Reading the article on Locality of Reference on wiki... I guess sundialsvc4 has a point.

      No, he doesn't.

      In the context of the discussion in this thread -- the effects of locality of reference on the performance of arrays or vectors versus linked lists -- the salient part of the wiki article is:

      Typical memory hierarchy (access times and cache sizes are approximations of typical values used as of 2013 for the purpose of discussion; actual values and actual numbers of levels in the hierarchy vary):

    • CPU registers (8-256 registers) – immediate access, at the speed of the innermost core of the processor
    • L1 CPU caches (32 KiB to 512 KiB) – fast access, at the speed of the innermost memory bus, owned exclusively by each core
    • L2 CPU caches (128 KiB to 24 MiB) – slightly slower access, at the speed of the memory bus shared between pairs of cores
    • L3 CPU caches (2 MiB to 32 MiB) – even slower access, at the speed of a memory bus shared between even more cores of the same processor
    • Main physical memory (RAM) (256 MiB to 64 GiB) – slow access, at a speed limited by the spatial distances and general hardware interfaces between the processor and the memory modules on the motherboard
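
      To see those levels directly, here is a rough pointer-chasing sketch (mine, not from the wiki article; the working-set sizes are illustrative). It shuffles a buffer into one random cycle and times a single dependent load per step; as the working set outgrows L1, then L2, then L3, the nanoseconds per load step up at each boundary:

        #include <stdio.h>
        #include <stdlib.h>
        #include <time.h>

        static volatile size_t sink;   /* keeps the chase from being optimised away */

        static double chase_ns(size_t n)   /* n = number of pointer-sized slots */
        {
            size_t *next = malloc(n * sizeof *next);
            if (!next) return -1.0;

            /* Sattolo's algorithm: shuffle into a single random cycle,
               so every load depends on the result of the previous one. */
            for (size_t i = 0; i < n; i++) next[i] = i;
            for (size_t i = n - 1; i > 0; i--) {
                size_t j = (size_t)rand() % i;
                size_t t = next[i]; next[i] = next[j]; next[j] = t;
            }

            size_t p = 0;
            const size_t steps = 20 * 1000 * 1000;
            clock_t t0 = clock();
            for (size_t s = 0; s < steps; s++) p = next[p];
            double ns = (double)(clock() - t0) / CLOCKS_PER_SEC / (double)steps * 1e9;

            sink = p;
            free(next);
            return ns;
        }

        int main(void)
        {
            srand(1);
            /* 16-32 KiB sits in L1; 4 MiB spills into L3; 256 MiB forces RAM. */
            size_t kib[] = { 16, 32, 256, 4096, 32768, 262144 };
            for (int i = 0; i < 6; i++)
                printf("%8zu KiB: %6.1f ns/load\n", kib[i],
                       chase_ns(kib[i] * 1024 / sizeof(size_t)));
            return 0;
        }

      The exact numbers depend on the machine, but the shape is always the same: a few nanoseconds per load while the cycle fits in cache, climbing to roughly an order of magnitude more once it only fits in main memory -- all without a single page fault.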

      Since people don't appear to have bothered to watch the full video I linked above, here is the salient part of it (7:46). It'd be worth 8 minutes of anyone's time to watch it.

