http://www.perlmonks.org?node_id=11157795


in reply to RE on lines read from in-memory scalar is very slow

Just a summary: this hasn't been fixed yet.

There are three things contributing to this problem:

  1. When reading from a file, sv_gets(), which is used to implement readline(), preallocates the SV to the size of the file handle's buffer; for an in-memory file, the whole in-memory scalar is the buffer, so 16M in this case.
  2. On a match, the regular expression engine makes a copy of the input string, and here it makes it a copy-on-write copy even in cases where perl normally wouldn't.
  3. Cygwin is very slow at updating the final byte of the large SV buffer (the PV), which is where the CoW reference count is stored. This may be due to the way VirtualAlloc() is used to implement malloc() on Cygwin.
Point 1 means we get that huge allocation for the SV, point 2 means we try to make it a CoW SV, and point 3 produces the horrible performance. As an incidental effect of point 2 making the SV CoW, if you save a copy of the SV:
my @save;
while (<$fh>) {
    /successful (match with captures)/;   # mark the SV as CoW
    push @save, $_;   # extend the lifetime of the SV's PV until removed from @save
}

the memory use of the program explodes (my machine with 32G ran out of memory testing this).
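
For reference, a minimal sketch of the kind of loop that triggers this (the sizes and pattern here are illustrative, not the original poster's exact code):

use strict;
use warnings;
use Time::HiRes qw(time);

my $big = ("x" x 100 . "\n") x 160_000;   # roughly 16M of line data

open my $fh, '<', \$big or die "open: $!";
my $t0 = time;
while (<$fh>) {
    /(x+)/;   # a successful match marks the line's SV CoW
}
printf "elapsed: %.2fs\n", time - $t0;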

I have a WIP fix for point 2, but it unfortunately leaks in ASAN tests, so I haven't made a PR for it (it does prevent the performance problem).

Re^2: RE on lines read from in-memory scalar is very slow
by NERDVANA (Curate) on Feb 20, 2024 at 05:57 UTC
    The ideal internal behavior I would hope for is that the underlying scalar from which $fh was created would get copy-on-write substrings made from it, so that the 16M allocation exists only once and everything else is a reference into it. Modifying one of those substrings would hopefully copy only that segment into a new buffer the size of the substring.

    How many obstacles are standing in the way of that behavior?

    (I've noticed on other occasions that making one of my APIs handle-friendly by wrapping optional plain scalars with a file handle and then writing the body of the code to read from handles has introduced a performance hit vs. writing the code to split /\n/ and process the strings directly. It would be really cool if they performed equally)

      The problem with what I think you're suggesting is that readline()/sv_gets() doesn't know about the SV behind a file handle created from a reference; to it, it's just another file, one where the buffer isn't the normal size.

      As to modifications of the SV the file handle refers to: making such a file handle doesn't copy the SV. Changes to the SV are reflected in the values returned by reads from the handle, and writes to the handle are (obviously) reflected when reading from the SV. (Changes to the SV while the handle is open have led to some bugs.)
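
      A small demonstration of that reflection (a sketch; the strings are illustrative):

      my $buf = "first\n";
      open my $rfh, '<', \$buf or die "open: $!";
      $buf = "changed\n";    # modify the SV behind the read handle
      print scalar <$rfh>;   # reads see the new contents: "changed\n"

      my $out = '';
      open my $wfh, '>', \$out or die "open: $!";
      print $wfh "hello\n";
      close $wfh;            # make sure the write is flushed
      print $out;            # the write landed in the scalar: "hello\n"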

      Note that in the match case, on success perl makes a CoW copy of the whole matched SV, and the magic for $1, $& etc. copies the appropriate text from that CoW SV. That CoW copy is part of the problem here.
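
      You can see that flag with the core Devel::Peek module (a sketch; the exact Dump() output varies by perl version and build):

      use Devel::Peek;

      my $tail = " world";
      my $s    = "hello" . $tail;   # runtime concat: a fresh PV, not CoW yet
      $s =~ /(world)/;              # successful match with a capture
      Dump($s);                     # FLAGS typically now include IsCOW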

      As to the multiple 16M allocations, that's the other part of the bug here: sv_gets() preallocates the target SV to way too large a size.
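
      That over-allocation is visible with Devel::Peek too: compare CUR (the string length) against LEN (the allocated buffer) on a line read from a large in-memory file (a sketch; the numbers depend on your perl version and whether a fix has landed):

      use Devel::Peek;

      my $big = ("x" x 100 . "\n") x 160_000;   # ~16M in-memory file
      open my $fh, '<', \$big or die "open: $!";
      my $line = <$fh>;
      Dump($line);   # on affected perls LEN is near 16M, not ~101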

      For your performance issue, you could post here (on perlmonks, not necessarily in this thread) with the code and performance numbers for discussion, and/or make a ticket against perl itself on GitHub. The main issue I can think of when comparing readline() vs split is that readline goes through more layers (pp_readline, sv_gets, PerlIO, PerlIO::scalar vs pp_split and the regexp engine), which is going to slow things down. A test case can be investigated with a profiler (gprof, perf record, valgrind's callgrind, the Cygwin profiler, the Visual Studio profiler) to see if something is consuming more than the expected CPU time.
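
      As a starting point for those numbers, the core Benchmark module would do (a sketch; the workload is illustrative):

      use strict;
      use warnings;
      use Benchmark qw(cmpthese);

      my $longstring = join "\n", ("some line of text") x 100_000;

      cmpthese(-3, {
          split_lines => sub {
              my $n = 0;
              $n++ for split /\n/, $longstring;
          },
          readline_fh => sub {
              open my $fh, '<', \$longstring or die "open: $!";
              my $n = 0;
              $n++ while <$fh>;
          },
      });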

        I had forgotten about reflecting writes to the string; that would certainly nix the idea of using copy-on-write substrings.

        Does the regex engine *always* make a full clone of the target? Can't that itself be a copy-on-write? Maybe only if the match target was already copy-on-write?

        Maybe what I'm suggesting here is an optimization for sv_gets that kicks in when:

        1. The file handle is backed by a scalar,
        2. The scalar already has the CoW flag set on it, or the scalar itself has a refcount of 1 (meaning the app no longer has access to modify it),
        3. The file handle is read-only, and
        4. The file handle has only simple layers on it, like the default or :raw (maybe extended to support :utf8),

        then short-circuit around the regular behavior and return a CoW substring? As I describe it, it's sounding like too much effort, but I'm still curious if it would work. (A rough sketch of checking these conditions follows below.)
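
        For what it's worth, a rough pure-Perl approximation of checking those conditions (a sketch only; conditions 2 and 3 aren't fully visible from Perl level, a real fast path would live in sv_gets() at the C level, and the helper name is made up):

        use PerlIO ();   # PerlIO::get_layers() is core

        sub eligible_for_cow_fastpath {
            my ($fh) = @_;
            my @layers = PerlIO::get_layers($fh);
            # 1. backed by a scalar: the 'scalar' layer is present
            return 0 unless grep { $_ eq 'scalar' } @layers;
            # 4. only simple layers beyond that
            return 0 if grep { $_ !~ /^(?:scalar|perlio|unix|crlf|utf8)$/ } @layers;
            # 2. (CoW flag / refcount of the backing SV) and 3. (read-only
            # handle) would need C-level checks such as SvIsCOW() and IoTYPE.
            return 1;
        }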
      "(I've noticed on other occasions that making one of my APIs handle-friendly by wrapping optional plain scalars with a file handle and then writing the body of the code to read from handles has introduced a performance hit vs. writing the code to split /\n/ and process the strings directly. It would be really cool if they performed equally)"
      Can you make a new thread explaining this? I don't understand.
        I'm just saying that I've run into cases where
        for (split /\n/, $longstring) { ... }

        ran faster than

        open my $fh, '<', \$longstring;
        while (<$fh>) { ... }

        In a perfect world, they would run at the same speed (or at least really close). The second one is preferred any time there's a chance you'll want to run it on a huge file and don't want to load $longstring into memory all at once. The second version handles both cases, but if the majority of your inputs are already in memory, then maybe you want to write it the first way for performance.
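
        For the record, the wrapping pattern I mean looks roughly like this (a sketch; process() and its body are made up for illustration):

        # Accept either a file handle or a plain scalar; wrap scalars in an
        # in-memory handle so the body only ever reads from a handle.
        sub process {
            my ($input) = @_;
            my $fh = ref($input) eq 'GLOB' ? $input : do {
                open my $mem, '<', \$input or die "open: $!";
                $mem;
            };
            while (my $line = <$fh>) {
                # ... per-line work ...
            }
        }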

        (If I made a top-level post out of this I'd want to do all the benchmarks across different perl versions, and I'd end up researching the Perl source code and all that, which I don't have time for right now.)