PerlMonks
Re^2: RE on lines read from in-memory scalar is very slow

by NERDVANA (Curate)
on Feb 20, 2024 at 05:57 UTC ( [id://11157802] )


in reply to Re: RE on lines read from in-memory scalar is very slow
in thread RE on lines read from in-memory scalar is very slow

The ideal internal behavior I would hope for is that the underlying scalar from which $fh was created would get copy-on-write substrings made from it so that the 16M allocation only exists once and everything else is a reference to it. Modifying one of those substrings would hopefully only copy the segment into a new buffer the size of the substring.

How many obstacles are standing in the way of that behavior?

(I've noticed on other occasions that making one of my APIs handle-friendly, by wrapping optional plain scalars in a file handle and then writing the body of the code to read from handles, introduced a performance hit compared to writing the code to split /\n/ and process the strings directly. It would be really cool if they performed equally.)
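The wrapping pattern described above can be sketched like this; the function and variable names (process_lines, $src) are hypothetical, for illustration only:

```perl
use strict;
use warnings;

# Accept either a plain string or an already-open filehandle,
# and normalize everything to a handle before the main loop.
sub process_lines {
    my ($src) = @_;
    my $fh;
    if (!ref $src) {
        # Plain scalar: wrap it in an in-memory filehandle.
        open $fh, '<', \$src or die "open in-memory handle: $!";
    }
    else {
        $fh = $src;    # assume it is already a filehandle
    }
    my $count = 0;
    while (my $line = <$fh>) {
        $count++;
    }
    return $count;
}
```

The body of the sub only ever sees a handle, which keeps the code simple; the performance question in this thread is whether the in-memory-handle branch can ever be as fast as splitting the string directly.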

Replies are listed 'Best First'.
Re^3: RE on lines read from in-memory scalar is very slow
by tonyc (Friar) on Feb 20, 2024 at 22:57 UTC

    The problem with what I think you're suggesting is that readline/sv_gets() doesn't know about the SV behind a file handle created from a reference; to it, it's just another file, one where the buffer isn't the normal size.

    As to modifications of the SV the file handle refers to: making such a file handle doesn't copy the SV. Changes to the SV are reflected in the values returned by reads from the handle, and writes to the handle are (obviously) reflected when reading from the SV. (Changes to the SV have led to some bugs.)

    Note that in the match case, on success perl makes a CoW copy of the whole matched SV, and the magic for $1, $& etc. copies the appropriate text from that CoW SV. That CoW copy is part of the problem here.
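The purpose of that match-time copy can be seen from the fact that $1 stays valid even after the matched string is modified; a minimal illustration:

```perl
use strict;
use warnings;

my $s = "hello world";
$s =~ /(\w+)$/ or die "no match";    # captures "world"

# The engine saved a CoW copy of $s at match time, so the capture
# variables still refer to the text as it was when the match ran:
$s = "completely different";
print "$1\n";    # still prints "world"
```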

    As to the multiple 16M allocations, that's the other part of the bug here: sv_gets() is preallocating the target SV to a size far larger than needed.

    For your performance issue, you could post here (on perlmonks, not necessarily this thread) with the code and performance numbers for discussion, and/or open a ticket against perl itself on GitHub. The main issue I can think of when comparing readline() vs split is that readline goes through more layers (pp_readline, sv_gets, PerlIO, PerlIO::scalar vs pp_split, regexp), which is going to slow things down. A test case can be investigated with a profiler (gprof, perf record, valgrind's callgrind, the cygwin profiler, the Visual Studio profiler) to see if something is consuming more than the expected CPU time.

      I had forgotten about writes to the string being reflected; that would certainly nix the idea of using copy-on-write substrings.

      Does the regex engine *always* make a full clone of the target? Can't that itself be a copy-on-write? Maybe only if the match target was already copy-on-write?

      Maybe what I'm suggesting here is an optimization for sv_gets that kicks in when

      1. The file handle is backed by a scalar
      2. The scalar already has the CoW flag set on it, or the scalar itself has a refcount of 1 (meaning the app no longer has a way to modify it)
      3. The file handle is read-only
      4. The file handle has only simple layers on it, like the default or :raw (maybe extended to support :utf8)
      then short-circuit around the regular behavior and return a CoW substring? As I describe it, it's sounding like too much effort, but I'm still curious whether it would work.

        The regex engine currently always makes a CoW copy (ie doesn't copy the string itself) of the matched string.

        There are no CoW substrings: perl's PVs are stored with a trailing NUL so they're usable as C-style strings, and perl itself and XS code depend on that, so I don't think it can be implemented in any case.

Re^3: RE on lines read from in-memory scalar is very slow
by Danny (Pilgrim) on Feb 20, 2024 at 10:30 UTC
    "(I've noticed on other occasions that making one of my APIs handle-friendly by wrapping optional plain scalars with a file handle and then writing the body of the code to read from handles has introduced a performance hit vs. writing the code to split /\n/ and process the strings directly. It would be really cool if they performed equally)"
    Can you make a new thread explaining this? I don't understand.
      I'm just saying that I've run into cases where
      for (split /\n/, $longstring) { ... }

      ran faster than

      open my $fh, '<', \$longstring; while (<$fh>) { ... }

      In a perfect world, they would run at the same speed (or at least really close). The second form is preferred whenever there's a chance you'll want to run it on a huge file and don't want to load $longstring into memory all at once. The second version handles both cases, but if the majority of your inputs are already in memory, then maybe you want to write it the first way for performance.

      (If I made a top-level post out of this I'd want to do all the benchmarks and different perl versions, and I'd end up researching the Perl source code and all that, which I don't have time for right now)
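For anyone who does want to measure the gap, a minimal comparison of the two idioms with the core Benchmark module might look like this (the data size and iteration count are arbitrary choices for illustration):

```perl
use strict;
use warnings;
use Benchmark qw(cmpthese);

# Build a test string with a known number of lines.
my $longstring = join '', map { "line $_\n" } 1 .. 10_000;

cmpthese(100, {
    split_lines => sub {
        my $n = 0;
        $n++ for split /\n/, $longstring;
        return $n;
    },
    readline_fh => sub {
        open my $fh, '<', \$longstring or die "open: $!";
        my $n = 0;
        $n++ while <$fh>;
        return $n;
    },
});
```

Both subs count the same 10,000 lines, so any difference cmpthese reports comes from the machinery (pp_split vs pp_readline/sv_gets/PerlIO::scalar) rather than the work done per line.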
