Re: Does Perl have a baked-in memory limit?
by eyepopslikeamosquito (Archbishop) on Nov 11, 2023 at 22:33 UTC
I'm trying to read in using File::Slurp. I get an out of memory error.
Don't use File::Slurp!
Just do it idiomatically without a module (my $string = do { local $/; <$fh> })
as described in more detail by haukex here.
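A minimal sketch of that idiom (the file name and the :raw layer are just for illustration):

    open my $fh, '<:raw', 'data.json' or die "Can't open data.json: $!";
    my $string = do { local $/; <$fh> };   # undef $/ reads the whole file in one go
    close $fh or warn "Close failed: $!";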
See also:
Updated: extra references were added later
Is that... Don't use File::Slurp for this use case or is it don't use File::Slurp EVER! ???
For me, it's ever ...
despite being good mates with File::Slurp's original author Uri Guttman. :)
Though mostly a matter of taste for individuals working alone,
dependencies are crucial when working in teams and especially for CPAN authors (such as the BOD :)
because the price of depending on simple convenience modules, such as File::Slurp, is way too high:
- What if your dependent module has a security vulnerability?
- What if the author abandons it?
- How quickly can you isolate/troubleshoot a bug in its code? (e.g. in a vital production system)
It's a different story for more complex domains, such as DBI and XML,
where it makes sense to leverage the work of experts in fields that you are not expert in.
More detail on this topic can be found in the Dependencies section at Writing Solid CPAN Modules.
I don't use File::Slurp ever (but without the uppercase and the punctuation :-)
Leon's blog post explains (some of) the problems with File::Slurp and at the end offers three alternatives. I tend to use Path::Tiny in preference because it isn't overkill, is already widely used and does just about anything I am likely to want in regards to files and directories, including slurping and spewing. YMMV.
I should also say that for published code which slurps only a single file, I would probably just do that without a module, as the overhead isn't really worth it in terms of efficiency and maintenance.
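For the curious, a small sketch of the Path::Tiny approach (file names are only illustrative):

    use Path::Tiny;

    my $text = path('config.json')->slurp_utf8;   # read the whole file as UTF-8
    path('copy.json')->spew_utf8($text);          # write it back out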
Re: Does Perl have a baked-in memory limit?
by dave_the_m (Monsignor) on Nov 11, 2023 at 21:31 UTC
Have you experimented with the buf_ref, scalar_ref and blk_size options to File::Slurp's read_file() function? In particular, with the default settings read_file() will keep reading 1MB blocks from the file and appending each one to a local variable. Each append may cause perl to reallocate the buffer used for the string, and depending on your OS's malloc() library, you may end up with allocated-but-no-longer-used 1MB, 2MB, 3MB, etc. buffers. Try setting blk_size to the size of the file (possibly rounded up to a power of 2).
When read_file() returns the string, the return value gets copied (possibly twice). Perl these days is fairly good at sharing or stealing buffers when copying strings, but buf_ref or scalar_ref may still help, depending on the circumstances and your perl version.
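A rough sketch of the kind of call described above ($file and the exact option values are assumptions; check the File::Slurp documentation for your version):

    use File::Slurp qw(read_file);

    my $size = -s $file;       # file size in bytes
    my $buf;
    read_file( $file,
        buf_ref  => \$buf,     # slurp into our own scalar via the buf_ref option
        blk_size => $size,     # one big read instead of repeated 1MB appends
    );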
Dave.
Re: Does Perl have a baked-in memory limit?
by kevbot (Vicar) on Nov 11, 2023 at 17:18 UTC
Re: Does Perl have a baked-in memory limit?
by ikegami (Patriarch) on Nov 15, 2023 at 04:08 UTC
No.
What's the output of perl -V:archname? (That's an uppercase "V".) I suspect you are using an environment with a 4 GiB address space, a huge chunk of which is reserved by the OS. If so, switch to a 64-bit build.
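For example, on a 64-bit Linux build the output looks something like this (illustrative only; a 32-bit build shows an archname such as i686-linux or MSWin32-x86-multi-thread):

    $ perl -V:archname
    archname='x86_64-linux-thread-multi';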
Re: Does Perl have a baked-in memory limit?
by NERDVANA (Priest) on Nov 15, 2023 at 11:40 UTC
Are you sure you're using a 64-bit perl, and not a 32-bit one left over from a previous system? A 32-bit perl could run out of RAM pretty easily.
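One quick way to check (a hedged sketch; ptrsize is 8 on a 64-bit perl and 4 on a 32-bit one):

    perl -MConfig -le 'print "$Config{archname}: $Config{ptrsize}-byte pointers"'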
Re: Does Perl have a baked-in memory limit?
by Anonymous Monk on Nov 12, 2023 at 17:48 UTC
As another attempt at a rational explanation (a bit of a guessing game without a code sample and error message), you may have inadvertently put the call to slurp into list context, like:
my $json = JSON->new->decode( read_file($file) );
which then tries to split 5e8 bytes into presumably quite short lines, and may consume a few times 1e9 bytes of RAM long before JSON complains about wrong usage.
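If that is what's happening, a minimal sketch of the fix is to force scalar context (assuming File::Slurp's read_file):

    my $json = JSON->new->decode( scalar read_file($file) );   # one big string, not a list of lines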