
Perl 5.32.0 Too many open files ...

by skendric (Novice)
on Dec 02, 2020 at 13:54 UTC ( #11124519=perlquestion )

skendric has asked for the wisdom of the Perl Monks concerning the following question:

I tried upgrading from 5.30.0 to 5.32.0 and hit the following error with some of my scripts:

    Can't locate DateTime/TimeZone/Local/ /opt/local/lib/perl5/site_perl/5.32.0/DateTime/TimeZone/Local/ Too many open files at /opt/local/lib/perl5/site_perl/5.32.0/Module/Runtime.pm line 314 (#1)

    (F) You said to do (or require, or use) a file that couldn't be found.
    Perl looks for the file in all the locations mentioned in @INC, unless
    the file name included the full path to the file.  Perhaps you need to
    set the PERL5LIB or PERL5OPT environment variable to say where the
    extra library is, or maybe the script needs to add the library name to
    @INC.  Or maybe you just misspelled the name of the file.  See
    "require" in perlfunc and lib.

The file '/opt/local/lib/perl5/site_perl/5.32.0/DateTime/TimeZone/Local/' exists ... so I suppose that Perl isn't opening it because it has run out of ... available file handles? What is the name of the resource which has been exhausted and whose exhaustion leads to the 'too many open files' message?

Looking at .... Line #314 is the 'return scalar ...' line

    sub require_module($) {
        # Localise %^H to work around [perl #68590], where the bug exists
        # and this is a satisfactory workaround.  The bug consists of
        # %^H state leaking into each required module, polluting the
        # module's lexical state.
        local %^H if _WORK_AROUND_HINT_LEAKAGE;
        if(_WORK_AROUND_BROKEN_MODULE_STATE) {
            my $notional_filename = &module_notional_filename;
            my $guard = bless([ $notional_filename ],
                    "Module::Runtime::__GUARD__");
            my $result = CORE::require($notional_filename);
            pop @$guard;
            return $result;
        } else {
            return scalar(CORE::require(&module_notional_filename));
        }
    }

What is 'sub module_notional_filename'? Well, the same module contains the following:

    [...]
    The notional filename for the named module is generated and returned.
    This filename is always in Unix style, with C</> directory separators
    and a C<.pm> suffix.  This kind of filename can be used as an argument
    to C<require>, and is the key that appears in C<%INC> to identify a
    module, regardless of actual local filename syntax.

    =cut

    sub module_notional_filename($) {
        &check_module_name;
        my($name) = @_;
        $name =~ s!::!/!g;
        return $name.".pm";
    }
    [...]

Hmm, I'm running into my own lack of expertise here ... I don't see how 'sub require_module' and 'sub module_notional_filename' relate to counting how many open files I have

Anyone have pointers on what I am seeing?

I rolled back to perl 5.30.0, which restores the original behavior (i.e. none of these 'Too many open files' messages).

I am running on Ubuntu, kernel 5.4.0-54-generic


Replies are listed 'Best First'.
Re: Perl 5.32.0 Too many open files ...
by Fletch (Chancellor) on Dec 02, 2020 at 14:51 UTC

    On *NIX there's a notion of a per-process limit for file descriptors (among other things). There's a command ulimit built into your shell which will let you manipulate these values. Depending on the shell the exact output will vary, but running ulimit -a should dump out what the current limit is (and show you what the specific option is for file descriptors; probably -n). You can bump that value up to whatever hard limit the OS has in place (displayed with something like ulimit -H -a).
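    As a quick sketch of what that looks like in practice (the commands below are POSIX-shell; the example value 4096 and the path names are illustrative only):

```shell
# Inspect the per-process file-descriptor limits.
soft=$(ulimit -n)      # current soft limit, e.g. 1024
hard=$(ulimit -H -n)   # hard ceiling set by the OS/admin
echo "soft=$soft hard=$hard"

# Raise the soft limit for this shell and its children, up to the hard
# ceiling; an unprivileged user may move the soft limit anywhere below
# the hard limit, but cannot raise the hard limit itself.
ulimit -n "$hard" || echo "could not raise soft limit"
echo "soft limit is now $(ulimit -n)"
```

    Because children inherit the limit, raising it in the shell before launching the perl script is usually all that's needed.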

    Edit: slightly related: if you ever need to manipulate these directly from perl you can use the BSD::Resource module; your particular case, though, sounds like you're inheriting a small limit, and you might want to bump things up in the shell you're running stuff from (child processes inherit their parent's limit settings and pass them on to their subprocesses).

    The cake is a lie.

Re: Perl 5.32.0 Too many open files ...
by eyepopslikeamosquito (Bishop) on Dec 03, 2020 at 06:42 UTC

    I suggest you try to reproduce the problem reliably with the shortest program you can. Run this program with both Perl 5.30.0 and Perl 5.32.0 while (in another terminal session) monitoring the open files of the offending process. When you compare the open-files output of the same program under Perl 5.30.0 vs Perl 5.32.0, hopefully something will jump out at you, probably something unexpected (after all, "Too many open files" is a rare error in my experience, usually indicating something is horribly wrong with the system or I made a boo-boo).

    As for how to monitor which/how many files are open by a specific ubuntu linux process, you can google, as I did just now, finding this stack overflow question. Monitoring the details of open files on Linux can be tricky and requires root permissions for best results. The following Linux tools may be useful (examples can be found in the above stack overflow question):

    • lsof : a fantastic tool for monitoring the open files of running processes. Some examples of using lsof can be found at this blog.
    • The /proc file system in conjunction with the ps command, e.g. ls /proc/$pid/fd/. Examples can be found in the earlier Stack Overflow link.
    • The ulimit command.
    • Update: system call tracing (e.g. strace). Run the perl process with system call tracing on (for an example, see the Linux Security Cookbook). Needs root permissions (update: not always; see Fletch's response below). For best results, run the perl process directly via the strace command (strace can also be used to attach to a running process). Comparing the strace output (via diff, say) between Perl 5.30.0 and Perl 5.32.0 should give you a clue as to what is going on.
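    A minimal sketch of the /proc approach (Linux-only; the lsof and strace lines are commented out since they may need installing, and `yourscript.pl` is a placeholder for the failing script):

```shell
# Count this shell's own open descriptors via /proc (Linux).
pid=$$
ls /proc/"$pid"/fd                              # one entry per open fd
echo "$pid has $(ls /proc/"$pid"/fd | wc -l) fds open"

# Equivalent views with other tools, if available:
# lsof -p "$pid"                                # richer per-process view
# strace -f -e trace=openat perl yourscript.pl  # log each open as it happens
```

    Sampling that fd count in a loop while the suspect script runs under each perl should show whether descriptors are actually accumulating.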

    HTH. I'm a bit rusty on all this stuff and don't have a test ubuntu system handy.

    Updated: Noted that root permissions are often not required with strace (thanks Fletch). Minor clarifications and improvements to wording.

      You can run strace on your own processes on most *NIXen (either by starting them with it as a prefix to the command you want to trace, or after the fact by giving it the PID of an existing process owned by your UID). You only need to be root to attach to a different user's process. If you're on OS X, the strace-alike there (dtruss) does need elevated privileges.

      The cake is a lie.

Re: Perl 5.32.0 Too many open files ... (don't look inside modules ;)
by Anonymous Monk on Dec 03, 2020 at 01:02 UTC


    Don't look inside modules ;)

    The problem is in your code: you're not closing filehandles for some reason.

    You report problems by posting the smallest possible program that REPRODUCES the bug:

    $ perl -fe " while(++$i){ open my($fh),'<',1 or last; push @f, $fh; } warn $i; require strict;"
    2046 at -e line 1.
    Can't locate Too many open files at -e line 1.

    As you can see the problem is not in require or

Node Type: perlquestion [id://11124519]
Front-paged by Corion