PerlMonks
Re^2: Scoping question - will file handle be closed?

by BrowserUk (Patriarch)
on Jul 14, 2015 at 19:37 UTC [id://1134788]


in reply to Re: Scoping question - will file handle be closed?
in thread Scoping question - will file handle be closed?

However, this automatic close does not check for errors, so it is better to explicitly close filehandles, especially those used for writing.

Besides knowing that, I have always wondered: what are you meant to do if you detect an error when closing a file?


With the rise and rise of 'Social' network sites: 'Computers are making people easier to use everyday'
Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.
I'm with torvalds on this Agile (and TDD) debunked I told'em LLVM was the way to go. But did they listen!

Replies are listed 'Best First'.
Re^3: Scoping question - will file handle be closed?
by Laurent_R (Canon) on Jul 14, 2015 at 20:31 UTC
    You're right, I don't know either. Most of the time I do explicitly close my file handles, even when it is not strictly necessary (at the very least, it is self-documenting), but I never check whether closing the file was successful. Quite often, I close a bunch of files in one line, such as:
    close $_ for $IN1, $IN2, $OUT1, $OUT2;
    (Note that qw/$IN1 $IN2 .../ would not work here, since qw// produces the literal strings '$IN1' etc. rather than the handles. And this is just an example; I usually try to give more useful names to my file handles.)
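    If you do want to check each close, you need to know which handle failed, so a loop over names as well as handles helps. A minimal sketch, with hypothetical file names:

    ```perl
    use strict;
    use warnings;

    # Hypothetical output files; the names are placeholders.
    open my $OUT1, '>', 'out1.txt' or die "open out1.txt: $!";
    open my $OUT2, '>', 'out2.txt' or die "open out2.txt: $!";

    print {$OUT1} "data for out1\n";
    print {$OUT2} "data for out2\n";

    # Close each handle and report *which* close failed.
    my @handles = ( [ OUT1 => $OUT1 ], [ OUT2 => $OUT2 ] );
    my $ok = 1;
    for my $h (@handles) {
        my ( $name, $fh ) = @$h;
        unless ( close $fh ) {
            warn "close $name failed: $!";
            $ok = 0;
        }
    }
    ```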
Re^3: Scoping question - will file handle be closed?
by Monk::Thomas (Friar) on Jul 14, 2015 at 20:10 UTC
    Warn the user that there's something wrong. The most likely cause is that Perl failed to write the data to disk.
      Warn the user that there's something wrong.

      Okay. But "Something's wrong!" isn't very useful to the user.

      According to POSIX, close() can fail with one of three errors:

      1. EBADF: fd isn't a valid open file descriptor.

        Hopefully, if this can occur in the application, it'll be detected and sorted before the user gets his hands on it.

      2. EINTR: The close() call was interrupted by a signal; see signal(7).

        Ditto. If this is a possibility, a signal handler will have been installed to handle it.

      3. EIO: An I/O error occurred.

        And we're back to "Something's wrong!".

      And then the man page says: "A successful close does not guarantee that the data has been successfully saved to disk, as the kernel defers writes." So even a successful close doesn't guarantee good data.

      So we've told the user "Something's wrong!", but not what. And even if we haven't told him, something could still be wrong. So where does that leave him?

      He could decide to re-run the process to recreate the file; but what if it (whatever "it" is) happens again?

      And, it could be he does so unnecessarily, because it was only the close that failed and not the preceding writes.

      And what can he do about the failures that happen after we've closed the file, due to caching?

      In the end, the only arbiter of whether the file is good is whether the next use of that file is successful.

      So the only sure way to detect the types of problems that might be indicated by close failing is to ensure that the next process in the chain, even if that is a human being reading the file, can adequately detect that the file is incomplete or otherwise corrupted. Perhaps a sentinel record ("The end") or a known file size.
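      The sentinel-record idea can be sketched in a few lines; the file name and sentinel string here are hypothetical:

      ```perl
      use strict;
      use warnings;

      my $file     = 'report.txt';      # hypothetical file name
      my $sentinel = "### END ###\n";   # hypothetical end-of-data marker

      # Writer: append the sentinel last, so a truncated file can be detected.
      open my $out, '>', $file or die "open $file: $!";
      print {$out} "record 1\n";
      print {$out} "record 2\n";
      print {$out} $sentinel;
      close $out or die "close $file: $!";

      # Reader (the next process in the chain): the file is good only if the
      # sentinel is the last line.
      open my $in, '<', $file or die "open $file: $!";
      my @lines = <$in>;
      close $in or die "close $file: $!";
      my $complete = @lines && $lines[-1] eq $sentinel;
      die "$file is incomplete or corrupted\n" unless $complete;
      ```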

      And whatever mechanism is put in place for the next process to detect the error negates any purpose in detecting and warning that "Something's wrong!".

      Did I miss a possibility?



        Perl close() is not POSIX close(). Some less-bizarre reasons that Perl close() might fail include:

        ENOSPC
            A previous write to disk failed because the disk partition ran out of space (and you had not cleared errors for that handle by seek()ing since then).
        EPIPE
            You had unflushed data, close() tried to flush it, and all of the readers at the other end of your pipe/socket had already closed/shut down their end. Or a previous write to that handle failed for that reason (and you probably got a SIGPIPE earlier).
        EIO
            Your device has decided to fail. This is more specific than "something is wrong". You should arrange to remember what you had opened so that you can mention it in the "close() failed" error message, so the user (or their admin) can determine which device to check.
        ECONNRESET
            Your socket connection got reset (since the last time you cleared errors by seek()ing).
        ENOMEM
            The system wanted to allocate more buffer space but the call to allocate it failed (possible at least for sockets).

        Plus there are errors that can originate from some I/O layer. You'd have to consult the documentation and/or code for whatever layers you might use for more information on that. Perhaps you can use an encoding layer configured to complain about untranslatable characters via an error return. Or a "decompress" layer might complain about invalid input.

        It can be useful to check for close failing because it may be important not to continue after close() failed: the data you were writing or reading is incomplete, so you shouldn't submit it, or shouldn't mark the data source as "done", or whatever.

        It is usually easier to check for close() failing than to check every single I/O operation for failure. If using buffered I/O, then checking every single I/O operation wouldn't be sufficient anyway.
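        The buffered-I/O point is easy to demonstrate on Linux, where every real write to /dev/full fails with ENOSPC: a small print "succeeds" because the data only reaches perl's buffer, and the failure surfaces when close() flushes. A Linux-specific sketch:

        ```perl
        use strict;
        use warnings;

        # Linux-specific: /dev/full accepts the open, but any actual write
        # to it fails with ENOSPC ("No space left on device").
        open my $out, '>', '/dev/full' or die "open /dev/full: $!";

        my $printed = print {$out} "a few bytes\n";  # usually true: data only buffered
        my $closed  = close $out;                    # the flush happens here and fails
        my $err     = $!;

        print 'print returned ', ( $printed ? 'true' : 'false' ), "\n";
        print 'close returned ', ( $closed  ? 'true' : 'false' ), " ($err)\n";
        ```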

        If I have no logical recovery step for "incomplete data", then I will just warn when close fails.

        - tye        

        Okay. But "Something's wrong!" isn't very useful to the user.

        close $fh or croak("E: Unable to close file handle for $file: $!");

        This usually results in a nice explanation of what went wrong.

        And, it could be he does so unnecessarily, because it was only the close that failed and not the preceding writes.

        This also caught me by surprise. The print {$fh} returned success; it only became apparent when closing the file handle that Perl had actually been unable to write.

        And what can he do about the failures that happen after we've closed the file; due to caching?

        Out of scope, at least as far as Perl is concerned. (And I think there is no way you can truly validate that the data has been written to disk, at least as long as you do not have exact knowledge of the storage architecture. And even then it may still be lying.)
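        Mostly, yes; though Perl can at least ask the kernel to push its cache to the device before close, via IO::Handle's sync method (which calls fsync(2) where the platform provides it). A sketch, with a hypothetical file name; even this cannot defeat a device that lies about having committed the write:

        ```perl
        use strict;
        use warnings;
        use IO::Handle;

        my $file = 'critical.dat';   # hypothetical file name

        open my $out, '>', $file or die "open $file: $!";
        print {$out} "must survive a crash\n" or die "print: $!";

        $out->flush or die "flush $file: $!";   # perl's buffer -> kernel
        $out->sync  or die "sync $file: $!";    # kernel's cache -> device (fsync)
        close $out  or die "close $file: $!";
        ```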
