PerlMonks
Win32: Setting a layer with binmode causes problem with close() on Windows

by rovf (Priest)
on Jun 17, 2013 at 09:11 UTC ( #1039307=perlquestion )
rovf has asked for the wisdom of the Perl Monks concerning the following question:

I have an application running on Windows 7 with Perl 5.14, where a file is opened for reading in one place, but the layer is set in a different place using binmode. This has the weird effect that the file cannot be deleted after it is closed, while the process that owned the handle is still running. My feeling is that close doesn't work properly, although it doesn't return an error.

Here is a small, complete program which demonstrates the problem:

use strict;
use warnings;

# Some file is created by a different process
my $file = 'ttz';
system("cmd /c echo xx >$file");

# In our application, this file is created by a process
# on Unix. That's why we read it with the Unix layer.
# We do NOT set the layer during the open call.
my $layer = ':unix';
open(my $fh, $file) or die "open error $!";

# In our application, a different function is responsible
# for setting the layer. Therefore, we use binmode to set
# it. Nothing had been read from the file so far, so this
# should be fine.
binmode($fh, $layer);

# Now close the file ...
close($fh) or die "close error $!";

# ... and try to delete it.
if (!unlink($file)) {    # this is line 19
    warn $!;
    system("cmd /c del $file");
}
Running this program on my PC shows the following output:

Permission denied at fcstest1.pl line 19.
I:\tmp\ttx
The process cannot access the file because it is being used by another process.
-- 
Ronald Fischer <ynnor@mm.st>

Replies are listed 'Best First'.
Re: Win32: Setting a layer with binmode causes problem with close() on Windows (PerlIO silently fails to close the file)
by BrowserUk (Pope) on Jun 17, 2013 at 10:39 UTC

    First up: PerlIO layers are definitely a part of this problem. Commenting out the binmode makes it go away (as you already know).

    But, it is (much) more complicated than that. At the point where the unlink fails, (at least) two processes are hanging on to handles to that file:

    C:\test>junk44
    Permission denied : The process cannot access the file because it is being used by another process at C:\test\junk44.pl line 30.
    perl.exe   pid: 17320  PB-IM2525-AIO\HomeAdmin :  60: File (RW-)  C:\test\ttz
    cmd.exe    pid: 17324  PB-IM2525-AIO\HomeAdmin :  60: File (RW-)  C:\test\ttz
    handle.exe pid: 16152  PB-IM2525-AIO\HomeAdmin :  60: File (RW-)  C:\test\ttz
    1. perl.exe is the one running the script.
    2. handle.exe is the process that is doing this discovery.
    3. cmd.exe is (one of) the shells that were used to run the echo command to create the file.

      Further muddying the waters here is your prefixing the command you want to run with 'cmd /c'.

      Because the system code detects that you are using a shell metacharacter ('>') in the command, it automatically prefixes the command you supply with 'cmd.exe /x/d/c'.

      So the actual command being run is:

      cmd.exe /x/d/c "cmd /c echo xx >$file"

      Doing away with that doesn't fix the problem, but it makes it less complex.
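      One way to take the implicit cmd.exe wrapper out of the picture entirely is to create the test file from Perl itself rather than shelling out. A minimal sketch (the file name ttz is taken from the original script):

```perl
use strict;
use warnings;

# Create the test file from within Perl, so no cmd.exe instance is
# ever involved and no handle can be inherited by a child process.
my $file = 'ttz';
open(my $mk, '>', $file) or die "create error $!";
print $mk "xx\n";
close($mk) or die "close error $!";
```

      This removes both the extra cmd.exe layer and the inherited-handle question from the experiment in one go.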

    Also, running the command to create the file from within the script confuses things, and there is no need for it.

    This simplified version of your script:

    use strict;
    use warnings;

    my $file = 'ttz';
    open(my $fh, $file) or die "open error $!";
    binmode($fh, ':unix');
    close($fh) or die "close error $!";
    if (!unlink($file)) {
        warn $!, ' : ', $^E;
    }

    exhibits exactly the same behaviour when the file is pre-created:

    ## In a different session from the one in which I will run my modified version of your script
    C:\test>echo xx > ttz

    ## Shows that immediately after creation, nothing has an open handle to that file
    C:\test>handle | find "ttz"

    ## Now in the other session
    C:\test>junk44
    Permission denied : The process cannot access the file because it is being used by another process at C:\test\junk44.pl line 12.

    ## And back in the first session whilst the 10 second sleep is running
    C:\test>handle | find "ttz"
      60: File (RW-)  C:\test\ttz

    Only one process has a handle to the file, and that process is Perl itself.

    (Tentative) Conclusion: The error message is wrong, or at least misleading. The "other process" that is preventing the unlink is actually the same process that is trying to perform the unlink.

    Essentially, the close has failed (silently), or has simply not been enacted, and so the unlink cannot proceed because there is an open handle to the file.

    Tracking this further means delving into IO layers ... why did the close fail silently?


    With the rise and rise of 'Social' network sites: 'Computers are making people easier to use everyday'
    Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
    "Science is about questioning the status quo. Questioning authority".
    In the absence of evidence, opinion is indistinguishable from prejudice.
      Good analysis, but I think you have it wrong on one point: at the time the unlink fails, no other process that holds the file is still running. The one that created the file is no longer running (since system waits for the child process to finish), and, just for completeness, the process deleting the file has not started yet.

      BTW, neither system call exists in this form in my original code (in my application, the file is created on a Unix host asynchronously, and read and deleted by the Windows process). I introduced them in the example for the following reasons:

      • I wanted to create the file by a separate process, to make sure that my Perl program "has not seen" this file before, to make the situation more similar to my original application.
      • After the unlink fails, I added an explicit cmd /c del ..., because its error message was clearer than what was stored in $!. In hindsight, I probably could have output $^E instead.

      -- 
      Ronald Fischer <ynnor@mm.st>
        I think you have it wrong in one point: At the time when the unlink fails, no other processes are running which have a hold on the file:

        Here's the problem. You know the way you have to fork twice under *nix in order to daemonise a process -- the first fork inherits loads of handles (stdin, stdout, stderr, etc.) from its parent, so you close them and fork again to get a process that is truly independent of its parent -- well, similar things can happen under Windows.

        system starts a new process that inherits lots of stuff from its parent. When it dies, if the parent is still around, many of those shared (duped) handles have to be retained within the kernel -- waiting for all their duplicates to be marked for delete -- and even though the process has been removed from the system scheduler, those retained, open, shared handles are still attributed to the now-defunct process. So the fact that system has returned does not mean all of its resources have been cleaned up.
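        For reference, the *nix double-fork idiom mentioned above looks roughly like this in Perl. This is a sketch of the Unix pattern only, not of the Windows handle mechanics, and the sub name daemonise is my own:

```perl
use strict;
use warnings;
use POSIX qw(setsid _exit);

sub daemonise {
    # First fork: the parent exits; the child keeps running but still
    # shares the parent's inherited handles and session.
    my $pid = fork() // die "fork failed: $!";
    _exit(0) if $pid;

    setsid() or die "setsid failed: $!";    # detach from controlling tty

    # Close (re-point) the handles inherited from the parent.
    open STDIN,  '<', '/dev/null' or die "reopen STDIN: $!";
    open STDOUT, '>', '/dev/null' or die "reopen STDOUT: $!";
    open STDERR, '>', '/dev/null' or die "reopen STDERR: $!";

    # Second fork: the session leader exits, leaving a process that can
    # never reacquire a controlling terminal.
    $pid = fork() // die "fork failed: $!";
    _exit(0) if $pid;
}
```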

        My simplified version of the test script simply removes all of those possibilities and demonstrates that the only process that could have a handle to the file is the perl process itself. Which is then verified using an external tool (handle.exe).

        Thus it is the close that is failing silently.


Re: Win32: Setting a layer with binmode causes problem with close() on Windows
by syphilis (Chancellor) on Jun 17, 2013 at 10:10 UTC
    Does a simple binmode($fh); fail to do the right thing?

    Cheers,
    Rob
      I read PerlIO again. My mistake was to regard :unix and :crlf as two alternative I/O layers, one doing the line-ending translation in the Unix style (i.e. no translation necessary), and the other in the Windows style. This is clearly wrong: :crlf sits on top of :unix, the latter being the most elementary layer.

      Indeed, just omitting binmode works; I can read both kinds of files on Windows.

      Now another, related question comes to mind. What about creating files? When I want to create a file on Windows which has Unix line endings, should I then

      • Pop the :crlf layer, or
      • Explicitly set the :raw layer, or
      • Just apply binmode without any layer
      , since just setting the layer to :unix shouldn't work either, for the same reason that it was nonsense when trying to read a Unix file on Windows? But which of these variants works reliably, i.e. without nasty side effects that might come up much later? I guess all three of them are correct, but I'm not sure; and anyway, which one would you consider preferable?
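      The three variants side by side, as I understand them. A minimal sketch, assuming a hypothetical output file unix_eol.txt; only one variant should be applied to a given handle:

```perl
use strict;
use warnings;

open(my $out, '>', 'unix_eol.txt') or die "open error $!";

binmode($out);             # Variant 3: plain binmode, no layer argument
# binmode($out, ':raw');   # Variant 2: push :raw explicitly
# binmode($out, ':pop');   # Variant 1: pop the topmost (on Windows, :crlf) layer

print $out "line 1\n";     # "\n" stays a bare LF; no CRLF translation
close($out) or die "close error $!";
```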

      -- 
      Ronald Fischer <ynnor@mm.st>
        :) more options :)
        PerlIO::eol - PerlIO layer for normalizing line endings
        Text::FixEOL - Canonicalizes text to a specified EOL/EOF convention, repairing any 'mixed' usages
      :) but what is it supposed to do? I added
      print join ' ', 1, PerlIO::get_layers($fh), "\n";
      binmode($fh, ':raw:perlio') or warn "UHOH $! \n$^E \n$@\n ";
      print join ' ', 2, PerlIO::get_layers($fh), "\n";
      binmode($fh, ':raw') or warn "UHOH $! \n$^E \n$@\n ";
      print join ' ', 3, PerlIO::get_layers($fh), "\n";
      binmode($fh, ':raw:raw:raw:raw') or warn "UHOH $! \n$^E \n$@\n ";
      print join ' ', 4, PerlIO::get_layers($fh), "\n";
      binmode($fh, ':raw:raw:raw:raw:unix:crlf') or warn "UHOH $! \n$^E \n$@\n ";
      print join ' ', 5, PerlIO::get_layers($fh), "\n";
      for (1..3) {
          binmode($fh, ':raw:pop') or warn "UHOH $! \n$^E \n$@\n ";
          print join ' ', 6, PerlIO::get_layers($fh), "\n";
      }
      #~ binmode($fh, ':raw:crlf:perlio') or warn "UHOH $! \n$^E \n$@\n ";
      binmode($fh, ':pop:pop:win32') or warn "UHOH $! \n$^E \n$@\n ";
      print join ' ', 6, PerlIO::get_layers($fh), "\n";
      and I get
      1 unix crlf
      2 unix crlf perlio
      3 unix crlf perlio
      4 unix crlf perlio
      5 unix crlf perlio unix crlf
      6 unix crlf perlio unix
      6 unix crlf perlio
      6 unix crlf
      6 win32
      in addition to the file-already-open error

      If turning off :unix and turning it on again, or turning it on twice, is wrong, perl should warn or die.

      PerlIO seems thin. OTOH, the test suite lists a TODO -- #56644: PerlIO resource leaks on open() and then :pop in :unix and :stdio -- but it's closed.

      Any way you look at it there is nonsense around :)

Re: Win32: Setting a layer with binmode causes problem with close() on Windows (nonsense)
by Anonymous Monk on Jun 17, 2013 at 09:38 UTC
    FWIW, it already has :unix; you shouldn't use it twice.
    $ set PERLIO_DEBUG=the-binmodeless.txt
    $ perl fudge
    $ set PERLIO_DEBUG=the-binmode.txt
    $ perl fudge binmode
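    The duplication can also be made visible without PERLIO_DEBUG, by inspecting the layer stack before and after the binmode. A minimal sketch (it opens $0, i.e. the script itself, just to have an existing file):

```perl
use strict;
use warnings;
use PerlIO ();    # PerlIO::get_layers

# Open any existing file; $0 (this script) is used only for illustration.
open(my $fh, '<', $0) or die "open error $!";

# Default stack is typically "unix crlf" on Windows, "unix perlio" on Unix:
# :unix is already at the bottom in both cases.
print join(' ', PerlIO::get_layers($fh)), "\n";

binmode($fh, ':unix');    # pushes a *second* :unix on top of the stack
print join(' ', PerlIO::get_layers($fh)), "\n";

close($fh) or die "close error $!";
```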

Node Type: perlquestion [id://1039307]
Approved by marto