
File::Find question

by FireBird34 (Pilgrim)
on Feb 23, 2003 at 04:50 UTC ( #237854=perlquestion )
FireBird34 has asked for the wisdom of the Perl Monks concerning the following question:

Needed to do some mass deleting, and someone suggested this module. I have never used it before, so I'm coming here for some advice on it :) Anyway, this is what I need: given multiple subdirectories in /path/to/directory, each containing numerous files, what would be the best way to delete files ending in a given extension?

So if you missed that, here's a simplified scenario:

I would basically need *.b to be deleted, leaving the *.a and *.c intact (NOTE: inside dir1 there might be another subdirectory, and so on). Is there an easy way to do this? Thanks for any input.

Replies are listed 'Best First'.
Re: File::Find question
by pfaut (Priest) on Feb 23, 2003 at 05:29 UTC

    File::Find is a Perl implementation of the Unix find utility. It is useful when you need to find files to process with Perl.

    Since what you want to do is delete files, which the Unix rm utility already handles, why not just use find and rm? To delete all files named *.b under /path/to/directory, use this:

    find /path/to/directory -name '*.b' -exec rm {} \;

    If you need to delete these files as part of processing being done by an existing perl script, then File::Find is the answer. Something like this (untested):

    use File::Find;
    find( sub { $_ =~ /\.b\z/ && unlink $_ }, '/path/to/directory' );
    --- print map { my ($m)=1<<hex($_)&11?' ':''; $m.=substr('AHJPacehklnorstu',hex($_),1) } split //,'2fde0abe76c36c914586c';
      One of my first memories with Perl is reading about find2perl. I had no idea that you could translate shell to perl so easily. Anyway, using find2perl is very easy and demonstrates how to use File::Find.
      [michael]$ find2perl /path/to/directory -name '*.b' -exec rm {} \;
      (mostly copied and pasted into the shell) which prints a complete File::Find based Perl script ready to run. It's a little verbose, so I've removed some of the less important lines for the sake of clarity.
      #! /usr/bin/perl -w
      use strict;
      use File::Find ();

      # find2perl emits these aliases; $name needs them under strict
      use vars qw/*name/;
      *name = *File::Find::name;

      sub wanted {
          /^.*\.b\z/s
              && (unlink($_) || warn "$name: $!\n");
      }

      # Traverse desired filesystems
      File::Find::find({ wanted => \&wanted }, '/path/to/directory');
      exit;
      -- - all things inbetween.
Re: File::Find question
by graff (Chancellor) on Feb 23, 2003 at 06:07 UTC
    Since you are planning to obliterate files using tools that are unfamiliar to you, it should go without saying that you need to be careful and thorough about error checking: before you actually make an "unlink" call or anything similar, test for failures and unintended or unexpected outcomes.

    This isn't really all that hard to do -- one way would be to divorce the File::Find part of the task from the "unlink" part; just build a list of files to be removed, and take a few moments to actually review that list yourself, before you throw in the "unlink" call. (Or, just save that list to a file, and once you've checked it, pass it to a separate one-line script to actually remove the files in the list.)
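    A minimal sketch of that two-phase approach, untested, with the directory and list-file names as placeholders:

    ```perl
    use strict;
    use warnings;
    use File::Find;

    # Phase 1: collect candidates instead of deleting on the fly.
    my $top = '/path/to/directory';   # placeholder -- substitute your real tree
    my @doomed;
    find(sub { push @doomed, $File::Find::name if -f && /\.b\z/ }, $top);

    # Save the list so it can be reviewed by eye first.
    open my $fh, '>', 'to-delete.txt' or die "can't write list: $!";
    print $fh "$_\n" for @doomed;
    close $fh;
    print scalar(@doomed), " file(s) queued in to-delete.txt\n";

    # Phase 2 -- run only after checking the list:
    # unlink $_ or warn "could not unlink $_: $!" for @doomed;
    ```

    The unlink stays commented out until you have actually read to-delete.txt.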

    914 suggested some good earlier discussions. I know of another, which is maybe not directly relevant, but demonstrates how the lack of error checking, combined with lack of understanding about File::Find, can be quite risky: Here's the seeker's post and the explanation of the real problem.

Re: File::Find question
by toma (Vicar) on Feb 23, 2003 at 08:13 UTC
    Certainly File::Find could work for your problem.

    For me File::List is easier, though.

    use File::List;

    my $dir = "/home/toma/perl/examples/filelist";
    my $search = new File::List($dir);

    # \ needed to make . an ordinary character in the regex
    my @files = @{ $search->find('\.b$') };

    foreach my $filename (@files) {
        if (-f $filename and -w $filename) {
            print "Can probably remove $filename\n";
            # Uncomment when this seems to work:
            # unlink $filename or die "Can't unlink $filename";
        }
    }

    Note: This code has not been tested. Test before using. Use at own risk. Be careful not to unlink a directory!

    It should work perfectly the first time! - toma

Re: File::Find question
by 914 (Pilgrim) on Feb 23, 2003 at 05:31 UTC
    i've asked some (somewhat) related questions, and gotten great answers here and here

    You could start there... and i'm sure there's help to be found in the File::Find documentation.

Re: File::Find question
by Anonymous Monk on Feb 23, 2003 at 05:06 UTC
    Your first instinct should be to read the manual, so go ahead and do that now.

    Your second instinct should be to look for tutorials on using the module, as in Tutorials.

    Then you should try writing some code.

    The best way to delete files is to just delete them. I'll bet in the time it took you to write up this post you could've figured out how to use File::Find to do what you want.

Re: File::Find question
by graq (Curate) on Feb 23, 2003 at 08:50 UTC

    Well, you asked a simple question and got some useful Perl answers. I would like to add a small way-of-thinking that is not restricted to Perl.

    Deleting is permanent; renaming is not. Think about your task and what could go wrong. Is your data important enough to make a backup? Would it be easy, instead of deleting files on the fly, to move them to a designated directory? Those files can then be checked manually, deleted manually, or bulk-deleted at set intervals if this is a regular job.

    Then again, maybe you just want to delete them :)
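    The move-instead-of-delete idea could look something like this (untested sketch; the 'quarantined' directory name is made up for the example):

    ```perl
    use strict;
    use warnings;
    use File::Find;
    use File::Copy qw(move);
    use File::Basename qw(basename);
    use File::Path qw(mkpath);

    # 'quarantined' is a made-up holding directory for this example
    my $quarantine = 'quarantined';
    mkpath($quarantine) unless -d $quarantine;

    find({
        no_chdir => 1,   # keep full paths in $_, so the relative target works
        wanted   => sub {
            return unless -f && /\.b\z/;
            # Move instead of unlink, so a mistake is recoverable
            move($_, "$quarantine/" . basename($_))
                or warn "could not move $_: $!";
        },
    }, '/path/to/directory');
    ```

    One caveat: files with the same name in different subdirectories will collide in the quarantine directory, so check for that if your tree has duplicates.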

    Graq

Re: File::Find question
by Aristotle (Chancellor) on Feb 24, 2003 at 12:48 UTC
    This is decidedly easier in shell than Perl.
    $ find /foo/bar -type f -name '*.baz' -print0 | xargs -0 rm

    The find(1) variant using -exec that some others posted works as well, but it spawns a new rm(1) process for every single file, whereas xargs(1) only spawns a new one after the command line argument buffer has filled up, which usually works out to one rm(1) per several hundred files, depending on your shell and system.

    (Note the -print0 and -0 options, which use null bytes as filename separators; that protects you against filenames containing spaces, newlines, or other exotic characters.)
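    Perl's own unlink has a similar batching property: it accepts a whole list, so you can collect names first and remove them in a single call rather than one call per file. An untested sketch:

    ```perl
    use strict;
    use warnings;
    use File::Find;

    # Collect everything first; '/foo/bar' is a placeholder path.
    my @victims;
    find(sub { push @victims, $File::Find::name if -f && /\.baz\z/ }, '/foo/bar');

    # One unlink call for the whole batch; it returns the number deleted.
    my $deleted = unlink @victims;
    warn "only removed $deleted of ", scalar(@victims), " files\n"
        if $deleted != @victims;
    ```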

    All that said and done, the File::Find::Rule module makes this pretty much just as easy in Perl:

    use File::Find::Rule;
    unlink File::Find::Rule->file()->name('*.baz')->in('/foo/bar/');

    Makeshifts last the longest.
