unlinking old files

by damian (Beadle)
on Jul 21, 2000 at 10:26 UTC ( #23546=perlquestion )

damian has asked for the wisdom of the Perl Monks concerning the following question:

Hi fellow monks. I know Perl has the capability to check for and remove files that are a day old or more. That would make things easier, since I do not want to run a cron job to clean out old files in a specified directory. How can I do this? I have a CGI script, and I would like that every time this script runs via the web, it first checks whether there are any files in a directory that are one day old or more. If it finds old files, it unlinks them; if not, it continues with its normal operation. Thanks in advance.

Re: unlinking old files
by Corion (Patriarch) on Jul 21, 2000 at 12:02 UTC

    Most likely you already know that checking through all files each time your CGI script is run will be quite slow, and that you will most likely run into access permission problems, as your webserver runs as a different user than you do and that user might not have the right to delete the files. A cron job would be the cleanest solution in my opinion, but you don't want a cron job.

    Perl has some nice "functions" for checking file times, namely -M (days since last modification), -A (days since last access) and -C (days since last inode change).
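
    For illustration, a minimal (hypothetical) check using these operators could look like the following; the path is just a placeholder:

        my $file = "/usr/home/foo/files/example.log";   # placeholder path
        if (-e $file) {
            printf "modified      %.1f days ago\n", -M $file;
            printf "accessed      %.1f days ago\n", -A $file;
            printf "inode changed %.1f days ago\n", -C $file;
        }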

    To use them in your code at startup, I would do something more or less like the following (untested):

    use strict;
    use File::Find;

    my @files;

    # First we fill @files with the (full) path of
    # all plain files in the directory
    find( sub { push @files, $File::Find::name if -f }, "/usr/home/foo/files/" );

    # Now we check each file to see whether it has become too old
    foreach (@files) {
        # If our file is older than one day, we will (try to) delete it
        if (-M > 1) {
            unlink $_;   # no error checking here ...
        }
    }

    # ... The rest of your script goes here ...

Re: unlinking old files
by athomason (Curate) on Jul 21, 2000 at 12:08 UTC
    A little more context would be useful, since a cron job sounds like the more appropriate solution here. Is there a reason you don't want to use one? Code like this will put unnecessary load on your web server, since many redundant checks will be done. A cron job would run the cleanup code exactly as often as it is needed. Just be sure you're using the right tool.
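
    For comparison, a cron-based cleanup could be as small as the following sketch (the schedule and path are made-up placeholders); it deletes plain files older than one day from the directory once a night:

        # hypothetical crontab entry, runs daily at 03:00; path is a placeholder
        0 3 * * * perl -e 'unlink grep { -f && -M > 1 } glob "/tmp/wwwtrash/*"'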

    In any case, I'll assume for now that you have a good reason. Here's some code that would do what you want:

    #!/usr/bin/perl -w
    use strict;
    use CGI::Carp;

    my $dir = "/tmp/wwwtrash";

    opendir DIR, $dir or die "Couldn't open directory $dir: $!";
    # keep only plain files older than one day, with the directory prepended
    # so unlink gets a usable path
    my @files = map  { "$dir/$_" }
                grep { -f "$dir/$_" && -M "$dir/$_" > 1 } readdir(DIR);
    closedir DIR;

    if (@files) {
        unlink @files or die "Couldn't unlink files in $dir: $!";
    }

    # rest of your script

    Update:

    As lindex pointed out, dying is bad CGI manners, since it tends to put nothing useful in the logs or in the user's browser. Add use CGI::Carp; after the strict pragma (done above) and everything should be taken care of.
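
    For reference, CGI::Carp can also send fatal errors to the browser instead of leaving you with a bare "Internal Server Error" page; a one-line variation on the above:

        use CGI::Carp qw(fatalsToBrowser);   # die() messages show up in the browser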

      Since it's a CGI, you might want to use an error function instead of die,
      to prevent an "internal server error", and definitely use something other than
      die on the unlink; warn would be fine, or just use the same error function,
      i.e. return the error rather than dying with a dump to STDERR.
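
      A minimal sketch of such an error function (the name and HTML are made up for illustration):

          # hypothetical helper: report the problem to the browser, then stop cleanly
          sub error_page {
              my ($msg) = @_;
              print "Content-type: text/html\n\n";
              print "<html><body><h1>Error</h1><p>$msg</p></body></html>\n";
              exit;
          }

          # later, instead of die:
          unlink @files or error_page("Couldn't unlink files in $dir: $!") if @files;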

      Just my input, ignorable :)



      lindex
      /****************************/
      jason@gost.net, wh@ckz.org
      http://jason.gost.net
      /*****************************/
