http://www.perlmonks.org?node_id=1014335


in reply to Re: Finding and sorting files in massive file directory
in thread Finding and sorting files in massive file directory

> You don't make it clear whether all these files are in a single directory

Yes, they are. So basically this reads one file into memory at a time? Do you know how to deal with zipping and tarring each file up in that case?

Re^3: Finding and sorting files in massive file directory
by dave_the_m (Monsignor) on Jan 20, 2013 at 22:41 UTC
    > So basically this reads one file into memory at a time?

    No, it reads one filename at a time into memory.

    > Do you know how to deal with zipping and tarring each file up in that case?

    Well, I've already shown you the easy way to compress individual files with the external gzip command. If you want to combine multiple files into a single tar file (possibly compressed), you're going to have to be more specific: approximately how many files do you want to put in a single tar file? All 2 million of them? Or just a select few? And do you want to delete the files afterwards?
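
    For reference, a minimal sketch of that external-gzip approach driven from Perl (the file list here is a placeholder, not the script referred to above):

        use strict;
        use warnings;

        my @files = ('file1.log', 'file2.log');   # hypothetical filenames

        # Compress each file in place; gzip replaces foo with foo.gz.
        # Passing a list to system() avoids invoking a shell.
        for my $file (@files) {
            system('gzip', $file) == 0
                or warn "gzip failed for $file: exit status $?";
        }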

    You are likely to need to use the Archive::Tar module.
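
    A minimal sketch of what that might look like (the filenames are placeholders; note that Archive::Tar builds the archive in memory, so for millions of files you would likely write several smaller tarballs per type):

        use strict;
        use warnings;
        use Archive::Tar;    # exports the COMPRESS_GZIP constant

        # Hypothetical list of files of one type, e.g. gathered via readdir.
        my @files = ('b/file1.dat', 'b/file2.dat');

        my $tar = Archive::Tar->new;
        $tar->add_files(@files);                 # add files by name
        $tar->write('b.tar.gz', COMPRESS_GZIP);  # write a gzipped tarball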

    Dave

      Hi,

      I'd like to combine each distinct file type into a single compressed tar file. There are about five types: one accounts for roughly 1 million files, and the other four for roughly 300k-500k each.

      Yes, the files need to be deleted afterwards to free disk space for incoming files.

      Colin

        If there is a process adding new files to the directory while your "tar up" script is running, then you face twin risks: deleting files that haven't been put in the tar file, and putting empty or half-written files into the tarball. If possible, stop the process from adding new files while the script runs; if you can't, the following should be safe.

        Use the script I gave you above to, for example, move all files starting with 'b' into a b/ subdirectory. Then wait a few minutes (or however long the process could reasonably take to finish writing its current file), and from the command line, simply:

        $ tar -czf .../some-path/b.tar.gz b/
        $ tar -tzf .../some-path/b.tar.gz > /tmp/foo

        View /tmp/foo in a text editor to see if it looks reasonable, then:

        $ rm -rf b/
        If the rm fails due to too many files, then write another Perl script similar to the one above, but use unlink to remove each file one by one.
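
        A minimal sketch of that fallback, assuming the b/ files have already been verified in the tarball (the directory name is a placeholder):

            use strict;
            use warnings;

            # Hypothetical subdirectory whose contents are already archived.
            my $dir = 'b';

            # Read one filename at a time and delete it; this never builds
            # a huge argument list the way a shell glob would.
            opendir my $dh, $dir or die "opendir $dir: $!";
            while (defined(my $name = readdir $dh)) {
                next if $name eq '.' or $name eq '..';
                unlink "$dir/$name" or warn "unlink $dir/$name: $!";
            }
            closedir $dh;
            rmdir $dir or warn "rmdir $dir: $!";   # remove the now-empty dir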

        Dave.