>You don't make it clear whether all these files are in a single directory
Yes, they are.
So basically this reads one file into memory at a time? Do you know how to deal with zipping and tarring each file up in that case?
Re^2: Finding and sorting files in massive file directory
>So basically this reads one file into memory at a time?
No, it reads one filename at a time into memory.
>Do you know how to deal with zipping and tarring each file up in that case?
Well, I've already shown you the easy way to compress individual files with the external gzip command. If you want to combine multiple files into a single tar file (possibly compressed), you're going to have to be more specific: approximately how many files do you want to put in a single tar file? All 2 million of them? Or just a select few? And do you want to delete the files afterwards?
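For reference, a minimal sketch of that per-file approach, reading one filename at a time and shelling out to gzip (the directory path is an assumption):

use strict;
use warnings;

my $dir = '/some/huge/dir';    # assumed location of the files

# readdir only ever holds one filename in memory at a time
opendir my $dh, $dir or die "opendir $dir: $!";
while ( defined( my $name = readdir $dh ) ) {
    next unless -f "$dir/$name";    # skips . and .. and any subdirectories
    next if $name =~ /\.gz\z/;      # already compressed
    system( 'gzip', "$dir/$name" ) == 0
        or warn "gzip failed for $name: exit status ", $? >> 8, "\n";
}
closedir $dh;

Keep in mind gzip replaces each file with a compressed name.gz copy, so the directory still holds the same 2 million entries afterwards.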
You are likely to need to use the Archive::Tar module.
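A minimal Archive::Tar sketch, assuming you want to roll a modest batch of files (not all 2 million) into one compressed tarball; the file list and output name are placeholders:

use strict;
use warnings;
use Archive::Tar;

my @files = glob 'b/*';    # whichever subset of files you want archived
my $tar   = Archive::Tar->new;

$tar->add_files(@files);                    # each file is read into memory
$tar->write( 'b.tar.gz', COMPRESS_GZIP );   # write a gzip-compressed tarball

Note that Archive::Tar builds the archive in memory, so it only suits batches small enough to fit in RAM; for anything bigger, the external tar command shown below is the better tool.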
If there is a process adding new files to the directory while your "tar up" script is running, then you face two risks: deleting files which never made it into the tar file, and putting empty or half-written files into the tarball. If possible, stop the process from adding any new files while the script is running; if you can't, then the following should be safe.
Use the script I gave you above to, for example, move all files starting with 'b' into a b/ subdirectory (a rough sketch of such a move script is appended at the end of this reply). Then wait a few minutes, or however long it could reasonably take for the process to finish writing its current file, and then, from the command line, simply:
$ tar -czf .../some-path/b.tar.gz b/
$ tar -tzf .../some-path/b.tar.gz > /tmp/foo
View /tmp/foo in a text editor to see if it looks reasonable, then
$ rm -rf b/
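For completeness, here is a rough sketch of the kind of move script referred to above (the directory path and prefix are assumptions; prefer the actual script from the earlier reply):

use strict;
use warnings;
use File::Copy qw(move);

my $dir    = '/some/huge/dir';    # assumed location of the 2 million files
my $prefix = 'b';                 # move every file whose name starts with this

mkdir "$dir/$prefix" unless -d "$dir/$prefix";

opendir my $dh, $dir or die "opendir $dir: $!";
while ( defined( my $name = readdir $dh ) ) {
    next unless -f "$dir/$name";          # skip . , .. and the b/ subdirectory itself
    next unless $name =~ /^\Q$prefix/;    # prefix match only
    move( "$dir/$name", "$dir/$prefix/$name" )
        or warn "could not move $name: $!";
}
closedir $dh;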