Re^2: system "gzip $full_name" is taking more time

by dushyant (Acolyte)
on Dec 07, 2013 at 15:41 UTC ( [id://1066129] )


in reply to Re: system "gzip $full_name" is taking more time
in thread system "gzip $full_name" is taking more time

I tried using IO::Compress::Gzip, but it does not replace the original file; it creates a separate compressed file. So I have to make another system call to remove the original, and that will not improve the performance.
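
Roughly what I have now (a simplified sketch; $full_name holds the path of the file to compress):

    use IO::Compress::Gzip qw(gzip $GzipError);

    # compress $full_name into a new file next to the original
    gzip $full_name => "$full_name.gz"
        or die "gzip failed: $GzipError\n";

    # the original is still there, so I end up shelling out again to remove it
    system "rm", $full_name;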

I can't create one compressed file for everything. I have to follow certain rules and standards in my production environment.

I am thinking of running 5-6 copies of the same script, dividing the 120 directories between them.
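
Something along these lines, perhaps (an untested sketch assuming the Parallel::ForkManager module; @dirs would hold the 120 directory paths and compress_dir() stands in for whatever the script already does to one directory):

    use Parallel::ForkManager;

    my $pm = Parallel::ForkManager->new(6);   # at most 6 workers at a time

    for my $dir (@dirs) {
        $pm->start and next;    # parent: move on to the next directory
        compress_dir($dir);     # child: gzip the files in this directory
        $pm->finish;            # child is done
    }
    $pm->wait_all_children;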


Re^3: system "gzip $full_name" is taking more time
by Happy-the-monk (Canon) on Dec 07, 2013 at 17:57 UTC

    So I have to make another system call to remove the original

    You mention that you are aware of perl's unlink.

    But better still, use the move function from File::Copy, which ships with every perl.
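
    A rough sketch of how that could look (untested; the compressed data is written under a temporary name first, then moved into place and the original unlinked, with no system calls at all):

        use IO::Compress::Gzip qw(gzip $GzipError);
        use File::Copy qw(move);

        # write the compressed data to a temporary file first
        gzip $full_name => "$full_name.gz.tmp"
            or die "gzip failed: $GzipError\n";

        # rename it into place, then drop the original
        move("$full_name.gz.tmp", "$full_name.gz")
            or die "move failed: $!\n";
        unlink $full_name
            or warn "could not remove $full_name: $!\n";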

    Cheers, Sören

    Creator of mobile bugs - let loose once, run everywhere.
    (hooked on the Perl Programming language)

Re^3: system "gzip $full_name" is taking more time
by fishmonger (Chaplain) on Dec 07, 2013 at 16:09 UTC

    There are other similar modules to choose from and I'm sure at least one of them would be able to replace the original file. I have not looked over the related modules in any detail, so I can't say which one would be better suited for your needs.

    What do you think would be a reasonable amount of time to compress an average-sized file from one of the directories from the command line? Would you say that a tenth of a second would be reasonable? Multiply that time by 90,000 and that will give you a very rough estimate of the required time per directory. Assuming a tenth of a second average, you'd be looking at more than 2 hours per directory, and that's not including the overhead of executing the system function/call.
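
    As a rough back-of-the-envelope check:

        0.1 s/file × 90,000 files = 9,000 s ≈ 2.5 hours per directory
        2.5 hours × 120 directories ≈ 300 hours for one sequential pass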

    Having an average of 90,000 files per directory seems to be a major factor in the overall problem. Can you rework your policies to work on 30-day intervals rather than 90-day intervals?
