PerlMonks  

Re^2: Noob could use advice on simplification & optimization

by bgreg (Initiate)
on May 03, 2012 at 22:20 UTC


in reply to Re: Noob could use advice on simplification & optimization
in thread Noob could use advice on simplification & optimization

Thanks for the suggestions. I read the files into memory to speed up the searches; would this actually slow things down if I run the script on more memory-restricted machines?

Re^3: Noob could use advice on simplification & optimization
by temporal (Pilgrim) on May 04, 2012 at 14:25 UTC

    When you run a recursive directory search and it hits a large binary file tucked away in some subdirectory, you're going to be loading quite a bit into memory. Where it really gets bad is when you hit a file larger than your system's memory (or, more accurately, larger than the memory allocated to the Perl process).

    The easiest way to avoid this is to read the files line by line. But you could also write a smart read method that buffers your reads into a limited-length array, giving you something of the best of both worlds.

    The other advantage of reading line by line is that you avoid splitting the whole file on newlines into an array.
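
    As a rough, minimal sketch of the difference (the file name and search pattern here are just placeholders, not anything from the original script):

        use strict;
        use warnings;

        my $path    = 'example.log';   # placeholder file name
        my $pattern = qr/ERROR/;       # placeholder search pattern

        # Slurping: the whole file sits in memory at once, and splitting
        # it on newlines builds a second large copy as an array.
        open my $fh, '<', $path or die "Cannot open $path: $!";
        my @lines = split /\n/, do { local $/; <$fh> };
        close $fh;

        # Line by line: only one line is held in memory at a time.
        open $fh, '<', $path or die "Cannot open $path: $!";
        while (my $line = <$fh>) {
            print $line if $line =~ $pattern;
        }
        close $fh;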

    You might want to add some filename filtering so the user can exclude/include certain file types.
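
    One hedged sketch of how that filtering might look with File::Find (the extension list and the print are just examples, not the poster's actual code):

        use strict;
        use warnings;
        use File::Find;

        # Hypothetical include list -- adjust to whatever file types you care about.
        my @include_ext = qw(txt log pl pm);
        my $ext_re      = join '|', map { quotemeta } @include_ext;

        find(sub {
            return unless -f $_;                 # skip directories and other non-files
            return unless /\.(?:$ext_re)$/i;     # keep only the wanted extensions
            print "would search: $File::Find::name\n";
        }, '.');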

    Strange things are afoot at the Circle-K.
