
Re: Noob could use advice on simplification & optimization

by temporal (Pilgrim)
on May 03, 2012 at 21:20 UTC


in reply to Noob could use advice on simplification & optimization

Looks like a pretty good start for a first big script. I did find some bugs in testing.

The first thing that pops out is that your options don't work: you're passing "numeric" into your validation function but checking against "number".

Also, something about how you save and load params is causing searches to fail. I didn't investigate further than to check that the log file looks OK.

The search logic is also a little wonky: it finds results but writes empty logfiles, doesn't find results when it should, and so on. You're reinventing the wheel by hand-writing a lot of error-prone logic. You could instead run a regex over slurped files that File::Find's depth-first search digs up, something like this:

$/ = undef;                        # slurp mode
open my $fh, '<', $_ or die "Can't open $_: $!";
my $file = <$fh>;
$/ = "\n";
close $fh;

# capture $lines_up lines before, the matching line, and $lines_down lines after
my @matches = $file =~ /((?:.*\n?){$lines_up}) (.*$pattern.*\n?) ((?:.*\n?){$lines_down})/gx;

my @parts = qw(lines_up matched_line lines_down);
my $run   = 0;
for (@matches) {
    my $index = $run++ % 3;
    print "match #" . (int($run / 3) + 1) . " in file\n" unless $index;
    print "$parts[$index]:\n$_\n";
}
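
That snippet assumes it's running inside a File::Find callback, where $_ is the current file name and File::Find has already chdir'd into its directory. A bare-bones driver (the starting directory here is just a placeholder; take it from your GUI instead) would look something like:

use strict;
use warnings;
use File::Find;

my $start_dir = shift // '.';        # placeholder starting directory

find(sub {
    return unless -f $_ && -T _;     # plain text files only
    # ... slurp $_ and run the context regex from the snippet above ...
}, $start_dir);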

Of course, it would probably be better practice not to load the entire file into memory...

An educational next step would be to run the search in another thread so that the GUI doesn't lock up while the search runs.
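
Just to sketch the idea (untested, and run_search() here is only a stand-in for your actual search code), a worker thread plus a Thread::Queue would look roughly like this:

use strict;
use warnings;
use threads;
use Thread::Queue;

my $results = Thread::Queue->new;

# worker thread: runs the slow search and pushes hits onto the shared queue
my $worker = threads->create(sub {
    $results->enqueue($_) for run_search();
});

# call this from a repeating GUI timer (e.g. Tk's ->repeat) so the event
# loop never blocks; it drains whatever the worker has produced so far
sub poll_results {
    while ( defined( my $hit = $results->dequeue_nb ) ) {
        print "$hit\n";    # or append $hit to your results widget / log file
    }
    $worker->join if $worker->is_joinable;
}

sub run_search {
    # stand-in for the real search; replace with your File::Find logic
    return map { "dummy hit $_" } 1 .. 3;
}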

All said, you've laid a nice foundation. Looks great!

Strange things are afoot at the Circle-K.


Re^2: Noob could use advice on simplification & optimization
by bgreg (Initiate) on May 03, 2012 at 22:20 UTC
    Thanks for the suggestions. I read the files into memory to speed up the searches; would this actually slow things down if I run the script on more memory-restricted machines?

      When you run a recursive directory search and it encounters some large binary file or something tucked away in a hidden subdirectory, you're going to be loading quite a bit into memory. Where it will really get bad is when you hit a file larger than your system's memory (or, more accurately, the memory allocated to the Perl process).

      The easiest way to avoid this is to read the files line by line. But you could also write a smart read method that buffers your reads in a limited-length array, giving you something of the best of both worlds; see the sketch below.
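
      Something like this (untested sketch; search_file() and its arguments are just how I'd shape it, not code from your script) keeps only the context window in memory:

      use strict;
      use warnings;

      sub search_file {
          my ($path, $pattern, $lines_up, $lines_down) = @_;

          open my $fh, '<', $path or die "Can't open $path: $!";

          my @before;    # ring buffer of the last $lines_up lines
          my @matches;   # matches still collecting their trailing context

          while ( my $line = <$fh> ) {
              # hand the current line to any match still short of its lines_down
              for my $m (@matches) {
                  push @{ $m->{after} }, $line if @{ $m->{after} } < $lines_down;
              }

              if ( $line =~ /$pattern/ ) {
                  push @matches, { line => $line, before => [@before], after => [] };
              }

              push @before, $line;
              shift @before while @before > $lines_up;
          }
          close $fh;

          return @matches;    # one hashref per match, with its context
      }

      # usage (filename and pattern are only examples):
      for my $m ( search_file('some.log', 'error', 2, 2) ) {
          print "lines_up:\n",     @{ $m->{before} };
          print "matched_line:\n", $m->{line};
          print "lines_down:\n",   @{ $m->{after} };
      }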

      The other advantage of slurping the file is that you avoid splitting it on newlines into an array.

      You might want to add some filename filtering so the user can exclude/include certain file types.
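
      For instance, inside the wanted callback from the File::Find example above you could build a regex from a user-supplied extension list (the list here is made up) and bail out early on anything else:

      # inside the wanted callback
      my @extensions = qw(txt log pl pm);                        # user's include list
      my $ext_re     = join '|', map { quotemeta } @extensions;

      return unless -f $_ && /\.(?:$ext_re)$/i;                  # skip everything else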

      Strange things are afoot at the Circle-K.
