PerlMonks
Re: Up for Critique

by derby (Abbot)
on Mar 22, 2002 at 21:16 UTC


in reply to Up for Critique

For starters, drop the while( $record = <INFILE> ) and try the more idiomatic while( <INFILE> ); then use the implicit $_ instead of $record.
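As a minimal sketch of that idiom (the filehandle name and the input file name are placeholders, not the original poster's actual code):

```perl
# Minimal sketch of the while(<FH>) idiom; INFILE and the file name
# are placeholders, not the original poster's actual code.
open( INFILE, '<', 'tigr_records.txt' ) or die "open failed: $!";
while ( <INFILE> ) {            # each line lands in the implicit $_
    chomp;                      # chomp, split and m// all default to $_
    next if /^\s*$/;            # skip blank lines
    print "line: $_\n";
}
close INFILE;
```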

Use references to pass data into and receive results back from your subroutines. Don't bother initializing them in the calling block:

    @newrecord = ();
    # Send the record to have any blank lines removed.
    @newrecord = CleanData( $record );

-vs-

    $newrecord = CleanData( $record );

    sub CleanData {
        my( $record ) = $_[0];
        my $cleanrecord;

        ### Separate the record by newlines.
        my @lines = split /\n/, $record;

        ### Remove empty lines if they exist.
        foreach my $line ( @lines ) {
            next if $line =~ /^\s*$/;
            push @$cleanrecord, $line;
        }
        return $cleanrecord;
    }

Also, take a look at your regexes. Try to reduce those to a more reasonable number (and slap the /o modifier on the end to optimize them).

    # Get the chromosome number.
    if ($line =~ m/CHROMOSOME:\s*(\d*)\s*$/) {
        $chrome_num = $1;
    }
    # Get TIGR model's reference id.
    if ($line =~ m/MODEL_NAME:\s*(.*)$/) {
        $tigr_id = $1;
    }
    # Get the locus on the bac where the gene was cloned.
    if ($line =~ m/BAC_LOCUS:\s*(.*)\s*$/) {
        $bac_locus = $1;
    }

.... might go better as

    if ( $line =~ m/(.*?):\s+(\S+)\s+/o ) {
        $type = $1;
        $info = $2;
    }
    if ( $type eq "CHROMOSOME" ) {
        # do whatever
    }
    elsif ( $type eq "NUM_FOO" ) {
        # do num foo
    }

You get the idea. Running multiple regexes against a single line can take forever. Try to reduce the regexes into simpler one(s) that will run once against the line and also ensure you use the 'o' flag so you're not spinning your wheels always compiling the regexes.

But I think all of those types of changes are going to have a minimal impact. You really need to concentrate on your db's performance and what's going on there. I'm not sure about mysql, but some databases will commit "transactions" every so many inserts (do mysql/innodb tables do that?). You can configure your driver to make that number larger and reduce the number of commits.

There may be other stuff, but that's just one person's opinion. It's kinda hard to tell without being able to run the stuff. Have you tried profiling the script to see where you're spending all your time? At a minimum, you could liberally sprinkle Benchmark timings before and after sections of the code and then use timediff to see how long each snippet took.
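A rough sketch of that Benchmark approach (the section being timed is a placeholder):

```perl
# Rough sketch of sprinkling Benchmark timings around a section of
# code; the loop being timed is a stand-in for the real work.
use Benchmark;

my $t0 = Benchmark->new;
my $sum = 0;
$sum += $_ for 1 .. 100_000;    # placeholder for the real section
my $t1 = Benchmark->new;

print "section took: ", timestr( timediff( $t1, $t0 ) ), "\n";
```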

Good luck. ++ for an interesting post.

-derby

update: ferrency is right about the optimizations. sorry for the mislead.


Re: Re: Up for Critique
by ferrency (Deacon) on Mar 22, 2002 at 21:37 UTC
    derby wrote: and also ensure you use the 'o' flag so you're not spinning your wheels always compiling the regexes.

    AFAIK, the 'o' flag is only useful if your regex contains variables. If your regex is a constant at compile time, it will only be compiled once in any case.
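    To illustrate: /o only changes behavior when the pattern interpolates a variable; it tells perl to compile the pattern once and never re-examine the variable (a small sketch, with made-up data):

```perl
# Sketch: /o matters only when the pattern interpolates a variable.
# With /o the pattern is compiled once; later changes to $key would
# be ignored.  The data below is made up for illustration.
my $key = 'BAC_LOCUS';
while ( my $line = <DATA> ) {
    if ( $line =~ m/$key:\s*(\S+)/o ) {
        print "locus: $1\n";
    }
}
__DATA__
BAC_LOCUS: F7G19
MODEL_NAME: T12345
```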

    biograd coded:

    # Get reference number.
    if ($line =~ m/BAC:\s*(.*)\s*$/) {
        $ac_id = $1;
    }

    This regex may not do exactly what you want it to. Specifically, it's not going to remove trailing spaces from the line. The (.*) is greedy, and the following \s* allows a match of 0 whitespace characters, so the whitespace will always end up being matched in the ()'s.

    There are many ways to fix this, but the easiest might be to simply match (\S*) instead of (.*), since \S can't match the trailing whitespace (and some people believe the . should be avoided at all costs anyway). However, I like derby's solution: replace all of the regexes with one more general regular expression.
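    A quick demonstration of the difference (the record id is made up):

```perl
# Demonstration of the greedy-match problem described above; the
# record id is made up for illustration.
my $line = "BAC: AC002341   \n";

my ($greedy) = $line =~ m/BAC:\s*(.*)\s*$/;   # .* gobbles the spaces
my ($tight)  = $line =~ m/BAC:\s*(\S*)\s*$/;  # \S* cannot match them

print "greedy = [$greedy]\n";   # greedy = [AC002341   ]
print "tight  = [$tight]\n";    # tight  = [AC002341]
```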

    Update: Looking more closely at your code, your biggest bottleneck is almost certainly in the database and not in the perl. If you can avoid interspersing selects and inserts by keeping a copy of the data you've already inserted, you'd be able to drop the table indices until the table is built, and gain a lot of speed. Alternately, 26k records aren't really that many... you might consider preprocessing the records in memory into a list or hash, and then inserting them all at once instead of processing the data in the order it appears in the input file.
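    The preprocess-then-insert idea could look something like this (the DBI calls are standard, but the table, columns, and connection details are placeholders, not the original poster's schema):

```perl
# Sketch of collecting rows in memory first, then inserting them in
# one pass with a prepared statement.  The table name, columns and
# connection details are placeholders.
use DBI;

my %records;    # tigr_id => [ chromosome, locus ], filled while parsing
# ... parse the whole input file here, doing no database work yet ...

my $dbh = DBI->connect( 'dbi:mysql:genes', 'user', 'password',
                        { RaiseError => 1 } );
my $sth = $dbh->prepare(
    'INSERT INTO models ( tigr_id, chromosome, locus ) VALUES ( ?, ?, ? )' );

for my $id ( keys %records ) {
    $sth->execute( $id, @{ $records{$id} } );
}
$dbh->disconnect;
```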

    Alan

      ferrency- I agree the biggest problem is probably the db calls. My techie fiancee also read your update and said he'd thought of that the whole time...

      If I get a chance to do a major re-work I will almost surely sort everything into memory first, then do a flurry of inserts/updates. I graduate this May, but I'm interested in getting this thing as fast as possible. I leave it in the hands of my advisor and his future grads to maintain and use (maybe as a cron job, or whenever TIGR sends out an update) and I'm having pity on them. ;)

      -Jenn

Re: Re: Up for Critique
by biograd (Friar) on Mar 23, 2002 at 07:03 UTC
    Thanks for the ++, derby. I like the regex squashing you did.

    The MySQL dbms doesn't do transactions in the version we have. I think the newest version is incorporating them. Tables are of type "MYISAM".

    For any interested persons... version info:

    mysql Ver 10.12 Distrib 3.23.25-beta, for pc-linux-gnu (i686)

    -Jenn
