PerlMonks | Perl: the Markov chain saw
Re: Speeding up large file processing
by gryphon (Abbot) on Jul 13, 2005 at 02:31 UTC (#474443)
Greetings ralphch,

Wow. Well, not knowing the current state of your code, I'll just give general pointers.

Take a look at FastCGI for starters. Ultimately, a mod_perl solution would be better, but it may be more difficult to migrate your code to it.

Also, if I were you, I'd look into moving the data from those large text files into a database. MySQL is my database of choice, but there are alternatives that are free, good, and fast.

My general rule of thumb is not to spend time (or money) on different or additional hardware if there's a fairly straightforward software solution to improve efficiency.

gryphon
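To make the database suggestion concrete, here is a minimal sketch of a flat-file-to-database load using DBI. It is not ralphch's actual code or data: the table name, column names, and comma-delimited format are all assumptions, and it targets an in-memory SQLite database (via the non-core DBD::SQLite module) purely so it runs self-contained; swapping the DSN for a `dbi:mysql:...` one leaves the DBI calls unchanged.

```perl
#!/usr/bin/perl
use strict;
use warnings;
use DBI;    # requires DBD::SQLite (CPAN) for this DSN

# Hypothetical schema and format; adjust to match your real text files.
my $dbh = DBI->connect('dbi:SQLite:dbname=:memory:', '', '',
    { RaiseError => 1, AutoCommit => 0 });

$dbh->do('CREATE TABLE records (name TEXT, value TEXT)');

my $sth = $dbh->prepare('INSERT INTO records (name, value) VALUES (?, ?)');

# Stand-in for reading a large text file line by line.
my $data = <<'END';
alpha,1
beta,2
END

for my $line (split /\n/, $data) {
    my ($name, $value) = split /,/, $line, 2;
    $sth->execute($name, $value);
}

# One commit for the whole load is far faster than autocommitting per row.
$dbh->commit;

my ($count) = $dbh->selectrow_array('SELECT COUNT(*) FROM records');
print "loaded $count rows\n";
```

Once the data is in a table, the per-request work becomes an indexed `SELECT` instead of a full scan of a large text file, which is where most of the speedup comes from.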
In Section: Seekers of Perl Wisdom