Re: Multithreaded Script CPU Usage

by NiJo (Friar)
on Aug 26, 2008 at 19:53 UTC


in reply to Multithreaded Script CPU Usage

I see that your approach comes from automating interactive tools. I'm sure that Perl can do this a lot simpler, better, and faster.

A disk (with one disk head) is basically a single-threaded device. Trying to keep 10 threads happy results in a lot of seeks and very slow performance. Sometimes it is OK to have 2 threads: one waiting on a disk seek while the other parses a different file.
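
A minimal sketch of that two-thread split, using the core threads and Thread::Queue modules (the file list and the parse_file routine are placeholders of mine, not part of the original suggestion):

    use strict;
    use warnings;
    use threads;
    use Thread::Queue;

    # Placeholder for the per-file work (stat, parse, ...).
    sub parse_file { my ($path) = @_; }

    my @files_to_scan = @ARGV;    # placeholder source of paths
    my $queue = Thread::Queue->new;

    # Worker: parses queued files while the main thread waits on seeks.
    my $worker = threads->create(sub {
        while (defined(my $path = $queue->dequeue)) {
            parse_file($path);
        }
    });

    $queue->enqueue($_) for @files_to_scan;
    $queue->enqueue(undef);    # sentinel: no more work
    $worker->join;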

I'd redesign the application to be pure Perl and single-threaded. If you need to scan several disks, I'd start one scanner per disk and feed directly into the database using DBI.

The following code just shows the Perl way to do it with prebuilt modules. It is mostly copied from perldoc and has no error handling; the connection settings and table layout are placeholders to adapt. I could not find a good way to get author information from MS Office files. Win32::OLE and some digging into the MS Explorer OLE interface should help.

    use strict;
    use warnings;
    use File::Find;
    use DBI;

    # Connection settings and table layout are placeholders.
    my ($database, $hostname, $port) = ('files', 'localhost', 3306);
    my ($user, $password) = ('scanner', 'secret');

    my $dsn = "DBI:mysql:database=$database;host=$hostname;port=$port";
    my $dbh = DBI->connect($dsn, $user, $password);
    my $sth = $dbh->prepare('INSERT INTO files (path, size, mtime) VALUES (?, ?, ?)');

    # Called by File::Find for every entry below the start directories.
    sub wanted {
        return unless -f $_;
        my @stats = stat _;    # reuse the stat buffer from the -f test
        $sth->execute($File::Find::name, $stats[7], $stats[9]);
    }

    my @directories_to_search = @ARGV;
    find(\&wanted, @directories_to_search);
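
For the author information, an untested guess at the Win32::OLE route via the Explorer shell's detail columns; the column header name and index vary between Windows versions, and the directory and file names here are made up:

    use strict;
    use warnings;
    use Win32::OLE;

    # Untested guess: read the author shell column for one file.
    my ($dir, $file) = ('C:\\docs', 'report.doc');    # placeholders

    my $shell  = Win32::OLE->new('Shell.Application');
    my $folder = $shell->Namespace($dir);
    my $item   = $folder->ParseName($file);

    # The column index differs between Windows versions, so find it
    # by header name first ("Author" on XP, "Authors" on Vista).
    my $col;
    for my $i (0 .. 300) {
        my $header = $folder->GetDetailsOf(undef, $i) or next;
        if ($header eq 'Author' or $header eq 'Authors') {
            $col = $i;
            last;
        }
    }

    my $author = defined $col ? $folder->GetDetailsOf($item, $col) : '';
    print "Author: $author\n";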
P.S.: Basically you have reinvented the Unix 'locate' tool and ignored Microsoft's index files, which would allow a faster search.


Re^2: Multithreaded Script CPU Usage
by Zenshai (Sexton) on Aug 26, 2008 at 22:13 UTC
    I initially started off exactly the way you're saying, but File::Find and stat() were just too slow. FileList is a lot faster (no clue why). I need the entire scan to finish in one weekend, and there can be up to 80 million files to scan. That's also why I went for multithreading etc. I'm trying to squeeze any juice I can out of the resources I have to get this script to run in its allotted window.

    As for your one scanner per disk suggestion, I don't think starting 300 scanners is the answer here. :)

    I know it's ugly, and part of me weeps somewhere deep inside while I write this stuff, but requirements are making me do it. :(
      It took me a few days away and some rethinking to guess at your real bottleneck. Did you forget to tell us about the low-bandwidth remote Windows network shares you are scanning? 300 local disks is too much to believe. If so, collect the data locally into files and transfer them to the server. The parallel scanning should take about a minute for file systems just above desktop size.

      But after solving all the scanning performance issues, you are finally bound by the insert limit of your database. 80 million records is a huge load. 1,000 inserts per second would be my estimate for MySQL on a desktop; that is 22 hours in your case. But do your own benchmark with multiple and faster CPUs and disks.
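
      One hedged sketch of dodging that insert limit, assuming the scanners write tab-separated temp files that can be bulk-loaded in one statement (file, table, and connection names are made up):

          use strict;
          use warnings;
          use DBI;

          # Made-up connection and file names; adapt to your setup.
          my $dbh = DBI->connect(
              'DBI:mysql:database=files;host=localhost;mysql_local_infile=1',
              'scanner', 'secret', { RaiseError => 1 },
          );

          # One bulk load instead of millions of single-row INSERTs.
          $dbh->do(q{
              LOAD DATA LOCAL INFILE '/tmp/scan_output.tsv'
              INTO TABLE files
              FIELDS TERMINATED BY '\t'
              LINES TERMINATED BY '\n'
              (path, size, mtime)
          });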

      Back at the drawing board, you need to eliminate the overhead of the database. We don't know anything about the use cases of your database output. If it is just finding a file once a week, I'd cat the local output files together and grep the resulting 800 MB text file at disk speed in less than 10 seconds. In more advanced cases, Perl regexps should have lower overhead than SQL for queries, and there is no insert-limit bottleneck.
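
      A sketch of that grep-style query in Perl, with a made-up pattern and file name:

          use strict;
          use warnings;

          # Made-up pattern and file name; one pass over the scan dump.
          my $pattern = qr/quarterly_report/i;

          open my $fh, '<', 'all_scans.txt' or die "all_scans.txt: $!";
          while (my $line = <$fh>) {
              print $line if $line =~ $pattern;
          }
          close $fh;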

        Yes, I did not say anything about the network shares because I am confident they are not my bottleneck (they aren't low-bandwidth). I've benchmarked scanning a 2,000-file folder on a local HD against one on a remote share, and the remote share took about 0.5 s longer on average.

        The database, unfortunately, is not the problem either. Getting the actual data into the DB can be done outside the weekend window, so I can just start LOADing the temp files after the scan completes, though I'd like to do it concurrently if at all possible.

        I've recently tried running the DB and regexp part of the script on another machine and this did not solve the CPU issue. I've also tried simply disabling those parts of the script with the same effect.

        Currently I'm back at the original issue I created this post with, and I'm looking for other ways to implement the multithreaded, queue-based behavior.

        Thank you though, for helping me think this through.
