http://www.perlmonks.org?node_id=706994


in reply to Re: Multithreaded Script CPU Usage
in thread Multithreaded Script CPU Usage

I initially started off exactly the way you're saying, but File::Find and stat() were just too slow. FileList is a lot faster (no clue why). I need the entire scan to finish in one weekend, and there can be up to 80 million files to scan. That's also why I went for multithreading and so on. I'm trying to squeeze every bit of juice I can out of the resources I have to get this script to run in its allotted window.
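
For reference, the File::Find + stat() version I started with looked roughly like this (a reconstruction, not my actual code; the scan root and output format are simplified placeholders):

    use strict;
    use warnings;
    use File::Find;

    # Rough reconstruction of the File::Find + stat() approach that was
    # too slow; the root path and output format are placeholders.
    find(
        {
            no_chdir => 1,                    # full paths, no chdir per directory
            wanted   => sub {
                return if -d;                 # skip directory entries
                my ($size, $mtime) = (stat _)[7, 9];   # reuse the stat from -d
                print "$_\t$size\t$mtime\n";
            },
        },
        '//server/share',                     # placeholder scan root
    );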

As for your one scanner per disk suggestion, I don't think starting 300 scanners is the answer here. :)

I know it's ugly, and part of me weeps somewhere deep inside while I write this stuff, but requirements are making me do it. :(

Re^3: Multithreaded Script CPU Usage
by NiJo (Friar) on Sep 01, 2008 at 21:05 UTC
    It took me a few days off and some rethinking to guess at your real bottleneck. Did you forget to tell us about the low-bandwidth remote Windows network shares you are scanning? 300 local disks is too much to believe. In that case, collect the data locally into files and transfer them to the server; the parallel scanning should take about a minute for these file systems just above desktop size.

    But after solving all the scanning performance issues, you are finally bound by the insert limit of your database. 80 million records is a huge number; 1000 inserts per second would be my estimate for MySQL on a desktop, which works out to about 22 hours in your case. But do your own benchmark for multiple, faster CPUs and disks.
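
    Something like this crude benchmark gives you your own number (the DSN, credentials and scratch table below are made up; point it at your own hardware):

        use strict;
        use warnings;
        use DBI;
        use Time::HiRes qw(time);

        # Crude single-row insert benchmark against a scratch table.
        my $dbh = DBI->connect('DBI:mysql:database=scratch;host=dbhost',
                               'bench', 'secret', { RaiseError => 1 });
        my $sth = $dbh->prepare('INSERT INTO bench_files (path, size) VALUES (?, ?)');

        my $n     = 10_000;
        my $start = time;
        $sth->execute("/fake/path/$_", $_) for 1 .. $n;
        printf "%.0f inserts/second\n", $n / (time - $start);
        $dbh->disconnect;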

    Back to the drawing board: you need to eliminate the overhead of the database. We don't know anything about the use cases for your database output. If it is just finding a file once per week, I'd cat the local output files together and grep the resulting 800 MB text file at disk speed in less than 10 seconds. In more advanced cases, Perl regexps should have lower query overhead than SQL and no insert-limit bottleneck.
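
    As a rough sketch of the grep-style lookup, assuming one tab-separated record (path, size, mtime) per line -- the file name and query pattern are made up:

        use strict;
        use warnings;

        # Answer "where is file X?" by scanning the concatenated listing.
        open my $fh, '<', 'all_shares.txt' or die "all_shares.txt: $!";
        while (<$fh>) {
            my ($path) = split /\t/;
            print if $path =~ /quarterly_report/i;   # example query pattern
        }
        close $fh;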

      Yes, I did not say anything about the network shares because I am confident they are not my bottleneck (they aren't low bandwidth). I've benchmarked scanning a 2000-file folder on a local HD versus the same on one of the remote shares, and the remote share took about 0.5 s longer on average.

      The database, unfortunately, is not the problem either. Getting the actual data into the DB can be done outside the weekend window, so I can just start LOADing the temp files after the scan completes. Though I'd like to do it concurrently if at all possible.
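
      For what it's worth, the post-scan load would look roughly like this (a sketch only; the DSN, table and column names are placeholders):

          use strict;
          use warnings;
          use DBI;

          # Bulk-load each temp file with LOAD DATA instead of row-by-row INSERTs.
          my $dbh = DBI->connect(
              'DBI:mysql:database=scans;host=dbhost;mysql_local_infile=1',
              'scanuser', 'secret', { RaiseError => 1 },
          );
          for my $file (glob 'scan_part_*.txt') {
              $dbh->do(qq{
                  LOAD DATA LOCAL INFILE '$file'
                  INTO TABLE files
                  FIELDS TERMINATED BY '\\t'
                  (path, size, mtime)
              });
          }
          $dbh->disconnect;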

      I've recently tried running the DB and regexp parts of the script on another machine, and this did not solve the CPU issue. I've also tried simply disabling those parts of the script, with the same effect.

      Currently I'm back at the original issue I created this post about, and I'm looking for other ways to implement the multithreaded, queue-based behavior.
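
      For reference, the queue handling is currently shaped roughly like this (the worker count, share list and scan_share() body are placeholders, not my actual code):

          use strict;
          use warnings;
          use threads;
          use Thread::Queue;

          my @share_list = ('//server1/share', '//server2/share');   # made-up shares
          my $WORKERS    = 4;                                        # made-up worker count
          my $q          = Thread::Queue->new;

          sub scan_share {
              my ($share) = @_;
              # ... write a listing for this share to a temp file ...
          }

          # Each worker pulls shares off the queue until it sees a sentinel.
          my @workers = map {
              threads->create(sub {
                  while (defined(my $share = $q->dequeue)) {
                      scan_share($share);
                  }
              });
          } 1 .. $WORKERS;

          $q->enqueue(@share_list);               # work items
          $q->enqueue(undef) for 1 .. $WORKERS;   # one sentinel per worker
          $_->join for @workers;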

      Thank you though, for helping me think this through.