PerlMonks  

Re^2: randomising file order returned by File::Find

by jeffa (Chancellor)
on Mar 01, 2011 at 19:20 UTC ( #890800 )


in reply to Re: randomising file order returned by File::Find
in thread randomising file order returned by File::Find

"... [script] builds a big list in memory and then partitions the matching files into 100+ lists (1 per cluster instance) and writes them to separate files."

This is pretty much what Hadoop does for you.
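For reference, that build-then-partition step might look something like this in plain Perl. This is only a sketch of the idea in the quote, not the OP's actual script; the root path, worker count, and output filenames are all made up:

```perl
use strict;
use warnings;
use File::Find;

# Round-robin partitioner: deal a big list of paths into one
# bucket per cluster instance.
sub partition_round_robin {
    my ($n, @paths) = @_;
    my @buckets = map { [] } 1 .. $n;
    push @{ $buckets[ $_ % $n ] }, $paths[$_] for 0 .. $#paths;
    return @buckets;
}

# Gather every plain file under $root into one in-memory list ...
my $root      = shift @ARGV // '.';
my $n_workers = 4;                    # the OP uses 100+

my @files;
find(sub { push @files, $File::Find::name if -f }, $root);

# ... then write each partition to its own list file.
my @parts = partition_round_robin($n_workers, @files);
for my $i (0 .. $#parts) {
    open my $fh, '>', sprintf('filelist-%03d.txt', $i) or die $!;
    print {$fh} "$_\n" for @{ $parts[$i] };
    close $fh;
}
```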

jeffa

L-LL-L--L-LL-L--L-LL-L--
-R--R-RR-R--R-RR-R--R-RR
B--B--B--B--B--B--B--B--
H---H---H---H---H---H---
(the triplet paradiddle with high-hat)


Re^3: randomising file order returned by File::Find
by BrowserUk (Pope) on Mar 01, 2011 at 22:23 UTC

    The downside of that mechanism is control. If, as the OP says later, the need to suspend or terminate the processing early arises, then you're stuck with starting the whole process over from scratch. Same thing if the number of workers varies up or down.

    With a server/clients approach, you can pause and restart the clients, or knock out half of them--or double them--and the processing continues without duplication, automatically redistributing the work to accommodate the changes.
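A minimal in-process sketch of that control difference, using ithreads with Thread::Queue standing in for the server (the filenames are invented): workers pull from one shared queue, so pausing is just ceasing to enqueue, and adding or removing workers requires no repartitioning--no item is ever handed out twice.

```perl
use strict;
use warnings;
use threads;
use threads::shared;
use Thread::Queue;

my $Q    = Thread::Queue->new;
my $done :shared = 0;

# Each worker pulls the next unclaimed item; an undef is its
# signal to shut down.
sub worker {
    while (defined(my $file = $Q->dequeue)) {
        # real processing of $file would go here
        { lock $done; $done++; }
    }
}

my @workers = map { threads->create(\&worker) } 1 .. 4;

$Q->enqueue("file$_.dat") for 1 .. 20;
$Q->enqueue(undef) for @workers;   # one stop token per worker
$_->join for @workers;
```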


    Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
    "Science is about questioning the status quo. Questioning authority".
    In the absence of evidence, opinion is indistinguishable from prejudice.

      "... then you're stuck with starting the whole process over from scratch."

      True ... but Hadoop scales linearly, meaning what used to take multiple hours or days to run now only takes a few hours, maybe even a few minutes. Such termination becomes trivial. I do not know how familiar you are with Hadoop/cloud computing.

      jeffa

        True ... but Hadoop scales linearly, meaning what used to take multiple hours or days to run now only takes a few hours, maybe even a few minutes.

        So does the server/clients scheme. The difference is in the level of control.

        Such termination becomes trivial.

        For some types of processing. For other types, the cost of throwing away the results of a job when it is 99% complete and starting over can be very high.

        I do not know how familiar you are with Hadoop/cloud computing.

        Not so much. But it isn't so different from the stuff I was doing 15 years ago on a server farm.


Re^3: randomising file order returned by File::Find
by DrHyde (Prior) on Mar 02, 2011 at 10:31 UTC

    Trouble with this is that it doesn't make the best use of your hardware if you have machines that run at different speeds or if some data files take longer to process than others.

    When I was trying to solve a similar problem (in my case, rendering individual frames of video) using whatever spare cycles were available across a whole bunch of machines--so different amounts of CPU were available on different boxes at different times--my solution was for the individual renderers to request work units from a master. Rather than just mounting the master's filesystem and hoping for the best, they made a request to my own application: a simple perl script that they accessed over telnet. Because that script worked only on its local filesystem, locking was reliable, and it simply told each client the filename it should work on next. The clients then grabbed that file using NFS.
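A toy version of such a master--not DrHyde's actual script, and with invented frame names--might look like this. The master alone owns the work list, so there is nothing to lock across NFS; each client connection is answered with the next unclaimed unit, or DONE. Here a forked child plays the client so the sketch is self-contained:

```perl
use strict;
use warnings;
use IO::Socket::INET;

my @work = map { "frame_$_.pov" } 1 .. 5;

my $srv = IO::Socket::INET->new(
    LocalAddr => '127.0.0.1',
    LocalPort => 0,              # let the OS pick a free port
    Listen    => 5,
) or die "listen: $!";
my $port = $srv->sockport;

my $served = 0;
defined(my $pid = fork) or die "fork: $!";
if ($pid) {
    # master: hand out one unit per connection, then say DONE
    while (my $c = $srv->accept) {
        my $unit = shift @work;
        print {$c} defined $unit ? "$unit\n" : "DONE\n";
        close $c;
        $served++ if defined $unit;
        last unless defined $unit;
    }
    waitpid $pid, 0;
}
else {
    # client: ask for work until told DONE; it would then fetch
    # the named file over NFS and render it
    while (1) {
        my $c = IO::Socket::INET->new(
            PeerAddr => '127.0.0.1', PeerPort => $port,
        ) or die "connect: $!";
        chomp(my $reply = <$c>);
        close $c;
        exit 0 if $reply eq 'DONE';
    }
}
```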

    That's what I think you should do rather than randomising the list - randomising will reduce the problem, but won't eliminate it.

    However, if you do want to randomise, then the wanted function should build up a list instead of doing any processing on the files. You then shuffle that list, and only after that do you process the files.
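That collect-shuffle-process pattern is a few lines with File::Find and List::Util; the wanted callback only records names, and nothing is processed until find() has returned and the list has been shuffled:

```perl
use strict;
use warnings;
use File::Find;
use List::Util 'shuffle';

# Phase 1: collect matching filenames only.
my @queue;
find(sub { push @queue, $File::Find::name if -f }, @ARGV ? @ARGV : '.');

# Phase 2: shuffle, then process in random order.
my @shuffled = shuffle @queue;
for my $file (@shuffled) {
    # process($file) happens here
}
```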
