PerlMonks  

Re^4: randomising file order returned by File::Find

by jeffa (Bishop)
on Mar 01, 2011 at 22:27 UTC ( [id://890849] )


in reply to Re^3: randomising file order returned by File::Find
in thread randomising file order returned by File::Find

"... then you're stuck with starting the whole process over from scratch."

True ... but Hadoop scales linearly, meaning what used to take multiple hours or days to run now only takes a few hours, maybe even a few minutes. Starting over after such a termination becomes trivial. I do not know how familiar you are with Hadoop/cloud computing.

jeffa

L-LL-L--L-LL-L--L-LL-L--
-R--R-RR-R--R-RR-R--R-RR
B--B--B--B--B--B--B--B--
H---H---H---H---H---H---
(the triplet paradiddle with high-hat)

Replies are listed 'Best First'.
Re^5: randomising file order returned by File::Find
by BrowserUk (Patriarch) on Mar 01, 2011 at 22:37 UTC
    "True ... but Hadoop scales linearly, meaning what used to take multiple hours or days to run now only takes a few hours, maybe even a few minutes."

    So does the server/clients scheme. The difference is in the level of control.

    "Starting over after such a termination becomes trivial."

    For some types of processing. For other types, the cost of throwing away the results of a job when it is 99% complete and starting over can be very high.

    "I do not know how familiar you are with Hadoop/cloud computing."

    Not so much. But it isn't so different from the stuff I was doing 15 years ago on a server farm.


    Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
    "Science is about questioning the status quo. Questioning authority".
    In the absence of evidence, opinion is indistinguishable from prejudice.

      "So does the server/clients scheme. The difference is in the level of control."

      Right, with Hadoop, all that hard work is done for you. Why roll another wheel?

      "Not so much. But it isn't so different with stuff I was doing 15 years ago on a server farm."

      Except that now the hardware can (realistically) support the volumes of data being processed. You should check it out.

      jeffa

      
        "Right, with Hadoop, all that hard work is done for you."

        Hm. Only once you have turned over your cluster to a single-purpose, Java-based HDFS monoculture. And if all your processing needs can be force-fitted into that monoculture's way of working.

        But, if your hardware resources have to serve a variety of needs and uses...

        Besides, it does not have to be "hard work". There is a very simple pattern to be followed, exemplified by a server instance something like:

        #! perl -slw
        use strict;
        use threads;
        use threads::shared;    ## needed for the :shared attribute on $pause
        use IO::Socket;

        my $pause :shared = 0;

        ## Console thread: type "suspend" or "resume" to control distribution.
        async {
            while( <STDIN> ) {
                chomp;
                if(    /^suspend/i ) { $pause = 1; }
                elsif( /^resume/i  ) { $pause = 0; }
            }
        }->detach;

        my $lsn = IO::Socket::INET->new( Listen => 1, LocalPort => 12345 )
            or die "Could not listen: $!";

        ## Hand out one work item (here, a filename) per client connection.
        while( my $fname = <*.png> ) {
            sleep 1 while $pause;       ## hold back work while suspended
            my $client = $lsn->accept;
            print $client $fname;
            close $client;
        }

        And a client template:

        #! perl -slw
        use strict;
        use IO::Socket;

        my $server = shift;    ## "host:port" of the server (the one above listens on 12345)

        while( 1 ) {
            my $svr = IO::Socket::INET->new( $server )
                or last;                        ## server gone: no more work
            my $fname = <$svr>;
            close $svr;
            last unless defined $fname;
            chomp $fname;

            ## Process $fname.
        }

        The work items distributed by the server can be anything you like, not just filenames. The processing inside the client is the same bit you'd have to write yourself for your Hadoop application. Hardly onerous, but very flexible.
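        Purely as an illustration (assuming ImageMagick's convert is on the PATH, and that the work items are the PNG filenames the server sketch above globs), the "## Process $fname." slot might be filled in with something like:

        ## Hypothetical processing step: thumbnail the image named in $fname
        ## using ImageMagick's convert (assumed installed; any per-file work goes here).
        ( my $thumb = $fname ) =~ s/\.png$/_thumb.png/i;
        system( 'convert', $fname, '-resize', '128x128', $thumb ) == 0
            or warn "convert failed for '$fname': $?";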

        "Except that now the hardware can (realistically) support the volumes of data being processed."

        The scale of things is relative. We didn't have as much data to process back then as now, but disks were smaller and machines slower and more expensive. But the trade-offs of monocultural versus flexible remain the same.

        Transaction engines like CICS--the Hadoops of that time--could process prodigious volumes of data through a narrow, very specific band of processing requirements, but couldn't handle the variety of general purpose programming requirements and applications that arose.

        The same problem arises today with map/reduce. If your requirements fit its way of working--datasets that are infinitely partitionable into fixed-size chunks, each of which takes the same amount of time to process and requires no feedback loops--you're laughing.

        But if your images vary widely in size and so do not fit into the fixed-size chunks of HDFS, and processing times vary greatly with size--the cost of many image-processing algorithms grows exponentially with the size of the image--then the map/reduce scheduling algorithms get tied in knots.
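        (For anyone who hasn't met the pattern, here is a toy, single-process Perl sketch of the contract map/reduce imposes--a map step that turns each record into key/value pairs, a grouping step, and a reduce step that folds each group. It is only meant to show the shape; there is no Hadoop or HDFS involved.)

        #! perl -lw
        use strict;

        ## Toy, single-process illustration of the map/reduce contract:
        ## map_record() turns one input record into (key, value) pairs;
        ## the "framework" groups values by key; reduce_group() folds each group.
        sub map_record   { my( $line ) = @_; return map { [ lc $_, 1 ] } $line =~ /(\w+)/g }
        sub reduce_group { my $sum = 0; $sum += $_ for @_; return $sum }

        my %groups;
        while( my $line = <STDIN> ) {                           ## "map" phase
            push @{ $groups{ $_->[0] } }, $_->[1] for map_record( $line );
        }
        print "$_\t", reduce_group( @{ $groups{$_} } )          ## "reduce" phase
            for sort keys %groups;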

        "You should check it out."

        My needs, resources and pockets do not lend themselves to such.


