Re^7: randomising file order returned by File::Find

by BrowserUk (Pope)
on Mar 01, 2011 at 23:24 UTC


in reply to Re^6: randomising file order returned by File::Find
in thread randomising file order returned by File::Find

"Right, with Hadoop, all that hard work is done for you."

Hm. Only once you have turned over your cluster to a single-purpose, Java-based HDFS monoculture. And only if all your processing needs can be force-fitted into that monoculture's way of working.

But, if your hardware resources have to serve a variety of needs and uses...

Besides, it does not have to be "hard work". There is a very simple pattern to follow, exemplified by a server something like this:

#! perl -slw
use strict;
use threads;
use IO::Socket;

## Shared flag, toggled from the console to pause/resume distribution.
my $pause :shared = 0;

async {
    while( <STDIN> ) {
        chomp;
        if( /^suspend/i ) { $pause = 1; }
        elsif( /^resume/i ) { $pause = 0; }
    }
}->detach;

my $lsn = IO::Socket::INET->new( Listen => 1, LocalPort => 12345 )
    or die "Cannot listen: $!";

## Hand out one filename per client connection.
while( my $fname = <*.png> ) {
    sleep 1 while $pause;
    my $client = $lsn->accept;
    print $client $fname;
    close $client;
}

And a client template:

#! perl -slw
use strict;
use IO::Socket;

my $server = shift;    ## e.g. "somehost:12345"

while( 1 ) {
    ## Connect, read one work item, disconnect.
    my $svr = IO::Socket::INET->new( $server ) or last;
    my $fname = <$svr>;
    close $svr;
    last unless defined $fname;
    chomp $fname;

    ## Process $fname.
}
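A run might then be: start the server on the box holding the images, and start one client per core on each worker box, e.g. perl client.pl somehost:12345 (the script names, host and port here are illustrative).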

The work items distributed by the server can be anything you like, not just filenames. The processing inside the client is the same bit you'd have to write yourself for your Hadoop application. Hardly onerous, but very flexible.
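For instance, the ## Process $fname. placeholder might become a thumbnailing step. A minimal sketch, assuming the Imager module from CPAN (the scale width and output name are made up for illustration):

use Imager;

sub process_file {
    my( $fname ) = @_;

    ## Load the image; Imager sets errstr on failure.
    my $img = Imager->new;
    $img->read( file => $fname ) or die $img->errstr;

    ## Scale to a 200-pixel-wide thumbnail and write it out.
    my $thumb = $img->scale( xpixels => 200 );
    $thumb->write( file => "thumb_$fname" ) or die $thumb->errstr;
}

And because each client pulls its next filename only when it has finished the last, the expensive, size-dependent work load-balances itself across however many clients you start.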

Except that now the hardware can (realistically) support the volumes of data being processed.

The scale of things is relative. We didn't have as much data to process back then as we do now, but disks were smaller and machines slower and more expensive. The trade-offs of monoculture versus flexibility, though, remain the same.

Transaction engines like CICS--the Hadoops of their time--could process prodigious volumes of data within a very narrow band of processing requirements, but couldn't handle the variety of general-purpose programming requirements and applications that arose.

The same problem arises today with map/reduce. If your requirements fit its way of working--datasets that are infinitely partitionable into fixed-size chunks that each take the same amount of time to process and don't require feedback loops--you're laughing.

But if your images vary widely in size, and so do not fit the fixed-size chunks of HDFS; and processing times vary greatly with size--the cost of many image-processing algorithms grows exponentially with the size of the image--then the map/reduce scheduling algorithms get tied in knots.

"You should check it out."

My needs, resources and pockets do not lend themselves to such.


Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.


Re^8: randomising file order returned by File::Find
by jeffa (Chancellor) on Mar 01, 2011 at 23:29 UTC

    "Only once you have turned over your cluster to single purpose, Java-based HDFS monoculture. And if all your processing needs can be force fitted into that monoculture's way of working."

    That is simply not true. Look into Hadoop streaming.
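
    (For reference: a streaming job runs any executable as the mapper or reducer, feeding it input records on STDIN and collecting tab-separated key/value pairs from its STDOUT. A minimal word-count mapper sketch in Perl -- the invocation shown in the comment is approximate:)

    #!/usr/bin/perl
    ## mapper.pl -- Hadoop streaming pipes input lines to this script's
    ## STDIN and collects "key\tvalue" lines from its STDOUT.
    ## Run with something like:
    ##   hadoop jar hadoop-streaming.jar -input in/ -output out/ \
    ##     -mapper mapper.pl -reducer reducer.pl -file mapper.pl
    use strict;
    use warnings;

    while( my $line = <STDIN> ) {
        chomp $line;
        print "$_\t1\n" for split ' ', $line;
    }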

    "My needs, resources and pockets do not lend themselves to such."

    That's too bad. I am currently working for a company where we are in the process of replacing our traditional means of processing the massive volumes of data we consume with Hadoop. It is beyond awesome. Again, you should check it out instead of making assumptions. Cheers! :)

    jeffa

    L-LL-L--L-LL-L--L-LL-L--
    -R--R-RR-R--R-RR-R--R-RR
    B--B--B--B--B--B--B--B--
    H---H---H---H---H---H---
    (the triplet paradiddle with high-hat)
    
      "That is simply not true. Look into Hadoop streaming."

      You cannot process images by reading them line by line from STDIN.

      "instead of making assumptions."

      Who's assuming here?


      Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
      "Science is about questioning the status quo. Questioning authority".
      In the absence of evidence, opinion is indistinguishable from prejudice.

        "Who's assuming here?"

        Surely we both are, but even though you have not studied or tried Hadoop, you act as if you know everything about it. You assumed that, just because Hadoop is Java-based, you cannot use other languages within it. You also seem to assume that you know better than Hadoop, and, if true, that is just vanity. Cheers! :)

        Update: I guess my mistake was assuming you actually cared about new technology that Perl can be a part of.

        jeffa

        L-LL-L--L-LL-L--L-LL-L--
        -R--R-RR-R--R-RR-R--R-RR
        B--B--B--B--B--B--B--B--
        H---H---H---H---H---H---
        (the triplet paradiddle with high-hat)
        
