Re^2: Removing repeated words

by Aristotle (Chancellor)
on Sep 21, 2002 at 08:51 UTC


in reply to Re: Removing repeated words
in thread Removing repeated words

That won't fly.

For one, sort reads its entire input before outputting a single character, so the sort process will not only grow comparably to the hash inside the perl process, it will in fact grow larger than the entire input file. It also does considerably more work: the single-pass approach doesn't need to sort the data at all, since it uses a hash to keep words unique.

Secondly, the hash you're creating in the second pass is exactly as large as the hash at the end of the single-pass script: both contain every unique word in the file. But you build that hash before the second pass starts processing, so the second pass begins with as much memory consumed as the single-pass scripts reach only at the end of their run.
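
For concreteness, here is a minimal sketch of the single-pass approach referred to above (my illustration, not the code from the parent thread; it assumes whitespace-separated words read from stdin):

    use strict;
    use warnings;

    my %seen;
    while (my $line = <>) {
        # Print each word the first time it is seen. The hash grows with
        # the number of unique words, never with the size of the input.
        print "$_\n" for grep { !$seen{$_}++ } split ' ', $line;
    }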

Makeshifts last the longest.

Re: Re^2: Removing repeated words
by Thelonius (Priest) on Oct 03, 2002 at 21:30 UTC
    Well, I doubt anybody will read this since it's so late, but I was, like, busy, ya know?
    For one, sort reads its entire input before outputting a single character, so the sort process will not only grow comparably to the hash inside the perl process, it will in fact grow larger than the entire input file. It also does considerably more work: the single-pass approach doesn't need to sort the data at all, since it uses a hash to keep words unique.
    Notice that I was proposing an external sort, so your first objection does not apply: an external sort uses very little memory, only disk space. It will be slower unless the hash outgrows physical memory, in which case the external sort would be much faster.
    Secondly, the hash you're creating in the second pass is exactly as large as the hash at the end of the single-pass script: both contain every unique word in the file.
    No, read again: my hash contains only the duplicated words (see the sketch below). Words that are truly unique never enter the hash at all. Of course, it's possible that every word is duplicated at least once, in which case you are right.

    Also, I suspect that he really wants a list of all the unique words. If he doesn't care about the order, then "sort -u" may well be faster.
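
    For what it's worth, here is a sketch of the two-pass scheme as I read it (my illustration, not Thelonius's actual code; it assumes one word per line in a hypothetical words.txt and a system sort(1) on the PATH):

        use strict;
        use warnings;

        # Pass 1: let the external sort bring identical words together,
        # then record only the words that occur more than once. The sort
        # spills to temporary files on disk, not to memory.
        my %dup;
        open my $sorted, '-|', 'sort', 'words.txt' or die "sort: $!";
        my $prev = '';
        while (my $word = <$sorted>) {
            chomp $word;
            $dup{$word} = 1 if $word eq $prev;
            $prev = $word;
        }
        close $sorted;

        # Pass 2: print every word once, in original order. Only the
        # duplicated words need first-seen tracking, so memory stays
        # bounded by the number of duplicated words.
        my %seen;
        open my $in, '<', 'words.txt' or die "words.txt: $!";
        while (my $word = <$in>) {
            chomp $word;
            next if $dup{$word} && $seen{$word}++;
            print $word, "\n";
        }
        close $in;

    Truly unique words never enter either hash, which is the point of the memory argument above.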
