PerlMonks  

Re^2: extract phrases of n-words length

by BrowserUk (Patriarch)
on Jun 25, 2009 at 06:12 UTC [id://774600]


in reply to Re: extract phrases of n-words length
in thread extract phrases of n-words length

Keys composed out of numerical indexes would save space ...

Not so! Using the text from your post as a representative abstract, and building the 3 hashes with both the original words and integers representing those words, I get the following hash sizes:

c:\test>774421-2
Using words:   44501
Using numbers: 40801

Giving a total saving of just under 10%. But... if you add in the requirements of the array needed to hold the ordered numbers--ordering is a requirement--plus the hash used to map words to unique integers:

Numbers array: 12776
Lookup hash:    8860

The total memory requirement is then roughly 40% higher (40801 + 12776 + 8860 = 62437 against 44501 for the plain word keys).

As for "and possibly search time.". As the hash keys are the concatenation of the string representations of the integers assigned, and hash keys are hashed byte-wise; and the average (English) words length is a little under 6-chars; and the average length of the integers required to represent them a little under 5-chars; the difference in the time taken to actually hash the keys will be very minimal. And as the number of pairs, triples & quads will be the same, the hashes will contain the same number of keys, so the lookup times will be very similar. These are the timings I got using the above sample:

Lookup words:   0.000440120697021484 seconds
Lookup numbers: 0.000699043273925781 seconds
Lookups words: 515; numbers: 515

Lookup words:   0.000420093536376953 seconds
Lookup numbers: 0.000321149826049805 seconds
Lookups words: 515; numbers: 515

Lookup words:   0.000285863876342773 seconds
Lookup numbers: 0.000767946243286133 seconds
Lookups words: 515; numbers: 515

Too wildly varying to base any assessment on.
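For differences this small, the core Benchmark module gives a steadier comparison than one-shot Time::HiRes timings. A rough sketch (not what I ran; it reuses the %words and %numbers hashes built by the code below):

use Benchmark qw[ cmpthese ];

## -3 means "run each sub for at least 3 CPU seconds", which averages
## out the noise seen in the one-shot timings above.
cmpthese( -3, {
    words => sub {
        my $hits = 0;
        for my $size ( keys %words ) {
            exists $words{ $size }{ $_ } and $hits++
                for keys %{ $words{ $size } };
        }
    },
    numbers => sub {
        my $hits = 0;
        for my $size ( keys %numbers ) {
            exists $numbers{ $size }{ $_ } and $hits++
                for keys %{ $numbers{ $size } };
        }
    },
} );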

The code used to produce the figures:

#! perl -sw
use 5.010;
use strict;
use Time::HiRes qw[ time ];
use Data::Dump qw[ pp ];
use Devel::Size qw[ total_size ];

my $abstract = do{ local $/; <DATA> };
my @words = split ' ', $abstract;

my %words;
$words{ doubles }{ join ' ', @words[ $_ .. $_ +1 ] } = undef for 0 .. $#words - 1;
$words{ triples }{ join ' ', @words[ $_ .. $_ +2 ] } = undef for 0 .. $#words - 2;
$words{ quads   }{ join ' ', @words[ $_ .. $_ +3 ] } = undef for 0 .. $#words - 3;

say "Using words: ", total_size \%words;

my( $n, %lookup ) = 0;
my @numbers = map $lookup{ $_ } //= ++$n, @words;

my %numbers;
$numbers{ doubles }{ join ' ', @numbers[ $_ .. $_ +1 ] } = undef for 0 .. $#numbers - 1;
$numbers{ triples }{ join ' ', @numbers[ $_ .. $_ +2 ] } = undef for 0 .. $#numbers - 2;
$numbers{ quads   }{ join ' ', @numbers[ $_ .. $_ +3 ] } = undef for 0 .. $#numbers - 3;

say "Using numbers: ", total_size \%numbers;
say "Numbers array: ", total_size \@numbers;
say "Lookup hash: ",   total_size \%lookup;

my $start = time;
my $words = 0;
for my $size ( keys %words ) {
    exists $words{ $size }{ $_ } and $words++ for keys %{ $words{ $size } };
}
say "Lookup words: ", time() - $start, ' seconds';

$start = time;
my $numbers = 0;
for my $size ( keys %words ) {
    exists $numbers{ $size }{ $_ } and $numbers++ for keys %{ $numbers{ $size } };
}
say "Lookup numbers: ", time() - $start, ' seconds';

say "Lookups words: $words; numbers: $numbers";

__DATA__
There are as I see it two scalability issues in the above code. The first
of these has been addressed well by the various suggestions to use a
sliding window ($i...$i+$n on the array of words). The second is the keys
themselves. You are currently using the actual words in each phrase as a
key. This means that searching for all sequences of N-M words in a text
with K word characters (e.g. alpha-numerics) could concievably take up
(N+N+1+...+M)*K characters for its keys alone. The actual amount depends
on the frequency of each particular word runs: "a a a ..." will obviously
use less space than "a b c d e..." or "a..z b..za c..zab ...". If you
intend to make either N, the range from N...M or K large, you might want
to consider assigning each word in the text a integer id and composing
your key out of the integers rather than the word strings as you are
doing now. Keys composed out of numerical indexes would save space and
possibly search time.

Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.

Replies are listed 'Best First'.
Re^3: extract phrases of n-words length
by ELISHEVA (Prior) on Jun 25, 2009 at 09:11 UTC

    What array to hold the ordered numbers? The algorithm I proposed had a moving window - an array that never held more than M integers, where M is the maximum phrase length - and it used the original word array to keep track of ordering. The code above translates the entire word array into numbers before generating the phrase combinations. That is like comparing slurping to line-at-a-time file reads.
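    A minimal sketch of the kind of moving window described above ($text, $min and $max are placeholders for the input string and the phrase-length range; the id assignment uses the same //= trick as the code above):

        my( $n, %id ) = 0;
        my @window;        # never holds more than $max integer ids
        my %phrases;       # phrase keys built from ids, grouped by length

        for my $word ( split ' ', $text ) {
            push @window, $id{ $word } //= ++$n;   # assign an id on the fly
            shift @window while @window > $max;    # keep only the last $max ids
            for my $len ( $min .. $max ) {         # phrases ending at this word
                next if @window < $len;
                $phrases{ $len }{ join ',', @window[ -$len .. -1 ] } = undef;
            }
        }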

    The key question is: is my post representative of the word distribution of abstracts? Truthfully, I don't know, but you raise an interesting point. The space efficiency of the algorithm described above is highly dependent on the length of repeated words and the rate of word repetition without word order repetition - of which there is little in that text.

    It is hardly surprising that you would get the minimal space benefit you did with my post's text (only 10%). There are 186 distinct words (defined as runs bounded by whitespace, start of text, or end of text); both the median and average word repetition are less than 2, and the average word length is about 4.6 characters rather than 6. The average 3-word phrase contains only 11 characters, including whitespace, which is only slightly longer than its numeric representation: given that there are 186 distinct words, the average numeric id is closer to three characters, so a phrase key runs between 8 and 11 characters (2 + 3*2 = 8, 2 + 3*3 = 11).
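    Figures of that kind can be checked with a quick throwaway count along these lines (a rough sketch only; $abstract stands for the text of the post):

        my @words = split ' ', $abstract;
        my %count;
        $count{ $_ }++ for @words;

        my $distinct  = keys %count;        # number of distinct words
        my $total_len = 0;
        $total_len += length for @words;    # total characters across all words
        printf "distinct: %d  avg length: %.1f  avg repetition: %.1f\n",
            $distinct, $total_len / @words, @words / $distinct;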

    As an extreme example of why word order and word length matter, counting word runs in "a b a b a b", extended even ad infinitum, is going to cost more using indexing than not. Although the words repeat heavily, only two phrases ever appear: "a b" and "b a". As the words are the same length as the integers that represent them, the indexing mechanism would save nothing. The indexing hash takes up space and there is zero saving in key length: the key length of "1,2" and "2,1" is exactly the same as that of "a b" and "b a".

    Best, beth

    Update: initially misunderstood what BrowserUK meant by the array to hold the sequence.

    Update: added more statistics on the text sample; added acknowledgement to BrowserUK.

      I guess the question is: are "abstracts" likely to be hugely long, and to contain many repetitions of above-average-length words?


      Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
      "Science is about questioning the status quo. Questioning authority".
      In the absence of evidence, opinion is indistinguishable from prejudice.
