Re: Create a dictionary from wikipedia

by cavac (Parson)
on Jul 31, 2012 at 18:06 UTC


in reply to Create a dictionary from wikipedia

Wikipedia ... meaningful content

Uhm, I'm not sure that's a problem that can actually be solved by using Perl.... scnr.

Back on topic, since you have the "raw" articles, you have to do multiple things. First, you have to remove all the markup. That alone does not seem trivial, since the MediaWiki format is a big mess to begin with. It's actually messy enough that more and more editors quit and the MediaWiki developers don't seem to be able to come up with a working visual editor.

A quick and dirty solution would be to use one of the MediaWiki-to-HTML converters, like Text::Markup::Mediawiki, and then scrape the text with something like HTML::Extract.
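If you go that route, the scraping half might look roughly like this. It's only a sketch: it assumes the wikitext has already been run through one of those converters and that the resulting HTML arrives on STDIN, and it uses HTML::Strip (a swap-in for HTML::Extract, chosen simply because its interface is small):

#!/usr/bin/env perl
use strict;
use warnings;
use HTML::Strip;   # swapped in for HTML::Extract; it just flattens HTML to plain text

my $html = do { local $/; <STDIN> };   # HTML produced by the wikitext converter

my $hs   = HTML::Strip->new();
my $text = $hs->parse($html);          # drop the tags, keep the visible text
$hs->eof;

print $text;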

Then you can split the resulting text on whitespace and, for each word, increment its counter in a hash.
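A minimal sketch of that counting step (plain Perl, reading the already-extracted text from STDIN; the splitting is deliberately naive):

#!/usr/bin/env perl
use strict;
use warnings;

my %count;
while (my $line = <STDIN>) {
    # split on runs of whitespace and count every "word"
    $count{$_}++ for grep { length } split /\s+/, $line;
}
print "$count{$_}\t$_\n" for sort { $count{$b} <=> $count{$a} } keys %count;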

Since words are sometimes also used as two-word terms (like "flying dutchman"), you might want to count those as well and see where it leads you. For this, consider node 774421.
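A rough sketch of counting those two-word pairs as well (plain Perl; it just slides a window of two over the word list from the step above):

#!/usr/bin/env perl
use strict;
use warnings;

my %pairs;
while (my $line = <STDIN>) {
    my @words = grep { length } split /\s+/, $line;
    for my $i (0 .. $#words - 1) {
        $pairs{ lc "$words[$i] $words[$i+1]" }++;   # adjacent word pairs, case-folded
    }
}
print "$pairs{$_}\t$_\n" for sort { $pairs{$b} <=> $pairs{$a} } keys %pairs;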

"I know what i'm doing! Look, what could possibly go wrong? All i have to pull this lever like so, and then press this button here like ArghhhhhaaAaAAAaaagraaaAAaa!!!"

Replies are listed 'Best First'.
Re^2: Create a dictionary from wikipedia
by vit (Friar) on Aug 01, 2012 at 22:09 UTC
    First, you have to remove all the markup. That alone does not seem trivial, since the MediaWiki format is a big mess to begin with.
    All I need is to parse the text from an XML dump of the articles (enwiki-latest-pages-articles.xml) to create a clean dictionary with good term statistics. I had hoped there was a module that retrieves the pure text from the content. Once I have that, creating a dictionary is one line of code.
    Yes, I already found that the MediaWiki parser does not do that, but at least it reads a multi-gigabyte file gracefully. I probably need to apply some filtering, say, keep only lines without special characters from what the MediaWiki parser gives me, hoping those contain only pure text. So something like this:
    $pages = Parse::MediaWikiDump::Pages->new("xml file");
    while (defined($page = $pages->next)) {
        $text = $page->text;
        ## process text, which is quite messy
    }
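    A rough, self-contained sketch of that line-filtering idea (the sample $text and the character class of "special" characters are just assumptions to make it runnable):

    #!/usr/bin/env perl
    use strict;
    use warnings;

    # Stand-in for one article's raw wikitext; in the real loop this would be
    # whatever $page->text hands back.
    my $text = "== Heading ==\nPlain prose line here.\n[[Category:Example]]\n";

    my %dict;
    for my $line (split /\n/, $text) {
        # Keep only lines free of typical markup characters; the character
        # class is only a guess at what counts as "special".
        next if $line =~ /[\[\]{}|=<>#*]/;
        $dict{$_}++ for grep { length } split /\s+/, $line;
    }
    print "$dict{$_}\t$_\n" for sort keys %dict;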

      It seems to me that the sticking point is going to be deciding what qualifies as "pure text." Getting the content out of the XML is fairly trivial: just walk recursively through the XML file after loading it into some XML parsing module, and grab the values of the "content" keys. (I only downloaded about 0.2% of the file as a sample, but that appears to be consistent.) The simple bit of code below does that, counts the "words" in a dictionary hash, and outputs the sorted results. However, since it splits the text on whitespace, the resulting words contain a lot of punctuation, including wiki formatting. So you'll have to parse that out, and also deal with other issues: Unicode and HTML encoded characters, embedded HTML tags, "wide characters," and more.

      #!/usr/bin/env perl use Modern::Perl; use XML::Simple; use Data::Dumper; my $xml = XML::Simple->new(); my $in = $xml->XMLin('wiki.xml'); my %dict; walk($in); for (sort {$a cmp $b} keys %dict){ say "$dict{$_} $_"; } sub walk { my $h = shift; for my $k (keys %$h){ if($k eq 'content'){ add_to_dict($h->{$k}); } elsif( ref($h->{$k}) eq 'HASH' ){ walk($h->{$k}); } } } sub add_to_dict { my $text = shift; for my $w (split /\s+/, $text){ $dict{$w}++; } }
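      A small, hedged follow-up for two of the cleanup issues mentioned above: decoding HTML entities with HTML::Entities and reading/writing UTF-8 so the "wide character" warnings go away. The tag-stripping regex is deliberately crude and only a placeholder:

      #!/usr/bin/env perl
      use Modern::Perl;
      use HTML::Entities qw(decode_entities);

      binmode STDIN,  ':encoding(UTF-8)';
      binmode STDOUT, ':encoding(UTF-8)';    # avoids "wide character in print" warnings

      while (my $line = <STDIN>) {
          $line = decode_entities($line);    # &amp; &quot; &#233; ... -> real characters
          $line =~ s/<[^>]+>//g;             # crude removal of embedded HTML tags
          print $line;
      }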

      Aaron B.
      Available for small or large Perl jobs; see my home node.

Re^2: Create a dictionary from wikipedia
by linuxkid (Sexton) on Jul 31, 2012 at 19:53 UTC

    There are ways to remove the markup using regexes. Try this:

    use strict;
    use warnings;

    my $page = "my ##Media Wiki [text|here]";
    my %wordcount;

    # split on whitespace and on the markup/punctuation characters themselves
    my @words = split /[\s#\[\]|@!.,]+/, $page;

    foreach my $word (@words) {
        $wordcount{$word}++ if $word =~ /\w/;
    }
    foreach my $word (keys %wordcount) {
        print "$word\t$wordcount{$word}\n";
    }
    I hope this helps.

    --linuxkid


    imrunningoutofideas.co.cc
