doonyakka has asked for the wisdom of the Perl Monks concerning the following question:

I'm interested in using Perl for linguistic analysis, starting with morphological (inflectional) recognition and lemmatisation, and extending to more general linguistic modelling, ideally from a non-language-specific basis that can be adapted to different languages.

Is it a good idea to use Perl for this kind of thing? Has anything been written on the topic, or am I wasting my time?


Once again, back inside the sounds of DJ Rubbish.

Re: Perl and Linguistics
by JayBonci (Curate) on May 25, 2002 at 21:18 UTC
    Larry himself was in fact a linguist. If you take a look at the Lingua::* modules, that might be a good start as to what people have done in Perl. It's very interesting, but completely not my field. Good luck.

Re: Perl and Linguistics
by cjf (Parson) on May 25, 2002 at 22:24 UTC
Re: Perl and Linguistics
by graff (Chancellor) on May 26, 2002 at 03:28 UTC
    ... extending to more general linguistic modelling, ideally from a non-language-specific basis that can be adapted to different languages.

    That's ambitious... but worth pursuing. The first thing that comes to my mind is (Hidden) Markov modelling, which has been demonstrated to do a decent job of drawing plausible "morphological" boundaries in a stream of text data in any given language. It appears that there are Markov modules on CPAN, but whether these are suitable to the task of language analysis is more than I know at present.
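    To make the Markov idea concrete, here is a toy sketch (not from any CPAN module -- the word list and the 0.3 threshold are invented for the demo): train character-bigram transition probabilities on a handful of words, then propose a morpheme boundary wherever the transition probability dips. Real HMM training is far more involved, but the intuition is the same.

    ```perl
    use strict;
    use warnings;

    # Tiny training "corpus"; a real system would use thousands of words.
    my @corpus = qw(walking walked talks talked jumping jumped);

    my (%bigram, %unigram);
    for my $w (@corpus) {
        my @c = split //, $w;
        for my $i (0 .. $#c - 1) {
            $bigram{"$c[$i]$c[$i+1]"}++;
            $unigram{$c[$i]}++;
        }
    }

    # P(next char | current char), estimated from the counts above.
    sub transition_prob {
        my ($a, $b) = @_;
        return 0 unless $unigram{$a};
        return ($bigram{"$a$b"} // 0) / $unigram{$a};
    }

    # Mark a boundary wherever the transition probability falls below
    # an arbitrary threshold (0.3 here -- pure guesswork for the demo).
    sub segment {
        my ($word, $thresh) = @_;
        my @c = split //, $word;
        my $out = $c[0];
        for my $i (1 .. $#c) {
            $out .= '|' if transition_prob($c[$i-1], $c[$i]) < $thresh;
            $out .= $c[$i];
        }
        return $out;
    }

    print segment('walking', 0.3), "\n";   # prints "walk|ing"
    ```

    The stem-to-suffix transition (k to i) is rare in the training data, so it falls below the threshold and gets flagged as a boundary.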

    I do know that Perl is quite useful for handling a lot of "infrastructure" work relating to the management and handling of language data; e.g. developing and searching a lexicon, locating and displaying/highlighting tokens in a text stream, mapping across character encodings, etc. Of course, a lot of useful tools have already been developed (some in Perl, some in C(++)) -- check the archives of (and/or join) the CORPORA mailing list.
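    The token-highlighting side of that "infrastructure" work is a few lines of Perl. A small sketch (the lexicon and the <<...>> markers are made up for the demo): build one alternation regex from the lexicon, longest entries first so that "cats" wins over "cat", and wrap every hit.

    ```perl
    use strict;
    use warnings;

    # Toy lexicon; a real one would be read from a file or database.
    my %lexicon = map { $_ => 1 } qw(cat cats dog dogs);

    my $text = "The cats chased the dog around the yard.";

    # One alternation, longest entries first, metacharacters escaped.
    my $alt = join '|', map { quotemeta }
              sort { length($b) <=> length($a) } keys %lexicon;

    # Highlight each lexicon token at word boundaries.
    (my $marked = $text) =~ s/\b($alt)\b/<<$1>>/g;
    print $marked, "\n";   # The <<cats>> chased the <<dog>> around the yard.
    ```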

    I'm sorry I can't give you any more detailed pointers or advice, but I hope this helps a little.

Re: Perl and Linguistics
by kappa (Chaplain) on May 26, 2002 at 14:06 UTC
    There are some mailing lists dedicated to what you are going to attack (or close to the topic). See the liguana project and especially the perl-ai list, which I recall being very interesting (I don't read it any more, unfortunately).
Re: Perl and Linguistics
by mattr (Curate) on May 26, 2002 at 14:41 UTC
    Sounds like a great idea, as I mentioned in some other threads. There are some morphological software packages in Perl, like EMERGE and FLEMM, at the Natural Language Software Registry in Germany; search for "perl". That database is heavier on Java, but there are initiatives for linguistics in both Perl and Java. Some systems have APIs for both languages, like WordNet, which has lots of other interfaces too.

    I have used C++-based tools on Linux for Japanese morphological analysis, such as chasen. Such tools are critical for Japanese and are used in indexing for a search engine (link); basically, C++ is needed for speed in that case. Perl will let you develop more quickly, and you can later recode time-sensitive functions in C/C++, or make a Perl API to some C++ tool if you need it.

    Actually, why not just search with the terms "perl" and "linguistics", or "computational linguistics"? You will see that you are not alone, and you may find some existing work. Computational linguistics courses seem to use Perl often. A page at Ohio State, Languages for computational linguistics, notes that Perl is phenomenally popular in the field and that Java plays catch-up to Perl's feature set. It says, "Most work in industry is done in Perl and C++; Java can be expected to have a growing role as time goes on."

    There is also a page about how Perl was designed on linguistic principles; the document has the same title as this thread: Perl & Linguistics.

    To be fair, Perl is not the only one on the block, and it may even be that more Java tools are being created than Perl ones. But from what I can see, Perl is a natural match for linguistics, and CPAN is a great way to share that work and see it used by many people. Otherwise, you may like to check out Prolog, Haskell, and the tools and languages used in knowledge engineering.

      Many thanks for some very helpful replies. Looks like there's some excellent work being done in this field. For some time now, I've been thinking of different ways of implementing various methods, including stochastic processing à la Markov and more recent models like Rens Bod's Data-Oriented Parsing.

      The OpenNLP Grok library looks interesting, though it's mainly for Java. I'm somewhat familiar with Prolog, but I'm interested in combining Perl's unequalled data-munging capabilities with more sophisticated modelling applications.

      Looks like there's a lot of hard but fun work ahead!


      I have used C++ based tools on linux for Japanese morphological analysis, such as chasen. Such tools are critical in Japanese and are used in indexing for a search engine ...

      While I use Chasen -- which, by the way, has some rudimentary Perl bindings -- almost every day, I am curious whether Chasen is really a good choice for a search engine. Considering its speed, and the difficulty of updating Chasen's dictionary and of tuning it for specific (topic) domains, it would be nice to hear more about your experience with Chasen.

      For simple search engines I usually prefer a simple longest match algorithm as provided by Kakasi or my own tools.
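      The longest-match idea is simple enough to sketch in a few lines of Perl (the dictionary and the romanised sample string here are toys invented for the demo -- Kakasi ships its own, much larger dictionary and works on real Japanese encodings): at each position, try the longest dictionary entry that matches, and fall back to a single character when nothing does.

      ```perl
      use strict;
      use warnings;

      # Toy dictionary of known "words" (romanised for the demo).
      my %dict = map { $_ => 1 } qw(nihon nihongo go wo hanasu ha);

      sub longest_match_split {
          my ($s, $dict) = @_;
          my @tokens;
          my $pos = 0;
          while ($pos < length $s) {
              my $match;
              # Try the longest remaining prefix first, shrink on failure.
              for (my $len = length($s) - $pos; $len > 0; $len--) {
                  my $cand = substr $s, $pos, $len;
                  if ($dict->{$cand}) { $match = $cand; last }
              }
              # Fall back to a single character if nothing matched.
              $match = substr $s, $pos, 1 unless defined $match;
              push @tokens, $match;
              $pos += length $match;
          }
          return @tokens;
      }

      print join(' ', longest_match_split('nihongowohanasu', \%dict)), "\n";
      # prints "nihongo wo hanasu"
      ```

      Note that the greedy strategy picks "nihongo" over the shorter "nihon" -- exactly the behaviour you want from a longest-match segmenter.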
        Sorry -- I do use kakasi as the main tool in my search engine now. But I use chasen sometimes for individual documents, since my impression is that chasen is slower but more flexible and more sophisticated. I just mentioned Chasen because I remembered Nara and clustering, and that brought chasen to mind.

        For those who are not familiar with either tool: they are morphological analyzers for Japanese text. They are similar, and are generally used to split a chunk of text into individual words (Japanese words are not usually separated by spaces) and to get the phonetic reading of those words (usually in the roman alphabet).

        Obviously this is enabling technology. The name of Kakasi is in fact a kind of palindrome: read backwards phonetically, it gives the name of a popular front-end processor that takes roman-alphabet input and interactively picks the correct characters based on that phonetic reading and the context.

        I believe Kakasi is focused more on workaday speed and usability, while chasen might be more flexible. In particular, I seem to remember some interesting use of chasen in document-clustering work done in Nara and elsewhere. I couldn't find the exact page, but Google will help you look at the field. Personally, where I use these tools is in the custom search engines I build, usually either completely in Perl or with plugins from projects like the above. They are mainly useful, it seems, in building an inverted index to search a lot of text quickly, but I have a small (a few megabytes) Japanese database that works fine with just (Japanese) regexes.
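        The inverted-index part is easy to sketch in pure Perl (the three sample documents are invented for the demo): map each word to the list of document ids that contain it, then a lookup is a single hash access. Tokenisation is just \w+ here; for Japanese you would run a segmenter such as chasen or kakasi first.

        ```perl
        use strict;
        use warnings;

        # Toy document collection, keyed by document id.
        my %docs = (
            1 => 'perl is good at text',
            2 => 'java is also used for text work',
            3 => 'perl and linguistics',
        );

        # Build the inverted index: word => [ids of docs containing it].
        my %index;
        while (my ($id, $body) = each %docs) {
            my %seen;
            for my $word ($body =~ /(\w+)/g) {
                push @{ $index{lc $word} }, $id unless $seen{lc $word}++;
            }
        }

        # A query is now one hash lookup instead of a scan of every document.
        sub search {
            my ($word) = @_;
            return sort { $a <=> $b } @{ $index{lc $word} || [] };
        }

        print join(',', search('perl')), "\n";   # prints "1,3"
        print join(',', search('text')), "\n";   # prints "1,2"
        ```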

        I think it would be very interesting if Perl programmers could easily use state-of-the-art computational linguistics or "A.I." algorithms (besides, I guess, what is already in Perl) to make Perl even more intelligent and perhaps automate some of the programming task. For example, someone just gave me three nasty scripts to refactor together and update for 5.6.1; maybe Perl could learn to tell me, "Yep, those are real nasty scripts, better rewrite from scratch," or perhaps give me other insights into the code.

        I am not a computational linguist, just interested. There is an awful lot of science there, so if anybody has insights about it, please share with the rest of us.

Re: Perl and Linguistics
by Hanamaki (Chaplain) on May 26, 2002 at 15:50 UTC
    It's definitely not wasted time to try linguistic analysis with Perl, but it's quite hard to answer your question, because we do not know your approach. Are you going to do analysis with hand-coded rules, or do you prefer a statistical approach?

    A good start for statistical language processing may be Dan Melamed's collection of linguistic tools. A TPJ article on Perl and Morphology may be of interest as well.

    If you end up with really huge regular expressions, it may be time to implement an automaton in C or whatever, but Perl is a great tool for producing linguistic prototypes.
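    The rules-as-regexes prototyping style looks something like this toy English suffix stripper (the rules and test words are invented for the demo -- real lemmatisation needs a lexicon and exception lists, which is exactly where the regexes start to grow huge):

    ```perl
    use strict;
    use warnings;

    # Each rule: a suffix-matching regex plus a repair for the stem.
    my @rules = (
        [ qr/(\w+?)ies$/   => sub { "$_[0]y" } ],   # ponies  -> pony
        [ qr/(\w+?)ing$/   => sub { $_[0]    } ],   # walking -> walk
        [ qr/(\w+?)ed$/    => sub { $_[0]    } ],   # walked  -> walk
        [ qr/(\w+?[^s])s$/ => sub { $_[0]    } ],   # cats    -> cat
    );

    # Apply the first rule that matches; otherwise return the word as-is.
    sub naive_lemma {
        my ($word) = @_;
        for my $r (@rules) {
            my ($re, $fix) = @$r;
            return $fix->($1) if $word =~ $re;
        }
        return $word;
    }

    print naive_lemma($_), "\n" for qw(ponies walking walked cats dog);
    # prints pony, walk, walk, cat, dog (one per line)
    ```

    Once the rule table grows past a few dozen entries and starts sprouting exceptions, that is the point at which a real finite-state implementation (in C, or a tool like HTK mentioned below for the statistical side) starts to pay off.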

    If you want to do research with Hidden Markov Models, try HTK for a non-Perl start.