MedlineParser: to parse and load MEDLINE into a RDBMS

by BioGeek (Hermit)
on Feb 27, 2005 at 23:33 UTC ( [id://434946] )

Some time ago I was struggling to load a local copy of the OMIM database into MySQL on my computer. OMIM is a database of human genetic diseases provided by the National Library of Medicine (NLM). Today, I came across a paper describing software that does exactly that, but for MEDLINE, another NLM database. I will mention it here, in the hope that I can help other bioinformatics monks with it.

Medline is the NLM's bibliographic database covering the fields of medicine, dentistry, nursing, veterinary medicine, healthcare administration, and the pre-clinical sciences dating back to 1966. It indexes articles from more than 4,600 international journals published in the U.S. and 70 other countries and contains all citation information for each paper, as well as abstracts for most of the papers. The usual way in which users query MEDLINE is through PubMed, a web-based interface and search engine.

Researchers who use MEDLINE for text mining, information extraction, or natural language processing may benefit from having a copy of MEDLINE that they can manage locally. Diane E. Oliver, Prof. Adam Arkin and colleagues from Stanford and Berkeley developed software tools to parse the MEDLINE data files and load their contents into a relational database. Although the task is conceptually straightforward, the size and scope of MEDLINE make it nontrivial.

The entire content of MEDLINE is available as a set of text files formatted in XML (eXtensible Markup Language). The NLM distributes these files at no cost to the licensee, but the files are large and not easily searched without additional indexing and search tools. For example, in the 2003 release of MEDLINE, there are 396 files (which cover citations through 2002), and the total uncompressed size of these files is 40.8 gigabytes (GB).
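
Because the files are that large, any parser has to stream rather than build a whole document tree in memory. To give a rough idea of what that looks like, here is a minimal sketch (this is not the MedlineParser code; I am assuming the MedlineCitation, PMID, Article and ArticleTitle element names from the MEDLINE DTD, and the file name is only an example):

    #!/usr/bin/perl
    use strict;
    use warnings;
    use XML::Twig;

    # Minimal streaming sketch (not the MedlineParser code): print the PMID
    # and title of every citation, purging each parsed record so memory use
    # stays flat even on multi-gigabyte input files.
    my $twig = XML::Twig->new(
        twig_handlers => {
            MedlineCitation => sub {
                my ( $t, $cit ) = @_;
                my $pmid    = $cit->first_child_text('PMID');
                my $article = $cit->first_child('Article');
                my $title   = $article ? $article->first_child_text('ArticleTitle') : '';
                print "$pmid\t$title\n";
                $t->purge;    # discard everything parsed so far
            },
        },
    );
    $twig->parsefile('medline05n0001.xml');    # example file name only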

The MedlineParser program was run on a networked Sun Enterprise 3500 server with eight 400-MHz processors and 4 GB of RAM (for reading input files and writing intermediate output files) using Oracle 9i.

It took 196 hours (8 days and 4 hours) for the Perl MedlineParser to load MEDLINE. A similar implementation written in Java (tar.gz file), run on an Intel Linux system with IBM's DB2 database management system, loaded the database in 76 hours (3 days and 4 hours).

There were numerous differences between the two systems, and it was not possible to test each variable independently. It is believed that differences in processor speed, memory, disk read-write efficiency, and optimization methods employed in commercial database-management systems may have affected loading times.

The Perl code is less flexible and not as readily extensible as the object-oriented code of the Java software, but the functionality offered by the resulting database implementations is very similar.

The open-source code for this most current version of MedlineParser is available at http://biotext.berkeley.edu.

Source: most of the text for this node came from the following article: Diane E. Oliver, Gaurav Bhalotia, Ariel S. Schwartz, Russ B. Altman and Marti A. Hearst, "Tools for loading MEDLINE into a local relational database", BMC Bioinformatics 2004 (7 October 2004).

Update: I originally posted the source code here, but that pushed this node over its size limit; you can find the code to parsemedline.pl here.

Replies are listed 'Best First'.
Re: MedlineParser: to parse and load MEDLINE into a RDBMS
by graff (Chancellor) on Feb 28, 2005 at 03:13 UTC
    The "parsemedline.pl" code that you cited (at the biotext.berkeley.edu web site) could have been a lot shorter (with no loss of intelligibility or maintainability), if the code authors had made more thoughtful use of perl data structures (HoH, HoA, HoHoH, and so on), instead of declaring vast numbers of simple arrays with long names. (Personally, I think shorter code is easier to maintain; and declaring arrays to keep track of the names of hash keys is a lot easier than keeping lots of differently-named arrays.)

    As for run-time efficiency compared to a java implementation, I don't know how the java version handles RDBMS insertions, but the perl version cited here is obviously working at a serious disadvantage. The code is doing two things that I would normally call bad ideas:

    • It is doing a presumably large number of inserts via DBI, instead of using the native flat-file loader facility that comes with virtually every RDBMS. If you simply convert the XML data to a flat file and feed that to something like "mysqlimport" or Oracle's SQL*Loader (sqlldr), the database will be loaded in a small fraction of the time that DBI would take doing "insert into..." statements.
    • To make matters worse, for each insert statement, this code prepares a new statement with quoted values, executes it, and then calls "finish" on the statement. If it simply stored a set of prepared statements, using "?" placeholders for the values to be inserted, run-time would be noticeably faster (though still slow compared to an RDBMS native text-file-import tool) -- a sketch follows below.
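
    A minimal sketch of that second point (the DSN, credentials, table and column names are made up for the example, not taken from the real schema):

        #!/usr/bin/perl
        use strict;
        use warnings;
        use DBI;

        # Made-up DSN, credentials, table and columns -- just to show the
        # shape of "prepare once, execute many" with placeholders.
        my $dbh = DBI->connect( 'dbi:mysql:database=medline', 'user', 'secret',
                                { RaiseError => 1, AutoCommit => 0 } );

        my $sth = $dbh->prepare(
            'INSERT INTO citation (pmid, article_title) VALUES (?, ?)' );

        # dummy rows standing in for the parsed XML stream
        my @rows = ( [ 12345678, 'First example title' ],
                     [ 12345679, 'Second example title' ] );

        for my $row (@rows) {
            $sth->execute(@$row);    # no re-prepare, no manual quoting
        }

        $dbh->commit;
        $dbh->disconnect;

    For the bulk-load route, the same loop would instead print tab-delimited lines to a file that is then handed to mysqlimport or sqlldr.
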
    One other nit-pick: the commentary in the code is quite good as documentation, but it would be better as POD (and this would be so easy -- there's no good reason not to do so).
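
    For example, a block comment above a sub can become POD as simply as this (the sub name and behaviour are invented for the illustration; in the real file the POD directives must start in the first column -- the indentation here is only for display):

        =head2 clean_title

        Trims whitespace and strips a trailing period from an article title
        before it is written to the citation table.

        =cut

        sub clean_title {
            my ($title) = @_;
            $title =~ s/^\s+|\s+$//g;
            $title =~ s/\.$//;
            return $title;
        }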

    I counted over 3700 lines (excluding blanks and comments) in "parsemedline.pl"; I don't know whether I'd try to boil it down (not sure I want to pull in 40+GB of data from a field I know nothing about), but as a rough estimate, I'd guess this could be done, using appropriate data structures and loops, with well under 1000 lines. Hard to say what sort of speed differences would result, but if there is ever any "evolution" in the XML data format, a shorter version of the code would be a lot easier to fix, I think.

      To address your concern about efficient data import: both the java and perl programs have options to generate flat-file representations of the tables for native table loaders.

      Some other points about the BioText parsemedline.pl program:

      • as with the java program, it is unnecessarily database-specific (although only in a very minor way compared with the java code);
      • it was written without the strict or warnings pragmas, and as a result there are actual (minor) bugs in it caused by misspellings of the lengthy variable names (a small illustration follows after this list);
      • Medline keeps changing its DTDs, so capturing all the data in the 2005 release of Medline requires some tedious changes to the SQL table definitions and code.
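
      As a tiny illustration of the kind of bug strict catches at compile time (the variable name is invented, but it is about the length of the ones in the script):

          #!/usr/bin/perl
          use strict;
          use warnings;

          my $name_of_substance_qualifier = 'adverse effects';

          # A typo such as the commented-out line below (note the missing 'r')
          # is a silent bug without strict; with strict the script refuses to
          # compile: "Global symbol ... requires explicit package name".
          # print $name_of_substance_qualifie, "\n";
          print $name_of_substance_qualifier, "\n";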

      Question: I know this is not necessarily a good idea for performance reasons, both in loading and querying, but ... is it possible to automatically translate DTD descriptions into SQL DDL and corresponding code to parse the XML and load the data? (Ignoring the complication of data types).

        I know this is not necessarily a good idea for performance reasons, both in loading and querying, but ... is it possible to automatically translate DTD descriptions into SQL DDL and corresponding code to parse the XML and load the data?

        How do you know (or what makes you think) that parsing a DTD is "not ... a good idea for performance reasons ..."? I doubt that using this sort of facility would have any noticeable impact on run-time performance, and it could certainly be a major boost to programmer performance (and would be a good way to reduce code that is too bulky and ad hoc).

        There appear to be at least a couple modules on CPAN for converting DTD's into perl-internal objects or data structures: XML::DTDParser, XML::Smart::DTD. (I haven't used either of them myself, but a brief look at the docs makes me think the first one might be more suitable; I expect there are others.)

        As for converting a perl object or data structure into an SQL DDL (or going directly from DTD to DDL), I haven't searched CPAN for that (maybe you could try it), but it seems like it could be less of a cut-and-dried sort of task; there might be different ways of specifying a table, or designing a set of relational tables, based on a given DTD, depending on what the SQL users want to do with the data.

        (The same could be said for deriving different perl data structures from a DTD, but since people have already posted solutions for this on the CPAN, it might be worth trying what they've come up with.)
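
        To make the DDL half concrete, here is one minimal sketch of the kind of mapping I have in mind. The %elements hash is written by hand, standing in for whatever a DTD parser such as XML::DTDParser would hand back, and the flattening rules (every element becomes a table, character data becomes a text column, each child table carries a foreign key to its single parent) are only one of several defensible choices:

            #!/usr/bin/perl
            use strict;
            use warnings;

            # Hand-written stand-in for a parsed DTD: element name => child elements.
            my %elements = (
                MedlineCitation => [qw( PMID Article )],
                Article         => [qw( ArticleTitle Abstract )],
                PMID            => [],    # leaf elements carry character data only
                ArticleTitle    => [],
                Abstract        => [],
            );

            # Work out each element's parent so child tables can point back to it
            # (naively assumes each element appears under only one parent).
            my %parent_of;
            for my $elt ( keys %elements ) {
                $parent_of{$_} = $elt for @{ $elements{$elt} };
            }

            # Naive mapping: one table per element, with a synthetic id, a text
            # column for its character data, and a foreign-key column to its parent.
            for my $elt ( sort keys %elements ) {
                my @cols = ( "${elt}_id INTEGER PRIMARY KEY", "${elt}_text TEXT" );
                push @cols, "$parent_of{$elt}_id INTEGER"
                    if exists $parent_of{$elt};
                print "CREATE TABLE $elt (\n    ", join( ",\n    ", @cols ), "\n);\n\n";
            }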

Re: MedlineParser: to parse and load MEDLINE into a RDBMS
by kvale (Monsignor) on Feb 28, 2005 at 03:35 UTC
    Ah - someone else is interested in this sort of stuff, too!

    Here is a Perl program that queries the PubMed site and adds the results to a BibTeX database. It would be simple enough to add a DBI backend. I also found the source code above turgid; using better data structures, the following program I wrote several years ago weighs in at 427 lines. If I were rewriting it these days, I would probably add a Perl/Tk interface.
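
    For readers who just want the general shape of such a query, here is a minimal sketch (not the 427-line program): it fetches one citation by PMID through NCBI's E-utilities interface and prints a bare-bones BibTeX entry. The efetch URL and the element names (ArticleTitle, Journal/Title, PubDate/Year) are from my reading of that interface, so treat them as assumptions to check against the current documentation.

        #!/usr/bin/perl
        use strict;
        use warnings;
        use LWP::Simple qw(get);
        use XML::Twig;

        # Fetch one PubMed record as XML and print a minimal BibTeX entry.
        my $pmid = shift @ARGV or die "usage: $0 <pmid>\n";
        my $url  = 'http://eutils.ncbi.nlm.nih.gov/entrez/eutils/efetch.fcgi'
                 . "?db=pubmed&id=$pmid&retmode=xml";
        my $xml  = get($url) or die "fetch failed for $url\n";

        my ( $title, $journal, $year ) = ( '', '', '' );
        my $twig = XML::Twig->new(
            twig_handlers => {
                'ArticleTitle'  => sub { $title   = $_->text },
                'Journal/Title' => sub { $journal = $_->text },
                'PubDate/Year'  => sub { $year    = $_->text },
            },
        );
        $twig->parse($xml);

        print "\@article{pmid$pmid,\n",
              "  title   = {$title},\n",
              "  journal = {$journal},\n",
              "  year    = {$year},\n",
              "}\n";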

    -Mark

Re: MedlineParser: to parse and load MEDLINE into a RDBMS
by jZed (Prior) on Feb 28, 2005 at 01:47 UTC
    Very cool! Thanks.
