
Incremental XML parsing

by Anonymous Monk
on Feb 04, 2012 at 08:22 UTC ( #951798=perlquestion )

Anonymous Monk has asked for the wisdom of the Perl Monks concerning the following question:

I need to parse many large XML documents, so I want to use an incremental parser to conserve memory. The only incremental parser I could find was XML::SAX::Expat::Incremental, but it's too slow. Is it possible to use XML::LibXML::Reader to do this? Or is there a better alternative?

Replies are listed 'Best First'.
Re: Incremental XML parsing
by Corion (Patriarch) on Feb 04, 2012 at 08:30 UTC

    XML::LibXML is a DOM parser and hence wants to read the whole document into memory before doing anything. XML::Parser is a SAX parser and can give you callbacks instead. You might want to look at XML::Twig, which attempts to give you the best of both worlds, giving you subtrees as soon as they become available.

      You are right, but XML::LibXML::Reader is a pull parser, which means it does not load the entire file into memory. However, it can return the DOM object of any encountered node on request, which makes it more convenient than traditional SAX parsers.
      In the XML::Twig docs, it says:
      WARNING: this option is NOT used when parsing with the non-blocking parser (parse_start, parse_more, parse_done methods) which you probably should not use with XML::Twig anyway as they are totally untested!

        XML::Twig can (and is actually designed to) parse big documents. It doesn't do it by using an incremental parser, but by building a (non-DOM) tree for each element it parses. It lets you call handlers on elements, using the twig_handlers or twig_roots options. These handlers are called as soon as the element is completely parsed. In the handler you can process the element, then release the memory it used by calling purge or flush (which also outputs the tree so far). This doesn't work when you need the entire tree available for processing, but in practice it works in most cases, where you can treat the overall XML as a collection of independent elements. See the section Processing an XML document chunk by chunk in the docs.
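A minimal sketch of the handler-plus-purge pattern described above. The file name big.xml and the element name record are assumptions for illustration; substitute the repeated element from your own documents:

```perl
use strict;
use warnings;
use XML::Twig;

# Fire a handler each time a <record> element is fully parsed,
# then purge to free the memory used so far.
my $twig = XML::Twig->new(
    twig_handlers => {
        record => sub {
            my ( $t, $elt ) = @_;
            # Process the element while it is a complete subtree...
            print $elt->first_child_text('name'), "\n";
            # ...then discard everything parsed up to this point.
            $t->purge;
        },
    },
);
$twig->parsefile('big.xml');
```

Because purge is called inside the handler, memory use stays proportional to one record rather than to the whole document.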

        Using XML::Parser incremental parsing methods probably works in XML::Twig, I just never tested it, because that's not what XML::Twig needs. It does just fine with the regular interface, calling handlers during parsing.

        XML::LibXML offers a different interface for parsing XML incrementally; look into XML::LibXML::Reader.
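For comparison, a sketch of the pull-parser approach with XML::LibXML::Reader: the reader streams through the file and only the subtree you explicitly ask for is inflated into a DOM node. Again, big.xml and record are hypothetical names:

```perl
use strict;
use warnings;
use XML::LibXML::Reader;

my $reader = XML::LibXML::Reader->new( location => 'big.xml' )
    or die "cannot open big.xml\n";

while ( $reader->read ) {
    # Only react to the opening of the elements we care about.
    next
        unless $reader->nodeType == XML_READER_TYPE_ELEMENT
        and $reader->name eq 'record';

    # Inflate just this element (and its children) into a DOM node.
    my $node = $reader->copyCurrentNode(1);    # 1 = deep copy
    print $node->toString, "\n";

    $reader->next;    # skip past the subtree we just copied
}
```

This keeps the streaming memory profile of SAX while still letting you use the familiar DOM methods on each extracted node, which is the convenience mentioned earlier in the thread.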

        That warning is from the documentation of "keep_encoding". I don't understand how it applies to the problem of parsing a file without creating the complete DOM in memory. Can you please explain how it applies?
Re: Incremental XML parsing
by gray (Beadle) on Feb 04, 2012 at 12:28 UTC

Node Type: perlquestion [id://951798]
Approved by toolic
Front-paged by tye