
parsing XML fragments (xml log files) with XML::Parser

by kgoess (Beadle)
on Mar 17, 2011 at 23:47 UTC ( #893874=perlquestion )

kgoess has asked for the wisdom of the Perl Monks concerning the following question:

I was looking at logging some complex data, and thought that XML would be a good candidate, and wanted to look at an event-based parser in case I'm dealing with a lot of data--so I don't have to hold a giant DOM in memory.
<logentry>...more xml...</logentry>
<logentry>...more xml...</logentry>
<logentry>...more xml...</logentry>
But XML::Parser doesn't seem to support XML fragments, it throws "junk after document element" when it gets to the second <logentry> because it thinks the first <logentry> is the root node of the XML document. I'm not finding an option in there to tell XML::Parser not to bother validating that. I'm looking for something in the spirit of this post, anybody here have any ideas?

Replies are listed 'Best First'.
Re: parsing XML fragments (xml log files) with XML::Parser
by Your Mother (Bishop) on Mar 18, 2011 at 03:03 UTC

      Well, since you bring up your "first reaction"... My first reaction would be to roll my own XML parser in about 30 minutes using some simple regexes (combined into a single, easily understood regex the last time I did this). That takes less time than finding a decent XML module that can parse partial XML, much less also getting it installed, much much less figuring out how to use it.

      Adjusting the small block of code to suit your needs and situation becomes trivial compared to getting something as complex (and rigid) as an XML parsing module to bend so. For this occasion I had no use for empty tags, so the code ignores them. Fill in what you want to do with them, if anything.

      Naturally, I had no use for the nearly completely useless feature of CDATA so I didn't even worry about the regex to parse that junk. If you need it (the OP doesn't appear to), adding that feature is 5, maybe 10 minutes' work.

      Actually, I started out trying to use some XML module that had gotten decent reviews somewhere. I had it all working on the sample data and then when I finished the "download the data" part, the XML part suddenly just stopped working. It told me that there was no 'foo' tag despite '<foo ...' being clearly there and that being recognized as a 'foo' tag previously. Eventually I figured out that XML namespaces were to blame and after too much time trying to even find any documentation on such things in relation to the module, I decided to write a regex so I could have something working that day.

      Took less time to write the regex and get it working than it had taken me to get the module working on the test data. And the resulting code is just tons easier to make adjustments to.

      sub ParseXmlString {
          my( $str )= @_;
          my $name=   '(?:\w+:)?\w+';
          my $value=  q< (?: '[^']+' | "[^"]+" ) >;
          my $s=      '\s';
          my $attrib= "$name $s* = $s* $value";
          my $decl=   "< $s* [?] $s* $name (?: $s+ $attrib )* $s* [?] $s* >";
          my $tag=    "< $s* (/?) $s* ($name) (?: $s+ $attrib )* $s* (/?) $s* >";
          my $data=   '(?: [^<>&]+ | &\#?\w+; )+';
          my $hv= {};
          my @stack;
          while( $str =~ m{ \G(?:
                  ( $decl )   # $1 <?xml ...?>
              |   ( $data )   # $2 encoded text
              |   ( $tag )    # $3 <...>, $4 '/' or '', $5 tag name, $6 '/' or ''
              |   ( . )       # $7 we failed
              ) }xgc
          ) {
              if( $1 ) {
                  $hv->{'.header'}= $1;
              } elsif( defined $2 ) {
                  my $text= $2;
                  if( $text =~ /\S/ ) {
                      s-&lt;-<-g, s-&quot;-"-g, s-&gt;->-g,
                          s-&apos;-'-g, s-&amp;-&-g
                          for $text;
                      push @{ $hv->{'.data'} }, $text;
                  }
              } elsif( $4 ) {
                  $hv= pop @stack;
              } elsif( $6 ) {
                  # We currently just ignore empty tags
              } elsif( $3 ) {
                  my $new= {};
                  push @{ $hv->{$5} }, $new;
                  push @stack, $hv;
                  $hv= $new;
              } elsif( defined $7 ) {
                  my $beg= pos($str);
                  my $len= 20;
                  $beg -= $len/2;
                  if( $beg < 0 ) {
                      $len += $beg;
                      $beg= 0;
                  }
                  die "XML failed to parse byte ", pos($str), " ($7), near '",
                      substr( $str, $beg, $len ), "'.\n";
              } else {
                  die "Impossible!";
              }
          }
          if( @stack ) {
              die "Unclosed XML tags";
          }
          return $hv;
      }

      - tye        

        It told me that there was no 'foo' tag despite being clearly there and that being recognized as a 'foo' tag previously. Eventually I figured out that XML namespaces were to blame

        I find it very unfortunate that XPath requires you to specify the namespace. I wish libxml had an option to configure what it meant to have no prefix in an XPath node test:

        • Missing prefix = Match the null namespace. (Standard)
        • Missing prefix = Match some previously defined default namespace.
        • Missing prefix = Match any namespace.
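        For readers hitting the same wall: the standard workaround in XML::LibXML is to register a prefix of your own on an XPathContext and use it in the query, even when the document declares that namespace as its default. A minimal sketch (the namespace URI and the prefix `l` are invented here):

```perl
use strict;
use warnings;
use XML::LibXML;

# Document whose default namespace would make a bare //entry match nothing.
my $doc = XML::LibXML->load_xml( string =>
    '<log xmlns="urn:example:log"><entry>hi</entry></log>' );

my $xpc = XML::LibXML::XPathContext->new($doc);
$xpc->registerNs( l => 'urn:example:log' );    # 'l' is our own invented prefix

# A plain //entry matches the null namespace, so the registered
# prefix is required to find the element:
my @nodes = $xpc->findnodes('//l:entry');
print scalar(@nodes), "\n";
```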

        For those interested, it can't handle

        • Numerical entities (decimal and hex).*
        • External entities (e.g. HTML's &eacute;).*
        • Character decoding.**
        • UTF-16, UTF-32, UCS-2, UCS-4.**
        • CDATA.
        • Namespace prefixes. (They're included as part of the name.)***
        • Comments.
        • Identification of an element's namespace.***
        • XML validation (i.e. it allows some malformed XML).
        • (more? this wasn't a thorough analysis)

        Up to you to decide if it fits your needs or not.

        * — A post-processor could fix this if no entities were processed at all.

        ** — A pre-processor such as the following would fix this:

        sub _predecode {
            my $enc;
            if    ( $_[0] =~ /^\xEF\xBB\xBF/ ) { $enc = 'UTF-8'; }
            elsif ( $_[0] =~ /^\xFF\xFE/ )     { $enc = 'UTF-16le'; }
            elsif ( $_[0] =~ /^\xFE\xFF/ )     { $enc = 'UTF-16be'; }
            elsif ( substr($_[0], 0, 100) =~ /^[^>]* encoding="([^"]+)"/ ) {
                $enc = $1;
            }
            else                               { $enc = 'UTF-8'; }
            return decode($enc, $_[0], Encode::FB_CROAK | Encode::LEAVE_SRC);
        }

        *** — A post-processor could fix this, but one wasn't supplied.

        Update: Added pre-processor I had previously coded.

        my $data= '(?: [^<>&]+ | &\#?\w+; )+';
        should be
        my $data= '(?: [^<&]+ | &\#?\w+; )+';

        XML allows for unescaped ">"

        Tye, thanks for sharing, but that approach is wrong on so many levels all I can say is: Good luck with that!
      Thanks, parse_balanced_chunk was what I was looking for. But since I already wrote the stream parser handlers, I'm going to go with the suggestion of using a top-level parser to wrap it in a root element.
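      For anyone landing here later: parse_balanced_chunk is XML::LibXML's method for exactly this case. A balanced chunk may contain several sibling elements, so the single-root well-formedness check never fires. A minimal sketch (the sample entries are invented):

```perl
use strict;
use warnings;
use XML::LibXML;

# Two sibling elements with no root -- exactly what XML::Parser rejects.
my $chunk = <<'XML';
<logentry>one</logentry>
<logentry>two</logentry>
XML

# parse_balanced_chunk returns an XML::LibXML::DocumentFragment.
my $frag = XML::LibXML->new->parse_balanced_chunk($chunk);

# The fragment's children include whitespace text nodes; keep only
# elements (nodeType 1 == XML_ELEMENT_NODE).
my @entries = grep { $_->nodeType == 1 } $frag->childNodes;
print $_->textContent, "\n" for @entries;
```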
Re: parsing XML fragments (xml log files) with XML::Parser
by wind (Priest) on Mar 18, 2011 at 00:07 UTC
    Could just use XML::Simple and add a root node as needed
    use strict;
    use XML::Simple;
    use Data::Dumper;

    my $data = do { local $/; <DATA> };
    my $ref  = XMLin("<root>$data</root>");
    print Dumper($ref);

    __DATA__
    <logentry>...more xml 1...</logentry>
    <logentry>...more xml 2...</logentry>
    <logentry>...more xml 3...</logentry>
    Then again, that's not an event-based parser. Will consider other possible solutions...
Re: parsing XML fragments (xml log files) with XML::Parser
by mirod (Canon) on Mar 18, 2011 at 23:31 UTC

    Hmm... I think I saw something similar in the XML::Twig FAQ: Q22: I need to process XML documents. The problem is that there are several of them, so the parser dies after the first one, with a message telling me that there is junk after the end of the document. Is there any way I could trick the parser into believing they are all part of a single document?

    And of course XML::Twig will let you process the document one log entry at a time, without ever needing to have more than one in memory.

    And the XML brigade (which I am a proud member of) won't yell at you for parsing XML with regexp ;--)

    There are pure-XML ways to fake a single document, for example by creating an entity that points to the log file and including it in a fake XML document, but I am not sure it's simpler than what the FAQ suggests (pass an open tag first to the parser, then the log file, then a close tag).
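    A rough sketch of that FAQ trick using XML::Parser's non-blocking interface, where parse_start returns an object you can feed piecemeal (the <fakeroot> wrapper name and the sample entries are invented here):

```perl
use strict;
use warnings;
use XML::Parser;

my @seen;    # record every start tag the parser reports
my $parser = XML::Parser->new(
    Handlers => {
        Start => sub { my ( $expat, $tag ) = @_; push @seen, $tag; },
    },
);

# parse_start returns an XML::Parser::ExpatNB; feed it the fake open
# tag, then the fragments (in real use, the log file line by line),
# then the fake close tag.
my $nb = $parser->parse_start;
$nb->parse_more($_) for
    '<fakeroot>',
    '<logentry>one</logentry>',
    '<logentry>two</logentry>',
    '</fakeroot>';
$nb->parse_done;

print "start: $_\n" for @seen;
```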

Re: parsing XML fragments (xml log files) with XML::Parser
by GrandFather (Sage) on Mar 18, 2011 at 00:15 UTC

    If you can guarantee that there are no logentry elements nested in logentry elements then you can use a simple top level parser to pull logentry elements out as XML documents which XML::Parser would be happy with. The overall document wouldn't be a conformant XML document, but each logentry could be.
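    A minimal sketch of that idea, assuming logentry elements never nest (sample data invented): a regex pulls each entry out, and each one is then a valid standalone document for XML::Parser.

```perl
use strict;
use warnings;
use XML::Parser;

my $log = <<'XML';
<logentry n="1">first</logentry>
<logentry n="2">second</logentry>
XML

my $parser = XML::Parser->new( Style => 'Tree' );

my @roots;
# Non-greedy match is safe only because entries are guaranteed not
# to nest; each captured chunk is handed to XML::Parser whole.
while ( $log =~ m{(<logentry\b.*?</logentry>)}gs ) {
    my $tree = $parser->parse($1);    # Tree style: [ tagname, content ]
    push @roots, $tree->[0];
}
print "parsed root: $_\n" for @roots;
```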

    True laziness is hard work
Re: parsing XML fragments (xml log files) with XML::Parser
by ikegami (Pope) on Mar 18, 2011 at 17:47 UTC
    XML::Twig is designed to do exactly this kind of work.
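    For reference, a hedged sketch of how that might look: a twig_handler fires on each logentry, and purge discards what has been parsed so far, so memory stays flat no matter how long the log is (sample data invented, using the wrap-in-a-root trick from the FAQ above):

```perl
use strict;
use warnings;
use XML::Twig;

my @texts;
my $twig = XML::Twig->new(
    twig_handlers => {
        logentry => sub {
            my ( $t, $entry ) = @_;
            push @texts, $entry->text;    # process one entry at a time
            $t->purge;                    # free everything parsed so far
        },
    },
);

$twig->parse("<root><logentry>one</logentry><logentry>two</logentry></root>");
print "$_\n" for @texts;
```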

Node Type: perlquestion [id://893874]
Approved by GrandFather