in reply to Re^8: Incremental XML parsing in thread Incremental XML parsing
Too large in what way? I don't really see what you gain by parsing just part of the XML at a time, unless you only need something near the start of the XML - but in that case you could just as easily keep checking for the end tag of the section you want, parse out just that part, and feed it to your regular XML parser. Similarly, if your document is a long series of records, you could parse out each record as it arrives and feed it to your regular XML parser. There's no need to go looking for an incremental solution, imho.
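The record-splitting approach above can be sketched roughly as follows. This is only an illustration under simplifying assumptions: the document is a flat series of `<record>` elements, the literal string `</record>` never appears inside text or CDATA (a robust splitter would need to be more careful), and `read_next_chunk` and `process_record` are hypothetical stand-ins for your data source and per-record logic.

```perl
use strict;
use warnings;
use XML::LibXML;    # any "regular" XML parser would do here

my $parser = XML::LibXML->new;
my $buffer = '';
while ( defined( my $chunk = read_next_chunk() ) ) {   # hypothetical source
    $buffer .= $chunk;
    # Peel complete <record>...</record> elements off the front of the
    # buffer and hand each one to the regular parser.
    while ( $buffer =~ s{\A.*?(<record\b.*?</record>)}{}s ) {
        my $doc = $parser->parse_string($1);
        process_record( $doc->documentElement );       # hypothetical handler
    }
}
```

Memory use is bounded by the size of one record plus the unparsed remainder of the current chunk, which is the whole point of processing record by record.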
Re^10: Incremental XML parsing
by Anonymous Monk on Feb 05, 2012 at 11:37 UTC
Really, you need a definition of large? My example was a file that couldn't fit on the filesystem. Does that need clarification? And yet another person who didn't read anything else in the thread. I mentioned an existing incremental parser in my OP: XML::SAX::Expat::Incremental, but my problem was that it's too slow. Your proposal would be even slower.
So it doesn't fit on the filesystem, fine, but where does it come from then? If it comes from a socket or a similar source, then just point XML::Rules or XML::Twig at the socket: the data will be parsed as it arrives, and the defined handlers will get called as soon as a defined logical unit (read: a tag with its children) is complete. What you keep in memory or on disk after processing that logical unit is up to you.
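A minimal sketch of the XML::Twig variant of this, assuming the repeating element is named `record`, `$socket` is an already-open handle to the data source, and `process_record` is a hypothetical per-record handler:

```perl
use strict;
use warnings;
use XML::Twig;

my $twig = XML::Twig->new(
    twig_handlers => {
        record => sub {
            my ( $t, $record ) = @_;
            process_record($record);   # hypothetical per-record logic
            $t->purge;                 # discard what's been handled,
                                       # keeping memory use bounded
        },
    },
);
$twig->parse($socket);   # handlers fire as each <record> completes
```

The handler runs as soon as each `record` element has been fully parsed, so you get incremental behaviour from an ordinary parser without anything "incremental" in its name.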
Stop looking for something containing the word "incremental" in its name!
Jenda
Enoch was right!
Enjoy the last years of Rome.