PerlMonks |
Re^5: How do I create an array of hashes from an input text file? by Kc12349 (Monk)
on Nov 16, 2011 at 17:46 UTC ( [id://938428] )
In terms of gaining speed, the cost comes from having to parse the entire XML document, and I'm not sure you can avoid that. Given that parsing has to happen either way, you can trade speed against memory after that point, but the initial parsing investment remains. This is where something like XML::Bare would help you out, as in my experience it parses at least an order of magnitude faster than XML::Simple.

I have gone down the road of trying to find clever solutions to process very large XML files quickly, but ultimately I have generally settled on a file-management solution instead. Simply breaking your XML documents into smaller logical pieces will give you more speed than a strictly parsing-side approach. That may mean something as simple as keeping books whose titles begin with a certain letter in their own files. I did this in a case where I needed to process every record anyway and just wanted raw parsing speed, so it is not really applicable to your case if you want to be able to filter by any field.

My real advice would be to look at a database solution instead. That is really the only way to return matching records with reliable speed. If you are more comfortable with text files, at least to start, you can look at DBD::CSV as a way to get your foot in the DBI door.

If you go with XML::Bare, be aware that in my experience the ForceArray parameter does not produce the expected results. I created my own workaround that post-processes the data structure into what XML::Simple with ForceArray would produce. I can pass it along to you if you go this route.
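To make the XML::Bare comparison concrete, here is a minimal sketch of its interface. The `<book>`/`<title>` element names and the sample document are invented for illustration; note that XML::Bare stores an element's text content under a `value` key rather than mapping it directly, the way XML::Simple does. (XML::Bare must be installed from CPAN, so this will not run on a stock Perl.)

```perl
use strict;
use warnings;
use XML::Bare;   # CPAN module; markedly faster than XML::Simple

# Hypothetical input document; the element names are made up.
my $xml = '<catalog><book><title>Dune</title></book></catalog>';

my $ob   = XML::Bare->new( text => $xml );
my $root = $ob->parse();

# Text content lives under the 'value' key of each node hash.
print $root->{catalog}{book}{title}{value}, "\n";
```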
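The file-management idea above can be sketched with nothing but core Perl: scan the document once and bucket each record by the first letter of its title, so each bucket can later be written out as its own smaller file. The `<book>`/`<title>` element names, the one-record-per-match regex, and the sample data are all assumptions for illustration; a regex is not a general XML parser, so this only works on records you know are flat and well-formed.

```perl
use strict;
use warnings;

# Group <book> records by the first letter of their <title>.
# Element names here are hypothetical.
sub bucket_books_by_letter {
    my ($xml) = @_;
    my %bucket;
    while ( $xml =~ m{(<book>.*?</book>)}sg ) {
        my $record = $1;
        my ($title) = $record =~ m{<title>(.*?)</title>}s;
        next unless defined $title;
        my $letter = uc substr $title, 0, 1;
        push @{ $bucket{$letter} }, $record;
    }
    return \%bucket;
}

# Invented sample data: three books, two starting with 'A'.
my $xml = join '', map { "<book><title>$_</title></book>" }
                   qw(Anathem Accelerando Blindsight);

my $by_letter = bucket_books_by_letter($xml);
printf "%s: %d\n", $_, scalar @{ $by_letter->{$_} }
    for sort keys %$by_letter;
```

Each value in the returned hash is an array of raw record strings, ready to be wrapped in a root element and written to a per-letter file.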
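As a starting point for the DBD::CSV suggestion, here is a minimal sketch. The `books` table and its columns are invented for this example; each table is simply a CSV file in the directory named by `f_dir`, and once the data is there, filtering by any field is an ordinary WHERE clause. (DBD::CSV must be installed from CPAN.)

```perl
use strict;
use warnings;
use DBI;

# Tables are plain CSV files in f_dir; 'books' is a hypothetical table.
my $dbh = DBI->connect( 'dbi:CSV:', undef, undef,
                        { f_dir => '.', RaiseError => 1 } );

$dbh->do('CREATE TABLE books (title CHAR(64), author CHAR(64))');
$dbh->do( 'INSERT INTO books VALUES (?, ?)', undef, 'Dune', 'Herbert' );

# Filter by any field with ordinary SQL.
my $rows = $dbh->selectall_arrayref(
    'SELECT title FROM books WHERE author = ?', undef, 'Herbert' );
print $rows->[0][0], "\n";

$dbh->do('DROP TABLE books');
```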
In Section: Seekers of Perl Wisdom