The XML feed can be quite large (up to ~150 GB).
As often as not with XML files that big, the feed consists of one top-level tag that contains a raft of much smaller substructures, identical except for an ID:
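For instance, a feed of that shape might look like this (the element names are purely illustrative):

```xml
<feed>
  <record id="00000001"> ... </record>
  <record id="00000002"> ... </record>
  <!-- ...millions more records... -->
  <record id="99999999"> ... </record>
</feed>
```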
It therefore becomes quite simple to do a preliminary pass over the data stream and break the huge dataset down into manageable chunks for processing:
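A minimal sketch of such a preliminary split, assuming the repeated element is named `record` (a made-up name for illustration), that records do not nest, and that the tag name never appears inside CDATA or comments:

```python
import io

def split_records(stream, tag="record"):
    """Yield each <record>...</record> chunk from a character stream
    without loading or parsing the whole document.

    Assumptions: the repeated element name is known in advance,
    records do not nest, and the name does not occur in CDATA/comments.
    """
    open_tag = "<" + tag
    close_tag = "</" + tag + ">"
    buf = ""
    # Read fixed-size blocks so memory use stays bounded,
    # regardless of how big the feed is.
    for block in iter(lambda: stream.read(64 * 1024), ""):
        buf += block
        while True:
            start = buf.find(open_tag)
            if start == -1:
                # Keep a small tail in case an open tag straddles blocks.
                buf = buf[-len(open_tag):]
                break
            end = buf.find(close_tag, start)
            if end == -1:
                # Record incomplete in this block; wait for more data.
                buf = buf[start:]
                break
            yield buf[start:end + len(close_tag)]
            buf = buf[end + len(close_tag):]

# Usage: in practice the stream would be the multi-GB feed opened with open().
feed = io.StringIO(
    "<feed><record id='1'>a</record><record id='2'>b</record></feed>")
for chunk in split_records(feed):
    print(chunk)
```

Each yielded chunk is a small, self-contained document that can then be handed to a real XML parser, or to a pool of worker processes, one record at a time.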
Of course, this 'breaks the rules' of XML processing, and requires some prior knowledge of the details of the XML you will be processing. But the kinds of details required are usually: a) easily discovered; b) rarely changed; and c) easily catered for when and if they do change.
So if you favour the pragmatism of getting the job done over more esoteric -- and revenue-sink -- criteria such as 'being correct', bending the rules a little can save you a lot of time, effort and expense.