Ah, but what about native XML databases? Now there's hype for you, and you don't ever need to store data in files again!
Speaking of opaque data, take a look at the mzXML file format. It's a way to store several mass spectrometry runs in a single file, including the parameters of the mass spectrometer, any extra processing done on the data, and other sorts of metadata.
Mass spectrometry data is, in some sense, a sampled analog signal: it consists of floating-point pairs, the first number being the mass value (or rather the mass-to-charge ratio, but that distinction doesn't matter here) and the second the intensity at that mass value.
Now, the designers of the format had more clue than to simply stuff this into rather straightforward XML, one element per peak.
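Something along these lines, say (element and attribute names made up for illustration; a real scan would have hundreds of thousands of these):

```xml
<scan num="1">
  <peak mz="152.06" intensity="10000.0"/>
  <peak mz="153.07" intensity="1300.0"/>
  <!-- ...and so on, one element per sampled point... -->
</scan>
```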
(That would be a nightmare!) Instead, they specified that everything except the peak data is structured metadata in the normal XML style, forming a DOM tree, while the peak data itself is stored as a base64-encoded string of IEEE floating-point values in network byte order. So what you have in the file, in the case of raw data, is a few dozen kilobytes of metadata followed by 130 megabytes of binary junk that is completely opaque to any human being.
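Decoding that blob is mechanical enough, though. A minimal sketch, assuming 32-bit precision (mzXML also allows 64-bit) and the usual alternating m/z-intensity layout; the function name is mine, not part of any mzXML library:

```python
import base64
import struct

def decode_peaks(b64_text):
    """Decode a base64 peak list: big-endian (network byte order)
    32-bit IEEE floats, alternating m/z and intensity."""
    raw = base64.b64decode(b64_text)
    n = len(raw) // 4  # number of 32-bit floats
    values = struct.unpack(">%df" % n, raw)
    # Pair them up as (mz, intensity)
    return list(zip(values[0::2], values[1::2]))

# Round-trip demo with made-up peaks
peaks = [(152.06, 1.0e4), (153.07, 1.3e3)]
flat = [x for pair in peaks for x in pair]
encoded = base64.b64encode(
    struct.pack(">%df" % len(flat), *flat)).decode("ascii")
decoded = decode_peaks(encoded)
```

The point is that none of this benefits from being inside XML: the payload is parsed with `struct`, not with an XML parser.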
While it is, generally speaking, laudable that people try to create common file formats for storing mass spectra -- as usual, every mass spectrometer manufacturer has its own format -- they could have rolled their own file format without the burden of traversing DOM trees while parsing. I guess there were enough programmers in the bunch thinking mainly of the convenience of using standard XML parsing libraries...
In reply to Re^5: I dislike object-oriented programming in general