PerlMonks
Re: dynamic hash nesting
by sundialsvc4 (Abbot)
on Aug 17, 2012 at 12:14 UTC ( [id://987956] )
One thing that comes to mind right now is that you probably need to vet that data before you build algorithms that (necessarily ...) rely upon your assumptions about it. Specifically, do you know that the beginning of each and every line consists of 0 .. n ASCII "Tab" characters? No spaces? Do you know that the number of tabs in any line is no more than 1 greater than the number of tabs on the line preceding it? Do you know that the first line contains no preceding tabs? I suggest that the first part of your processing sequence should be a script whose sole purpose is to validate these assumptions about the data format.

Garbage in = Garbage out, and only the computer can tell you if it's garbage. If the data is not proved to be clean, then do not process it. If the data is proved, then you can write algorithms based on any assumption that you have proved ... and yet, those subsequent algorithms should also be suspicious. (Maybe the data wasn't vetted this time, and an error has crept into the data from somewhere upstream ... only the computer is in a position to sound the alarm.)

Regular expressions can easily handle these tests. Each and every line must consist, anchored at the beginning of the line, of zero or more tabs followed by a non-blank character. If you capture the tabs in a group, the length of that captured string is of course the number of tabs.

In the future, I think that you definitely need to move this data to an alternate format, either XML or JSON. Both are well-understood formats for representing hierarchical data, and XML in particular has formally defined validation engines (schemas) that do not require the writing of any source code.
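The checks described above can be sketched as a small, standalone Perl validation pass. This is only a sketch under the stated assumptions (tabs only, depth grows by at most one, first line at depth zero); the sample data in __DATA__ and the error messages are illustrative, not from the original post:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Validate a tab-indented outline before any real processing happens.
my $prev_depth = -1;    # forces the first line to be at depth 0

while ( my $line = <DATA> ) {
    chomp $line;

    # Anchored at beginning of line: zero or more tabs, then a
    # non-blank character.  A leading space fails this test.
    unless ( $line =~ /^(\t*)\S/ ) {
        die "Line $.: does not match /^\\t*\\S/ -- garbage in\n";
    }

    # The length of the captured tab run is the nesting depth.
    my $depth = length $1;
    if ( $depth > $prev_depth + 1 ) {
        die "Line $.: depth $depth jumps past depth $prev_depth\n";
    }
    $prev_depth = $depth;
}
print "Data validated OK\n";

__DATA__
root
	child
		grandchild
	sibling
```

Run against a real file, you would read from a filehandle instead of __DATA__; the point is that this script does nothing but prove (or disprove) the format assumptions.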
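And once the data is proved clean, the same depth count can drive a conversion to JSON. This is a sketch of one stack-based approach using the core JSON::PP module; the node layout (name / children keys) is my own assumption about how the hierarchy might be represented, not anything from the original post:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use JSON::PP;

# Build a nested structure from a (pre-validated) tab-indented outline.
my %root  = ( name => 'ROOT', children => [] );
my @stack = ( \%root );            # $stack[$depth] = parent at that depth

while ( my $line = <DATA> ) {
    chomp $line;
    my ($tabs, $label) = $line =~ /^(\t*)(.+)$/;
    my $depth = length $tabs;

    my $node = { name => $label, children => [] };
    push @{ $stack[$depth]{children} }, $node;
    $stack[ $depth + 1 ] = $node;  # this node parents any depth+1 lines
}

print JSON::PP->new->pretty->canonical->encode( \%root );

__DATA__
root
	child
		grandchild
	sibling
```

Because the validator has already proved that depth never jumps by more than one, the `$stack[$depth]` lookup is always defined here; that is exactly the kind of assumption you may rely on only after it has been proved.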
In Section: Seekers of Perl Wisdom