Philosophy: Got the Data, Now what?
by raybies (Chaplain) on Sep 30, 2011 at 14:16 UTC

raybies has asked for the wisdom of the Perl Monks concerning the following question:
This might be a meditation of sorts, dunno... but I'm kinda stalled.
Essentially I have a large table of data that I was able to extract from a large C source codebase. Each row represents a sourcefile/line# where there's a particular function call.
Further, each row (function call instance) must be painstakingly checked by users, who manually examine the codebase, classify the entries, and insert comments.
One of the end products of the data will be a comprehensive report on the system as a whole.
To make matters more complicated, the codebase may change over time, so some of the automatically gathered data may need to change, while the user-entered records associated with those entries should stay the same and remain tied to them.
Currently, with a couple of scripts, I can gather the information automatically into a single hash table, keyed by sourcefile:line#. (Of course the line# may change in the future, and entries may be deleted and added over time too...)
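For concreteness, the hash described above might look something like this; the field names (func, status, note) are hypothetical, just to illustrate the shape:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Sketch of a hash keyed by "sourcefile:line#", one entry per call site.
# Field names here are made up for illustration.
my %calls = (
    'src/foo.c:120' => { func => 'do_io', status => 'unreviewed', note => '' },
    'src/bar.c:57'  => { func => 'do_io', status => 'reviewed',   note => 'safe' },
);

# A key can be split back into its file and line# parts:
for my $key (sort keys %calls) {
    my ($file, $line) = $key =~ /^(.*):(\d+)$/;
    printf "%s line %d: %s (%s)\n", $file, $line,
        $calls{$key}{func}, $calls{$key}{status};
}
```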
I'm thinking one approach might be to keep the latest "snapshot" of the code, periodically regenerate the data fresh from the codebase, and then write a compare script to look for differences against the latest snapshot.
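The compare step could be sketched roughly like this, assuming both snapshots are hashes keyed by sourcefile:line# (the entries here are made up):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Given old and new snapshots (hashes keyed by "sourcefile:line#"),
# report which keys were added, removed, or kept unchanged.
sub diff_snapshots {
    my ($old, $new) = @_;
    my (@added, @removed, @kept);
    for my $k (keys %$new) {
        exists $old->{$k} ? push @kept, $k : push @added, $k;
    }
    for my $k (keys %$old) {
        push @removed, $k unless exists $new->{$k};
    }
    return { added => \@added, removed => \@removed, kept => \@kept };
}

my %old = ( 'src/foo.c:120' => 1, 'src/bar.c:57' => 1 );
my %new = ( 'src/foo.c:121' => 1, 'src/bar.c:57' => 1 );
my $d = diff_snapshots(\%old, \%new);
print "added: @{$d->{added}}\nremoved: @{$d->{removed}}\n";
```

One caveat: because line numbers shift when code is edited, a call that merely moved shows up here as one removal plus one addition; matching on file and function name (ignoring line#) would be one way to recognize moves and carry the user annotations across.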
I could just dump the hash (with Storable) to save the actual data, but that's not something multiple users can manipulate in, say, a text tool to attach notes, change field values, etc.
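One plain-text alternative to a binary Storable dump would be writing the hash out as tab-separated rows, which users can edit in any text tool and which a script can re-read later. (The filename and field names below are made up for illustration.)

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Write one tab-separated row per call site, with a header line,
# so users can annotate the file in an ordinary editor.
my %calls = (
    'src/foo.c:120' => { func => 'do_io', status => 'unreviewed', note => '' },
    'src/bar.c:57'  => { func => 'do_io', status => 'reviewed',   note => 'safe' },
);

open my $out, '>', 'calls.tsv' or die "calls.tsv: $!";
print {$out} join("\t", qw(key func status note)), "\n";
for my $key (sort keys %calls) {
    my $r = $calls{$key};
    print {$out} join("\t", $key, @{$r}{qw(func status note)}), "\n";
}
close $out or die "close: $!";
```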
My skills with SQL are pretty limited. (I've never been formally trained in DB/information management, so I consider myself the perpetual noob.) I don't have a whole lot of experience with it, and I'm in a time-crunch (of course), so I'm struggling with jumping into a tool so huge that I drown in something like properly "formalizing my schemas"... or some such detail that I just don't know about yet. Then again, the short answer might just be "Suck it up!" and "Get over it."
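For what it's worth, the schema question might be smaller than it looks. A minimal sketch could be just two tables: one for the automatically gathered call sites (regenerated from the codebase) and one for the user annotations keyed to them, so rebuilding the first table doesn't clobber the second. Table and column names here are my own invention:

```sql
-- Automatically gathered facts; safe to regenerate from the codebase.
CREATE TABLE call_site (
    id   INTEGER PRIMARY KEY,
    file TEXT    NOT NULL,
    line INTEGER NOT NULL,
    func TEXT    NOT NULL
);

-- Hand-entered data; survives regeneration of call_site rows
-- as long as the site_id linkage is preserved.
CREATE TABLE annotation (
    site_id INTEGER NOT NULL REFERENCES call_site(id),
    status  TEXT,
    note    TEXT
);
```

With something like DBD::SQLite this stays a single file on disk, which keeps it close in spirit to the Storable dump while letting several users query and update it.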
I'm curious at a higher level how one goes about managing data like this once they have it.