Can you tell us a little bit more about the process? From what I'm reading here, it sounds like you'll have different versions of files of the same name?
in reply to Tracking data versions throughout an analysis chain
If that's so, the first thing that comes to my mind is putting all the data files in a directory managed with a version control system, and then using Jenkins to run your process. Jenkins can fingerprint (MD5-checksum) the output from a run, allowing you to upload an output to Jenkins and have it tell you exactly which run created it - you can then refer to the log for that run to see what happened and which version of the files was used as input.
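The fingerprinting idea can be sketched outside Jenkins too. A minimal example (the run IDs, revisions, and file contents are all hypothetical) that records an MD5 checksum per run and later matches an output file back to the run that produced it:

```python
import hashlib

def md5sum(data: bytes) -> str:
    """MD5 checksum of a file's contents, as Jenkins fingerprints do."""
    return hashlib.md5(data).hexdigest()

# Hypothetical run index: checksum -> (run id, input VCS revision)
run_index = {}

def record_run(run_id: str, revision: str, output: bytes) -> None:
    """After a run, store which run and which input revision made this output."""
    run_index[md5sum(output)] = (run_id, revision)

def find_run(output: bytes):
    """Given an output file's contents, return the run that produced it."""
    return run_index.get(md5sum(output))

record_run("build-42", "r1337", b"analysis results")
print(find_run(b"analysis results"))  # -> ('build-42', 'r1337')
```

Jenkins keeps this index for you across all jobs, so any fingerprinted file can be traced without maintaining it by hand.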
This way, all you need is a process that takes your files and gives you the output you're looking for; everything else is managed by your version control system and Jenkins. Should you need to match an output back to a run, you simply upload the output file to Jenkins, and Jenkins tells you which run it came from. The workflow goes like this:
- New file is ready.
- Check this into the version control system directory that Jenkins is monitoring.
- Jenkins is triggered by the checkin.
- Jenkins checks out the current contents of the VCS directory.
- Jenkins runs the process and creates the output file. If the run fails, Jenkins has a log of the run.
- Jenkins notifies you the run has completed.
- You can use a Jenkins plugin to extract data from the log, or fetch it with WWW::Mechanize and analyze it yourself.
- You distribute the output file to your customer with whatever extracted data you like.
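The steps above can be sketched as the script a Jenkins job might run after checkout - the file names, log format, and "processing" here are assumptions, not your actual pipeline:

```python
import hashlib
import json
import pathlib
import tempfile

def run_job(workspace: pathlib.Path, revision: str) -> dict:
    """One Jenkins-style run: process the checked-out inputs, write the
    output file, and log which revision produced which checksum."""
    # Stand-in "process": concatenate all input data files in sorted order.
    output = b"".join(p.read_bytes() for p in sorted(workspace.glob("*.dat")))
    (workspace / "result.out").write_bytes(output)

    # The run log is what lets you trace an output back to its inputs.
    log = {
        "revision": revision,
        "output_md5": hashlib.md5(output).hexdigest(),
    }
    (workspace / "run.log.json").write_text(json.dumps(log))
    return log

# Usage with a temporary directory standing in for the VCS checkout:
ws = pathlib.Path(tempfile.mkdtemp())
(ws / "a.dat").write_bytes(b"input data")
print(run_job(ws, "r42"))
```

The point of the log entry is the pairing of input revision and output checksum: given either one, the other (and the full run history) is recoverable.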