in reply to Tracking data versions throughout an analysis chain

Can you tell us a little bit more about the process? From what I'm reading here, it sounds like you'll have different versions of files of the same name?

If that's so, the first thing that comes to mind is putting all the data files in a directory managed by a version control system, and then using Jenkins to run your process. Jenkins can fingerprint (checksum) the output from a run, allowing you to upload an output file to Jenkins later and have it tell you exactly which run created it; you can then refer to the log for that run to see what happened and which version of the files was used as input.
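A Jenkins fingerprint is just an MD5 checksum of the file's contents, so you can also compute the same value locally if you ever want to compare a file in hand against what Jenkins recorded. A minimal sketch:

    use strict;
    use warnings;
    use Digest::MD5;

    # Print the MD5 checksum Jenkins would store as the fingerprint
    # for this file, so you can match it against a build by hand.
    my $file = shift @ARGV or die "usage: $0 <output-file>\n";
    open my $fh, '<:raw', $file or die "Can't open $file: $!";
    print Digest::MD5->new->addfile($fh)->hexdigest, "  $file\n";
    close $fh;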

This way, all you need is a process that takes your files and gives you the output you're looking for; everything else is managed by your version control system and Jenkins. The workflow goes like this:

  1. New file is ready.
  2. Check this into the version control system directory that Jenkins is monitoring.
  3. Jenkins is triggered by the checkin.
  4. Jenkins checks out the current contents of the VCS directory.
  5. Jenkins runs the process and creates the output file. If the run fails, Jenkins has a log of the run.
  6. Jenkins notifies you the run has completed.
  7. You can use a Jenkins plugin to extract data from the log, or fetch it with WWW::Mechanize and analyze it yourself (see the sketch after this list).
  8. You distribute the output file to your customer with whatever extracted data you like.
Should you need to match an output back to a run, you simply go to Jenkins and upload the output file; Jenkins then tells you which run it came from.
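For step 7, a minimal WWW::Mechanize sketch to pull a build's console log straight off Jenkins might look like this (the host, job name, and build number are placeholders, and it assumes your Jenkins allows anonymous read access):

    use strict;
    use warnings;
    use WWW::Mechanize;

    # Placeholders - point these at your own Jenkins instance.
    my $jenkins = 'http://jenkins.example.com';
    my ( $job, $build ) = ( 'my-analysis-job', 42 );

    # Jenkins serves the raw build log as plain text at /consoleText.
    my $mech = WWW::Mechanize->new;
    $mech->get("$jenkins/job/$job/$build/consoleText");

    # Scan the log for whatever you care about, e.g. which
    # revision of the input files the build checked out.
    for my $line ( split /\n/, $mech->content ) {
        print "$line\n" if $line =~ /revision|checked out/i;
    }

From there it's ordinary Perl to pull out whatever you want to ship along with the output file.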

Replies are listed 'Best First'.
Re^2: Tracking data versions throughout an analysis chain
by throop (Chaplain) on Jun 22, 2012 at 20:06 UTC
    Thanks! That sounds like what I was looking for.