This is a bit like decision tree learning
from AI theory. In learning theory, you want the computer to be able to classify a large set of data. Each data point has some (known) attributes that you can test. You give the computer a small set of example items (a training set) whose attributes and final classification you already know. In decision tree learning, you want to use this training set to build a "good" decision tree (which you can think of as a flowchart) for classifying all the data.
One good way to build decision trees is by applying information theory. Each test of an attribute (i.e., each decision point in the flowchart) partitions the dataset in such a way that the uncertainty ("entropy," in information theory terms) is reduced. So when deciding which attribute to test first, you simply pick the test which decreases the entropy the most. Then you repeat the process for each branch of the decision point. All that's left is to work out the pesky math details (which aren't all that bad). If this made no sense, a very nice demo of building a decision tree is at this site.
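To make the "pick the test which decreases the entropy the most" step concrete, here's a minimal Python sketch. The function names (`entropy`, `information_gain`) and the toy dataset are my own illustration, not from any particular library:

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (in bits) of a list of class labels."""
    total = len(labels)
    return -sum((n / total) * math.log2(n / total)
                for n in Counter(labels).values())

def information_gain(rows, labels, attr_index):
    """Entropy reduction from splitting on the attribute at attr_index."""
    total = len(labels)
    # Partition the labels by the attribute's value.
    groups = {}
    for row, label in zip(rows, labels):
        groups.setdefault(row[attr_index], []).append(label)
    # Weighted average entropy of the partitions, subtracted from
    # the entropy before the split.
    after = sum(len(g) / total * entropy(g) for g in groups.values())
    return entropy(labels) - after

# Toy training set: attributes are (outlook, windy), class is "play?".
rows = [("sunny", "yes"), ("sunny", "no"),
        ("rainy", "yes"), ("rainy", "no")]
labels = ["no", "yes", "no", "yes"]

# Pick the attribute whose test decreases entropy the most;
# here "windy" predicts the class perfectly, so it wins.
best = max(range(2), key=lambda i: information_gain(rows, labels, i))
```

You'd then recurse: split the data on the winning attribute and repeat the same selection on each resulting branch until the partitions are pure (zero entropy) or you run out of attributes.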
So while this isn't *exactly* what you're doing here, I think it's related. It's a more general way to find a good hierarchical decomposition of a dataset's attributes. In the case of decision trees, the goal is to have a concise flowchart to categorize data, but I'm not sure what the goal is in your case. You seem to be analyzing more of an interrelation between the attributes. But perhaps one could apply concepts from decision trees to your situation.