One good way to build decision trees is by applying information theory. Each test of an attribute (i.e., a decision point in the flowchart) partitions the dataset in a way that reduces uncertainty ("entropy," in information theory terms). So when deciding which attribute to test first, you simply pick the test that decreases the entropy the most, then repeat the process recursively for each branch of that split. All that's left is to work out the pesky math details (which aren't all that bad). If this made no sense, a very nice demo of building a decision tree is at this site.
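To make that concrete, here's a minimal sketch of the "pick the test that decreases entropy the most" step. The helper names (`entropy`, `information_gain`) and the toy weather dataset are my own illustration, not anything from a particular library:

```python
from collections import Counter
import math

def entropy(labels):
    """Shannon entropy of a list of class labels, in bits."""
    counts = Counter(labels)
    total = len(labels)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def information_gain(rows, labels, attribute):
    """How much splitting on `attribute` reduces entropy.

    `rows` is a list of dicts (attribute name -> value);
    `labels` is the parallel list of class labels.
    """
    base = entropy(labels)
    total = len(labels)
    # Group the labels by the attribute's value.
    groups = {}
    for row, label in zip(rows, labels):
        groups.setdefault(row[attribute], []).append(label)
    # Weighted average entropy of the resulting partitions.
    remainder = sum(len(g) / total * entropy(g) for g in groups.values())
    return base - remainder

# Toy dataset: should we play outside?
rows = [
    {"outlook": "sunny", "windy": False},
    {"outlook": "sunny", "windy": True},
    {"outlook": "rainy", "windy": True},
    {"outlook": "rainy", "windy": False},
]
labels = ["yes", "yes", "no", "no"]

# Test the attribute with the highest information gain first.
best = max(["outlook", "windy"], key=lambda a: information_gain(rows, labels, a))
```

Here `outlook` wins, since it splits the data into perfectly pure groups (gain of 1 bit) while `windy` tells you nothing (gain of 0). In a real tree builder you'd then recurse on each partition with the remaining attributes.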
So while this isn't *exactly* what you're doing here, I think it's related. It's a more general way to find a good hierarchical decomposition of a dataset's attributes. With decision trees, the goal is a concise flowchart for categorizing data, but I'm not sure what the goal is in your case. You seem to be analyzing the interrelations between the attributes themselves. Still, perhaps one could apply concepts from decision trees to your situation.