|No such thing as a small change|
I'm attempting to implement a basic hierarchical agglomerative clustering algorithm, to be used to create an arbitrary number of clusters from an existing dataset. More reading about the general concept can be found at http://cgm.cs.mcgill.ca/~soss/cs644/projects/siourbas/sect5.html or via your friendly neighborhood Google.
A word about the dataset.
My data consists of some 1500-5000 "items", each of which contains a set of "words". The words are 5-30 character strings; each set contains between 5 and 100 of them, with no duplicates.
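The post doesn't say how the distance between two word-sets is measured. As an illustration only (in Python rather than the original Perl), a set-based measure such as Jaccard distance fits this shape of data; this is an assumption, not the post's actual metric:

```python
def jaccard_distance(a, b):
    """Distance between two word-sets: 1 - |intersection| / |union|.
    Returns 0.0 for identical sets, 1.0 for disjoint ones."""
    if not a and not b:
        return 0.0
    return 1.0 - len(a & b) / len(a | b)

# two hypothetical "items", each a set of words with no duplicates
item1 = {"apple", "banana", "cherry"}
item2 = {"banana", "cherry", "durian"}
```

Because the sets hold at most ~100 short strings, a single comparison like this is cheap; the cost in the post's algorithm comes from how many times pairs are compared, not from any one comparison.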
Some words about the existing code.
The theoretical complexity of such an algorithm is something like O(c·n²·d²), but I suspect my implementation is considerably worse, since I ran it for over 11 hours and it only managed to consolidate 500 of the 1600 items.
The "merge" function is admittedly crude; I wrote it without thinking very hard, and it doesn't do much. On the other hand, I don't think it impacts the performance.
The vast majority of the time is being spent in the max_diff function, which appears to get exponentially slower as the program continues to run.
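The usual cause of that kind of slowdown is recomputing pairwise distances from the raw items on every merge pass. The standard fix is to compute all pairwise distances once, cache them, and on each merge derive the new cluster's row from the two old rows instead of touching the raw word-sets again. A minimal Python sketch of that idea (single linkage is an assumption here; the post doesn't say what linkage max_diff implements, and nested lists stand in for the Perl array-refs):

```python
def agglomerate(items, dist, n_clusters=1):
    """Agglomerative clustering with a cached distance table.

    Returns a list of binary trees (nested two-element lists).
    Single linkage: the distance from a merged cluster to any other
    cluster is the min of the two old rows, so raw-item distances are
    computed exactly once, up front.
    """
    # active clusters: id -> tree; leaves are the items themselves
    trees = {i: items[i] for i in range(len(items))}
    # cached distance for every active pair, keyed by sorted id pair
    d = {(i, j): dist(items[i], items[j])
         for i in trees for j in trees if i < j}
    next_id = len(items)
    while len(trees) > n_clusters:
        a, b = min(d, key=d.get)          # closest active pair
        merged = [trees[a], trees[b]]
        # new cluster's row: derived from cached rows, no recomputation
        new_row = {c: min(d[tuple(sorted((a, c)))],
                          d[tuple(sorted((b, c)))])
                   for c in trees if c not in (a, b)}
        del trees[a], trees[b]
        d = {k: v for k, v in d.items() if a not in k and b not in k}
        for c, v in new_row.items():
            d[(c, next_id)] = v           # next_id > c, key stays sorted
        trees[next_id] = merged
        next_id += 1
    return list(trees.values())
```

Each merge is then a scan of the table plus an O(active-clusters) row update, roughly O(n³) overall, instead of re-deriving every pairwise distance from 5-100-word sets on every pass. Swapping the inner min for max would give complete linkage instead.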
The data structure being produced is necessary; that is, it should be a binary tree made of array-refs, where each node is either another tree or an actual item. (It's necessary because we don't know in advance how many clusters we want to produce.)
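That tree shape is exactly what makes the cluster count deferrable: once the full dendrogram exists, any number of flat clusters can be read off it by opening internal nodes. A small Python sketch of that cut step, using nested two-element lists in place of the Perl array-refs (the helper name is my own, not from the post):

```python
def cut_tree(tree, k):
    """Split a binary dendrogram into up to k flat clusters.

    Assumes the structure described in the post: every internal node
    is a two-element list [left, right], and every leaf is an actual
    item (assumed here not to itself be a list).
    """
    clusters = [tree]
    while len(clusters) < k:
        for i, c in enumerate(clusters):
            if isinstance(c, list):
                clusters[i:i + 1] = c   # open one internal node
                break
        else:
            break  # every cluster is a leaf; can't split further
    return clusters
```

So the clustering run is done once, and the choice of "how many clusters" becomes a cheap post-processing step on the tree.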
Suggestions for optimizations or even different algorithms gratefully received.