
module for cluster analysis

by zli034 (Monk)
on Dec 24, 2006 at 22:55 UTC ( #591547=perlquestion )
zli034 has asked for the wisdom of the Perl Monks concerning the following question:

I have hundreds of arrays of data. Each array is a sampled representation of a distribution.

I want to calculate the standard deviation and mean for each data array, then use the standard deviation and mean to perform cluster analysis. The aim is to group similar arrays into one cluster.

What is a good Perl module for this job? Or, if you know a better and faster method to get the job done than calculating the mean and standard deviation, please give me some hints.
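For the mean and standard deviation half of the question, core Perl is already enough; a minimal sketch using only the bundled List::Util (the subroutine names here are invented for illustration, not from any module):

```perl
use strict;
use warnings;
use List::Util qw(sum);

# Arithmetic mean of a list of numbers.
sub mean {
    my @x = @_;
    return sum(@x) / @x;
}

# Sample standard deviation (divides by n - 1).
sub std_dev {
    my @x = @_;
    my $m = mean(@x);
    return sqrt( sum( map { ( $_ - $m )**2 } @x ) / ( @x - 1 ) );
}

my @sample = ( 2, 4, 4, 4, 5, 5, 7, 9 );
printf "mean=%.2f sd=%.2f\n", mean(@sample), std_dev(@sample);
```

Running this once per input array gives the two summary numbers per distribution that the question asks for.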

Replies are listed 'Best First'.
Re: module for cluster analysis
by madbombX (Hermit) on Dec 24, 2006 at 23:44 UTC
    Have you even checked CPAN? The module that I used for Standard Deviation is Math::NumberCruncher. Although I am sure there are more modules than that, it has worked for me.
Re: module for cluster analysis
by moklevat (Priest) on Dec 25, 2006 at 02:16 UTC
    Assuming you are familiar with common approaches to cluster analysis, you should look into Algorithm::Cluster, which is an interface to a fast C clustering library.

    If you are willing to look outside Perl, R also does a great job with data analysis of all types, including clustering.
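    One of the algorithms Algorithm::Cluster provides is k-means. Purely to illustrate what such a routine does internally (this is not the module's code; the data and k are made up), here is a self-contained pure-Perl k-means sketch:

```perl
use strict;
use warnings;
use List::Util qw(sum);

# Squared Euclidean distance between two array refs of equal length.
sub dist2 {
    my ( $a, $b ) = @_;
    return sum( map { ( $a->[$_] - $b->[$_] )**2 } 0 .. $#$a );
}

# Tiny k-means: returns one cluster index per point.
sub kmeans {
    my ( $points, $k, $iters ) = @_;
    my @centers = map { [ @{ $points->[$_] } ] } 0 .. $k - 1;   # naive seeding
    my @assign;
    for ( 1 .. $iters ) {
        # Assignment step: nearest center for each point.
        @assign = map {
            my $p = $_;
            my ( $best, $bd ) = ( 0, dist2( $p, $centers[0] ) );
            for my $c ( 1 .. $k - 1 ) {
                my $d = dist2( $p, $centers[$c] );
                ( $best, $bd ) = ( $c, $d ) if $d < $bd;
            }
            $best;
        } @$points;
        # Update step: each center moves to the mean of its points.
        for my $c ( 0 .. $k - 1 ) {
            my @members = grep { $assign[$_] == $c } 0 .. $#$points;
            next unless @members;
            for my $dim ( 0 .. $#{ $points->[0] } ) {
                $centers[$c][$dim] =
                    sum( map { $points->[$_][$dim] } @members ) / @members;
            }
        }
    }
    return @assign;
}

# Two obvious groups of 2-D points.
my @data = ( [ 0, 0 ], [ 1, 1 ], [ 0, 1 ], [ 10, 10 ], [ 11, 10 ], [ 10, 11 ] );
my @labels = kmeans( \@data, 2, 10 );
print "@labels\n";    # first three points share one label, last three the other
```

    In practice you would let the library do this; the sketch only shows the assign/update loop that k-means repeats until the centers stop moving.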

Re: module for cluster analysis
by zentara (Archbishop) on Dec 25, 2006 at 14:18 UTC
Re: module for cluster analysis
by lin0 (Curate) on Dec 25, 2006 at 16:35 UTC

    Hi zli034,

    It is possible to do cluster analysis in Perl. You could use the functions in the Algorithm::Cluster module that was mentioned before, or you could write your own clustering functions using the PDL module that was also mentioned before. However, before getting deep into those modules, I recommend that you have a look at the Wikipedia entry on Data Clustering.

    You said:

    I want to calculate standard deviation and mean for data array, then use the standard deviation and mean to perform cluster analysis. The aim is to group similar arrays in to one cluster.

    There are two assumptions in your comment:

    1. your data follow a normal distribution. Are you sure of that? Are you taking into account the presence of outliers? Outliers will negatively affect your computation of the mean, so you have to think about that before using the mean and standard deviation as the basis for clustering. There are also other options for reducing the dimensionality of your data, such as principal component analysis and independent component analysis. You might want to have a look at those too.
    2. you want to have only one or two clusters. Actually, I am not sure if that is what you meant, but that is what I understood from your comment. In any case, if your data are highly dimensional, you could try principal component analysis to reduce your data (or a random sample of them) to two or three dimensions, and plot those dimensions to visualize how many clusters your data might have. Then you can use that number in the clustering algorithm you choose (either on the original data or on the reduced data). Note that if you cluster the reduced data (after principal component analysis), its dimensions are no longer the dimensions of your original data (they are combinations of the original dimensions), so the analysis is somewhat tricky. But we can discuss that in another post. Another option for determining an appropriate number of clusters is to use a validity index (do a Google search on "cluster validity index", or consult your textbook on cluster analysis).

    For us to help you better, it would be good to have a small sample of your data or at least a better description of their structure.

    Finally, my recommendation is to try PDL. You can have a look at the online version of the book to better understand the potential of PDL. By the way, in PDL you can compute the mean and standard deviation (together with the median, min, and max) with the functions "stats" (over the whole data) or "statsover" (by rows on your matrices). You can find a list of resources related to PDL in the tutorial The Perl Data Language (PDL): A Quick Reference Guide. And just to give you an idea of what a PDL implementation of a clustering algorithm looks like, below is my implementation of fuzzy c-means using PDL.
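    [lin0's PDL code did not survive the page copy. Purely as a stand-in to show the idea, here is a plain-Perl sketch of fuzzy c-means on 1-D data with fuzzifier m = 2; all names are invented, and this is not the original PDL implementation.]

```perl
use strict;
use warnings;
use List::Util qw(sum);

# Fuzzy c-means on 1-D data with fuzzifier m = 2.
# Returns a ref to the centers and a ref to the membership matrix
# $u->[$point][$cluster]; each point's memberships sum to 1.
sub fcm_1d {
    my ( $x, $c, $iters ) = @_;
    my @centers = @{$x}[ 0 .. $c - 1 ];    # naive seeding from first points
    my $u;
    for ( 1 .. $iters ) {
        # Membership update: for m = 2, u_ij = 1 / sum_k (d_ij / d_ik)^2.
        for my $i ( 0 .. $#$x ) {
            my @d = map { abs( $x->[$i] - $_ ) || 1e-9 } @centers;
            for my $j ( 0 .. $c - 1 ) {
                $u->[$i][$j] = 1 / sum( map { ( $d[$j] / $_ )**2 } @d );
            }
        }
        # Center update: mean of the data weighted by u^2.
        for my $j ( 0 .. $c - 1 ) {
            my $num = sum( map { $u->[$_][$j]**2 * $x->[$_] } 0 .. $#$x );
            my $den = sum( map { $u->[$_][$j]**2 } 0 .. $#$x );
            $centers[$j] = $num / $den;
        }
    }
    return ( \@centers, $u );
}

my @x = ( 1, 1.2, 0.8, 10, 10.5, 9.5 );
my ( $centers, $u ) = fcm_1d( \@x, 2, 25 );
printf "centers: %.2f %.2f\n", @$centers;
```

    Unlike k-means, each point belongs to every cluster with some degree of membership, which is why the updates weight by the (squared) membership matrix rather than by a hard assignment.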

    Please let us know if you have any other questions.




    Update: fixed copying of partition matrix

Re: module for cluster analysis
by sgt (Deacon) on Dec 25, 2006 at 13:49 UTC

    Besides checking CPAN search (as other replies have pointed out) with keywords like 'statistics', 'deviation', etc., another idea springs to mind, since you mention that your arrays are huge.

    If you build your array piecewise (one element at a time or in chunks), you could calculate the statistical quantities you are interested in at the same time. Each time you update the array, you calculate the new quantities from the previous ones. For example:

    assert( $N > 0 && $N_chunk > 0 );    # etc.
    $av_new = ( $av_old * $N + $av_chunk * $N_chunk ) / ( $N + $N_chunk );

    A simple "statistical array" class could be set up to package the previous idea. Adding two arrays together would "add up" their statistical properties. Instead of splicing a subarray onto a main one, you could also treat your global array as a list of references; that way you would not spend too much time making copies.
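    Such a class might look like the sketch below, which applies sgt's merge rule to the mean and extends the same recurrence idea to the variance (the class name and its internals are invented for illustration):

```perl
use strict;
use warnings;

# Running (count, mean, sum-of-squared-deviations) summary that can
# absorb whole chunks without revisiting old elements.
package StatChunk;

sub new { bless { n => 0, mean => 0, m2 => 0 }, shift }

# Absorb a chunk of numbers in a single update from the previous state.
sub add_chunk {
    my ( $self, @x ) = @_;
    return $self unless @x;
    my $n_c    = @x;
    my $mean_c = List::Util::sum(@x) / $n_c;
    my $m2_c   = List::Util::sum( map { ( $_ - $mean_c )**2 } @x );
    my $n      = $self->{n} + $n_c;
    my $delta  = $mean_c - $self->{mean};
    # $av_new = ($av_old * $N + $av_chunk * $N_chunk) / ($N + $N_chunk)
    $self->{mean} = ( $self->{mean} * $self->{n} + $mean_c * $n_c ) / $n;
    # Same spirit for the second moment: old M2 + chunk M2 + a cross term.
    $self->{m2}  += $m2_c + $delta**2 * $self->{n} * $n_c / $n;
    $self->{n}    = $n;
    return $self;
}

sub mean     { $_[0]{mean} }
sub variance { $_[0]{n} > 1 ? $_[0]{m2} / ( $_[0]{n} - 1 ) : 0 }

package main;
use List::Util ();

my $s = StatChunk->new;
$s->add_chunk( 2, 4, 4, 4 )->add_chunk( 5, 5, 7, 9 );
printf "mean=%.2f var=%.2f\n", $s->mean, $s->variance;
```

    The payoff is that merging two summaries costs O(1) regardless of how many elements each one has absorbed.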

    By the way, higher-moment formulas follow simple recurrences; use Google and Wikipedia. hth --stephan

Node Type: perlquestion [id://591547]
Approved by McDarren
Front-paged by moklevat