Algorithm::KMeans - Clustering multi-dimensional data with a pure-Perl implementation
use Algorithm::KMeans;
# First name the data file:
my $datafile = "mydatafile.dat";
# Next, set the mask to indicate which columns of the datafile to use for
# clustering and which column contains a symbolic ID for each data record.
# For example, if the symbolic name is in the first column, you want the
# second column to be ignored, and you want the next three columns to be
# used for 3D clustering:
my $mask = "N0111";
# Now construct an instance of the clusterer. The parameter K controls the
# number of clusters. If you know how many clusters you want (let's say 3),
# call

my $clusterer = Algorithm::KMeans->new( datafile        => $datafile,
                                        mask            => $mask,
                                        K               => 3,
                                        cluster_seeding => 'smart',
                                        terminal_output => 1,
                                        debug           => 0,
                                      );

# Note the choice for cluster_seeding. The choice 'smart' means that the
# clusterer will (1) subject the data to principal components analysis to
# determine the maximum variance direction; (2) project the data onto this
# direction; (3) find peaks in a smoothed histogram of the projected points;
# and (4) use the locations of the highest peaks as seeds for cluster
# centers. The other value for the 'cluster_seeding' option is 'random'.
# If the 'smart' option produces bizarre results, try 'random'. The default
# is 'smart'.

# If you believe that the individual clusters in your data are not isotropic
# (that is, you believe the variances within each cluster are significantly
# different along the different dimensions), you may wish for the clusterer
# to first normalize the data along each dimension with an estimate for the
# standard-deviations along that dimension and then carry out clustering.
# What estimate to use for such standard deviations obviously becomes an
# issue unto itself. In the current implementation, we use the overall data
# standard-deviation along each dimension as the estimate. BUT BEWARE THAT
# IF THE DATA VARIANCE IS CAUSED MORE BY THE SEPARATION BETWEEN THE MEANS
# THAN BY THE INTRA-CLUSTER VARIABILITY, THE DATA NORMALIZATION BY THE
# STANDARD DEVIATIONS COULD ACTUALLY DECREASE THE PERFORMANCE OF THE
# CLUSTERER. Here is an example call to the constructor for turning on the
# data normalization:

my $clusterer = Algorithm::KMeans->new( datafile => $datafile,
                                        mask     => $mask,
                                        K        => 3,
                                        terminal_output => 1,
                                        do_variance_normalization => 1,
                                      );
# Set K to 0 if you want the module to figure out the optimum number of
# clusters from the data. (It is best to run this option with
# terminal_output set to 1 so that you can see the different values of QoC
# for the different K):

my $clusterer = Algorithm::KMeans->new( datafile => $datafile,
                                        mask     => $mask,
                                        K        => 0,
                                        terminal_output => 1,
                                      );
# Although not shown above, you can obviously set the
# 'do_variance_normalization' flag here also if you wish.

# For very large data files, setting K to 0 will result in searching through
# too many values for K. For such cases, you can range limit the values of K
# to search through by

my $clusterer = Algorithm::KMeans->new( datafile => $datafile,
                                        mask     => "N111",
                                        Kmin     => 3,
                                        Kmax     => 10,
                                        terminal_output => 1,
                                      );

# Use the following call if you wish for the clusters to be written out to
# files. Each cluster will be deposited in a file named 'ClusterX.dat' with
# X starting from 0:

my $clusterer = Algorithm::KMeans->new( datafile => $datafile,
                                        mask     => $mask,
                                        K        => $K,
                                        write_clusters_to_files => 1,
                                      );

# FOR ALL CASES ABOVE, YOU'D NEED TO MAKE THE FOLLOWING CALLS ON THE
# CLUSTERER INSTANCE TO ACTUALLY CLUSTER THE DATA:

$clusterer->read_data_from_file();
$clusterer->kmeans();
# If you want to directly access the clusters and the cluster centers in
# your top-level script:
my ($clusters, $cluster_centers) = $clusterer->kmeans();
# You can now access the symbolic data names in the clusters directly, as in:
foreach my $cluster (@$clusters) { print "Cluster: @$cluster\n\n" }
# CLUSTER VISUALIZATION:
# You must first set the mask for cluster visualization. This mask tells the
# module which 2D or 3D subspace of the original data space you wish to
# visualize the clusters in:

my $visualization_mask = "111";
$clusterer->visualize_clusters($visualization_mask);
# SYNTHETIC DATA GENERATION:
# The module has been provided with a class method for generating
# multivariate data for experimenting with clustering. The data generation
# is controlled by the contents of the parameter file that is supplied as an
# argument to the data generator method. The mean and covariance matrix
# entries in the parameter file must be according to the syntax shown in the
# param.txt file in the examples directory. It is best to edit this file as
# needed:

my $parameter_file = "param.txt";
my $out_datafile   = "mydatafile.dat";
Algorithm::KMeans->cluster_data_generator(
                        input_parameter_file => $parameter_file,
                        output_datafile      => $out_datafile,
                        number_data_points_per_cluster => $N );
Version 1.40 includes a 'smart' option for seeding the clusters. This option, supplied through the constructor parameter cluster_seeding, means that the clusterer will (1) subject the data to principal components analysis in order to determine the maximum variance direction; (2) project the data onto this direction; (3) find peaks in a smoothed histogram of the projected points; and (4) use the locations of the highest peaks as initial guesses for the cluster centers. If you don't want to use this option, set cluster_seeding to 'random'. That should work as in the previous version of the module.
Version 1.30 includes a bug fix for the case when the datafile contains empty lines, that is, lines with no data records. Another bug fix in Version 1.30 deals with the case when you want the module to figure out how many clusters to form (this is the K=0 option in the constructor call) and the number of data records is close to the minimum.
Version 1.21 includes fixes to handle the possibility that, when clustering the data for a fixed number of clusters, a cluster may become empty during iterative calculation of cluster assignments of the data elements and the updating of the cluster centers. The code changes are in the assign_data_to_clusters() and update_cluster_centers() subroutines.
Version 1.20 includes an option to normalize the data with respect to its variability along the different coordinates before clustering is carried out. This can be a useful option for highly non-isotropic data, that is, the data in which the different coordinate values along the different dimensions vary differently. (BUT BEWARE THAT IF THE OVERALL DATA VARIANCE ALONG A DIMENSION IS CAUSED MORE BY THE SEPARATION BETWEEN THE MEANS THAN BY THE INTRA-CLUSTER VARIABILITY, THE DATA NORMALIZATION OF THE SORT IN VERSION 1.20 COULD ACTUALLY DECREASE THE PERFORMANCE OF THE CLUSTERER.) With version 1.20, you can also visualize the raw data and the normed data to see the effects of data normalization. Another reason for Version 1.20 is to get away from multi-part version numbers like 1.x.x. As I discovered (thanks to an email from Steffen Mueller), it is never a good idea to mix version numbers like 1.1, which look like regular floating-point numbers to Perl, and multi-part version numbers like 1.1.1 (which Perl interprets as 1.001001).
Version 1.1.1 allows for range limiting the values of K to search through. K stands for the number of clusters to form. This version also declares the module dependencies in the Makefile.PL file.
Version 1.1 is an object-oriented version of the implementation presented in version 1.0. The current version should lend itself more easily to code extension. You could, for example, create your own class by subclassing from the class presented here and, in your subclass, use your own criteria for the similarity distance between the data points, for the QoC (Quality of Clustering) metric, and possibly a different rule to stop the iterations. Version 1.1 also allows you to directly access the clusters formed and the cluster centers in your calling script.
Algorithm::KMeans is a perl5 module for the clustering of numerical data in multidimensional spaces. Since the module is entirely in Perl (in the sense that it is not a Perl wrapper around a C library that actually does the clustering), the code in the module can easily be modified to experiment with several aspects of automatic clustering. For example, one can change the criterion used to measure the "distance" between two data points, the stopping condition for accepting final clusters, the criterion used for measuring the quality of the clustering achieved, etc.
A K-Means clusterer is a poor man's implementation of the EM algorithm. EM stands for Expectation Maximization. For the case of isotropic Gaussian data, the results obtained with a good K-Means implementation should match those obtained with the EM algorithm. (When the data is non-isotropic but the nature of anisotropy is the same for all the clusters, the results you obtain with a K-Means clusterer may be improved --- but only under certain circumstances --- by first normalizing the data appropriately, as can be done with the implementation shown here when you set the do_variance_normalization option in the KMeans constructor. But, as pointed out elsewhere in this documentation, such normalization may actually decrease the performance of the clusterer if the overall data variability along any dimension is more a result of the separation between the means than a consequence of intra-cluster variability.) Clustering with K-Means takes place iteratively and involves two steps: (1) assignment of data samples to clusters; and (2) recalculation of the cluster centers. The assignment step can be shown to be akin to the Expectation step of the EM algorithm, and the calculation of the cluster centers akin to the Maximization step of the EM algorithm.
Of the two key steps of the K-Means algorithm, the assignment step consists of assigning each data point to the cluster whose center is closest to it. That is, during assignment, you compute the distance between the data point and each of the current cluster centers, and you assign the data sample on the basis of the minimum value of the computed distance. The second step consists of re-computing the cluster centers for the newly modified clusters.
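To make the assignment step concrete, here is a minimal Perl sketch, assuming each data point and each cluster center is a reference to an array of coordinates. The helper names are hypothetical and are not part of the module's API:

use strict;
use warnings;

# Euclidean distance between two points given as array references.
sub euclidean_distance {
    my ($p, $q) = @_;
    my $sum = 0;
    $sum += ($p->[$_] - $q->[$_]) ** 2 for 0 .. $#$p;
    return sqrt $sum;
}

# Return the index of the cluster center closest to $point.
sub assign_to_nearest_center {
    my ($point, $centers) = @_;
    my ($best_index, $best_distance);
    for my $i (0 .. $#$centers) {
        my $d = euclidean_distance($point, $centers->[$i]);
        if (!defined $best_distance or $d < $best_distance) {
            ($best_index, $best_distance) = ($i, $d);
        }
    }
    return $best_index;
}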
Obviously, before the two-step approach can proceed, we need to initialize both the cluster center values and the clusters that can then be iteratively modified by the two-step algorithm. How this initialization is carried out is very important. Starting with Version 1.40, you now have two very different ways for carrying out this initialization. The default option, called the 'smart' option, consists of subjecting the data to principal components analysis to discover the direction of maximum variance in the data space. The data points are then projected onto this direction and a histogram constructed from the projections. The locations of the highest peaks of the smoothed histogram are used to seed the clustering operation. The other option, which is the older option, is to choose the cluster centers purely randomly. You get the first option if you set cluster_seeding to 'smart' in the constructor, and you get the second option if you set it to 'random'.
How to specify K is one of the most vexing issues in any approach to clustering. In some cases, we can set K on the basis of prior knowledge. But, more often than not, no such prior knowledge is available. When the programmer does not explicitly specify a value for K, the approach taken in the current implementation is to try all possible values between 2 and some largest possible value that makes statistical sense. We then choose the value for K that yields the best value for the QoC (Quality of Clustering) metric. It is generally believed that the largest value for K should not exceed sqrt(N/2), where N is the number of data points to be clustered.
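As a worked instance of that bound, this snippet computes the largest K that would make statistical sense for a dataset of a given size (the variable names are illustrative):

use POSIX qw(floor);

my $N    = 200;                       # number of data points (example value)
my $Kmax = floor( sqrt($N / 2.0) );   # sqrt(100) = 10, so K <= 10 here
print "Largest statistically sensible K: $Kmax\n";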
How to set the QoC metric is obviously a critical issue unto itself. In the current implementation, the value of QoC is the ratio of the average radius of the clusters to the average distance between the cluster centers. But note that this is a good criterion only when the data exhibits the same variance in all directions. When the data variance is different in different directions, but still remains the same for all clusters, a more appropriate QoC can be formulated using other distance metrics such as the Mahalanobis distance.
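Here is a minimal sketch of a QoC of this form, reusing the euclidean_distance() helper from the earlier sketch; the module's internal computation may differ in detail:

# Ratio of the average cluster radius to the average distance between
# cluster centers; smaller values indicate tight, well-separated clusters.
sub quality_of_clustering {
    my ($clusters, $centers) = @_;    # parallel array refs

    my ($radius_sum, $point_count) = (0, 0);
    for my $i (0 .. $#$clusters) {
        for my $point (@{ $clusters->[$i] }) {
            $radius_sum += euclidean_distance($point, $centers->[$i]);
            $point_count++;
        }
    }
    my $avg_radius = $radius_sum / $point_count;

    my ($separation_sum, $pair_count) = (0, 0);
    for my $i (0 .. $#$centers - 1) {
        for my $j ($i + 1 .. $#$centers) {
            $separation_sum += euclidean_distance($centers->[$i], $centers->[$j]);
            $pair_count++;
        }
    }
    return $avg_radius / ($separation_sum / $pair_count);
}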
Every iterative algorithm requires a stopping criterion. The criterion implemented here is that we stop iterations when there is no re-assignment of the data points during the assignment step.
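Here is a minimal sketch of the two-step iteration with this stopping rule, reusing assign_to_nearest_center() from the earlier sketch. The toy data and the initial centers are made up for illustration:

# Toy data: two obvious groups in 2D, plus initial center guesses.
my @points  = ( [1, 1], [1.2, 0.8], [0.9, 1.1],
                [8, 8], [8.2, 7.9], [7.8, 8.1] );
my @centers = ( [0, 0], [10, 10] );

my @assignment = (-1) x @points;          # cluster index for each point
my $changed = 1;
while ($changed) {
    $changed = 0;
    # Step 1: assign each point to its nearest center.
    for my $i (0 .. $#points) {
        my $nearest = assign_to_nearest_center($points[$i], \@centers);
        if ($assignment[$i] != $nearest) {
            $assignment[$i] = $nearest;
            $changed = 1;
        }
    }
    last unless $changed;                 # no re-assignments: we are done
    # Step 2: recompute each center as the mean of its assigned points.
    for my $c (0 .. $#centers) {
        my @members = grep { $assignment[$_] == $c } 0 .. $#points;
        next unless @members;             # guard against an empty cluster
        for my $d (0 .. $#{ $points[0] }) {
            my $sum = 0;
            $sum += $points[$_][$d] for @members;
            $centers[$c][$d] = $sum / @members;
        }
    }
}
print "Center $_: @{ $centers[$_] }\n" for 0 .. $#centers;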
Ordinarily, the output produced by a K-Means clusterer will correspond to a local minimum for the QoC values, as opposed to a global minimum. The current implementation protects against that when the clusterer constructor is called with the 'random' option for cluster_seeding, but only in a very small way, by trying different randomly selected initial cluster centers and then selecting the one that gives the best overall QoC value.
The module provides the following methods for clustering, for cluster visualization, for data visualization, and for the generation of data for testing a clustering algorithm:
my $clusterer = Algorithm::KMeans->new( datafile        => $datafile,
                                        mask            => $mask,
                                        K               => $K,
                                        cluster_seeding => 'smart',
                                        terminal_output => 1,
                                        write_clusters_to_files => 1,
                                        debug           => 0,
                                      );
A call to new() constructs a new instance of the Algorithm::KMeans class. When $K is a non-zero positive integer, the module will construct exactly that many clusters. However, when $K is 0, the module will find the best number of clusters to partition the data into. As explained in the Description, setting cluster_seeding to 'smart' causes PCA (principal components analysis) to be used for discovering the best choices for the initial cluster centers. If you want purely random decisions to be made for the initial choices for the cluster centers, set cluster_seeding to 'random'.
The data file is expected to contain entries in the following format
c20  0  10.7087017086940  9.63528386251712  10.9512155258108  ...
c7   0  12.8025925026787  10.6126270065785  10.5228482095349  ...
b9   0  7.60118206283120  5.05889245193079  5.82841781759102  ...
...
...
where the first column contains the symbolic ID tag for each data record and the rest of the columns the numerical information. Which columns are actually used for clustering is decided by the string value of the mask. For example, if we wanted to cluster on the basis of the entries in just the 3rd, the 4th, and the 5th columns above, the mask value would be N0111, where the character N indicates that the ID tag is in the first column, the character 0 that the second column is to be ignored, and the 1's that follow that the 3rd, the 4th, and the 5th columns are to be used for clustering.
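To illustrate how such a mask maps onto the columns of a record, here is a small sketch, assuming whitespace-separated columns; this is illustrative and is not the module's actual parsing code:

use strict;
use warnings;

my $mask = "N0111";
my $line = "c20  0  10.7087017086940  9.63528386251712  10.9512155258108";

my @fields     = split /\s+/, $line;
my @mask_chars = split //,    $mask;

my ($id, @coords);
for my $col (0 .. $#mask_chars) {
    if    ($mask_chars[$col] eq 'N') { $id = $fields[$col] }
    elsif ($mask_chars[$col] eq '1') { push @coords, $fields[$col] }
    # a '0' means the column is ignored
}
print "ID = $id, coordinates used for clustering = @coords\n";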
The parameter terminal_output is boolean; when not supplied in the call to new(), it defaults to 0. When set, this parameter determines what you will see on the terminal screen of the window in which you make these method calls. When set to 1, you will see on the terminal screen the different clusters as lists of the symbolic IDs and their cluster centers. You will also see the QoC (Quality of Clustering) value for the clusters displayed.
The parameter write_clusters_to_files is boolean; when not supplied in the call to new(), it defaults to 0. When set to 1, the clusters are written out to files named

Cluster0.dat
Cluster1.dat
Cluster2.dat
...
...
Before the clusters are written to these files, the module destroys all files with such names in the directory in which you call the module.
If you wish for the clusterer to search through a (Kmin,Kmax) range of values for K, the constructor should be called in the following fashion:

my $clusterer = Algorithm::KMeans->new( datafile        => $datafile,
                                        mask            => $mask,
                                        Kmin            => 3,
                                        Kmax            => 10,
                                        cluster_seeding => 'smart',
                                        terminal_output => 1,
                                        debug           => 0,
                                      );

where obviously you can choose any reasonable values for Kmin and Kmax. If you choose a value for Kmax that is statistically too large, the module will let you know. Again, you may choose 'random' for cluster_seeding, the default value being 'smart'.
If you believe that the individual clusters in your data are very anisotropic (that is, you believe that intra-cluster variability in your data is different along the different dimensions), you might get better clustering by first normalizing the data coordinates by the standard-deviations along those directions. But what to use as a reasonable value for such a standard-deviation becomes a big issue unto itself. (The implementation shown here uses the overall data standard-deviation along a direction for the normalization in that direction. As mentioned elsewhere in the documentation, such a normalization could backfire on you if the data variability along a dimension is more a result of the separation between the means than a consequence of the intra-cluster variability.) You can turn on the data normalization by turning on the do_variance_normalization option in the constructor, as in

my $clusterer = Algorithm::KMeans->new( datafile => $datafile,
                                        mask     => "N111",
                                        K        => 2,
                                        terminal_output => 1,
                                        do_variance_normalization => 1,
                                      );
$clusterer->read_data_from_file();
$clusterer->kmeans();
or
my ($clusters, $cluster_centers) = $clusterer->kmeans();
The first call above works solely by side-effect. The second call also returns the clusters and the cluster centers.
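For example, assuming the second form of the call, you could print each cluster's center alongside its member IDs. (That $clusters holds lists of symbolic IDs and $cluster_centers holds lists of coordinates is inferred from the module's outputs described above.)

my ($clusters, $cluster_centers) = $clusterer->kmeans();
for my $i (0 .. $#$clusters) {
    print "Cluster $i center:  @{ $cluster_centers->[$i] }\n";
    print "Cluster $i members: @{ $clusters->[$i] }\n\n";
}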
$clusterer->get_K_best();
This call makes sense only if you supply either the K=0 option to the constructor, or you specify values for the Kmin and Kmax options. The K=0 and the (Kmin,Kmax) options cause the KMeans algorithm to figure out on its own the best value for K. Remember, K is the number of clusters the data is partitioned into.
$clusterer->show_QoC_values();
presents a table with K values in the left column and the corresponding QoC (Quality-of-Clustering) values in the right column. Note that this call makes sense only if you either supply the K=0 option to the constructor, or you specify values for the Kmin and Kmax options.
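Putting these calls together, a typical sequence for the K=0 case might look like this:

my $clusterer = Algorithm::KMeans->new( datafile => $datafile,
                                        mask     => $mask,
                                        K        => 0,
                                        terminal_output => 1,
                                      );
$clusterer->read_data_from_file();
$clusterer->kmeans();
my $best_K = $clusterer->get_K_best();
print "Best number of clusters found: $best_K\n";
$clusterer->show_QoC_values();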
$clusterer->visualize_clusters( $visualization_mask );
The visualization mask here does not have to be identical to the one used for clustering, but must be a subset of that mask. This is convenient for visualizing the clusters in two- or three-dimensional subspaces of the original space.
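For instance, if the clustering was carried out in a 3D subspace, a 2D view of the first two of those dimensions could be obtained with something like the following (the particular submask is illustrative):

my $visualization_mask = "110";    # drop the third clustering dimension
$clusterer->visualize_clusters( $visualization_mask );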
$clusterer->visualize_data($visualization_mask, 'original');
$clusterer->visualize_data($visualization_mask, 'normed');
This method requires a second argument and, as shown, it must be either the string 'original' or the string 'normed', the former for the visualization of the raw data and the latter for the visualization of the data after its different dimensions are normalized by the standard-deviations along those directions. If you call the method with the second argument set to 'normed', but do so without turning on the do_variance_normalization option in the KMeans constructor, it will let you know.
Algorithm::KMeans->cluster_data_generator(
                        input_parameter_file => $parameter_file,
                        output_datafile      => $out_datafile,
                        number_data_points_per_cluster => 20 );
This class method generates multivariate data that you can use to experiment with clustering. The input parameter file contains the means and the variances for the different Gaussians you wish to use for the synthetic data. See the file param.txt provided in the examples directory. It will be easiest for you to just edit this file for your data generation needs. In addition to the format of the parameter file, the main constraint you need to observe in specifying the parameters is that the dimensionality of the covariance matrix must correspond to the dimensionality of the mean vectors. The multivariate random numbers are generated by calling the Math::Random module. As you would expect, this module requires that the covariance matrices you specify in your parameter file be symmetric and positive definite. Should the covariances in your parameter file not obey this condition, the Math::Random module will let you know.
When the option terminal_output is set in the call to the constructor, the clusters are displayed on the terminal screen.
When the option write_clusters_to_files is set in the call to the constructor, the module dumps the clusters in files named

Cluster0.dat
Cluster1.dat
Cluster2.dat
...
...
in the directory in which you execute the module. The number of such files will equal the number of clusters formed. All such existing files in the directory are destroyed before any fresh ones are created. Each cluster file contains the symbolic ID tags of the data points in that cluster.
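If you need the cluster memberships back in a script, a small sketch along the following lines would read those files back in, assuming the ID tags in each file are whitespace-separated:

use strict;
use warnings;

for my $file (sort glob "Cluster*.dat") {
    open my $fh, '<', $file or die "Cannot open $file: $!";
    my $content = do { local $/; <$fh> };   # slurp the whole file
    close $fh;
    my @ids = split ' ', $content;          # awk-style split on whitespace
    print "$file holds ", scalar @ids, " records: @ids\n";
}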
This module requires the following three modules:
Math::Random
Graphics::GnuplotIF
Math::GSL

the first for generating the multivariate random numbers, the second for the visualization of the clusters, and the last for access to the Perl wrappers for the GNU Scientific Library. The Matrix module of this library is used for the PCA of the data when clustering is done with the 'smart' mode for cluster seeding.
See the examples directory in the distribution for how to make calls to the clustering and the visualization methods. The examples directory also includes a parameter file, param.txt, for generating synthetic data for clustering. Just edit this file if you would like to generate your own multivariate data for clustering. The parameter file is for the 3D case, but you can generate data with any dimensionality through appropriate entries in the parameter file.
This module exports nothing, by design.
Please note that this clustering module is not meant for very large datafiles. Being an all-Perl implementation, the goal here is not the speed of execution. On the contrary, the goal is to make it easy to experiment with the different facets of K-Means clustering. If you need to process a large data file, you'd be better off with a module like Algorithm::Cluster. However, note that when you use a wrapper module in which a C library actually does the job of clustering for you, it is more difficult to experiment with the various aspects of clustering. At the least, you have to recompile the code for every change you make to the source code of a low-level library. You are spared that frustration with an all-Perl implementation.
Clustering usually does not work well when the data is highly anisotropic, that is, when the data has very different variances along its different dimensions. This problem becomes particularly severe when the different clusters you expect to see in the data have non-uniform anisotropies. When the anisotropies are uniform, one can try to improve the performance of a clusterer by first normalizing the data coordinates along a direction by an average of the intra-cluster standard-deviations along that direction. But how to obtain even a rough estimate of such standard deviations leads you to a chicken-and-egg sort of problem. The current implementation takes the low road and, when you turn on the data normalization in the KMeans constructor, normalizes each data coordinate value by the overall data standard deviation along that direction. However, as described elsewhere, this may actually reduce the performance of the clusterer if the data variability along a direction is more a result of the separation between the means than of the intra-cluster variability. For better clustering, one could also try to cluster the data in a low-dimensional space formed by a principal components analysis of the data. Depending on how the current module is received, its future versions may include that enhancement.
Please notify the author if you encounter any bugs. When sending email, please place the string 'KMeans' in the subject line.
The usual
perl Makefile.PL
make
make test
make install
if you have root access. If not,
perl Makefile.PL prefix=/some/other/directory/
make
make test
make install
It was an email from Nadeem Bulsara that prompted me to create Version 1.40 of this module. Working with Version 1.30, Nadeem noticed that occasionally the module would produce variable clustering results on the same dataset. I believe that this variability was caused (at least partly) by the purely random mode that was used in Version 1.30 for the seeding of the cluster centers. Version 1.40 now includes a 'smart' mode. With the new mode the clusterer uses a PCA (Principal Components Analysis) of the data to make good guesses for the cluster centers. However, depending on how the data is jumbled up, it is possible that the new mode will not produce uniformly good results in all cases. So you can still use the old mode by setting cluster_seeding to 'random' in the constructor. Thanks Nadeem for your feedback!
Version 1.30 resulted from Martin Kalin reporting problems with a very small data set. Thanks Martin!
Version 1.21 came about in response to the problems encountered by Luis Fernando D'Haro with version 1.20. Although the module would yield the clusters for some of its runs, more frequently than not the module would abort with an "empty cluster" message for his data. Luis Fernando has also suggested other improvements (such as clustering directly from the contents of a hash) that I intend to make in future versions of this module. Thanks Luis Fernando.
Chad Aeschliman was kind enough to test out the interface of this module and to give suggestions for its improvement. His key slogan: "If you cannot figure out how to use a module in under 10 minutes, it's not going to be used." That should explain the longish Synopsis included here.
Avinash Kak, kak@purdue.edu
If you send email, please place the string "KMeans" in your subject line to get past my spam filter.
This library is free software; you can redistribute it and/or modify it under the same terms as Perl itself.
Copyright 2012 Avinash Kak