c2c workflow

Mitchell Lyons

2017-07-21

What is c2c?

c2c is an R package for comparing two classifications or clustering solutions that have different structures - for example, the two classifications have different numbers of classes, or one classification has soft membership and the other has hard membership. You can create a confusion matrix (error matrix) and then calculate various metrics to assess how the clusters compare to each other. The calculations are simple, but provide a handy tool for users unfamiliar with matrix multiplication. Helper functions also let you do things like turn a soft classification into a hard one, or turn a set of class labels into a binary classification matrix.
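For context, the confusion matrix between two membership matrices is conceptually just their matrix cross-product. Here is a minimal sketch of that idea with two toy membership matrices (these toy objects are illustrative only, and don't use the package itself):

# toy membership matrices: 3 observations, 2 classes each, rows sum to 1
# A is a soft classification, B is a hard one
A <- matrix(c(0.8, 0.2,
              0.1, 0.9,
              0.6, 0.4), ncol = 2, byrow = TRUE)
B <- matrix(c(1, 0,
              0, 1,
              1, 0), ncol = 2, byrow = TRUE)
crossprod(A, B) # t(A) %*% B gives the 2 x 2 confusion matrix
##      [,1] [,2]
## [1,]  1.4  0.1
## [2,]  0.6  0.9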

How to use c2c

The basic premise is that you already have two (or perhaps more) classifications that you would like to compare - these could be from a clustering algorithm, extracted from a remote sensing map, a set of classes assigned manually, etc. A number of tools and packages already exist to calculate cluster diagnostics or accuracy metrics, but they are usually focused on comparing clustering solutions that are hard (i.e. each observation has only one class) and that have the same number of classes (e.g. a clustering solution vs. the ‘truth’). c2c is designed to let you compare classifications that do not fit into this scenario. The motivating problem was the need to compare a probabilistic clustering of vegetation data to an existing hard classification of that data (a hierarchy with different numbers of classes at each level), without losing the probabilistic component that the clustering algorithm produces.

An example with the iris data

In this vignette we will work through a simple, but hopefully useful, example using the iris data set. We will use a fuzzy clustering algorithm from the e1071 package.

library(c2c)
library(e1071)

Load the iris data set, and prep for clustering

data(iris)
iris_dat <- iris[,-5]

Let’s start with a cluster analysis with 3 groups, since we know that’s where we’re headed, and extract the soft classification matrix

fcm3 <- cmeans(x = iris_dat, centers = 3)
fcm3_probs <- fcm3$membership
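A small aside: cmeans starts from a random initialisation, so the exact membership values below will vary from run to run. If you want reproducible numbers, set a seed before clustering (the seed value here is arbitrary, and was not used for the output shown in this vignette):

set.seed(1) # any fixed seed; call this before cmeans
fcm3 <- cmeans(x = iris_dat, centers = 3)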

Now we want to compare that soft matrix to a set of hard labels; we’ll use the species names. get_conf_mat produces the confusion matrix; it takes two inputs, each of which can be a matrix or a set of labels

get_conf_mat(fcm3_probs, iris$Species)
##      setosa versicolor virginica
## 1 48.224994   2.641100  1.062747
## 2  1.205211  39.685901 13.098060
## 3  0.569795   7.672999 35.839193

The output confusion matrix shows us the number of shared sites between our clustering solution and the set of labels (species in this case), accounting for the probabilistic memberships. We can see here that our 3 clusters have very clear fidelity to the species. We can also see what the relationship is like if we degrade the clustering to hard labels (this is the traditional error matrix/accuracy assessment case)

get_conf_mat(fcm3_probs, iris$Species, make.A.hard = TRUE)
##   setosa versicolor virginica
## 1     50          0         0
## 2      0         47        13
## 3      0          3        37
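A handy sanity check: because each row of a membership matrix sums to 1, the entries of the confusion matrix (soft or hard) sum to the total number of observations, and each column sums to the 50 observations per species. A quick check (this snippet is not part of the original workflow):

conf_soft <- get_conf_mat(fcm3_probs, iris$Species)
colSums(conf_soft) # 50 per species
sum(conf_soft)     # 150 observations in total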

Nice - just a little confusion between versicolor and virginica. Let’s try more clusters and see if we can tease that apart

fcm10 <- cmeans(x = iris_dat, centers = 10)
fcm10_probs <- fcm10$membership
get_conf_mat(fcm10_probs, iris$Species)
##         setosa versicolor  virginica
## 1   0.34547022 14.7329121  2.0333764
## 2  11.95170415  0.4940612  0.2439609
## 3   0.24319519 12.3037587  3.4208833
## 4   0.16077005  2.7168101 12.1892522
## 5   0.13486843  1.4033309 12.4686901
## 6  19.52093247  0.5342928  0.2399384
## 7   0.09675357  0.5975143  7.7687284
## 8  16.88110172  0.5220052  0.2253488
## 9   0.45591999 10.5286027  1.1539989
## 10  0.20928421  6.1667121 10.2558224
get_conf_mat(fcm10_probs, iris$Species, make.A.hard = TRUE)
##    setosa versicolor virginica
## 1       0         15         1
## 2      13          0         0
## 3       0         18         0
## 4       0          0        13
## 5       0          0        14
## 6      19          0         0
## 7       0          0         9
## 8      18          0         0
## 9       0         13         0
## 10      0          4        13

That cleans things up somewhat, but note that the uncertainty is hidden when you compare hard classifications. As an aside, when you set make.A.hard = TRUE, the function get_hard is used behind the scenes - it might be useful elsewhere too. Similarly, when you pass a vector of labels to get_conf_mat, the function labels_to_matrix makes the binary classification matrix.

head(get_hard(fcm3_probs))
##      1 2 3
## [1,] 1 0 0
## [2,] 1 0 0
## [3,] 1 0 0
## [4,] 1 0 0
## [5,] 1 0 0
## [6,] 1 0 0
head(labels_to_matrix(iris$Species))
##   setosa versicolor virginica
## 1      1          0         0
## 2      1          0         0
## 3      1          0         0
## 4      1          0         0
## 5      1          0         0
## 6      1          0         0
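As the output suggests, get_hard assigns each observation to its highest-membership cluster. Assuming no exact ties in the memberships, this should agree with base R’s max.col (a quick check, not part of the original workflow):

# expected TRUE: the single 1 in each hard row sits in the max-membership column
all(max.col(get_hard(fcm3_probs)) == max.col(fcm3_probs, ties.method = "first"))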

You can also compare two soft matrices; for example, we could compare the 3- and 10-class classifications we just made

get_conf_mat(fcm3_probs, fcm10_probs)
##           1          2          3          4         5          6
## 1  0.908109 11.3763939  0.7155253  0.4552394  0.353112 19.1789402
## 2 14.054799  0.8733600 10.3090321  4.2496819  2.377165  0.7526885
## 3  2.148851  0.4399723  4.9432798 10.3619111 11.276613  0.3635350
##           7          8        9        10
## 1 0.3640385 16.3597210 1.545821 0.6719413
## 2 1.6359329  0.8617612 8.918665 9.9560866
## 3 6.4630248  0.4069735 1.674036 6.0037908
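As a consistency check, the rows of this soft-vs-soft matrix sum to the (soft) sizes of the three clusters, i.e. the column sums of fcm3_probs (a hypothetical check, not part of the original workflow):

all.equal(unname(rowSums(get_conf_mat(fcm3_probs, fcm10_probs))),
          unname(colSums(fcm3_probs))) # expected TRUE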

Or we could directly compare two vectors of labels, which is just another way of doing what we already did above.

get_conf_mat(fcm3$cluster, iris$Species)
##   setosa versicolor virginica
## 1     50          0         0
## 2      0         47        13
## 3      0          3        37
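Since cmeans assigns $cluster to the maximum-membership cluster, this should reproduce the make.A.hard result from earlier. A quick way to check (expected TRUE, barring ties):

hard_a <- get_conf_mat(fcm3_probs, iris$Species, make.A.hard = TRUE)
hard_b <- get_conf_mat(fcm3$cluster, iris$Species)
all(hard_a == hard_b)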

Examining the confusion matrix can be enlightening by itself, but more quantitative metrics are useful too, particularly if you’re comparing lots of classifications - for example, when optimising clustering parameters, or when comparing many different clustering solutions. calculate_clustering_metrics does this

conf_mat <- get_conf_mat(fcm3_probs, iris$Species)
calculate_clustering_metrics(conf_mat)
## Percentage agreement WILL be calculated: it will only make sense if the confusion matrix diagonal corresponds to matching classes (i.e. rows and columns are in the same class order)
## $percentage_agreement
## [1] 0.8250006
## 
## $overall_purity
## [1] 0.8250006
## 
## $class_purity
## $class_purity$row_purity
##         1         2         3 
## 0.9286746 0.7350715 0.8130122 
## 
## $class_purity$col_purity
##     setosa versicolor  virginica 
##  0.9644999  0.7937180  0.7167839 
## 
## 
## $overall_entropy
## [1] 0.4504386
## 
## $class_entropy
## $class_entropy$row_entropy
##         1         2         3 
## 0.4325273 0.9445730 0.7629396 
## 
## $class_entropy$col_entropy
##     setosa versicolor  virginica 
##  0.2534101  0.9036221  0.9686891

Purity and entropy are as defined in Manning et al. (2008). Overall and per-class metrics are included, as both are useful in different situations. See Lyons et al. (2017) and Foster et al. (2017) for use on a model-based vegetation clustering example. Finally, note the message above about percentage agreement - as it says, only use it if the two classifications have their classes in matching order (e.g. numbered clusters, which should stay in order). For a decent classification, it shouldn’t differ much from purity anyway.
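To make the purity definition concrete: overall purity is the sum of the row maxima divided by the matrix total, so it can be reproduced by hand from the confusion matrix we computed above:

sum(apply(conf_mat, 1, max)) / sum(conf_mat)
## [1] 0.8250006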

References

Foster, Hill and Lyons (2017). “Ecological Grouping of Survey Sites when Sampling Artefacts are Present”. Journal of the Royal Statistical Society: Series C (Applied Statistics). DOI: http://dx.doi.org/10.1111/rssc.12211

Lyons, Foster and Keith (2017). “Simultaneous Vegetation Classification and Mapping at Large Spatial Scales”. Journal of Biogeography.

Manning, Raghavan and Schütze (2008). “Introduction to Information Retrieval”. Cambridge: Cambridge University Press.