Cluster analysis
Cluster analysis is an exploratory procedure for dividing data sets into groups according to their similarity. Various criteria and characteristics can be used for cluster analysis, on the basis of which the similarity of the individual data points is determined. A cluster analysis is based on the calculation of a similarity measure and belongs to the unsupervised machine learning methods[1].
Prerequisites of the cluster analysis
A cluster should be maximally homogeneous within itself and clearly distinguishable from other clusters. A clear demarcation must be ensured. Therefore, the following conditions should be met[2][3]:
- Size of the data set: A meaningful result can often only be achieved with a sufficiently large data set. Depending on the task, it must therefore be assessed whether the amount of data is sufficient.
- Normalization of the data: If the variables differ strongly in their value ranges, the data should be normalized beforehand (see the sketch after this list).
- Elimination of outliers: Outliers can strongly distort the results. The data should therefore first be examined for extreme values, and any outliers should then be eliminated.
- Bias: If there are strong correlations between the variables, the results can end up heavily biased. This must be avoided.
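As an illustration of the normalization and outlier steps above, the following Python sketch standardizes a small synthetic data set with scikit-learn and removes points flagged by a simple z-score rule; the variable names, the synthetic data, and the threshold of three standard deviations are assumptions made for this example.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

# Synthetic data with very different value ranges (e.g. age in years, income in EUR)
rng = np.random.default_rng(0)
data = np.column_stack([
    rng.normal(40, 10, 200),         # feature 1: small range
    rng.normal(50_000, 15_000, 200)  # feature 2: large range
])

# Normalization: bring all features to mean 0 and standard deviation 1
scaled = StandardScaler().fit_transform(data)

# Elimination of outliers: drop points more than 3 standard deviations
# from the mean in any feature
mask = (np.abs(scaled) < 3).all(axis=1)
cleaned = scaled[mask]

print(f"Kept {mask.sum()} of {len(scaled)} points after outlier removal")
```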
Procedure of a cluster analysis
In a first step, the relevant characteristics and the corresponding similarity measure are determined. Next, an algorithm is selected with which the data will be analyzed, thus laying the foundation for the formation of clusters. In a third step, the number of clusters is determined and the respective clusters are formed; here, the data points are assigned on the basis of segmentation criteria. For the grouping, not only the number of groups should be evaluated, but also whether the identified clusters are of similar size[4].
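One common way to carry out the third step, determining the number of clusters, is the elbow method: the clustering is run for several candidate numbers of clusters and the within-cluster sum of squares is compared. The following sketch illustrates this with scikit-learn's k-means; the synthetic data and the candidate range from 2 to 9 are assumptions made for this example.

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

# Synthetic data with a group structure that is unknown to the analyst
X, _ = make_blobs(n_samples=300, centers=4, cluster_std=1.0, random_state=42)

# Run k-means for several candidate numbers of clusters and record the
# within-cluster sum of squares (inertia); a pronounced "elbow" in these
# values suggests a suitable number of clusters.
for k in range(2, 10):
    km = KMeans(n_clusters=k, n_init=10, random_state=42).fit(X)
    print(f"k={k}: inertia={km.inertia_:.1f}")
```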
Cluster analysis methods
There are numerous algorithms for dividing data into clusters. Which method is most suitable generally depends on the question at hand. Often, the results of different methods are compared at the end to determine the most suitable one. The best-known methods are[5]:
- K-Means: The k-means method is an iterative algorithm. In each iteration, the cluster centers are recomputed, and the similarity of a data point to a cluster center is measured by the Euclidean distance. A data point is assigned to the cluster whose center is closest. This machine learning algorithm is quite simple, but the number of clusters must be specified in advance. A major drawback of the algorithm is also that it is very sensitive to outliers (see the first sketch after this list).
- Hierarchical Cluster Analysis: This machine learning method is based on distance measures. A distinction is made between divisive and agglomerative clustering methods. Divisive procedures are top-down procedures, in which initially all objects of the data set belong to one cluster, which is then split step by step into more and more clusters. Agglomerative methods follow the opposite, bottom-up approach: each object first forms its own cluster, and the clusters are merged step by step until all objects belong to a single cluster. Once formed, clusters can no longer be changed; however, where to cut the resulting hierarchy into a partition is left to the user. Besides the high computational cost, this is the largest disadvantage of these methods. On the other hand, the number of clusters does not need to be known in advance (see the second sketch after this list).
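As a minimal illustration of the k-means method described above, the following Python sketch uses scikit-learn: the number of clusters is fixed in advance, the cluster centers are recomputed iteratively, and each point is assigned to the nearest center by Euclidean distance. The synthetic data set and the choice of three clusters are assumptions made for this example.

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

# Synthetic data set with three well-separated groups (assumed for the example)
X, _ = make_blobs(n_samples=150, centers=3, random_state=0)

# The number of clusters must be specified in advance
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0)
labels = kmeans.fit_predict(X)

# In each iteration the centers are recomputed; each point is assigned to
# the center with the smallest Euclidean distance.
print("Cluster centers:\n", kmeans.cluster_centers_)
print("First ten assignments:", labels[:10])
```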
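For the hierarchical, agglomerative approach, a minimal sketch with SciPy is shown below: each object starts in its own cluster, the closest clusters are merged step by step, and the user decides where to cut the resulting hierarchy into a partition. The Ward linkage criterion and the cut into three clusters are assumptions chosen for illustration.

```python
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.datasets import make_blobs

# Small synthetic data set (assumed for the example)
X, _ = make_blobs(n_samples=50, centers=3, random_state=1)

# Agglomerative (bottom-up) clustering: start with one cluster per object
# and merge the two closest clusters step by step (Ward linkage here).
Z = linkage(X, method="ward")

# The number of clusters need not be known beforehand; the user chooses
# where to cut the hierarchy, here into three clusters.
labels = fcluster(Z, t=3, criterion="maxclust")
print("Cluster assignments:", labels)
```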
Applications of the cluster analysis
Cluster analysis has become a common means of grouping data in a wide variety of fields[6]:
- Marketing: Analyzing customers and sorting them into the right target groups can be an enormous competitive advantage in marketing. Cluster analyses are used here to identify similar customers from the entire customer base and to develop individual advertising strategies for these customers.
- Medicine and psychology: Behavioral patterns or disease patterns can also be grouped into clusters. Suitable therapies can then be developed on this basis.
References
- Aggarwal, C. C., Reddy, C. K. (2014). Data Clustering: Algorithms and Applications, "Chapman & Hall".
- Everitt, B. S., Landau, S., Leese, M., Stahl, D. (2011). Cluster Analysis, 5th Edition, "Wiley Series in Probability and Statistics".
- Tian, Y., Xu, D. (2015). A Comprehensive Survey of Clustering Algorithms, "Annals of Data Science", 2(2), pp. 165-193.
Author: Max Bachmann