Cluster analysis

From CEOpedia | Management online

Revision as of 14:51, 22 November 2022

Cluster analysis is an exploratory procedure for dividing data sets into groups according to their similarity. Various criteria and characteristics can be used for cluster analysis, on the basis of which the similarity of the individual data points is determined. Cluster analysis is based on the calculation of a similarity measure and belongs to the unsupervised machine learning methods[1].

Prerequisites of the cluster analysis

A cluster should be maximally homogeneous within itself and clearly distinguishable from other clusters. A clear demarcation must be ensured. Therefore, the following conditions should be met[2][3]:

  • Size of the data set: A meaningful result may only be achievable with a sufficiently large data set. Depending on the task, it must therefore be weighed up whether the amount of data is sufficient.
  • Normalization of the data: If the value ranges of the individual features differ greatly, the data should be normalized beforehand.
  • Elimination of outliers: Outliers can strongly distort the results. The data should therefore first be screened for extreme values, and outliers should then be eliminated.
  • Bias: Strong correlations between the data can heavily bias the results. This must be avoided.
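The normalization and outlier prerequisites can be made concrete with a short sketch in plain Python. The data, function names, and z-score threshold below are illustrative assumptions, not a prescribed procedure:

```python
from statistics import mean, stdev

def standardize(values):
    """Z-score normalization: rescale a feature to mean 0 and standard
    deviation 1 so that features with different value ranges become
    comparable."""
    m, s = mean(values), stdev(values)
    return [(v - m) / s for v in values]

def drop_outliers(values, threshold=3.0):
    """Drop points whose z-score exceeds the threshold; such extreme
    values would otherwise pull cluster centers away from the bulk of
    the data."""
    m, s = mean(values), stdev(values)
    return [v for v in values if abs((v - m) / s) <= threshold]

# Hypothetical monthly incomes; the last value is an obvious outlier.
incomes = [28_000, 31_000, 30_500, 29_200, 32_100, 27_500,
           30_800, 29_900, 31_400, 28_700, 30_900, 250_000]
cleaned = drop_outliers(incomes)   # removes 250_000 (z-score above 3)
scaled = standardize(cleaned)      # remaining values: mean 0, stdev 1
```

Note the order: outliers are removed first, because an extreme value inflates the mean and standard deviation and would distort the normalization itself.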

Procedure of a cluster analysis

In a first step, the relevant characteristics and a corresponding similarity measure are determined. Next, an algorithm is selected with which the data will be analyzed, laying the foundation for the formation of clusters. Thirdly, the number of clusters is determined and the respective clusters are formed; here, the data is assigned on the basis of segmentation criteria. For the grouping, not only the number of groups must be evaluated, but also whether the identified clusters are of comparable size[4].
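The first step, turning characteristics into a similarity measure, can be sketched as a pairwise distance matrix. The customer features below are hypothetical and only serve to illustrate the idea:

```python
from math import dist  # Euclidean distance between two points (Python 3.8+)

# Hypothetical customer records as (age, monthly spend) feature vectors.
customers = [(25, 300), (27, 320), (60, 1500), (62, 1480)]

# Step 1: quantify similarity via pairwise Euclidean distances.
# A small distance means similar customers and is the basis on which
# a clustering algorithm later forms the groups.
distances = [[dist(a, b) for b in customers] for a in customers]
```

In this toy matrix the first two and the last two customers are close to each other but far from the opposite pair, which already suggests two clusters of comparable size.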

Cluster analysis methods

There are numerous algorithms for dividing data into clusters. Which method is most suitable generally depends on the question at hand. Often, the results of different methods are compared at the end in order to select the most appropriate one. The best-known methods are[5]:

  • K-Means: The k-means method is an iterative algorithm. In each iteration, the cluster centers are recomputed, and the similarity of a data point to a cluster center is measured by the Euclidean distance. A data point is assigned to the cluster whose center is closest. This machine learning algorithm is quite simple, but the number of clusters must be fixed in advance. A further major drawback is that the algorithm is very sensitive to outliers.
  • Hierarchical cluster analysis: This machine learning method is based on distance measures. A distinction is made between divisive and agglomerative clustering methods. The divisive procedures are top-down: initially, all objects of the data set belong to a single cluster, which is then split into more and more clusters step by step. The agglomerative methods follow the opposite, bottom-up approach: each object first forms its own cluster, and clusters are merged step by step until all objects belong to one cluster. Once formed, clusters can no longer be changed, and how to partition the resulting hierarchy is up to the user. Together with the high computational cost, this is the main disadvantage of these methods. In return, the number of clusters does not have to be known beforehand.
  • Two-stage clustering: Two-stage clustering is the most complex approach, as it combines the two aforementioned methods. First, a hierarchical procedure is used to determine the number of clusters and to provide an initial clustering; this can be regarded as an initialization step. The k-means procedure then builds on this information and refines the results.
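The two k-means steps described above, assigning each point to its nearest center and then recomputing the centers, can be written in a few lines of plain Python. This is a minimal sketch on toy data with a fixed seed, not a production implementation:

```python
from math import dist
from random import seed, sample

def k_means(points, k, iterations=50):
    """Minimal k-means sketch: repeatedly assign each point to its
    nearest center (Euclidean distance) and recompute each center as
    the mean of its assigned points."""
    seed(0)                      # fixed seed so the sketch is reproducible
    centers = sample(points, k)  # initialize with k distinct data points
    for _ in range(iterations):
        clusters = [[] for _ in range(k)]
        for p in points:         # assignment step: nearest center wins
            nearest = min(range(k), key=lambda i: dist(p, centers[i]))
            clusters[nearest].append(p)
        for i, members in enumerate(clusters):  # update step
            if members:          # keep the old center if a cluster is empty
                centers[i] = tuple(
                    sum(coord) / len(members) for coord in zip(*members)
                )
    return centers, clusters

# Two well-separated groups; note that k must be chosen in advance.
points = [(1, 1), (1, 2), (2, 1), (8, 8), (8, 9), (9, 8)]
centers, clusters = k_means(points, k=2)
```

The sketch also makes the stated drawbacks visible: k is a required input, and because centers are arithmetic means, a single extreme point would drag its center far away from the rest of the cluster.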

Applications of the cluster analysis

Cluster analysis has become a common means of grouping data in a wide variety of fields[6]:

  • Marketing: Analyzing customers and sorting them into the right target groups can be an enormous competitive advantage in marketing. Cluster analyses are used here to identify similar customers from the entire customer base and to develop individual advertising strategies for these customers.
  • Medicine and psychology: Behavioral patterns or disease patterns can also be grouped into clusters. Suitable therapies can then be developed on this basis.

Footnotes

  1. Everitt, Landau, Leese, Stahl, 2011, pp. 2-8.
  2. Aggarwal, Reddy, 2014, pp. 577-583.
  3. Aggarwal, Reddy, 2014, p. 124.
  4. Tian, Xu, 2015, p. 166.
  5. Aggarwal, Reddy, 2014, pp. 89-105.
  6. Everitt, Landau, Leese, Stahl, 2011, pp. 9-13.

References

Author: Max Bachmann