Cronbach Alpha

Cronbach Alpha is one of the most widely used methods for measuring the internal consistency reliability of a set of items. It is also known as Cronbach's Alpha coefficient or Cronbach's α (Research Methods and Design in Sport Management). It was first described by Lee Cronbach in 1951.

According to Damon P. S. Andrew, Paul Mark Pedersen, and Chad D. McEvoy: "Cronbach's alpha measures how well a set of variables or items measures a single, latent construct. It is essentially a correlation between the item responses in a questionnaire, assuming the statistic is directed toward a group of items intended to measure the same construct. Cronbach's alpha values will be high when the correlations between the respective questionnaire items are high. Cronbach's alpha values range from 0 to 1, and in the social sciences, values at or above 0.7 are desirable, but values well above 0.9 may not be desirable as the scale is likely to be too narrow in focus (Nunnally & Bernstein 1994)."
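To illustrate the idea above, the following is a minimal sketch (in Python, not taken from the cited sources) of how Cronbach's alpha can be computed from a raw matrix of questionnaire responses, using the common variance-based form of the coefficient; the response data are hypothetical.

import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: 2-D array, rows = respondents, columns = questionnaire items."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                               # number of items
    item_variances = items.var(axis=0, ddof=1)       # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)   # variance of the summed scale score
    # Variance-based form: alpha = k/(k-1) * (1 - sum of item variances / total-score variance)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical responses of 5 people to a 4-item scale (e.g. 1-5 Likert answers).
responses = np.array([
    [4, 5, 4, 5],
    [2, 2, 3, 2],
    [3, 4, 3, 4],
    [5, 5, 5, 4],
    [1, 2, 1, 2],
])
print(f"Cronbach's alpha = {cronbach_alpha(responses):.2f}")

Because the four hypothetical items move up and down together across respondents, the resulting alpha is high, which matches the interpretation given above.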

Interpreting Cronbach's Alpha

"Cronbach's Alpha is an indicator of the internal consistency or homogeneity of a scale. Fundamentally, and perhaps a little simplistically, Cronbach's alpha tells you the extent to which all of the items on the test are "behaving" similarly. A low alpha suggests that there are errors in the selection of items to be included in the measure. If a measure has several subscales, alphas are calculated and reported for each of the individual subscales. It may or may not make sense to report alpha for the scale as a whole as well, depending upon the degree to which the scale as a whole is measuring the same phenomenon" (R. Tappen, 2011, p.131).

The formula from Carmines and Zeller (1979, p. 44)

Cronbach's alpha is often described in terms of the Spearman-Brown prophecy formula: the mean inter-item correlation (c̄) is measured to estimate the degree of agreement among individual test items, and this is then used to predict the reliability coefficient of an n-item test built from these single-item measures. Another way to view Cronbach's alpha is that it is, in effect, the average of all possible split-half reliabilities (R. M. Warner, 2008, p. 854).

\alpha = \frac{n\bar{c}}{1 + \bar{c}(n-1)}

  • where n is the number of items
  • and c̄ is the average inter-item correlation

(R. M. Warner, 2008, p. 854)
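A minimal sketch of the formula above, using only the two quantities defined in the list; the example values of n and c̄ are hypothetical and not taken from the cited sources.

def standardized_alpha(n_items: int, mean_interitem_corr: float) -> float:
    """Standardized alpha: alpha = n * c_bar / (1 + c_bar * (n - 1))."""
    c_bar = mean_interitem_corr
    return (n_items * c_bar) / (1 + c_bar * (n_items - 1))

# For a 10-item scale whose items correlate on average at 0.3:
print(round(standardized_alpha(10, 0.3), 2))  # 0.81

The example shows the behaviour described above: even a modest average inter-item correlation yields a high alpha once the number of items grows.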

Use of Cronbach's Alpha in health science

Cronbach's Alpha can be used in many different ways. One example is in the health sciences.

"Tests in and outside the health sciences vary enormously in their internal consistency and split-half reliability". Cronbach's alpha is often used to report these measurements and standards. Split-half reliability is an adjusted estimate of the agreement between the first and the second half of the test.
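As a rough illustration of the split-half idea mentioned above, the following sketch correlates scores on the first and second halves of a hypothetical test and then applies the Spearman-Brown adjustment referred to earlier; the response data and the first-half/second-half split are illustrative assumptions, not taken from the cited study.

import numpy as np

def split_half_reliability(items: np.ndarray) -> float:
    """items: rows = respondents, columns = test items in administration order."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    first = items[:, :k // 2].sum(axis=1)    # total score on the first half of the items
    second = items[:, k // 2:].sum(axis=1)   # total score on the second half
    r_halves = np.corrcoef(first, second)[0, 1]
    # Spearman-Brown adjustment: estimate full-length reliability from the half-test correlation
    return 2 * r_halves / (1 + r_halves)

# Hypothetical responses of 5 people to a 6-item test.
responses = np.array([
    [4, 5, 4, 5, 3, 4],
    [2, 2, 3, 2, 2, 1],
    [3, 4, 3, 4, 4, 3],
    [5, 5, 5, 4, 5, 5],
    [1, 2, 1, 2, 2, 1],
])
print(f"Split-half reliability = {split_half_reliability(responses):.2f}")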

A recently developed evaluation of nursing competency in the operating theatre, created by researchers at the University of Melbourne, presents a thought-provoking example of the use of Cronbach's Alpha. It attained a surprisingly high Cronbach's alpha coefficient of item reliability: a between-item Cronbach's Alpha score of 0.94 was determined for this performance-based instrument (S. Mckenzie, 2013, p. 201).

References

Author: Jakub Winiarski