In statistics, Spearman's rank correlation coefficient or Spearman's rho, named after Charles Spearman and often denoted by the Greek letter ρ (rho) or as r_s, is a nonparametric measure of rank correlation (statistical dependence between the rankings of two variables). It assesses how well the relationship between two variables can be described using a monotonic function.

The Spearman correlation between two variables is equal to the Pearson correlation between the rank values of those two variables; while Pearson's correlation assesses linear relationships, Spearman's correlation assesses monotonic relationships (whether linear or not). If there are no repeated data values, a perfect Spearman correlation of +1 or -1 occurs when each of the variables is a perfect monotone function of the other.

Intuitively, the Spearman correlation between two variables will be high when observations have a similar (or identical for a correlation of 1) rank (i.e. relative position label of the observations within the variable: 1st, 2nd, 3rd, etc.) between the two variables, and low when observations have a dissimilar (or fully opposed for a correlation of -1) rank between the two variables.

Spearman's coefficient is appropriate for both continuous and discrete ordinal variables. Both Spearman's ρ and Kendall's τ can be formulated as special cases of a more general correlation coefficient.


Definition and calculation

The Spearman correlation coefficient is defined as the Pearson correlation coefficient between the ranked variables.

For a sample of size n, the n raw scores X_i, Y_i are converted to ranks rg(X_i), rg(Y_i), and r_s is computed as

r_s = \rho_{\operatorname{rg}_X, \operatorname{rg}_Y} = \frac{\operatorname{cov}(\operatorname{rg}_X, \operatorname{rg}_Y)}{\sigma_{\operatorname{rg}_X}\,\sigma_{\operatorname{rg}_Y}}
where
  • ρ denotes the usual Pearson correlation coefficient, but applied to the rank variables,
  • cov(rg_X, rg_Y) is the covariance of the rank variables,
  • σ_{rg_X} and σ_{rg_Y} are the standard deviations of the rank variables.

Only if all n ranks are distinct integers can it be computed using the popular formula

r_s = 1 - \frac{6 \sum d_i^{2}}{n(n^{2}-1)}
where
  • d_i = rg(X_i) - rg(Y_i) is the difference between the two ranks of each observation,
  • n is the number of observations.
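Under the same assumptions (NumPy/SciPy), and valid only when all ranks are distinct, the simplified formula translates directly into a short sketch:

    import numpy as np
    from scipy.stats import rankdata

    def spearman_rho_simple(x, y):
        # Simplified formula 1 - 6*sum(d_i^2)/(n*(n^2 - 1)); assumes no tied ranks
        d = rankdata(x) - rankdata(y)   # rank differences d_i
        n = len(x)
        return 1 - 6 * np.sum(d ** 2) / (n * (n ** 2 - 1))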

Identical values are usually each assigned fractional ranks equal to the average of their positions in the ascending order of the values, which is equivalent to averaging over all possible permutations.

If ties are present in the data set, the simplified formula above yields incorrect results: only if in both variables all ranks are distinct does \sigma_{\operatorname{rg}_X} \sigma_{\operatorname{rg}_Y} = \operatorname{Var}(\operatorname{rg}_X) = \operatorname{Var}(\operatorname{rg}_Y) = (n^2 - 1)/12 hold (calculated using the biased variance). The first equation, which normalizes by the standard deviations, may be used even when ranks are normalized to [0, 1] ("relative ranks"), because it is insensitive both to translation and to linear scaling.
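A small numerical check of this point, reusing the two sketches above on illustrative data containing one tie, shows the simplified formula drifting away from the exact Pearson-on-ranks value:

    x = [1, 2, 2, 4, 5]   # the two 2s are tied and both receive average rank 2.5
    y = [3, 1, 4, 5, 2]
    print(spearman_rho(x, y))          # exact value (Pearson correlation of the ranks)
    print(spearman_rho_simple(x, y))   # simplified formula; slightly off because of the tie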

The simplified formula should likewise not be used in cases where the data set is truncated; that is, when the Spearman correlation coefficient is desired for the top X records (whether by pre-change rank, post-change rank, or both), the user should use the Pearson correlation coefficient formula given above, applied to the ranks.

The standard error of the coefficient (σ) was determined by Pearson in 1907 and Gosset in 1920. It is

\sigma_{r_s} = \frac{0.6325}{\sqrt{n-1}}
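For example, with a sample of n = 10 observations this gives \sigma_{r_s} = 0.6325/\sqrt{10 - 1} \approx 0.21.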

Related quantities

There are several other numerical measures that quantify the extent of statistical dependence between pairs of observations. The most common of these is the Pearson product-moment correlation coefficient, which is similar to Spearman's rank correlation but measures the "linear" relationship between the raw values rather than between their ranks.

An alternative name for the Spearman rank correlation is the "grade correlation"; in this, the "rank" of an observation is replaced by the "grade". In continuous distributions, the grade of an observation is, by convention, always one half less than the rank, and hence the grade and rank correlations are the same in this case. More generally, the "grade" of an observation is proportional to an estimate of the fraction of a population less than a given value, with the half-observation adjustment at observed values. Thus this corresponds to one possible treatment of tied ranks. While unusual, the term "grade correlation" is still in use.


Interpretation

The sign of the Spearman correlation indicates the direction of association between X (the independent variable) and Y (the dependent variable). If Y tends to increase when X increases, the Spearman correlation coefficient is positive. If Y tends to decrease when X increases, the Spearman correlation coefficient is negative. A Spearman correlation of zero indicates that there is no tendency for Y to either increase or decrease when X increases. The Spearman correlation increases in magnitude as X and Y become closer to being perfect monotone functions of each other. When X and Y are perfectly monotonically related, the Spearman correlation coefficient becomes 1. A perfect monotone increasing relationship implies that for any two pairs of data values (X_i, Y_i) and (X_j, Y_j), the differences X_i - X_j and Y_i - Y_j always have the same sign. A perfect monotone decreasing relationship implies that these differences always have opposite signs.

The Spearman correlation coefficient is often described as being "nonparametric". This can have two meanings. First, a perfect Spearman correlation results when X and Y are related by any monotonic function; contrast this with the Pearson correlation, which only gives a perfect value when X and Y are related by a linear function. The other sense in which the Spearman correlation is nonparametric is that its exact sampling distribution can be obtained without requiring knowledge (i.e., knowing the parameters) of the joint probability distribution of X and Y.


Example

In this example, ten paired observations of a person's IQ and the number of hours of TV watched per week are used to calculate the correlation between the two variables (the paired values are listed in the sketch at the end of this section).

First, evaluate d_i^2. To do so, use the following steps.

  1. Sort the data by the first column (X_i). Create a new column x_i and assign it the ranked values 1, 2, 3, ..., n.
  2. Next, sort the data by the second column (Y_i). Create a fourth column y_i and similarly assign it the ranked values 1, 2, 3, ..., n.
  3. Create a fifth column d_i to hold the differences between the two rank columns (x_i and y_i).
  4. Create one final column d_i^2 to hold the value of column d_i squared.

With d_i^2 found, add them to find \sum d_i^{2} = 194. The value of n is 10. These values can now be substituted back into the equation \rho = 1 - \frac{6 \sum d_i^{2}}{n(n^{2}-1)} to give

\rho = 1 - \frac{6 \times 194}{10(10^{2}-1)}

which evaluates to ρ = -29/165 = -0.175757575..., with a p-value of 0.627188 (using the t distribution).

This low value shows that the correlation between IQ and hours spent watching TV is very low, although the negative value suggests that the longer the time spent watching television, the lower the IQ. In the case of ties in the original values, this formula should not be used; instead, the Pearson correlation coefficient should be calculated on the ranks (where ties are given average ranks, as described above).
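The steps above can be reproduced with a short Python sketch; the paired values below are illustrative data chosen to be consistent with the totals quoted in this example (n = 10, \sum d_i^{2} = 194):

    import numpy as np
    from scipy.stats import rankdata, t

    iq    = np.array([106, 86, 100, 101, 99, 103, 97, 113, 112, 110])
    hours = np.array([  7,  0,  27,  50, 28,  29, 20,  12,   6,  17])  # weekly TV hours

    x = rankdata(iq)       # ranks of IQ
    y = rankdata(hours)    # ranks of TV hours
    d = x - y              # rank differences d_i
    n = len(iq)

    print(np.sum(d ** 2))                               # 194.0
    rho = 1 - 6 * np.sum(d ** 2) / (n * (n ** 2 - 1))
    print(rho)                                          # about -0.1758

    # Two-sided p-value from the Student's t approximation with n - 2 degrees of freedom
    t_stat = rho * np.sqrt((n - 2) / (1 - rho ** 2))
    print(2 * t.sf(abs(t_stat), df=n - 2))              # about 0.627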


Determining significance

One approach to testing whether an observed value of ρ is significantly different from zero (r always satisfies -1 ≤ r ≤ 1) is to calculate the probability that it would be greater than or equal to the observed r, given the null hypothesis, by using a permutation test. An advantage of this approach is that it automatically takes into account the number of tied data values in the sample and the way they are treated in computing the rank correlation.
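A minimal sketch of such a permutation test, reusing the spearman_rho helper above (the permutation count of 10,000 is an arbitrary illustrative choice):

    import numpy as np

    def spearman_permutation_pvalue(x, y, n_perm=10_000, seed=0):
        # Two-sided permutation p-value: shuffle one variable and count how often
        # the shuffled |rho| is at least as large as the observed |rho|
        rng = np.random.default_rng(seed)
        observed = abs(spearman_rho(x, y))
        hits = sum(abs(spearman_rho(x, rng.permutation(y))) >= observed
                   for _ in range(n_perm))
        return hits / n_perm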

Another approach parallels the use of the Fisher transformation in the case of the Pearson product-moment correlation coefficient. That is, confidence intervals and hypothesis tests relating to the population value ρ can be carried out using the Fisher transformation:

F(r) = \frac{1}{2} \ln \frac{1+r}{1-r} = \operatorname{artanh}(r).

If F(r) is the Fisher transformation of r, the sample Spearman rank correlation coefficient, and n is the sample size, then

z = \sqrt{\frac{n-3}{1.06}} \, F(r)

is a z-score for r which approximately follows a standard normal distribution under the null hypothesis of statistical independence (ρ = 0).
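As a sketch of this z-test (np.arctanh implements the artanh in the Fisher transformation; scipy.stats.norm supplies the normal survival function):

    import numpy as np
    from scipy.stats import norm

    def spearman_fisher_z_pvalue(r, n):
        # z = sqrt((n - 3)/1.06) * artanh(r), compared against the standard normal
        z = np.sqrt((n - 3) / 1.06) * np.arctanh(r)
        return 2 * norm.sf(abs(z))   # two-sided p-value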

One can also test for significance using

t = r \sqrt{\frac{n-2}{1-r^{2}}}

which is distributed approximately as Student's t distribution with n - 2 degrees of freedom under the null hypothesis. A justification for this result relies on a permutation argument.
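The same test can be sketched in code (scipy.stats.t supplies the survival function of the t distribution):

    import numpy as np
    from scipy.stats import t

    def spearman_t_pvalue(r, n):
        # t = r * sqrt((n - 2)/(1 - r^2)) with n - 2 degrees of freedom
        t_stat = r * np.sqrt((n - 2) / (1 - r ** 2))
        return 2 * t.sf(abs(t_stat), df=n - 2)   # two-sided p-value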

A generalization of the Spearman coefficient is useful in the situation where there are three or more conditions, a number of subjects are all observed in each of them, and it is predicted that the observations will have a particular order. For example, a number of subjects might each be given three trials at the same task, and it is predicted that performance will improve from trial to trial. A test of the significance of the trend between conditions in this situation was developed by E. B. Page and is usually referred to as Page's trend test for ordered alternatives.


Correspondence analysis based on Spearman's rho

Classic correspondence analysis is a statistical method that gives a score to every value of two nominal variables. In this way the Pearson correlation coefficient between them is maximized.

There exists an equivalent of this method, called grade correspondence analysis, which maximizes Spearman's rho or Kendall's tau.


See also

  • Kendall tau rank correlation coefficient
  • Chebyshev's sum inequality, rearrangement inequality (These two articles may shed light on the mathematical properties of Spearman's ρ.)
  • Distance correlation
  • Polychoric correlation



Further reading

  • Corder, G.W. & Foreman, D.I. (2014). Nonparametric Statistics: A Step-by-Step Approach, Wiley. ISBN 978-1118840313.
  • Daniel, Wayne W. (1990). "Spearman rank correlation coefficient". Applied Nonparametric Statistics (2nd ed.). Boston: PWS-Kent. pp. 358-365. ISBN 0-534-91976-6.
  • Spearman C (1904). "The proof and measurement of association between two things". American Journal of Psychology. 15: 72-101. doi:10.2307/1412159.
  • Bonett DG, Wright TA (2000). "Sample size requirements for Pearson, Kendall, and Spearman correlations". Psychometrika. 65: 23-28. doi:10.1007/bf02294183.
  • Kendall MG (1970). Rank correlation methods (4th ed.). London: Griffin. ISBN 978-0-85264-199-6. OCLC 136868.
  • Hollander M, Wolfe DA (1973). Nonparametric statistical methods. New York: Wiley. ISBN 978-0-471-40635-8. OCLC 520735.
  • Caruso JC, Cliff N (1997). "Empirical size, coverage, and power of confidence intervals for Spearman's Rho". Educational and Psychological Measurement. 57: 637-654. doi:10.1177/0013164497057004009.


External links

  • Table of critical values of ρ for significance with small samples
  • Spearman's Rank Correlation Coefficient - Excel Guide: sample data and formulae for Excel, developed by the Royal Geographical Society.
