In probability theory and statistics, skewness is a measure of the asymmetry of the probability distribution of a real-valued random variable about its mean. The skewness value can be positive or negative, or undefined.

The qualitative interpretation of the skew is complicated and unintuitive. Skew does not refer to the direction the curve appears to be leaning; in fact, the opposite is true. For a unimodal distribution, negative skew indicates that the tail on the left side of the probability density function is longer or fatter than the right side; conversely, positive skew indicates that the tail on the right side is longer or fatter than the left side. Skewness does not distinguish between these two kinds of shape, and in cases where one tail is long but the other tail is fat, it does not obey a simple rule. For example, a zero value means that the tails on both sides of the mean balance out overall; this is the case for a symmetric distribution, but it is also true for an asymmetric distribution where the asymmetries even out, such as one tail being long but thin and the other being short but fat. Skewness is also difficult to interpret for multimodal distributions and discrete distributions. Importantly, the skewness does not determine the relationship between the mean and the median. Where necessary, data can be transformed to approximate a normal distribution.


Introduction

Consider the two distributions in the figure just below. Within each graph, the values on the right side of the distribution taper differently from the values on the left side. These tapering sides are called tails, and they provide a visual means to determine which of the two kinds of skewness a distribution has:

  1. negative skew: The left tail is longer; the mass of the distribution is concentrated on the right of the figure. The distribution is said to be left-skewed, left-tailed, or skewed to the left, despite the fact that the curve itself appears to be skewed or leaning to the right; left instead refers to the left tail being drawn out and, often, the mean being skewed to the left of a typical center of the data. A left-skewed distribution usually appears as a right-leaning curve.
  2. positive skew: The right tail is longer; the mass of the distribution is concentrated on the left of the figure. The distribution is said to be right-skewed, right-tailed, or skewed to the right, despite the fact that the curve itself appears to be skewed or leaning to the left; right instead refers to the right tail being drawn out and, often, the mean being skewed to the right of a typical center of the data. A right-skewed distribution usually appears as a left-leaning curve.

Skewness in a data series may sometimes be observed not only graphically but by simple inspection of the values. For instance, consider the numeric sequence (49, 50, 51), whose values are evenly distributed around a central value of 50. We can transform this sequence into a negatively skewed distribution by adding a value far below the mean, e.g. (40, 49, 50, 51). Similarly, we can make the sequence positively skewed by adding a value far above the mean, e.g. (49, 50, 51, 60).
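
As a quick check of these signs (a minimal sketch, assuming NumPy and SciPy are installed; scipy.stats.skew computes the moment coefficient of skewness defined below):

    import numpy as np
    from scipy.stats import skew

    symmetric = np.array([49, 50, 51])       # evenly spread around 50
    left_tail = np.array([40, 49, 50, 51])   # one value far below the mean
    right_tail = np.array([49, 50, 51, 60])  # one value far above the mean

    print(skew(symmetric))   # 0.0: no asymmetry
    print(skew(left_tail))   # negative: the left tail is drawn out
    print(skew(right_tail))  # positive: the right tail is drawn out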


Relationship of mean and median

The skewness is not directly related to the relationship between the mean and median: a distribution with negative skew can have its mean greater than or less than the median, and likewise for positive skew.

In the older notion of nonparametric skew, defined as $(\mu - \nu)/\sigma$, where $\mu$ is the mean, $\nu$ is the median, and $\sigma$ is the standard deviation, the skewness is defined in terms of this relationship: positive/right nonparametric skew means the mean is greater than (to the right of) the median, while negative/left nonparametric skew means the mean is less than (to the left of) the median. However, the modern definition of skewness and the traditional nonparametric definition do not in general have the same sign: while they agree for some families of distributions, they differ in general, and conflating them is misleading.

If the distribution is symmetric, then the mean is equal to the median, and the distribution has zero skewness. If the distribution is both symmetric and unimodal, then the mean = median = mode. This is the case of a coin toss or the series 1,2,3,4,... Note, however, that the converse is not true in general, i.e. zero skewness does not imply that the mean is equal to the median.

Paul T. von Hippel points out: "Many textbooks teach a rule of thumb stating that the mean is right of the median under right skew, and left of the median under left skew. This rule fails with surprising frequency. It can fail in multimodal distributions, or in distributions where one tail is long but the other is heavy. Most commonly, though, the rule fails in discrete distributions where the areas to the left and right of the median are not equal. Such distributions not only contradict the textbook relationship between mean, median, and skew, they also contradict the textbook interpretation of the median."




Definition

Pearson's moment coefficient of skewness

The skewness of a random variable X is the third standardized moment $\gamma_1$, defined as:

$$\gamma_1 = \operatorname{E}\!\left[\left(\frac{X-\mu}{\sigma}\right)^{3}\right] = \frac{\mu_3}{\sigma^3} = \frac{\operatorname{E}\!\left[(X-\mu)^3\right]}{\left(\operatorname{E}\!\left[(X-\mu)^2\right]\right)^{3/2}} = \frac{\kappa_3}{\kappa_2^{3/2}}$$

where $\mu$ is the mean, $\sigma$ is the standard deviation, $\operatorname{E}$ is the expectation operator, $\mu_3$ is the third central moment, and $\kappa_t$ are the $t$-th cumulants. It is sometimes referred to as Pearson's moment coefficient of skewness, or simply the moment coefficient of skewness, but should not be confused with Pearson's other skewness statistics (see below). The last equality expresses skewness in terms of the ratio of the third cumulant $\kappa_3$ to the 1.5th power of the second cumulant $\kappa_2$. This is analogous to the definition of kurtosis as the fourth cumulant normalized by the square of the second cumulant. The skewness is also sometimes denoted Skew[X].

Skewness can be expressed in terms of the non-central moment $\operatorname{E}[X^3]$ by expanding the previous formula:

$$\begin{aligned}\gamma_1 &= \operatorname{E}\!\left[\left(\frac{X-\mu}{\sigma}\right)^{3}\right] \\ &= \frac{\operatorname{E}[X^3] - 3\mu \operatorname{E}[X^2] + 3\mu^2 \operatorname{E}[X] - \mu^3}{\sigma^3} \\ &= \frac{\operatorname{E}[X^3] - 3\mu\left(\operatorname{E}[X^2] - \mu \operatorname{E}[X]\right) - \mu^3}{\sigma^3} \\ &= \frac{\operatorname{E}[X^3] - 3\mu\sigma^2 - \mu^3}{\sigma^3}.\end{aligned}$$
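
This identity is easy to verify numerically. A minimal sketch (assuming NumPy; the exponential distribution is used only because its skewness is known to equal 2):

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.exponential(scale=1.0, size=1_000_000)

    mu = x.mean()
    sigma = x.std()  # population convention (1/n), matching the moment identity

    # Direct definition: the third standardized moment.
    direct = np.mean(((x - mu) / sigma) ** 3)

    # The same quantity from the raw moments E[X^3], E[X^2], E[X].
    via_raw = (np.mean(x**3) - 3 * mu * sigma**2 - mu**3) / sigma**3

    print(direct, via_raw)  # both close to 2, the exponential's skewness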

Examples

Skewness can be infinite, as when

$$\Pr[X > x] = x^{-2} \text{ for } x > 1, \qquad \Pr[X < 1] = 0,$$

where the third cumulant is infinite, or as when

$$\Pr[X < x] = \tfrac{1}{2}(1 - x)^{-3} \text{ for negative } x \quad \text{and} \quad \Pr[X > x] = \tfrac{1}{2}(1 + x)^{-3} \text{ for positive } x,$$

where the third cumulant is undefined.
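
The first example can be simulated by inverse-transform sampling: if U is uniform on (0, 1), then X = U^(-1/2) satisfies Pr[X > x] = x^(-2) for x > 1. A sketch (assuming NumPy/SciPy) showing that the sample skewness keeps growing with the sample size instead of converging, as expected when the population skewness is infinite:

    import numpy as np
    from scipy.stats import skew

    rng = np.random.default_rng(1)
    for n in (10**3, 10**5, 10**7):
        x = rng.uniform(size=n) ** -0.5  # Pr[X > x] = x**-2 for x > 1
        print(n, skew(x))                # grows with n rather than settling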

Properties

Starting from a standard cumulant expansion around a normal distribution, one can show that

$$\text{skewness} = 6\,\frac{\text{mean} - \text{median}}{\text{standard deviation}}\left(1 + \frac{\text{kurtosis}}{8}\right) + O\!\left(\text{skewness}^2\right).$$

If Y is the sum of n independent and identically distributed random variables, all with the distribution of X, then the third cumulant of Y is n times that of X, and the second cumulant of Y is n times that of X, so $\operatorname{Skew}[Y] = \operatorname{Skew}[X]/\sqrt{n}$. This shows that the skewness of the sum is smaller, as it approaches a Gaussian distribution in accordance with the central limit theorem. The independence assumption in the above formula is essential: even the sum of two dependent Gaussian variables can have a skewed distribution.
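
This $1/\sqrt{n}$ decay is easy to observe numerically. A minimal sketch (assuming NumPy/SciPy): the exponential distribution has skewness 2, so a sum of 16 iid exponential variables should have skewness near 2/sqrt(16) = 0.5:

    import numpy as np
    from scipy.stats import skew

    rng = np.random.default_rng(2)
    # 200,000 replicates of Y = sum of n = 16 iid exponential variables.
    y = rng.exponential(size=(200_000, 16)).sum(axis=1)
    print(skew(y))  # close to 2 / sqrt(16) = 0.5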

Sample skewness

For a sample of n values, a natural method of moments estimator of the population skewness is

$$b_1 = \frac{m_3}{s^3} = \frac{\tfrac{1}{n}\sum_{i=1}^{n}(x_i - \bar{x})^3}{\left[\tfrac{1}{n-1}\sum_{i=1}^{n}(x_i - \bar{x})^2\right]^{3/2}},$$


where $\bar{x}$ is the sample mean, s is the sample standard deviation, and the numerator $m_3$ is the sample third central moment.
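
A direct transcription of this estimator (a sketch, assuming NumPy; note the mixed conventions, 1/n in the third moment but 1/(n-1) in the standard deviation):

    import numpy as np

    def b1(x):
        """Method-of-moments sample skewness, b1 = m3 / s**3."""
        x = np.asarray(x, dtype=float)
        n = x.size
        d = x - x.mean()
        m3 = np.sum(d**3) / n                # third central moment, 1/n
        s = np.sqrt(np.sum(d**2) / (n - 1))  # sample standard deviation, 1/(n-1)
        return m3 / s**3

    print(b1([40, 49, 50, 51]))  # negative, matching the earlier example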

Another common definition of the sample skewness is

$$G_1 = \frac{k_3}{k_2^{3/2}} = \frac{n^2}{(n-1)(n-2)}\,\frac{m_3}{s^3} = \frac{\sqrt{n(n-1)}}{n-2}\,\frac{m_3}{m_2^{3/2}} = \frac{\sqrt{n(n-1)}}{n-2}\left[\frac{\tfrac{1}{n}\sum_{i=1}^{n}(x_i - \bar{x})^3}{\left(\tfrac{1}{n}\sum_{i=1}^{n}(x_i - \bar{x})^2\right)^{3/2}}\right],$$



where $k_3$ is the unique symmetric unbiased estimator of the third cumulant and $k_2 = s^2$ is the symmetric unbiased estimator of the second cumulant (i.e. the variance).
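
$G_1$ can be computed from the same ingredients; as far as I know, scipy.stats.skew with bias=False applies exactly this adjustment, which the sketch below (assuming NumPy/SciPy) checks:

    import numpy as np
    from scipy.stats import skew

    def G1(x):
        """Adjusted Fisher-Pearson standardized moment coefficient."""
        x = np.asarray(x, dtype=float)
        n = x.size
        d = x - x.mean()
        m2 = np.sum(d**2) / n
        m3 = np.sum(d**3) / n
        g1 = m3 / m2**1.5  # biased moment ratio m3 / m2^(3/2)
        return np.sqrt(n * (n - 1)) / (n - 2) * g1

    x = [40, 49, 50, 51]
    print(G1(x), skew(x, bias=False))  # the two values should agree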

In general, the ratios $b_1$ and $G_1$ are both biased estimators of the population skewness $\gamma_1$; their expected values can even have the opposite sign from the true skewness. (For instance, a mixed distribution consisting of very thin Gaussians centred at -99, 0.5, and 2 with weights 0.01, 0.66, and 0.33 has a skewness of about -9.77, but in a sample of 3, $G_1$ has an expected value of about 0.32, since usually all three samples are in the positive-valued part of the distribution, which is skewed the other way.) Nevertheless, $b_1$ and $G_1$ each obviously have the correct expected value of zero for any symmetric distribution with a finite third moment, including a normal distribution.

Under the assumption that the underlying random variable $X$ is normally distributed, it can be shown that $\sqrt{n}\,b_1 \xrightarrow{d} N(0, 6)$. The variance of the skewness of a random sample of size n from a normal distribution is

$$\operatorname{var}(G_1) = \frac{6n(n-1)}{(n-2)(n+1)(n+3)}.$$

An approximate alternative is 6/n, but this is inaccurate for small samples.
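
Comparing the exact expression with the 6/n approximation (a minimal sketch) makes the small-sample inaccuracy concrete:

    def var_G1(n):
        """Exact variance of G1 for a sample of size n from a normal distribution."""
        return 6 * n * (n - 1) / ((n - 2) * (n + 1) * (n + 3))

    for n in (10, 30, 100, 1000):
        print(n, var_G1(n), 6 / n)  # the two columns converge as n grows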

In normal samples, $b_1$ has the smaller variance of the two estimators, with

$$\operatorname{var}(b_1) < \operatorname{var}\!\left(\frac{m_3}{m_2^{3/2}}\right) < \operatorname{var}(G_1),$$

where $m_2$ in the denominator is the (biased) sample second central moment.

The adjusted Fisher-Pearson standardized moment coefficient $G_1$ is the version found in Excel and in several statistical packages, including Minitab, SAS and SPSS.




Applications

Beyond indicating the direction and relative magnitude of a distribution's deviation from normality, skewness itself offers few analytical insights; aside from being calculated for its own sake, it is rarely used as a variable in further calculations.

Many models assume a normal distribution, i.e., that data are symmetric about the mean. The normal distribution has a skewness of zero. But in reality, data points may not be perfectly symmetric, so an understanding of the skewness of a dataset indicates whether deviations from the mean will tend to be positive or negative.

D'Agostino's K-squared test is a goodness-of-fit normality test based on sample skewness and sample kurtosis.
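
In SciPy this test is available, to the best of my knowledge, as scipy.stats.normaltest, which combines scipy.stats.skewtest and scipy.stats.kurtosistest; a minimal sketch:

    import numpy as np
    from scipy.stats import normaltest, skewtest

    rng = np.random.default_rng(3)
    x = rng.exponential(size=500)  # clearly right-skewed data

    print(normaltest(x))  # K-squared statistic and p-value; small p rejects normality
    print(skewtest(x))    # the skewness component on its own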




Other measures of skewness

Other measures of skewness have been used, including simpler calculations suggested by Karl Pearson (not to be confused with Pearson's moment coefficient of skewness, see above). These other measures are:

Pearson's first skewness coefficient (mode skewness)

The Pearson mode skewness, or first skewness coefficient, is defined as

$$\frac{\text{mean} - \text{mode}}{\text{standard deviation}}.$$

Pearson's second skewness coefficient (median skewness)

The Pearson median skewness, or second skewness coefficient, is defined as

$$\frac{3\,(\text{mean} - \text{median})}{\text{standard deviation}}.$$

The latter is a simple multiple of the nonparametric skew.
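
Pearson's second coefficient is a one-liner given a sample; the mode needed for the first coefficient has no canonical estimator for continuous data, so this sketch (assuming NumPy/SciPy) shows only the median version:

    import numpy as np
    from scipy.stats import skew

    rng = np.random.default_rng(4)
    x = rng.exponential(size=100_000)

    # Pearson's second (median) skewness coefficient.
    median_skew = 3 * (x.mean() - np.median(x)) / x.std(ddof=1)
    print(median_skew)  # positive for this right-skewed sample
    print(skew(x))      # moment coefficient for comparison (about 2)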

Quartile-based measures

Bowley's measure of skewness (from 1901), also called Yule's coefficient (from 1912), is defined as:

$$B_1 = \frac{Q_3 + Q_1 - 2Q_2}{Q_3 - Q_1}.$$

When it is written as $\frac{(Q_3 + Q_1)/2 - Q_2}{(Q_3 - Q_1)/2}$, it is easier to see that the numerator is the average of the upper and lower quartiles (a measure of location) minus the median, while the denominator is the semi-interquartile range $(Q_3 - Q_1)/2$, which for symmetric distributions equals the MAD measure of dispersion.

Other names for this measure are Galton's measure of skewness, the Yule-Kendall index, and the quartile skewness.
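
A sample version of Bowley's measure (a minimal sketch, assuming NumPy; np.percentile supplies the quartiles):

    import numpy as np

    def bowley(x):
        """Quartile skewness: (Q3 + Q1 - 2*Q2) / (Q3 - Q1)."""
        q1, q2, q3 = np.percentile(x, [25, 50, 75])
        return (q3 + q1 - 2 * q2) / (q3 - q1)

    rng = np.random.default_rng(5)
    print(bowley(rng.exponential(size=100_000)))  # positive: right-skewed
    print(bowley(rng.normal(size=100_000)))       # near zero: symmetric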

A more general formulation of a skewness function was described by R. A. Groeneveld and G. Meeden (1984):

$$\gamma(u) = \frac{F^{-1}(u) + F^{-1}(1-u) - 2F^{-1}(1/2)}{F^{-1}(u) - F^{-1}(1-u)},$$

where F is the cumulative distribution function. This leads to a corresponding overall measure of skewness, defined as the supremum of $\gamma(u)$ over the range $1/2 \le u < 1$. Another measure can be obtained by integrating the numerator and denominator of this expression. The function $\gamma(u)$ satisfies $-1 \le \gamma(u) \le 1$ and is well defined without requiring the existence of any moments of the distribution.

Bowley's measure of skewness is $\gamma(u)$ evaluated at u = 3/4, and Kelley's measure of skewness uses u = 0.1.
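
For a distribution with a known quantile function, $\gamma(u)$ is immediate. A sketch using SciPy's ppf (the inverse CDF) for the exponential distribution; gamma_u is a hypothetical helper, and by the symmetry of the formula u = 0.1 and u = 0.9 give the same value:

    from scipy.stats import expon

    def gamma_u(dist, u):
        """Groeneveld-Meeden skewness function for a scipy.stats distribution."""
        q = dist.ppf  # quantile function F^(-1)
        return (q(u) + q(1 - u) - 2 * q(0.5)) / (q(u) - q(1 - u))

    print(gamma_u(expon, 0.75))  # Bowley's measure for the exponential
    print(gamma_u(expon, 0.9))   # Kelley's measure (equals the u = 0.1 value)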

Groeneveld & Meeden's coefficient

Groeneveld & Meeden have suggested, as an alternative measure of skewness,

$$B_3 = \operatorname{skew}(X) = \frac{\mu - \nu}{\operatorname{E}(|X - \nu|)},$$

where $\mu$ is the mean, $\nu$ is the median, $|\cdot|$ is the absolute value, and $\operatorname{E}$ is the expectation operator. This is closely related in form to Pearson's second skewness coefficient.
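
An empirical version of $B_3$ (a minimal sketch, assuming NumPy) replaces the expectation with a sample average:

    import numpy as np

    def b3(x):
        """Groeneveld & Meeden's coefficient: (mean - median) / E|X - median|."""
        x = np.asarray(x, dtype=float)
        nu = np.median(x)
        return (x.mean() - nu) / np.mean(np.abs(x - nu))

    rng = np.random.default_rng(6)
    print(b3(rng.exponential(size=100_000)))  # positive for right-skewed data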

L-moments

Use of L-moments in place of moments provides a measure of skewness known as the L-skewness.

Distance skewness

A value of skewness equal to zero does not imply that the probability distribution is symmetric. Thus there is a need for another measure of asymmetry that has this property: such a measure was introduced in 2000. It is called distance skewness and denoted by dSkew. If X is a random variable taking values in the d-dimensional Euclidean space, X has finite expectation, X' is an independent identically distributed copy of X, and $\|\cdot\|$ denotes the norm in the Euclidean space, then a simple measure of asymmetry with respect to location parameter $\theta$ is

$$\operatorname{dSkew}(X) := 1 - \frac{\operatorname{E}\|X - X'\|}{\operatorname{E}\|X + X' - 2\theta\|} \quad \text{if } \Pr(X = \theta) \neq 1,$$

and dSkew(X) := 0 for X = $\theta$ (with probability 1). Distance skewness is always between 0 and 1; it equals 0 if and only if X is diagonally symmetric with respect to $\theta$ (that is, X and $2\theta - X$ have the same probability distribution), and it equals 1 if and only if X is a constant c ($c \neq \theta$) with probability one. Thus there is a simple consistent statistical test of diagonal symmetry based on the sample distance skewness:

$$\operatorname{dSkew}_n(X) := 1 - \frac{\sum_{i,j}\|x_i - x_j\|}{\sum_{i,j}\|x_i + x_j - 2\theta\|}.$$
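
In one dimension the norms reduce to absolute values and the double sums vectorize cleanly. A sketch assuming NumPy, with the location parameter theta supplied by the caller (the sample median below is an arbitrary illustrative choice):

    import numpy as np

    def dskew_n(x, theta):
        """Sample distance skewness about a given location parameter theta."""
        x = np.asarray(x, dtype=float)
        diff = np.abs(x[:, None] - x[None, :]).sum()  # sum over i,j of |x_i - x_j|
        refl = np.abs(x[:, None] + x[None, :] - 2 * theta).sum()
        return 1 - diff / refl

    rng = np.random.default_rng(7)
    sym = rng.normal(size=2000)
    asym = rng.exponential(size=2000)
    print(dskew_n(sym, np.median(sym)))    # near 0: roughly diagonally symmetric
    print(dskew_n(asym, np.median(asym)))  # clearly above 0: asymmetric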

Medcouple

The medcouple is a scale-invariant robust measure of skewness, with a breakdown point of 25%. It is the median of the values of the kernel function

$$h(x_i, x_j) = \frac{(x_i - x_m) - (x_m - x_j)}{x_i - x_j}$$

taken over all couples $(x_i, x_j)$ such that $x_i \ge x_m \ge x_j$, where $x_m$ is the median of the sample $\{x_1, x_2, \ldots, x_n\}$. It can be seen as the median of all possible quantile skewness measures.
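
A naive O(n^2) transcription of this definition (a sketch, assuming NumPy; it drops the special kernel values needed when ties with the median make the denominator zero, and production code would use a faster algorithm, such as the medcouple implementation in statsmodels):

    import numpy as np

    def medcouple_naive(x):
        """Median of h(x_i, x_j) over pairs with x_i >= median >= x_j."""
        x = np.asarray(x, dtype=float)
        xm = np.median(x)
        xi = x[x >= xm][:, None]  # upper half, as a column
        xj = x[x <= xm][None, :]  # lower half, as a row
        with np.errstate(invalid="ignore"):  # 0/0 when x_i = x_j = median
            h = ((xi - xm) - (xm - xj)) / (xi - xj)
        return np.median(h[xi != xj])  # exclude the tied (0/0) pairs

    rng = np.random.default_rng(8)
    print(medcouple_naive(rng.exponential(size=500)))  # positive: right-skewed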




See also

  • Bragg peak
  • Coskewness
  • Shape parameters
  • Skew normal distribution
  • Skewness risk



Notes

  • Premaratne, G., Bera, A. K. (2001). Adjusting the Tests for Skewness and Kurtosis for Distributional Misspecifications. Working Paper Number 01-0116, University of Illinois. Forthcoming in Communications in Statistics - Simulation and Computation. 2016, 1-15.

  • Premaratne, G., Bera, A. K. (2000). Modeling Asymmetry and Excess Kurtosis in Stock Return Data. Office of Research Working Paper Number 00-0123, University of Illinois



References

  • Johnson, N. L., Kotz, S., Balakrishnan, N. (1994). Continuous Univariate Distributions, Vol. 1, 2nd Edition. Wiley. ISBN 0-471-58495-9.
  • MacGillivray, H. L. (1992). "Shape properties of the g- and h- and Johnson families". Communications in Statistics - Theory and Methods. 21: 1244-1250.
  • Skewness Measures for the Weibull Distribution.

Measures of Dispersion: Skewness and Kurtosis - YouTube
src: i.ytimg.com


External links

  • Hazewinkel, Michiel, ed. (2001) [1994], "Asymmetry coefficient", Encyclopedia of Mathematics, Springer Science+Business Media B.V. / Kluwer Academic Publishers, ISBN 978-1-55608-010-4 
  • An Asymmetry Coefficient for Multivariate Distributions by Michel Petitjean
  • On More Robust Estimation of Skewness and Kurtosis: comparison of skew estimators, by Kim and White.
  • Closed-skew Distributions -- Simulation, Inversion and Parameter Estimation
