Kurtosis is a statistical term used primarily in probability and statistics. It refers to a measure that quantifies the heaviness of a probability distribution's tails relative to a normal distribution. In simpler terms, it examines the extent to which the data in a dataset produces extreme values more or less often than a normal distribution would.
More precisely, kurtosis measures the outlier behavior of a distribution, indicating whether the data has heavy or light tails compared to a normal distribution. Heavier tails mean that extreme values, or outliers, occur more frequently; lighter tails mean they occur less frequently.
There are three common types of kurtosis: mesokurtic, platykurtic, and leptokurtic. A mesokurtic distribution displays a standard level of kurtosis and closely resembles a normal distribution. A platykurtic distribution has lighter, thinner tails than the normal distribution, indicating fewer outliers. Conversely, a leptokurtic distribution displays heavier, fatter tails, suggesting the presence of more extreme values or outliers than a normal distribution would produce.
Kurtosis is typically reported as excess kurtosis, which subtracts 3 (the kurtosis of a normal distribution) from the distribution's fourth standardized moment. This value can be positive or negative, representing the degree of deviation from normality, with zero indicating the same tail behavior as a normal distribution: positive excess kurtosis marks a leptokurtic distribution and negative excess kurtosis a platykurtic one.
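The calculation above can be sketched in Python. This is a minimal illustration, not a production estimator: it computes the fourth standardized moment from sample data and subtracts 3, then classifies samples drawn from three textbook distributions (normal is mesokurtic with excess kurtosis near 0, uniform is platykurtic near -1.2, and Laplace is leptokurtic near 3). The function name excess_kurtosis is our own; libraries such as SciPy provide equivalent routines.

```python
import random

def excess_kurtosis(data):
    """Sample excess kurtosis: fourth standardized moment minus 3."""
    n = len(data)
    mean = sum(data) / n
    # Population (biased) variance; fine for an illustration with large n.
    var = sum((x - mean) ** 2 for x in data) / n
    fourth = sum((x - mean) ** 4 for x in data) / n
    return fourth / var ** 2 - 3

random.seed(42)
n = 100_000
# Normal: mesokurtic, theoretical excess kurtosis 0.
normal = [random.gauss(0, 1) for _ in range(n)]
# Uniform: platykurtic, theoretical excess kurtosis -1.2.
uniform = [random.uniform(-1, 1) for _ in range(n)]
# Laplace (difference of two exponentials): leptokurtic, theoretical excess kurtosis 3.
laplace = [random.expovariate(1) - random.expovariate(1) for _ in range(n)]

print("normal :", excess_kurtosis(normal))   # should be near 0
print("uniform:", excess_kurtosis(uniform))  # should be near -1.2
print("laplace:", excess_kurtosis(laplace))  # should be near 3
```

With 100,000 samples the estimates land close to the theoretical values, though the leptokurtic case converges more slowly because the fourth moment is dominated by rare extreme draws.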
The word "kurtosis" originated from the Greek word "kyrtos", meaning "curved" or "arched". The term was introduced into statistics in the early 20th century by the British statistician Karl Pearson, who sought to quantify the degree to which the shape of a probability distribution curve deviates from that of a normal distribution.