Statistical distributions


Lyapunov condition

The Lyapunov condition, sometimes known as Lyapunov's central limit theorem, states that if the third moment exists for a statistical distribution of independent random variates $x_i$ (which need not necessarily be drawn from the same distribution), the means $\mu_i$ and variances $\sigma_i^2$ are finite, and

r_n^3 = \sum_{i=1}^n \langle |x_i - \mu_i|^3 \rangle,   (1)

then if

\lim_{n\to\infty} \frac{r_n}{s_n} = 0,   (2)

where

s_n^2 = \sum_{i=1}^n \sigma_i^2   (3)

is the variance of the sum, the central limit theorem holds.

Lindeberg condition

A sufficient condition on the Lindeberg-Feller central limit theorem. Given random variates $X_1$, $X_2$, ..., let $\langle X_i \rangle = 0$, let the variance $\sigma_i^2$ of each $X_i$ be finite, and let the variance of the distribution consisting of a sum of the $X_i$s,

S_n = X_1 + X_2 + \cdots + X_n,   (1)

be

s_n^2 = \sum_{i=1}^n \sigma_i^2.   (2)

In the terminology of Zabell (1995), let

\Lambda_n(\epsilon) = \sum_{k=1}^n \left\langle \left(\frac{X_k}{s_n}\right)^2 : |X_k| \ge \epsilon s_n \right\rangle,   (3)

where $\langle X : A \rangle$ denotes the expectation value of $X$ restricted to outcomes $A$; then the Lindeberg condition is

\lim_{n\to\infty} \Lambda_n(\epsilon) = 0   (4)

for all $\epsilon > 0$ (Zabell 1995). In the terminology of Feller (1971), the Lindeberg condition assumes that for each $t > 0$,

\frac{1}{s_n^2} \sum_{k=1}^n \int_{|y| \ge t s_n} y^2 \, dF_k(y) \to 0,   (5)

or equivalently that the same sum restricted to $|y| < t s_n$ tends to $s_n^2$.   (6)

Then the distribution of the normalized sum

S_n^* = \frac{S_n}{s_n}   (7)

tends to the normal distribution with zero expectation and unit variance (Feller 1971, p. 256). The Lindeberg condition (5) guarantees that the individual variances are small compared to their sum in the sense that for given $\epsilon > 0$ and all sufficiently large $n$, $\sigma_k/s_n < \epsilon$ for $k = 1$, ..., $n$ (Feller 1971, p. 256).

Joint distribution function

A joint distribution function is a distribution function $D(x,y)$ in two variables defined by

D(x,y) = P(X \le x, Y \le y)   (1)
D_x(x) = \lim_{y\to\infty} D(x,y)   (2)
D_y(y) = \lim_{x\to\infty} D(x,y),   (3)

so that the joint probability function $P(x,y)$ satisfies

P((x,y) \in C) = \iint_C P(x,y)\,dx\,dy   (4)
P(x \in [a,b],\, y \in [c,d]) = \int_a^b \int_c^d P(x,y)\,dy\,dx = D(b,d) - D(b,c) - D(a,d) + D(a,c)   (5)
P(x,y) = \frac{\partial^2 D(x,y)}{\partial x\,\partial y}   (6)
P_x(x) = \int_{-\infty}^\infty P(x,y)\,dy   (7)
P_y(y) = \int_{-\infty}^\infty P(x,y)\,dx.   (8)

Two random variables $X$ and $Y$ are independent iff

D(x,y) = D_x(x)\,D_y(y)   (9)

for all $x$ and $y$ and

P(x,y) = P_x(x)\,P_y(y).   (10)

A multiple distribution function is of the form

D(a_1, ..., a_n) = P(x_1 \le a_1, ..., x_n \le a_n).   (11)

Distribution function

The distribution function $D(x)$, also called the cumulative distribution function (CDF) or cumulative frequency function, describes the probability that a variate $X$ takes on a value less than or equal to a number $x$. The distribution function is sometimes also denoted $F(x)$ (Evans et al. 2000, p. 6). The distribution function is therefore related to a continuous probability density function $P(x)$ by

D(x) = P(X \le x) = \int_{-\infty}^x P(\xi)\,d\xi,   (1)

so $P(x)$ (when it exists) is simply the derivative of the distribution function,

P(x) = D'(x).   (2)

Similarly, the distribution function is related to a discrete probability $P(x)$ by

D(x) = P(X \le x) = \sum_{X \le x} P(x).   (3)

There exist distributions that are neither continuous nor discrete. A joint distribution function can be defined if outcomes are dependent on two parameters:

D(x,y) = P(X \le x, Y \le y)   (4)
D_x(x) = \lim_{y\to\infty} D(x,y)   (5)
D_y(y) = \lim_{x\to\infty} D(x,y).   (6)

Similarly, a multivariate distribution function can be defined if outcomes depend on $n$ parameters:

D(a_1, ..., a_n) = P(x_1 \le a_1, ..., x_n \le a_n).   (7)

The probability content of a closed region can be found much more efficiently than by direct integration of the probability…
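
As a quick numerical illustration of the CDF/PDF relationship, here is a minimal sketch using SciPy; the standard normal is an arbitrary choice of distribution, and the evaluation point is an arbitrary illustrative value:

```python
# Sketch: D(x) as the integral of a continuous density P(x), and P(x) as
# the derivative of D(x), using the standard normal as an example.
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

x = 1.3
D = norm.cdf(x)                         # CDF evaluated directly
D_int, _ = quad(norm.pdf, -np.inf, x)   # CDF recovered by integrating the density
print(D, D_int)                         # both ~0.90320

h = 1e-6                                # finite-difference check of P = D'
print((norm.cdf(x + h) - norm.cdf(x - h)) / (2 * h), norm.pdf(x))
```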

Mean distribution

For an infinite population with mean $\mu$, variance $\sigma^2$, skewness $\gamma_1$, and kurtosis excess $\gamma_2$, the corresponding quantities for the distribution of means of samples of size $N$ are

\mu_{\bar x} = \mu   (1)
\sigma_{\bar x}^2 = \frac{\sigma^2}{N}   (2)
\gamma_{1,\bar x} = \frac{\gamma_1}{\sqrt N}   (3)
\gamma_{2,\bar x} = \frac{\gamma_2}{N}.   (4)

For a finite population of size $M$ (Kenney and Keeping 1962, p. 181),

\mu_{\bar x} = \mu   (5)
\sigma_{\bar x}^2 = \frac{\sigma^2}{N}\,\frac{M-N}{M-1}.   (6)

Discrete distribution

A statistical distribution whose variables can take on only discrete values. Abramowitz and Stegun (1972, p. 929) give a table of the parameters of most common discrete distributions. A discrete distribution with probability function $P(x_n)$ defined over $n = 1$, 2, ... has distribution function

D(x_n) = \sum_{k=1}^n P(x_k)

and population mean

\mu = \sum_n x_n P(x_n).

Multinomial distribution

Let a set of random variates $X_1$, $X_2$, ..., $X_n$ have a probability function

P(X_1 = x_1, ..., X_n = x_n) = \frac{N!}{\prod_{i=1}^n x_i!} \prod_{i=1}^n \theta_i^{x_i},   (1)

where the $x_i$ are nonnegative integers such that

\sum_{i=1}^n x_i = N   (2)

and the $\theta_i$ are constants with $\theta_i > 0$ and

\sum_{i=1}^n \theta_i = 1.   (3)

Then the joint distribution of $X_1$, ..., $X_n$ is a multinomial distribution and $P(X_1 = x_1, ..., X_n = x_n)$ is given by the corresponding coefficient of the multinomial series

(\theta_1 + \theta_2 + \cdots + \theta_n)^N.   (4)

In words, if $A_1$, $A_2$, ..., $A_n$ are mutually exclusive events with $P(A_i) = p_i$ for $i = 1$, ..., $n$, then the probability that $A_1$ occurs $k_1$ times, ..., $A_n$ occurs $k_n$ times is given by

P_N(k_1, ..., k_n) = \frac{N!}{k_1! \cdots k_n!}\, p_1^{k_1} \cdots p_n^{k_n}   (5)

(Papoulis 1984, p. 75). The mean and variance of $X_i$ are

\mu_i = N\theta_i   (6)
\sigma_i^2 = N\theta_i(1-\theta_i).   (7)

The covariance of $X_i$ and $X_j$ is

\sigma_{ij}^2 = -N\theta_i\theta_j.   (8)

Normal ratio distribution

The ratio $X/Y$ of independent normally distributed variates with zero mean is distributed with a Cauchy distribution. This can be seen as follows. Let $X$ and $Y$ both have mean 0 and standard deviations of $\sigma_x$ and $\sigma_y$, respectively; then the joint probability density function is the bivariate normal distribution with $\rho = 0$,

f(x,y) = \frac{1}{2\pi\sigma_x\sigma_y}\, e^{-[x^2/(2\sigma_x^2) + y^2/(2\sigma_y^2)]}.   (1)

From ratio distribution, the distribution of $u = x/y$ is

P(u) = \int_{-\infty}^\infty |y|\, f(uy, y)\,dy = \frac{1}{\pi\sigma_x\sigma_y} \int_0^\infty y\, e^{-y^2[u^2/(2\sigma_x^2) + 1/(2\sigma_y^2)]}\,dy.   (2)

But

\int_0^\infty y\, e^{-ay^2}\,dy = \frac{1}{2a},   (3)

so

P(u) = \frac{\sigma_x\sigma_y}{\pi(\sigma_y^2 u^2 + \sigma_x^2)} = \frac{\sigma_x/\sigma_y}{\pi\left(u^2 + \sigma_x^2/\sigma_y^2\right)},   (4)

which is a Cauchy distribution with median 0 and half-width $\sigma_x/\sigma_y$. A more direct derivation proceeds from integration of

P(u) = \int_{-\infty}^\infty \int_{-\infty}^\infty f(x,y)\,\delta\!\left(\frac{x}{y} - u\right) dx\,dy,   (5)

where $\delta(x)$ is a delta function.
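
The ratio-is-Cauchy statement is easy to check by simulation. A minimal sketch follows; since the Cauchy moments diverge, quantiles rather than sample moments are compared, and the values of sigma_x and sigma_y are arbitrary:

```python
# Sketch: the ratio of two independent zero-mean normals is Cauchy with
# half-width sigma_x/sigma_y; compare sample quantiles with cauchy.ppf.
import numpy as np
from scipy.stats import cauchy

rng = np.random.default_rng(0)
sigma_x, sigma_y = 2.0, 0.5              # arbitrary illustrative values
u = rng.normal(0, sigma_x, 10**6) / rng.normal(0, sigma_y, 10**6)
qs = [0.1, 0.25, 0.5, 0.75, 0.9]
print(np.quantile(u, qs))
print(cauchy.ppf(qs, loc=0, scale=sigma_x / sigma_y))  # should agree closely
```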

Normal product distribution

The distribution of a product of two normally distributed variates $X$ and $Y$ with zero means and variances $\sigma_x^2$ and $\sigma_y^2$ is given by

P_{XY}(u) = \int_{-\infty}^\infty \int_{-\infty}^\infty f(x,y)\,\delta(xy - u)\,dx\,dy   (1)
          = \frac{1}{\pi\sigma_x\sigma_y} K_0\!\left(\frac{|u|}{\sigma_x\sigma_y}\right),   (2)

where $\delta(x)$ is a delta function and $K_0(z)$ is a modified Bessel function of the second kind. The analogous expression for a product of three normal variates can be given in terms of Meijer G-functions.   (3)

Normal distribution function

A normalized form of the cumulative normal distribution function giving the probability that a variate assumes a value in the range $[0, x]$,

\Phi(x) = Q(x) = \frac{1}{\sqrt{2\pi}} \int_0^x e^{-t^2/2}\,dt.   (1)

It is related to the probability integral

\alpha(x) = \frac{1}{\sqrt{2\pi}} \int_{-x}^x e^{-t^2/2}\,dt   (2)

by

\Phi(x) = \tfrac{1}{2}\alpha(x).   (3)

Let $u \equiv t/\sqrt{2}$, so $du = dt/\sqrt{2}$. Then

\Phi(x) = \tfrac{1}{2}\,\mathrm{erf}\!\left(\frac{x}{\sqrt 2}\right).   (4)

Here, erf is a function sometimes called the error function. The probability that a normal variate assumes a value in the range $[x_1, x_2]$ is therefore given by

\Phi(x_1, x_2) = \tfrac{1}{2}\left[\mathrm{erf}\!\left(\frac{x_2}{\sqrt 2}\right) - \mathrm{erf}\!\left(\frac{x_1}{\sqrt 2}\right)\right].   (5)

Neither $\Phi(x)$ nor erf can be expressed in terms of finite additions, subtractions, multiplications, and root extractions, and so both must be either computed numerically or otherwise approximated. Note that a function different from $\Phi(x)$ is sometimes defined as "the" normal distribution function,

N(x) = \tfrac{1}{2} + \Phi(x) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^x e^{-t^2/2}\,dt = \tfrac{1}{2}\left[1 + \mathrm{erf}\!\left(\frac{x}{\sqrt 2}\right)\right]   (6)

(Feller 1968; Beyer 1987, p. 551), although this function is less widely encountered than the usual $\Phi(x)$. The notation $N(x)$ is due to Feller (1971). The value of $x$ for which a normal variate falls within the interval $[-x, x]$ with a given probability is a related quantity called the confidence interval. For small values…
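
The split between the two conventions is a frequent source of off-by-one-half errors, so here is a minimal numerical sketch (the evaluation points are arbitrary):

```python
# Sketch: Phi(x) = (1/2) erf(x/sqrt(2)) measures probability on [0, x],
# while N(x) = Phi(x) + 1/2 is the ordinary normal CDF.
import numpy as np
from scipy.special import erf
from scipy.stats import norm

x = 0.7
Phi = 0.5 * erf(x / np.sqrt(2))
print(Phi + 0.5, norm.cdf(x))           # N(x): both ~0.75804

x1, x2 = -1.0, 2.0                      # P(x1 <= X <= x2) two ways
print(0.5 * (erf(x2 / np.sqrt(2)) - erf(x1 / np.sqrt(2))),
      norm.cdf(x2) - norm.cdf(x1))
```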

von Mises distribution

A continuous distribution defined on the range $\theta \in [0, 2\pi)$ with probability density function

P(\theta) = \frac{e^{b\cos(\theta - a)}}{2\pi I_0(b)},   (1)

where $I_0(b)$ is a modified Bessel function of the first kind of order 0, and distribution function

D(\theta) = \frac{1}{2\pi I_0(b)} \int_0^\theta e^{b\cos(t-a)}\,dt,   (2)

which cannot be done in closed form. Here, $a$ is the mean direction and $b$ is a concentration parameter. The von Mises distribution is the circular analog of the normal distribution on a line. The mean is

\mu = a   (3)

and the circular variance is

\mathrm{var} = 1 - \frac{I_1(b)}{I_0(b)}.   (4)

Normal difference distribution

Amazingly, the distribution of a difference of two normally distributed variates $X$ and $Y$ with means and variances $(\mu_x, \sigma_x^2)$ and $(\mu_y, \sigma_y^2)$, respectively, is given by

P_{X-Y}(u) = \int_{-\infty}^\infty \int_{-\infty}^\infty f(x,y)\,\delta((x - y) - u)\,dx\,dy   (1)
           = \frac{1}{\sqrt{2\pi(\sigma_x^2 + \sigma_y^2)}}\, e^{-[u - (\mu_x - \mu_y)]^2 / [2(\sigma_x^2 + \sigma_y^2)]},   (2)

where $\delta(x)$ is a delta function, which is another normal distribution having mean

\mu = \mu_x - \mu_y   (3)

and variance

\sigma^2 = \sigma_x^2 + \sigma_y^2.   (4)

Uniform sum distribution

The distribution for the sum of $n$ uniform variates on the interval $[0,1]$ can be found directly as

P_n(x) = \int_0^1 \cdots \int_0^1 \delta\!\left(x - \sum_{i=1}^n u_i\right) du_1 \cdots du_n,   (1)

where $\delta(x)$ is a delta function. A more elegant approach uses the characteristic function to obtain

P_n(x) = \frac{1}{2(n-1)!} \sum_{k=0}^n (-1)^k \binom{n}{k} (x-k)^{n-1}\,\mathrm{sgn}(x-k),   (2)

where the Fourier parameters are taken as $(1, 1)$. The first few values of $P_n(x)$ are then given by

P_1(x) = 1 for 0 \le x \le 1   (3)
P_2(x) = x for 0 \le x \le 1, \quad 2 - x for 1 < x \le 2,   (4)

and so on for larger $n$ by (2). Interestingly, the expected number of picks of a number from a uniform distribution on $[0,1]$ so that the sum exceeds 1 is e (Derbyshire 2004, pp. 366-367). This can be demonstrated by noting that the probability of the sum of $n$ variates being greater than 1 while the sum of $n-1$ variates is less than 1 is

Q_n = P\!\left(\sum_{i=1}^{n-1} u_i \le 1\right) - P\!\left(\sum_{i=1}^{n} u_i \le 1\right) = \frac{1}{(n-1)!} - \frac{1}{n!} = \frac{n-1}{n!}.   (5)

The values for $n = 1$, 2, ... are 0, 1/2, 1/3, 1/8, 1/30, 1/144, 1/840, 1/5760, 1/45360, ... (OEIS A001048). The expected number of picks needed to first exceed 1 is then simply

\sum_{n=1}^\infty n\,\frac{n-1}{n!} = \sum_{n=2}^\infty \frac{1}{(n-2)!} = e.   (6)

It is more complicated to compute the expected number of picks that is needed for their sum to first exceed 2. In this case, the corresponding probabilities involve the piecewise polynomials (2). The first few terms are therefore 0, 0, 1/6,…
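
The expected-number-of-picks result is easy to confirm by Monte Carlo; a minimal sketch (sample count is an arbitrary choice):

```python
# Sketch: the expected number of uniform picks needed for the running
# sum to first exceed 1 is e = 2.71828...
import numpy as np

rng = np.random.default_rng(0)

def picks_to_exceed(threshold=1.0):
    total, n = 0.0, 0
    while total <= threshold:
        total += rng.random()
        n += 1
    return n

counts = [picks_to_exceed() for _ in range(200_000)]
print(np.mean(counts), np.e)   # ~2.718 vs 2.71828...
```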

Uniform ratio distribution

The ratio $u = x/y$ of uniform variates $x$ and $y$ on the interval $[0,1]$ can be found directly as

P(u) = \int_0^1 \int_0^1 \delta\!\left(\frac{x}{y} - u\right) dx\,dy   (1)
     = \begin{cases} \tfrac{1}{2} & 0 \le u \le 1 \\ \tfrac{1}{2u^2} & u > 1, \end{cases}   (2)

where $\delta(x)$ is a delta function; the result can be written compactly in terms of the Heaviside step function $H(x)$. The distribution is normalized, but its mean and moments diverge.

Uniform product distribution

The distribution of the product of $n$ uniform variates on the interval $[0,1]$ can be found directly as

P_n(u) = \int_0^1 \cdots \int_0^1 \delta(x_1 x_2 \cdots x_n - u)\,dx_1 \cdots dx_n   (1)
       = \frac{(-\ln u)^{n-1}}{(n-1)!}   (2)

for $0 < u \le 1$, where $\delta(x)$ is a delta function. The original entry plots these distributions for the first few values of $n$.

Uniform difference distribution

The difference $u = x - y$ of two uniform variates on the interval $[0,1]$ can be found as

P(u) = \int_0^1 \int_0^1 \delta((x - y) - u)\,dx\,dy   (1)
     = 1 - |u|   (2)

for $-1 \le u \le 1$, where $\delta(x)$ is a delta function; the result can be written compactly in terms of the Heaviside step function $H(x)$.

Error function distribution

A normal distribution with mean 0,

P(x) = \frac{h}{\sqrt\pi}\, e^{-h^2 x^2}.   (1)

The characteristic function is

\phi(t) = e^{-t^2/(4h^2)}.   (2)

The mean, variance, skewness, and kurtosis excess are

\mu = 0   (3)
\sigma^2 = \frac{1}{2h^2}   (4)
\gamma_1 = 0   (5)
\gamma_2 = 0.   (6)

The cumulants are

\kappa_1 = 0   (7)
\kappa_2 = \frac{1}{2h^2}   (8)
\kappa_r = 0   (9)

for $r \ge 3$.

Erlang distribution

Given a Poisson distribution with a rate of change $\lambda$, the distribution function $D(x)$ giving the waiting times until the $h$th Poisson event is

D(x) = 1 - \frac{\Gamma(h, x\lambda)}{\Gamma(h)}   (1)

for $x \in [0, \infty)$, where $\Gamma(h)$ is a complete gamma function, and $\Gamma(h, x\lambda)$ an incomplete gamma function. With $h$ explicitly an integer, this distribution is known as the Erlang distribution, and has probability function

P(x) = \frac{\lambda (\lambda x)^{h-1}}{(h-1)!}\, e^{-\lambda x}.   (2)

It is closely related to the gamma distribution, which is obtained by letting $\alpha = h$ (not necessarily an integer) and defining $\theta \equiv 1/\lambda$. When $h = 1$, it simplifies to the exponential distribution. Evans et al. (2000, p. 71) write the distribution using the variables $b$ and $c$.

Doob's theorem

A theorem proved by Doob (1942) which states that any random process which is both normal and Markov has the following forms for its correlation function $C(t)$, spectral density $G(f)$, and probability densities $p(y)$ and $p(y_2, t \mid y_1)$:

C(t) = \sigma^2 e^{-t/\tau}   (1)
G(f) = \frac{4\tau\sigma^2}{1 + (2\pi f\tau)^2}   (2)
p(y) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-(y - \bar y)^2/(2\sigma^2)}   (3)
p(y_2, t \mid y_1) = \frac{1}{\sigma\sqrt{2\pi(1 - e^{-2t/\tau})}} \exp\!\left[-\frac{\left(y_2 - \bar y - (y_1 - \bar y)e^{-t/\tau}\right)^2}{2\sigma^2(1 - e^{-2t/\tau})}\right],   (4)

where $\bar y$ is the mean, $\sigma$ the standard deviation, and $\tau$ the relaxation time.

Difference of successes

If $x_1/n_1$ and $x_2/n_2$ are the observed proportions of successes from samples of sizes $n_1$ and $n_2$ drawn from populations with a common proportion of success $\theta$, then the probability that the difference

d = \frac{x_1}{n_1} - \frac{x_2}{n_2}   (1)

will be as great as observed is

P_d = 2\left[1 - \Phi\!\left(\frac{|d|}{\hat\sigma_d}\right)\right],   (2)

where $\Phi$ is the normal distribution function and

\hat\sigma_d^2 = \hat\theta(1 - \hat\theta)\left(\frac{1}{n_1} + \frac{1}{n_2}\right)   (3)
\hat\theta = \frac{x_1 + x_2}{n_1 + n_2}.   (4)

Here, $\hat\theta$ is the unbiased estimator of the common proportion. The skewness and kurtosis excess of this distribution follow from the corresponding binomial moments.

Standard normal distribution

A standard normal distribution is a normal distribution with zero mean ($\mu = 0$) and unit variance ($\sigma^2 = 1$), given by the probability density function and distribution function

P(x) = \frac{1}{\sqrt{2\pi}}\, e^{-x^2/2}   (1)
D(x) = \tfrac{1}{2}\left[1 + \mathrm{erf}\!\left(\frac{x}{\sqrt 2}\right)\right]   (2)

over the domain $x \in (-\infty, \infty)$. It has mean, variance, skewness, and kurtosis excess given by

\mu = 0   (3)
\sigma^2 = 1   (4)
\gamma_1 = 0   (5)
\gamma_2 = 0.   (6)

The first quartile of the standard normal distribution occurs when $D(x) = 1/4$, which is

x = -\sqrt 2\,\mathrm{erf}^{-1}\!\left(\tfrac{1}{2}\right) = -0.6744897...   (7)

(OEIS A092678; Kenney and Keeping 1962, p. 134), where $\mathrm{erf}^{-1}$ is the inverse erf function. The absolute value of this is known as the probable error.
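
A minimal numerical sketch of the first-quartile value (the constant in OEIS A092678), computed two equivalent ways with SciPy:

```python
# Sketch: the first quartile of the standard normal equals
# -sqrt(2)*erfinv(1/2); its absolute value is the probable error.
import numpy as np
from scipy.special import erfinv
from scipy.stats import norm

print(norm.ppf(0.25))                  # -0.674489750196...
print(-np.sqrt(2) * erfinv(0.5))       # same value (OEIS A092678)
```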

Logarithmic distribution

The logarithmic distribution is a continuous distribution for a variate $x \in [a, b]$ with probability function

P(x) = \frac{\ln x}{b(\ln b - 1) - a(\ln a - 1)}   (1)

and distribution function

D(x) = \frac{x(\ln x - 1) - a(\ln a - 1)}{b(\ln b - 1) - a(\ln a - 1)}.   (2)

It therefore applies to a variable whose density is proportional to $\ln x$, with the denominator providing the appropriate normalization. Note that the log-series distribution is sometimes also known as the logarithmic distribution, and the distribution arising in Benford's law is also "a" logarithmic distribution. The raw moments are given by

\mu_n' = \frac{b^{n+1}[(n+1)\ln b - 1] - a^{n+1}[(n+1)\ln a - 1]}{(n+1)^2\,[b(\ln b - 1) - a(\ln a - 1)]}.   (3)

The mean is therefore

\mu = \frac{b^2(2\ln b - 1) - a^2(2\ln a - 1)}{4\,[b(\ln b - 1) - a(\ln a - 1)]}.   (4)

The variance, skewness, and kurtosis excess are slightly complicated expressions.

S distribution

The S distribution is defined in terms of its distribution function $F(x)$ as the solution to the initial value problem

\frac{dF}{dx} = \alpha\left(F^g - F^h\right), \qquad F(x_0) = \tfrac{1}{2}

(Savageau 1982, Aksenov and Savageau 2001). It has four free parameters: $\alpha$, $g$, $h$, and $x_0$. The S distribution is capable of approximating many central and noncentral unimodal univariate distributions rather well (Voit 1991), and it also includes the exponential, logistic, uniform and linear distributions as special cases. The S distribution derives its name from the fact that it is based on the theory of S-systems (Savageau 1976, Voit 1991, Aksenov and Savageau 2001).

Continuity correction

A correction to a discrete binomial distribution to approximate a continuous distribution,

P(a \le x \le b) \approx P\!\left(\frac{a - \tfrac12 - Np}{\sqrt{Npq}} \le z \le \frac{b + \tfrac12 - Np}{\sqrt{Npq}}\right),

where $z$ is a continuous variate with a standard normal distribution and $x$ is a variate of a binomial distribution with $N$ trials, success probability $p$, and $q \equiv 1 - p$.

Probable error

The probability that a random sample from an infinite normally distributed universe will have a mean within a distance $\epsilon$ of the mean of the universe is

P = \alpha(t) = 2\Phi(t),   (1)

where $\Phi(z)$ is the normal distribution function and $t$ is the observed value of

t = \frac{\epsilon}{\sigma_{\bar x}}.   (2)

The probable error is then defined as the value $\gamma$ of $\epsilon$ such that $P = 1/2$, i.e.,

\Phi\!\left(\frac{\gamma}{\sigma_{\bar x}}\right) = \tfrac{1}{4},   (3)

which is given by

\frac{\gamma}{\sigma_{\bar x}} = \sqrt 2\,\mathrm{erf}^{-1}\!\left(\tfrac{1}{2}\right) = 0.6744897...   (4)

(OEIS A092678; Kenney and Keeping 1962, p. 134). Here, $\mathrm{erf}^{-1}$ is the inverse erf function. The probability of a deviation from the true population value at least as great as the probable error is therefore 1/2.

Price's theorem

Consider a bivariate normal distribution in variables $x$ and $y$ with covariance

\rho = \langle xy \rangle - \langle x \rangle\langle y \rangle   (1)

and an arbitrary function $g(x,y)$. Then the expected value of the random variable $g(x,y)$,

I(\rho) = \langle g(x,y) \rangle,   (2)

satisfies

\frac{\partial^n I(\rho)}{\partial \rho^n} = \left\langle \frac{\partial^{2n} g(x,y)}{\partial x^n\,\partial y^n} \right\rangle.   (3)

Gibrat's distribution

Gibrat's distribution is a continuous distribution in which the logarithm of a variable $x$ has a normal distribution,

P(x) = \frac{1}{x\sqrt{2\pi}}\, e^{-(\ln x)^2/2},   (1)

defined over the interval $[0, \infty)$. It is a special case of the log normal distribution

P(x) = \frac{1}{S x \sqrt{2\pi}}\, e^{-(\ln x - M)^2/(2S^2)}   (2)

with $S = 1$ and $M = 0$, and so has distribution function

D(x) = \tfrac{1}{2}\left[1 + \mathrm{erf}\!\left(\frac{\ln x}{\sqrt 2}\right)\right].   (3)

The mean, variance, skewness, and kurtosis excess are then given by

\mu = \sqrt e   (4)
\sigma^2 = e(e - 1)   (5)
\gamma_1 = (e + 2)\sqrt{e - 1}   (6)
\gamma_2 = e^4 + 2e^3 + 3e^2 - 6.   (7)

Pearson type III distribution

A skewed distribution which is similar to the binomial distribution when $p \ne q$ (Abramowitz and Stegun 1972, p. 930). One common form is the shifted gamma density

P(x) = \frac{1}{b\,\Gamma(p)} \left(\frac{x - a}{b}\right)^{p-1} e^{-(x-a)/b}   (1)

for $x \in [a, \infty)$, where $\Gamma(p)$ is the gamma function and $t = (x - a)/b$ is a standardized variate. For this distribution, the characteristic function is

\phi(t) = e^{iat}(1 - ibt)^{-p},   (2)

and the mean, variance, skewness, and kurtosis excess are

\mu = a + bp   (3)
\sigma^2 = b^2 p   (4)
\gamma_1 = \frac{2}{\sqrt p}   (5)
\gamma_2 = \frac{6}{p}.   (6)

Pearson system

A system of equation types obtained by generalizing the differential equation for the normal distribution,

\frac{dy}{dx} = \frac{y(m - x)}{a},   (1)

which has solution

y = C e^{-(x-m)^2/(2a)},   (2)

to

\frac{dy}{dx} = \frac{y(m - x)}{a + bx + cx^2},   (3)

which has solution

y = C \exp\!\left[\int \frac{m - x}{a + bx + cx^2}\,dx\right].   (4)

Let $c_1$, $c_2$ be the roots of $a + bx + cx^2$. Then the possible types of curves are:

0. $b = c = 0$, $a > 0$. E.g., normal distribution.
I. $b^2/(4ac) < 0$, $c_1 \le x \le c_2$. E.g., beta distribution.
II. $b^2/(4ac) = 0$, $c < 0$, where $-c_1 \le x \le c_1$ with $c_1 = \sqrt{-a/c}$.
III. $b^2/(4ac) = \infty$, $c_1 \le x < \infty$, where $c_1 = -a/b$. E.g., gamma distribution. This case is intermediate to cases I and VI.
IV. $0 < b^2/(4ac) < 1$, $-\infty < x < \infty$.
V. $b^2/(4ac) = 1$, $c_1 \le x < \infty$, where $c_1 = -b/(2c)$. Intermediate to cases IV and VI.
VI. $b^2/(4ac) > 1$, $c_1 \le x < \infty$, where $c_1$ is the larger root. E.g., beta prime distribution.
VII. $b^2/(4ac) = 0$, $c > 0$, $-\infty < x < \infty$. E.g., Student's t-distribution.

Classes IX-XII are discussed in Pearson (1916). See also Craig (in Kenney and Keeping 1951). If a Pearson curve possesses a mode, it will be at $x = m$. Let $y(x) \to 0$ at $x = a_1$ and $x = a_2$, where these may be $-\infty$ or $+\infty$. If $x^n y(x)(a + bx + cx^2)$ also vanishes at $a_1$, $a_2$, then the $n$th and lower moments exist, and multiplying (3) by $x^n(a + bx + cx^2)$ and integrating by parts gives the recurrence for the raw moments

\left[1 - (n+2)c\right]\mu'_{n+1} = \left[m + (n+1)b\right]\mu'_n + n a\,\mu'_{n-1},   (5)

where the raw $n$th moment is defined by

\mu'_n = \int x^n y\,dx.   (6)

Evaluating (5) for $n = 0$ and $n = 1$ gives the first and second raw moments explicitly, and combining these with the definitions of the variance and higher standardized moments relates $a$, $b$, $c$, and $m$ to the mean, variance, skewness, and kurtosis…

Gaussian joint variable theorem

The Gaussian joint variable theorem, also called the multivariate theorem, states that, given an even number of variates from a normal distribution with means all 0,

\langle x_1 x_2 x_3 x_4 \rangle = \langle x_1 x_2 \rangle\langle x_3 x_4 \rangle + \langle x_1 x_3 \rangle\langle x_2 x_4 \rangle + \langle x_1 x_4 \rangle\langle x_2 x_3 \rangle,   (1)

etc. Given an odd number of variates,

\langle x_1 \rangle = 0   (2)
\langle x_1 x_2 x_3 \rangle = 0,   (3)

etc.

Beta prime distribution

A distribution with probability function

P(x) = \frac{x^{\alpha-1}(1+x)^{-\alpha-\beta}}{B(\alpha, \beta)},

where $B(\alpha,\beta)$ is a beta function. The mode of a variate distributed as $\beta'(\alpha, \beta)$ is

\hat x = \frac{\alpha - 1}{\beta + 1}.

If $x$ is a $\beta'(\alpha, \beta)$ variate, then $1/x$ is a $\beta'(\beta, \alpha)$ variate. If $x$ is a $\beta(\alpha, \beta)$ variate, then $(1-x)/x$ and $x/(1-x)$ are $\beta'(\beta, \alpha)$ and $\beta'(\alpha, \beta)$ variates. If $x$ and $y$ are $\gamma(\alpha_1)$ and $\gamma(\alpha_2)$ variates, then $x/y$ is a $\beta'(\alpha_1, \alpha_2)$ variate. If $x^2$ and $y^2$ are $\chi^2$ variates with $\nu_1$ and $\nu_2$ degrees of freedom, then $x^2/y^2$ is a $\beta'(\nu_1/2, \nu_2/2)$ variate.

Normal sum distribution

Amazingly, the distribution of a sum of two normally distributed independent variates $X$ and $Y$ with means and variances $(\mu_x, \sigma_x^2)$ and $(\mu_y, \sigma_y^2)$, respectively, is another normal distribution

P_{X+Y}(u) = \frac{1}{\sqrt{2\pi(\sigma_x^2 + \sigma_y^2)}}\, e^{-[u - (\mu_x + \mu_y)]^2 / [2(\sigma_x^2 + \sigma_y^2)]},   (1)

which has mean

\mu = \mu_x + \mu_y   (2)

and variance

\sigma^2 = \sigma_x^2 + \sigma_y^2.   (3)

By induction, analogous results hold for the sum of $n$ normally distributed variates. An alternate derivation proceeds by noting that

P_{X+Y}(u) = F^{-1}\left[\phi_X(t)\,\phi_Y(t)\right](u),   (4)

where $\phi$ is the characteristic function and $F^{-1}$ is the inverse Fourier transform, taken with parameters $(1, 1)$. More generally, if $x$ is normally distributed with mean $\mu$ and variance $\sigma^2$, then a linear function of $x$,

y = ax + b,   (5)

is also normally distributed. The new distribution has mean $a\mu + b$ and variance $a^2\sigma^2$, as can be derived using the moment-generating function

M(t) = \langle e^{t(ax+b)} \rangle = e^{(b + a\mu)t + a^2\sigma^2 t^2/2},   (6)

which is of the standard form with

\mu' = b + a\mu   (7)
\sigma'^2 = a^2\sigma^2.   (8)

For a weighted sum of $n$ independent variables

y = \sum_{i=1}^n a_i x_i,   (9)

the expectation is given by

\langle y \rangle = \sum_{i=1}^n a_i \mu_i.   (10)

Setting this equal to the moment expansion gives

\mu_y = \sum_{i=1}^n a_i \mu_i   (11)
\sigma_y^2 = \sum_{i=1}^n a_i^2 \sigma_i^2.   (12)

Therefore, the mean and variance of the weighted sums of $n$ random variables…
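
A minimal simulation sketch of the additivity of means and variances (parameter values are arbitrary):

```python
# Sketch: the sum of two independent normals is normal with added means
# and added variances.
import numpy as np

rng = np.random.default_rng(0)
mu_x, sx, mu_y, sy = 1.0, 2.0, -3.0, 0.5      # arbitrary illustrative values
u = rng.normal(mu_x, sx, 10**6) + rng.normal(mu_y, sy, 10**6)
print(u.mean(), mu_x + mu_y)                  # ~ -2.0
print(u.var(), sx**2 + sy**2)                 # ~ 4.25
```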

Laplace distribution

The Laplace distribution, also called the double exponential distribution, is the distribution of differences between two independent variates with identical exponential distributions (Abramowitz and Stegun 1972, p. 930). It has probability density function and cumulative distribution functions given by

P(x) = \frac{1}{2b}\, e^{-|x - \mu|/b}   (1)
D(x) = \tfrac{1}{2}\left[1 + \mathrm{sgn}(x - \mu)\left(1 - e^{-|x-\mu|/b}\right)\right].   (2)

It is implemented in the Wolfram Language as LaplaceDistribution[mu, beta]. The moments about the mean $\mu_n$ are related to the moments about 0 by

\mu_n = \sum_{j=0}^n \binom{n}{j} (-\mu)^{n-j} \mu'_j,   (3)

where $\binom{n}{j}$ is a binomial coefficient, so

\mu_n = b^n n!\,\frac{1 + (-1)^n}{2} = \begin{cases} b^n n! & n \text{ even} \\ 0 & n \text{ odd}. \end{cases}   (4)

The moments can also be computed using the characteristic function,

\phi(t) = \frac{e^{i\mu t}}{1 + b^2 t^2},   (5)

which follows from the Fourier transform of the exponential function (Abramowitz and Stegun 1972, p. 930). The mean, variance, skewness, and kurtosis excess are

\mu   (6)
\sigma^2 = 2b^2   (7)
\gamma_1 = 0   (8)
\gamma_2 = 3.   (9)

Discrete uniform distribution

The discrete uniform distribution is also known as the "equally likely outcomes" distribution. Letting a set have $N$ elements, each of them having the same probability $P$, normalization

\sum_{i=1}^N P(x_i) = NP = 1   (1)

gives

P = \frac{1}{N}.   (2)

Restricting the set to the set of positive integers 1, 2, ..., $N$, the probability distribution function and cumulative distributions function for this discrete uniform distribution are therefore

P(n) = \frac{1}{N}   (3)
D(n) = \frac{n}{N}   (4)

for $n = 1$, ..., $N$. The discrete uniform distribution is implemented in the Wolfram Language as DiscreteUniformDistribution[n]. Its moment-generating function is

M(t) = \langle e^{nt} \rangle = \frac{1}{N} \sum_{n=1}^N e^{nt} = \frac{e^t(e^{Nt} - 1)}{N(e^t - 1)}.   (5)

The moments about 0 are

\mu'_m = \frac{1}{N} \sum_{n=1}^N n^m,   (6)

so

\mu'_1 = \frac{N+1}{2}   (7)
\mu'_2 = \frac{(N+1)(2N+1)}{6}   (8)
\mu'_3 = \frac{N(N+1)^2}{4},   (9)

and the moments about the mean follow from (3) of the Laplace-style binomial relation. The mean, variance, skewness, and kurtosis excess are

\mu = \frac{N+1}{2}   (10)
\sigma^2 = \frac{N^2 - 1}{12}   (11)
\gamma_1 = 0   (12)
\gamma_2 = -\frac{6(N^2 + 1)}{5(N^2 - 1)}.   (13)

The mean deviation for a uniform distribution on $N$ elements is given by summing $|n - \mu|P(n)$. To do the sum, consider separately the cases of $N$ odd and $N$ even…
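
A minimal check of the closed-form mean and variance against the definition (N = 10 is an arbitrary choice):

```python
# Sketch: verify mean (N+1)/2 and variance (N^2-1)/12 for the discrete
# uniform distribution on {1, ..., N}.
import numpy as np

N = 10
n = np.arange(1, N + 1)
p = np.full(N, 1.0 / N)
mean = np.sum(n * p)
var = np.sum((n - mean)**2 * p)
print(mean, (N + 1) / 2)          # 5.5
print(var, (N**2 - 1) / 12)       # 8.25
```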

Hypergeometric distribution

Let there be $n$ ways for a "good" selection and $m$ ways for a "bad" selection out of a total of $n + m$ possibilities. Take $N$ samples and let $x_i$ equal 1 if selection $i$ is successful and 0 if it is not. Let $x$ be the total number of successful selections,

x = \sum_{i=1}^N x_i.   (1)

The probability of $i$ successful selections is then

P(x = i) = \frac{\binom{n}{i}\binom{m}{N-i}}{\binom{n+m}{N}}.   (2)

The hypergeometric distribution is implemented in the Wolfram Language as HypergeometricDistribution[N, n, m+n]. The problem of finding the probability of such a picking problem is sometimes called the "urn problem," since it asks for the probability that $i$ out of $N$ balls drawn are "good" from an urn that contains $n$ "good" balls and $m$ "bad" balls. It therefore also describes the probability of obtaining exactly $i$ correct balls in a pick-$N$ lottery from a reservoir of $n + m$ balls (of which $n$ are "good" and $m$ are "bad"). For example, for given values of $n$, $m$, and $N$, the probabilities of obtaining $i$ correct balls…
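
A minimal sketch of the urn problem with SciPy; the counts (20 good, 30 bad, 5 drawn) are arbitrary illustrative values:

```python
# Sketch: P(exactly i good balls among N_draw drawn without replacement)
# via scipy.stats.hypergeom.
from scipy.stats import hypergeom

n_good, n_bad, N_draw = 20, 30, 5
total = n_good + n_bad
for i in range(N_draw + 1):
    print(i, hypergeom.pmf(i, total, n_good, N_draw))
```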

Binomial distribution

The binomial distribution gives the discrete probability distribution $P_p(n \mid N)$ of obtaining exactly $n$ successes out of $N$ Bernoulli trials (where the result of each Bernoulli trial is true with probability $p$ and false with probability $q = 1 - p$). The binomial distribution is therefore given by

P_p(n \mid N) = \binom{N}{n} p^n q^{N-n},   (1)

where $\binom{N}{n}$ is a binomial coefficient. The binomial distribution is implemented in the Wolfram Language as BinomialDistribution[n, p]. The probability of obtaining more successes than the $n$ observed in a binomial distribution is

P = \sum_{k=n+1}^N \binom{N}{k} p^k q^{N-k} = I_p(n+1, N-n),   (2)

where

I_x(a, b) = \frac{B(x; a, b)}{B(a, b)},   (3)

$B(a,b)$ is the beta function, and $B(x; a, b)$ is the incomplete beta function. The characteristic function for the binomial distribution is

\phi(t) = \left(q + p e^{it}\right)^N   (4)

(Papoulis 1984, p. 154). The moment-generating function for the distribution is

M(t) = \langle e^{nt} \rangle = \left(q + p e^t\right)^N.   (5)

The mean is

\mu = Np.   (6)

The moments about 0 are

\mu'_1 = Np   (7)
\mu'_2 = Np(1 - p + Np),   (8)

so the moments about the mean are

\mu_2 = Npq   (9)
\mu_3 = Npq(q - p)   (10)
\mu_4 = Npq\left[1 + 3pq(N - 2)\right].   (11)

The skewness…
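
A minimal sketch verifying the tail-probability identity (2) numerically (the values of N, p, and n are arbitrary):

```python
# Sketch: the binomial upper tail P(X > n) equals the regularized
# incomplete beta function I_p(n+1, N-n).
from scipy.stats import binom
from scipy.special import betainc

N, p, n = 20, 0.3, 7                       # arbitrary illustrative values
print(binom.sf(n, N, p))                   # P(X > n) by direct summation
print(betainc(n + 1, N - n, p))            # I_p(n+1, N-n): same value
```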

Poisson distribution

Given a Poisson process, the probability of obtaining exactly $n$ successes in $N$ trials is given by the limit of a binomial distribution

P_p(n \mid N) = \frac{N!}{n!(N-n)!}\, p^n (1-p)^{N-n}.   (1)

Viewing the distribution as a function of the expected number of successes

\nu \equiv Np   (2)

instead of the sample size $N$ for fixed $p$, equation (1) then becomes

P_\nu(n \mid N) = \frac{N!}{n!(N-n)!} \left(\frac{\nu}{N}\right)^n \left(1 - \frac{\nu}{N}\right)^{N-n}.   (3)

Letting the sample size $N$ become large, the distribution then approaches

P_\nu(n) = \frac{\nu^n e^{-\nu}}{n!},   (4)

which is known as the Poisson distribution (Papoulis 1984, pp. 101 and 554; Pfeiffer and Schum 1973, p. 200). Note that the sample size $N$ has completely dropped out of the probability function, which has the same functional form for all values of $\nu$. The Poisson distribution is implemented in the Wolfram Language as PoissonDistribution[mu]. As expected, the Poisson distribution is normalized so that the sum of probabilities equals 1, since

\sum_{n=0}^\infty P_\nu(n) = e^{-\nu} \sum_{n=0}^\infty \frac{\nu^n}{n!} = e^{-\nu} e^\nu = 1.   (5)

The ratio of probabilities is given by

\frac{P_\nu(n+1)}{P_\nu(n)} = \frac{\nu}{n+1}.   (6)

The Poisson distribution reaches a maximum when

\frac{d\ln P_\nu(n)}{dn} = 0,   (7)

where the condition involves the Euler-Mascheroni…
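
A minimal sketch of the limiting process: the binomial pmf with p = nu/N converges to the Poisson pmf as N grows (nu and n are arbitrary illustrative values):

```python
# Sketch: binomial(N, nu/N) pmf at n approaches Poisson(nu) pmf at n.
from scipy.stats import binom, poisson

nu, n = 4.0, 6
for N in (10, 100, 1000, 10000):
    print(N, binom.pmf(n, N, nu / N))
print("Poisson:", poisson.pmf(n, nu))      # limit ~0.10420
```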

Beta distribution

A general type of statistical distribution which is related to the gamma distribution. Beta distributions have two free parameters, which are labeled according to one of two notational conventions. The usual definition calls these $\alpha$ and $\beta$, and the other uses $\beta' \equiv \beta - 1$ and $\alpha' \equiv \alpha - 1$ (Beyer 1987, p. 534). The beta distribution is used as a prior distribution for binomial proportions in Bayesian analysis (Evans et al. 2000, p. 34). The original entry plots the density for parameter values ranging from 0.25 to 3.00. The domain is $[0, 1]$, and the probability function $P(x)$ and distribution function $D(x)$ are given by

P(x) = \frac{(1-x)^{\beta-1} x^{\alpha-1}}{B(\alpha, \beta)}   (1)
D(x) = I_x(\alpha, \beta),   (2)

where $B(\alpha, \beta)$ is the beta function, $I_x(\alpha, \beta)$ is the regularized beta function, and $\alpha, \beta > 0$. The beta distribution is implemented in the Wolfram Language as BetaDistribution[alpha, beta]. The distribution is normalized since

\int_0^1 P(x)\,dx = 1.   (3)

The characteristic function is

\phi(t) = {}_1F_1(\alpha;\, \alpha + \beta;\, it),   (4)

where $_1F_1(a; b; z)$ is a confluent hypergeometric function of the first kind. The raw moments are given by

\mu'_r = \frac{B(\alpha + r, \beta)}{B(\alpha, \beta)} = \frac{(\alpha)_r}{(\alpha + \beta)_r}   (5)

(Papoulis 1984,…

Beta binomial distribution

A variable $x$ with a beta binomial distribution is distributed as a binomial distribution with parameter $p$, where $p$ is itself distributed as a beta distribution with parameters $\alpha$ and $\beta$. For $n$ trials, it has probability density function

P(x) = \binom{n}{x} \frac{B(x + \alpha,\, n - x + \beta)}{B(\alpha, \beta)},   (1)

where $B(a, b)$ is a beta function and $\binom{n}{x}$ is a binomial coefficient, and a distribution function (2) expressible in terms of gamma functions and a generalized hypergeometric function $_3F_2$. It is implemented as BetaBinomialDistribution[alpha, beta, n]. The first few raw moments are

\mu'_1 = \frac{n\alpha}{\alpha + \beta}   (3)
\mu'_2 = \frac{n\alpha}{\alpha + \beta} + \frac{n(n-1)\alpha(\alpha+1)}{(\alpha + \beta)(\alpha + \beta + 1)},   (4)

giving the mean and variance as

\mu = \frac{n\alpha}{\alpha + \beta}   (5)
\sigma^2 = \frac{n\alpha\beta(\alpha + \beta + n)}{(\alpha + \beta)^2(\alpha + \beta + 1)}.   (6)

Le Cam's inequality

Let $S_n = X_1 + \cdots + X_n$ be the sum of $n$ independent random variates $X_i$ with Bernoulli distributions such that $P(X_i = 1) = p_i$. Then

\sum_{k=0}^\infty \left| P(S_n = k) - \frac{\lambda^k e^{-\lambda}}{k!} \right| < 2 \sum_{i=1}^n p_i^2,

where

\lambda = \sum_{i=1}^n p_i.

Fisher's theorem

Let $Q = \sum_{i=1}^n x_i^2$ be a sum of squares of $n$ independent normal standardized variates $x_i$, and suppose $Q = Q_1 + Q_2$, where $Q_1$ is a quadratic form in the $x_i$ distributed as chi-squared with $h$ degrees of freedom. Then $Q_2$ is distributed as $\chi^2$ with $n - h$ degrees of freedom and is independent of $Q_1$. The converse of this theorem is known as Cochran's theorem.

Cramér's theorem

If $X$ and $Y$ are independent variates and $X + Y$ has a normal distribution, then both $X$ and $Y$ must have normal distributions. This was proved by Cramér in 1936.

Central limit theorem

Let $x_1, x_2, ..., x_N$ be a set of $N$ independent random variates, each having an arbitrary probability distribution with mean $\mu_i$ and a finite variance $\sigma_i^2$. Then the normal form variate

X_{\rm norm} = \frac{\sum_{i=1}^N x_i - \sum_{i=1}^N \mu_i}{\sqrt{\sum_{i=1}^N \sigma_i^2}}   (1)

has a limiting cumulative distribution function which approaches a normal distribution. Under additional conditions on the distribution of the addend, the probability density itself is also normal (Feller 1971), with mean $\mu = 0$ and variance $\sigma^2 = 1$. If conversion to normal form is not performed, then the variate

\bar X = \frac{1}{N} \sum_{i=1}^N x_i   (2)

is normally distributed with $\mu_{\bar X} = \mu_x$ and $\sigma_{\bar X} = \sigma_x/\sqrt N$. Kallenberg (1997) gives a six-line proof of the central limit theorem. An elementary, but slightly more cumbersome, proof proceeds by considering the inverse Fourier transform of the characteristic function of $\bar X$, expanding each exponential factor to second order, and noting that the terms of higher order vanish as $N \to \infty$; the surviving expression is the characteristic function of a Gaussian, so taking the Fourier transform gives a Gaussian function,…
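
A minimal simulation sketch of the theorem, using uniform addends (the choice of addend distribution and of N are arbitrary):

```python
# Sketch: sample means of N uniform variates, put into normal form,
# approach a standard normal as N grows.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
N, trials = 50, 100_000
mu, sigma = 0.5, np.sqrt(1 / 12)            # mean and sd of U(0,1)
x = rng.random((trials, N))
z = (x.sum(axis=1) - N * mu) / (np.sqrt(N) * sigma)   # normal form variate
print((z < 1.0).mean(), norm.cdf(1.0))      # both ~0.8413
```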

Survival function

The survival function $S(x)$ describes the probability that a variate $X$ takes on a value greater than a number $x$ (Evans et al. 2000, p. 6). The survival function is therefore related to a continuous probability density function $P(x)$ by

S(x) = P(X > x) = \int_x^\infty P(\xi)\,d\xi,   (1)

so $S(x) = 1 - D(x)$, where $D(x)$ is the distribution function. Similarly, the survival function is related to a discrete probability by

S(x) = P(X > x) = \sum_{X > x} P(x).   (2)

The survival function and distribution function are related by

D(x) + S(x) = 1,   (3)

since probability functions are normalized.

Statistical distribution

The distribution of a variable is a description of the relative numbers of times each possible outcome will occur in a number of trials. The function describing the probability that a given value will occur is called the probability density function (abbreviated PDF), and the function describing the cumulative probability that a given value or any value smaller than it will occur is called the distribution function (or cumulative distribution function, abbreviated CDF). Formally, a distribution can be defined as a normalized measure, and the distribution of a random variable $x$ is the measure $\nu_x$ on $S'$ defined by setting

\nu_x(A') = P\{s \in S : x(s) \in A'\},

where $(S, P)$ is a probability space, $(S', \mathcal B')$ is a measurable space, and $P$ a measure on $S$ with $P(S) = 1$. If the measure is a Radon measure (which is usually the case), then the statistical distribution is a generalized function in the sense of a generalized function…

Stable distribution

Stable distributions are a class of probability distributions allowing skewness and heavy tails (Rimmer and Nolan 2005). They are described by an index of stability (also known as a characteristic exponent) $\alpha \in (0, 2]$, a skewness parameter $\beta \in [-1, 1]$, a scale parameter $\gamma > 0$, and a location parameter $\delta$. Two possible parametrizations, commonly denoted $S_0$ and $S_1$, differ in how the location parameter enters the characteristic function (Rimmer and Nolan 2005). $S_0$ is most convenient for numerical computations, whereas $S_1$ is commonly used in economics.

Sklar's theorem

Let $H$ be a two-dimensional distribution function with marginal distribution functions $F$ and $G$. Then there exists a copula $C$ such that

H(x, y) = C(F(x), G(y)).

Conversely, for any univariate distribution functions $F$ and $G$ and any copula $C$, the function $H$ is a two-dimensional distribution function with marginals $F$ and $G$. Furthermore, if $F$ and $G$ are continuous, then $C$ is unique.

Rényi's parking constants

Given the closed interval $[0, x]$ with $x \ge 1$, let one-dimensional "cars" of unit length be parked randomly on the interval. The mean number $M(x)$ of cars which can fit (without overlapping!) satisfies

M(x) = \begin{cases} 0 & 0 \le x < 1 \\ 1 + \frac{2}{x-1} \int_0^{x-1} M(y)\,dy & x \ge 1. \end{cases}   (1)

The mean density of the cars for large $x$ is

m = \lim_{x\to\infty} \frac{M(x)}{x} = \int_0^\infty \exp\!\left[-2\int_0^y \frac{1 - e^{-t}}{t}\,dt\right] dy = 0.7475979202...   (2)

(OEIS A050996). While the inner integral can be done analytically,

\int_0^y \frac{1 - e^{-t}}{t}\,dt = \gamma + \ln y + \Gamma(0, y),   (3)

where $\gamma$ is the Euler-Mascheroni constant and $\Gamma(0, y)$ is the incomplete gamma function, it is not known how to do the outer one,

m = e^{-2\gamma} \int_0^\infty \frac{e^{-2\Gamma(0,y)}}{y^2}\,dy,   (4)

where $\Gamma(0, y) = -\mathrm{Ei}(-y)$ and $\mathrm{Ei}$ is the exponential integral. The slowly converging series expansion for the integrand is given by the coefficients in OEIS A050994 and A050995. In addition, Rényi (1958) showed that $M(x) = mx + O(1)$ for all $x$, which was strengthened by Dvoretzky and Robbins (1964) to

M(x) = mx + m - 1 + O(x^{-n})   (5)

for any $n$. Let $V(x)$ be the variance of the number of cars; then Dvoretzky and Robbins (1964) and Mannion (1964) showed that

v = \lim_{x\to\infty} \frac{V(x)}{x} = 0.038156...   (6)

(OEIS A086245), where the numerical value is due to Blaisdell and Solomon…
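
The parking constant is easy to estimate by direct simulation of the recursive parking process; a minimal sketch (interval length and trial count are arbitrary choices):

```python
# Sketch: estimate the Renyi parking constant m ~ 0.7476 by recursively
# parking unit cars at uniform random positions on [0, x].
import random

def park(length):
    """Number of unit cars randomly parked on an interval of given length."""
    if length < 1.0:
        return 0
    t = random.uniform(0.0, length - 1.0)   # left end of the new car
    return 1 + park(t) + park(length - 1.0 - t)

random.seed(0)
x, trials = 200.0, 2000
mean_cars = sum(park(x) for _ in range(trials)) / trials
print(mean_cars / x)                        # ~0.7476 (OEIS A050996)
```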

Ratio distribution

Given two distributions $Y$ and $X$ with joint probability density function $f(x, y)$, let $U = Y/X$ be the ratio distribution. Then the distribution function of $u$ is

D(u) = P(U \le u)   (1)
     = \int_0^\infty \int_{-\infty}^{ux} f(x, y)\,dy\,dx + \int_{-\infty}^0 \int_{ux}^\infty f(x, y)\,dy\,dx.   (2)

The probability function is then

P(u) = D'(u) = \int_{-\infty}^\infty |x|\, f(x, ux)\,dx.   (3)

For variates with standard normal distributions, the ratio distribution is a Cauchy distribution. For a uniform ratio distribution,

P(u) = \begin{cases} \tfrac{1}{2} & 0 \le u \le 1 \\ \tfrac{1}{2u^2} & u > 1. \end{cases}   (4)

Memoryless

A variable $x$ is memoryless with respect to $t$ if, for all $s$ and $t$ with $t \ge 0$,

P(x > s + t \mid x > t) = P(x > s).   (1)

Equivalently,

\frac{P(x > s + t)}{P(x > t)} = P(x > s)   (2)
P(x > s + t) = P(x > s)\,P(x > t).   (3)

The exponential distribution satisfies

P(x > t) = e^{-\lambda t}   (4)
P(x > s + t) = e^{-\lambda(s + t)}   (5)

and therefore

P(x > s + t) = e^{-\lambda s} e^{-\lambda t} = P(x > s)\,P(x > t),   (6)

so it is the only memoryless continuous random distribution. If $s$ and $t$ are integers, then the geometric distribution is memoryless. However, since there are two types of geometric distribution (one starting at 0 and the other at 1), two types of definition for memoryless are needed in the integer case. If the definition is as above,

P(x > s + t \mid x > t) = P(x > s),   (7)

then the geometric distribution that starts at 1 is memoryless. If the definition becomes

P(x \ge s + t \mid x \ge t) = P(x \ge s),   (8)

then the geometric distribution that starts at 0 is memoryless. Note that these two cases are equivalent in the continuous case. A useful consequence of the memoryless property is

\langle x \mid x > t \rangle = t + \langle x \rangle,   (9)

where $\langle x \rangle$ indicates an expectation value.
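
A minimal numerical check of (1) for the exponential distribution (the values of lambda, s, and t are arbitrary):

```python
# Sketch: P(x > s+t | x > t) = P(x > s) for the exponential distribution.
from scipy.stats import expon

lam, s, t = 1.5, 0.7, 2.0
scale = 1 / lam
lhs = expon.sf(s + t, scale=scale) / expon.sf(t, scale=scale)
print(lhs, expon.sf(s, scale=scale))   # equal: both exp(-lam*s) ~0.3499
```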

Weak law of large numbers

The weak law of large numbers (cf. the strong law of large numbers) is a result in probability theory also known as Bernoulli's theorem. Let $x_1$, ..., $x_n$ be a sequence of independent and identically distributed random variables, each having a mean $\langle x_i \rangle = \mu$ and standard deviation $\sigma$. Define a new variable

\bar x = \frac{x_1 + \cdots + x_n}{n}.   (1)

Then, as $n \to \infty$, the sample mean $\langle \bar x \rangle$ equals the population mean $\mu$ of each variable:

\langle \bar x \rangle = \left\langle \frac{x_1 + \cdots + x_n}{n} \right\rangle = \frac{1}{n}\left(\langle x_1 \rangle + \cdots + \langle x_n \rangle\right) = \frac{n\mu}{n} = \mu.   (2)

In addition,

\mathrm{var}(\bar x) = \mathrm{var}\!\left(\frac{x_1 + \cdots + x_n}{n}\right) = \frac{1}{n^2}\sum_{i=1}^n \mathrm{var}(x_i) = \frac{\sigma^2}{n}.   (3)

Therefore, by the Chebyshev inequality, for all $\epsilon > 0$,

P(|\bar x - \mu| \ge \epsilon) \le \frac{\sigma^2}{n\epsilon^2}.   (4)

As $n \to \infty$, it then follows that

\lim_{n\to\infty} P(|\bar x - \mu| \ge \epsilon) = 0   (5)

(Khinchin 1929). Stated another way, the probability that the average $|\bar x - \mu| < \epsilon$ for an arbitrary positive quantity $\epsilon$ approaches 1 as $n \to \infty$ (Feller 1968, pp. 228-229).

Planck's radiation function

Planck's radiation function is the function

P(x) = \frac{15}{\pi^4}\,\frac{1}{x^5\left(e^{1/x} - 1\right)},   (1)

which is normalized so that

\int_0^\infty P(x)\,dx = 1.   (2)

However, the function is sometimes also defined without the numerical normalization factor of $15/\pi^4$ (e.g., Abramowitz and Stegun 1972, p. 999). The first and second raw moments are

\mu'_1 = \frac{30\zeta(3)}{\pi^4}   (3)
\mu'_2 = \frac{5}{2\pi^2},   (4)

where $\zeta(3)$ is Apéry's constant, but higher order raw moments do not exist since the corresponding integrals do not converge. It has a maximum at $x \approx 0.20141$ (OEIS A133838), where

P'(x) = 0,   (5)

and inflection points (OEIS A133839 and A133840) where

P''(x) = 0.   (6)

Rice distribution

The Rice distribution has probability density function

P(x) = \frac{x}{\sigma^2} \exp\!\left(-\frac{x^2 + V^2}{2\sigma^2}\right) I_0\!\left(\frac{xV}{\sigma^2}\right)

for $x \ge 0$, where $I_0(z)$ is a modified Bessel function of the first kind and $V \ge 0$. For a derivation, see Papoulis (1962). For $V = 0$, this reduces to the Rayleigh distribution.

Zipf distribution

The Zipf distribution, sometimes referred to as the zeta distribution, is a discrete distribution commonly used in linguistics, insurance, and the modelling of rare events. It has probability density function

P(n) = \frac{n^{-(\rho+1)}}{\zeta(\rho + 1)},   (1)

where $\rho$ is a positive parameter and $\zeta(z)$ is the Riemann zeta function, and distribution function

D(n) = \frac{H_n^{(\rho+1)}}{\zeta(\rho + 1)},   (2)

where $H_n^{(s)}$ is a generalized harmonic number. The Zipf distribution is implemented in the Wolfram Language as ZipfDistribution[rho]. The $m$th raw moment is

\mu'_m = \frac{\zeta(\rho + 1 - m)}{\zeta(\rho + 1)},   (3)

giving the mean and variance as

\mu = \frac{\zeta(\rho)}{\zeta(\rho + 1)}   (4)
\sigma^2 = \frac{\zeta(\rho - 1)}{\zeta(\rho + 1)} - \frac{\zeta^2(\rho)}{\zeta^2(\rho + 1)}.   (5)

The distribution has a mean deviation expressible in terms of a Hurwitz zeta function $\zeta(s, a)$ and the mean $\mu$ as given above in equation (4).

Negative binomial distribution

The negative binomial distribution, also known as the Pascal distribution or Pólya distribution, gives the probability of $r - 1$ successes and $x$ failures in $x + r - 1$ trials, and success on the $(x + r)$th trial. The probability density function is therefore given by

P(x) = \binom{x + r - 1}{r - 1} p^r (1 - p)^x,   (1)

where $\binom{n}{k}$ is a binomial coefficient. The distribution function is then given by

D(x) = I_p(r, x + 1),   (2)

where $I_x(a, b)$ is a regularized beta function; it can equivalently be written in terms of the gamma function and a regularized hypergeometric function $_2\tilde F_1$. The negative binomial distribution is implemented in the Wolfram Language as NegativeBinomialDistribution[r, p]. Defining

P \equiv p, \qquad Q \equiv 1 - p,   (3)

the characteristic function is given by

\phi(t) = P^r \left(1 - Q e^{it}\right)^{-r}   (4)

and the moment-generating function by

M(t) = P^r \left(1 - Q e^t\right)^{-r}.   (5)

Since $M^{(n)}(0) = \mu'_n$, the raw moments follow by differentiation; the mean is

\mu = \frac{rQ}{P}.   (6)

(Note that Beyer 1987, p. 487, apparently gives the mean incorrectly.) This gives the central moments as

\mu_2 = \frac{rQ}{P^2}   (7)
\mu_3 = \frac{rQ(1 + Q)}{P^3}.   (8)

The mean, variance, skewness and…

Weibull distribution

The Weibull distribution is given by

P(x) = \alpha\beta^{-\alpha} x^{\alpha-1} e^{-(x/\beta)^\alpha}   (1)
D(x) = 1 - e^{-(x/\beta)^\alpha}   (2)

for $x \in [0, \infty)$, and is implemented in the Wolfram Language as WeibullDistribution[alpha, beta]. The raw moments of the distribution are

\mu'_k = \beta^k\,\Gamma\!\left(1 + \frac{k}{\alpha}\right),   (3)

and the mean, variance, skewness, and kurtosis excess are

\mu = \beta\Gamma_1   (4)
\sigma^2 = \beta^2\left(\Gamma_2 - \Gamma_1^2\right)   (5)
\gamma_1 = \frac{2\Gamma_1^3 - 3\Gamma_1\Gamma_2 + \Gamma_3}{(\Gamma_2 - \Gamma_1^2)^{3/2}}   (6)
\gamma_2 = \frac{-6\Gamma_1^4 + 12\Gamma_1^2\Gamma_2 - 3\Gamma_2^2 - 4\Gamma_1\Gamma_3 + \Gamma_4}{(\Gamma_2 - \Gamma_1^2)^2},   (7)

where $\Gamma(z)$ is the gamma function and

\Gamma_i \equiv \Gamma\!\left(1 + \frac{i}{\alpha}\right).   (8)

A slightly different form of the distribution is defined by

P(x) = \frac{\alpha}{\beta} x^{\alpha-1} e^{-x^\alpha/\beta}   (9)
D(x) = 1 - e^{-x^\alpha/\beta}   (10)

(Mendenhall and Sincich 1995). This has raw moments

\mu'_k = \beta^{k/\alpha}\,\Gamma\!\left(1 + \frac{k}{\alpha}\right),   (11)

so the mean and variance for this form are

\mu = \beta^{1/\alpha}\Gamma_1   (12)
\sigma^2 = \beta^{2/\alpha}\left(\Gamma_2 - \Gamma_1^2\right).   (13)

The Weibull distribution gives the distribution of lifetimes of objects. It was originally proposed to quantify fatigue data, but it is also used in analysis of systems involving a "weakest link."

Cauchy distribution

The Cauchy distribution, also called the Lorentzian distribution or Lorentz distribution, is a continuous distribution describing resonance behavior. It also describes the distribution of horizontal distances at which a line segment tilted at a random angle cuts the x-axis. Let $\theta$ represent the angle that a line, with fixed point of rotation a distance $b$ from the x-axis, makes with the vertical axis. Then

\tan\theta = \frac{x}{b}   (1)
\theta = \tan^{-1}\!\left(\frac{x}{b}\right)   (2)
d\theta = \frac{b\,dx}{b^2 + x^2},   (3)

so the distribution of angle $\theta$ is given by

\frac{d\theta}{\pi} = \frac{b}{\pi(b^2 + x^2)}\,dx.   (4)

This is normalized over all angles, since

\int_{-\pi/2}^{\pi/2} \frac{d\theta}{\pi} = 1   (5)

and

\int_{-\infty}^\infty \frac{b}{\pi(b^2 + x^2)}\,dx = 1.   (6)

The general Cauchy distribution and its cumulative distribution can be written as

P(x) = \frac{\Gamma/2}{\pi\left[(x - m)^2 + (\Gamma/2)^2\right]}   (7)
D(x) = \frac{1}{2} + \frac{1}{\pi}\tan^{-1}\!\left(\frac{x - m}{\Gamma/2}\right),   (8)

where $\Gamma/2$ is the half width at half maximum and $m$ is the statistical median. The Cauchy distribution is implemented in the Wolfram Language as CauchyDistribution[m, Gamma/2]. The characteristic function is

\phi(t) = e^{imt - \Gamma|t|/2}.   (9)

The moments of the distribution are undefined since the integrals

\int_{-\infty}^\infty |x|^n P(x)\,dx   (10)

diverge for $n \ge 1$. If $X$ and $Y$ are variates with…

Maxwell distribution

The Maxwell (or Maxwell-Boltzmann) distribution gives the distribution of speeds of molecules in thermal equilibrium as given by statistical mechanics. Defining $a^2 \equiv kT/m$, where $k$ is the Boltzmann constant, $T$ is the temperature, $m$ is the mass of a molecule, and letting $v$ denote the speed of a molecule, the probability and cumulative distributions over the range $v \in [0, \infty)$ are

P(v) = \sqrt{\frac{2}{\pi}}\,\frac{v^2}{a^3}\, e^{-v^2/(2a^2)}   (1)
D(v) = \mathrm{erf}\!\left(\frac{v}{a\sqrt 2}\right) - \sqrt{\frac{2}{\pi}}\,\frac{v}{a}\, e^{-v^2/(2a^2)},   (2)

using the form of Papoulis (1984), where the cumulative distribution can also be written in terms of an incomplete gamma function and erf. Spiegel (1992) and von Seggern (1993) each use slightly different definitions of the constant $a$. It is implemented in the Wolfram Language as MaxwellDistribution[sigma]. The $k$th raw moment is

\mu'_k = \frac{2^{k/2+1}}{\sqrt\pi}\, a^k\,\Gamma\!\left(\frac{k + 3}{2}\right),   (3)

giving the first few as

\mu'_1 = 2a\sqrt{\frac{2}{\pi}}   (4)
\mu'_2 = 3a^2   (5)
\mu'_3 = 8a^3\sqrt{\frac{2}{\pi}}   (6)
\mu'_4 = 15a^4   (7)

(Papoulis 1984, p. 149). The mean, variance, skewness, and kurtosis excess are therefore given by

\mu = 2a\sqrt{\frac{2}{\pi}}   (8)
\sigma^2 = \frac{a^2(3\pi - 8)}{\pi}   (9)
\gamma_1 = \frac{2\sqrt 2\,(16 - 5\pi)}{(3\pi - 8)^{3/2}}   (10)
\gamma_2 = \frac{4(-96 + 40\pi - 3\pi^2)}{(3\pi - 8)^2}.   (11)

The characteristic function involves the erfi function…

Uniform distribution

A uniform distribution, sometimes also known as a rectangular distribution, is a distribution that has constant probability. The probability density function and cumulative distribution function for a continuous uniform distribution on the interval $[a, b]$ are

P(x) = \begin{cases} 0 & x < a \\ \frac{1}{b-a} & a \le x \le b \\ 0 & x > b \end{cases}   (1)
D(x) = \begin{cases} 0 & x < a \\ \frac{x-a}{b-a} & a \le x \le b \\ 1 & x > b. \end{cases}   (2)

These can be written in terms of the Heaviside step function $H(x)$ as

P(x) = \frac{H(x - a) - H(x - b)}{b - a}   (3)
D(x) = \frac{(x - a)H(x - a) - (x - b)H(x - b)}{b - a},   (4)

the latter of which simplifies to the expected $(x - a)/(b - a)$ for $a \le x \le b$. The continuous distribution is implemented as UniformDistribution[a, b]. For a continuous uniform distribution, the characteristic function is

\phi(t) = \frac{e^{ibt} - e^{iat}}{it(b - a)}.   (5)

If the interval is centered on zero so that $a = -b$, the characteristic function simplifies to

\phi(t) = \frac{\sin(bt)}{bt}.   (6)

The moment-generating function is

M(t) = \langle e^{xt} \rangle = \frac{e^{bt} - e^{at}}{t(b - a)}.   (7)

The moment-generating function is not differentiable at zero, but the moments can be calculated by differentiating and then taking $\lim_{t\to 0}$. The raw moments are given analytically by

\mu'_n = \frac{b^{n+1} - a^{n+1}}{(n + 1)(b - a)}.   (8)

The first few are therefore given explicitly by

\mu'_1 = \frac{a + b}{2}   (9)
\mu'_2 = \frac{a^2 + ab + b^2}{3}.   (10)

The central moments are given analytically…

Extreme value distribution

There are essentially three types of Fisher-Tippett extreme value distributions. The most common is the type I distribution, which is sometimes referred to as the Gumbel type or just the Gumbel distribution. These are distributions of an extreme order statistic for a distribution of $N$ elements $x_i$. The Fisher-Tippett distribution corresponding to a maximum extreme value distribution (i.e., the distribution of the maximum $\hat x$), sometimes known as the log-Weibull distribution, with location parameter $\alpha$ and scale parameter $\beta$ is implemented in the Wolfram Language as ExtremeValueDistribution[alpha, beta]. It has probability density function and distribution function

P(x) = \frac{1}{\beta}\, e^{(\alpha - x)/\beta} \exp\!\left[-e^{(\alpha - x)/\beta}\right]   (1)
D(x) = \exp\!\left[-e^{(\alpha - x)/\beta}\right].   (2)

The moments can be computed directly by defining

z \equiv e^{(\alpha - x)/\beta},   (3)

which reduces the raw moments to integrals of powers of $\alpha - \beta\ln z$ against $e^{-z}$, expressible in terms of the Euler-Mascheroni integrals. Plugging in the Euler-Mascheroni integrals gives the mean, variance, skewness, and kurtosis excess as

\mu = \alpha + \gamma\beta   (4)
\sigma^2 = \frac{\pi^2\beta^2}{6}   (5)
\gamma_1 = \frac{12\sqrt 6\,\zeta(3)}{\pi^3}   (6)
\gamma_2 = \frac{12}{5},   (7)

where $\gamma$ is the Euler-Mascheroni constant and $\zeta(3)$ is Apéry's…

Triangular distribution

The triangular distribution is a continuous distribution defined on the range $x \in [a, b]$ with probability density function

P(x) = \begin{cases} \frac{2(x - a)}{(b - a)(c - a)} & a \le x \le c \\ \frac{2(b - x)}{(b - a)(b - c)} & c < x \le b \end{cases}   (1)

and distribution function

D(x) = \begin{cases} \frac{(x - a)^2}{(b - a)(c - a)} & a \le x \le c \\ 1 - \frac{(b - x)^2}{(b - a)(b - c)} & c < x \le b, \end{cases}   (2)

where $c \in [a, b]$ is the mode. The symmetric triangular distribution on $[a, b]$ is implemented in the Wolfram Language as TriangularDistribution[a, b], and the triangular distribution on $[a, b]$ with mode $c$ as TriangularDistribution[a, b, c]. The mean is

\mu = \frac{a + b + c}{3},   (3)

the second raw moment is

\mu'_2 = \frac{a^2 + b^2 + c^2 + ab + ac + bc}{6},   (4)

and the central moments are

\sigma^2 = \frac{a^2 + b^2 + c^2 - ab - ac - bc}{18}   (5)
\mu_3 = \frac{(a + b - 2c)(2a - b - c)(a - 2b + c)}{270}   (6)
\mu_4 = \frac{(a^2 + b^2 + c^2 - ab - ac - bc)^2}{135}.   (7)

It has skewness and kurtosis excess given by

\gamma_1 = \frac{\sqrt 2\,(a + b - 2c)(2a - b - c)(a - 2b + c)}{5\,(a^2 + b^2 + c^2 - ab - ac - bc)^{3/2}}   (8)
\gamma_2 = -\frac{3}{5}.   (9)

Exponential distribution

Given a Poisson distribution with rate of change $\lambda$, the distribution of waiting times between successive changes (with $k = 0$) is

D(x) = P(X \le x) = 1 - P(X > x) = 1 - e^{-\lambda x},   (1)

and the probability distribution function is

P(x) = D'(x) = \lambda e^{-\lambda x}.   (2)

It is implemented in the Wolfram Language as ExponentialDistribution[lambda]. The exponential distribution is the only continuous memoryless random distribution. It is a continuous analog of the geometric distribution. This distribution is properly normalized since

\int_0^\infty \lambda e^{-\lambda x}\,dx = 1.   (3)

The raw moments are given by

\mu'_n = \frac{n!}{\lambda^n},   (4)

the first few of which are therefore 1, $\lambda^{-1}$, $2\lambda^{-2}$, $6\lambda^{-3}$, $24\lambda^{-4}$, .... Similarly, the central moments are

\mu_n = \frac{!n}{\lambda^n},   (5)

where $!n$ is a subfactorial (expressible in terms of an incomplete gamma function), giving the first few as 1, 0, $\lambda^{-2}$, $2\lambda^{-3}$, $9\lambda^{-4}$, $44\lambda^{-5}$, ... (OEIS A000166). The mean, variance, skewness, and kurtosis excess are therefore

\mu = \frac{1}{\lambda}   (6)
\sigma^2 = \frac{1}{\lambda^2}   (7)
\gamma_1 = 2   (8)
\gamma_2 = 6.   (9)

The characteristic function is

\phi(t) = \left(1 - \frac{it}{\lambda}\right)^{-1},   (10)

which can be obtained as a Fourier transform with parameters $(1, 1)$, where $H(x)$ is the Heaviside step function. If a generalized exponential probability function…

Logistic distribution

The continuous distribution with parameters $m$ and $b > 0$ having probability and distribution functions

P(x) = \frac{e^{-(x - m)/b}}{b\left[1 + e^{-(x - m)/b}\right]^2}   (1)
D(x) = \frac{1}{1 + e^{-(x - m)/b}}   (2)

(correcting the sign error in von Seggern 1993, p. 250). The distribution function is similar in form to the solution to the continuous logistic equation

\frac{dx}{dt} = rx\left(1 - \frac{x}{K}\right),   (3)

giving the distribution its name. The logistic distribution is implemented in the Wolfram Language as LogisticDistribution[mu, beta]. The mean, variance, skewness, and kurtosis excess are

\mu = m   (4)
\sigma^2 = \frac{\pi^2 b^2}{3}   (5)
\gamma_1 = 0   (6)
\gamma_2 = \frac{6}{5}.   (7)

Log normal distribution

A continuous distribution in which the logarithm of a variable has a normal distribution. It is a general case of Gibrat's distribution, to which the log normal distribution reduces with $S = 1$ and $M = 0$. A log normal distribution results if the variable is the product of a large number of independent, identically-distributed variables in the same way that a normal distribution results if the variable is the sum of a large number of independent, identically-distributed variables. The probability density and cumulative distribution functions for the log normal distribution are

P(x) = \frac{1}{S x \sqrt{2\pi}}\, e^{-(\ln x - M)^2/(2S^2)}   (1)
D(x) = \tfrac{1}{2}\left[1 + \mathrm{erf}\!\left(\frac{\ln x - M}{S\sqrt 2}\right)\right],   (2)

where erf is the erf function. It is implemented in the Wolfram Language as LogNormalDistribution[mu, sigma]. This distribution is normalized, since letting $y = \ln x$ gives $dy = dx/x$ and a standard normal integral, so

\int_0^\infty P(x)\,dx = 1.   (3)

The raw moments are

\mu'_n = e^{nM + n^2 S^2/2},   (4)

and the central moments are

\mu_2 = e^{2M + S^2}\left(e^{S^2} - 1\right)   (5)
\mu_3 = e^{3M + 3S^2/2}\left(e^{S^2} - 1\right)^2\left(e^{S^2} + 2\right).   (6)

Therefore, the mean, variance, skewness, and kurtosis excess are given by

\mu = e^{M + S^2/2}   (7)
\sigma^2 = e^{2M + S^2}\left(e^{S^2} - 1\right)   (8)
\gamma_1 = \left(e^{S^2} + 2\right)\sqrt{e^{S^2} - 1}   (9)
\gamma_2 = e^{4S^2} + 2e^{3S^2} + 3e^{2S^2} - 6.   (10)

These can be found by direct integration…
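
A minimal simulation sketch of the raw-moment formula (4); the values of M and S are arbitrary:

```python
# Sketch: check mu'_n = exp(n*M + n^2 * S^2 / 2) against simulation.
import numpy as np

rng = np.random.default_rng(0)
M, S = 0.3, 0.5
x = np.exp(rng.normal(M, S, 10**6))     # lognormal variates
for n in (1, 2, 3):
    print(n, (x**n).mean(), np.exp(n * M + n * n * S * S / 2))
```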

Rayleigh distribution

The distribution with probability density function and distribution function

P(r) = \frac{r}{s^2}\, e^{-r^2/(2s^2)}   (1)
D(r) = 1 - e^{-r^2/(2s^2)}   (2)

for $r \in [0, \infty)$ and parameter $s$. It is implemented in the Wolfram Language as RayleighDistribution[s]. The raw moments are given by

\mu'_k = s^k\, 2^{k/2}\,\Gamma\!\left(1 + \frac{k}{2}\right),   (3)

where $\Gamma(z)$ is the gamma function, giving the first few as

\mu'_1 = s\sqrt{\frac{\pi}{2}}   (4)
\mu'_2 = 2s^2   (5)
\mu'_3 = 3s^3\sqrt{\frac{\pi}{2}}   (6)
\mu'_4 = 8s^4.   (7)

The central moments are therefore

\sigma^2 = \frac{(4 - \pi)s^2}{2}   (8)
\mu_3 = \sqrt{\frac{\pi}{2}}\,(\pi - 3)s^3   (9)
\mu_4 = \frac{(32 - 3\pi^2)s^4}{4}.   (10)

The mean, variance, skewness, and kurtosis excess are

\mu = s\sqrt{\frac{\pi}{2}}   (11)
\sigma^2 = \frac{(4 - \pi)s^2}{2}   (12)
\gamma_1 = \frac{2\sqrt\pi\,(\pi - 3)}{(4 - \pi)^{3/2}}   (13)
\gamma_2 = -\frac{6\pi^2 - 24\pi + 16}{(4 - \pi)^2}.   (14)

The characteristic function can be written in terms of the erf and erfi functions.   (15)

Inverse Gaussian distribution

The inverse Gaussian distribution, also known as the Wald distribution, is the distribution over $[0, \infty)$ with probability density function and distribution function given by

P(x) = \sqrt{\frac{\lambda}{2\pi x^3}} \exp\!\left[-\frac{\lambda(x - \mu)^2}{2x\mu^2}\right]   (1)
D(x) = \Phi\!\left(\sqrt{\frac{\lambda}{x}}\left(\frac{x}{\mu} - 1\right)\right) + e^{2\lambda/\mu}\,\Phi\!\left(-\sqrt{\frac{\lambda}{x}}\left(\frac{x}{\mu} + 1\right)\right),   (2)

where $\mu$ is the mean and $\lambda$ is a scaling parameter. The inverse Gaussian distribution is implemented in the Wolfram Language as InverseGaussianDistribution[mu, lambda]. The $n$th raw moment is given by

\mu'_n = \mu^n \sqrt{\frac{2\lambda}{\pi\mu}}\, e^{\lambda/\mu}\, K_{n - 1/2}\!\left(\frac{\lambda}{\mu}\right),   (3)

where $K_n(z)$ is a modified Bessel function of the second kind, giving the first few as

\mu'_1 = \mu   (4)
\mu'_2 = \frac{\mu^2(\lambda + \mu)}{\lambda}   (5)
\mu'_3 = \frac{\mu^3(\lambda^2 + 3\lambda\mu + 3\mu^2)}{\lambda^2}.   (6)

The Bessel recurrence gives a recursion relation for the raw moments as

\mu'_{n+1} = (2n - 1)\frac{\mu^2}{\lambda}\,\mu'_n + \mu^2\,\mu'_{n-1}.   (7)

The first few central moments are

\mu_2 = \frac{\mu^3}{\lambda}   (8)
\mu_3 = \frac{3\mu^5}{\lambda^2}   (9)
\mu_4 = \frac{15\mu^7}{\lambda^3} + \frac{3\mu^6}{\lambda^2}.   (10)

The cumulants are given by

\kappa_n = \frac{(2n - 3)!!\,\mu^{2n-1}}{\lambda^{n-1}}   (11)

for $n \ge 2$, with $\kappa_1 = \mu$. The variance, skewness, and kurtosis excess are given by

\sigma^2 = \frac{\mu^3}{\lambda}   (12)
\gamma_1 = 3\sqrt{\frac{\mu}{\lambda}}   (13)
\gamma_2 = \frac{15\mu}{\lambda}.   (14)

Chi distribution

The chi distribution with $n$ degrees of freedom is the distribution followed by the square root of a chi-squared random variable. For $n = 1$, the distribution is a half-normal distribution. For $n = 2$, it is a Rayleigh distribution with $s = 1$. The chi distribution is implemented in the Wolfram Language as ChiDistribution[n]. The probability density function and distribution function for this distribution are

P(x) = \frac{2^{1 - n/2}\, x^{n-1}\, e^{-x^2/2}}{\Gamma(n/2)}   (1)
D(x) = P\!\left(\frac{n}{2}, \frac{x^2}{2}\right),   (2)

where $P(a, z)$ is a regularized gamma function. The $k$th raw moment is

\mu'_k = 2^{k/2}\,\frac{\Gamma\!\left(\frac{n + k}{2}\right)}{\Gamma\!\left(\frac{n}{2}\right)}   (3)

(Johnson et al. 1994, p. 421; Evans et al. 2000, p. 57; typo corrected), giving the first few as

\mu'_1 = \sqrt 2\,\frac{\Gamma\!\left(\frac{n+1}{2}\right)}{\Gamma\!\left(\frac{n}{2}\right)}   (4)
\mu'_2 = n   (5)
\mu'_3 = 2\sqrt 2\,\frac{\Gamma\!\left(\frac{n+3}{2}\right)}{\Gamma\!\left(\frac{n}{2}\right)}.   (6)

The mean, variance, skewness, and kurtosis excess are given by

\mu = \sqrt 2\,\frac{\Gamma\!\left(\frac{n+1}{2}\right)}{\Gamma\!\left(\frac{n}{2}\right)}   (7)
\sigma^2 = n - \mu^2   (8)
\gamma_1 = \frac{\mu}{\sigma^3}\left(1 - 2\sigma^2\right)   (9)
\gamma_2 = \frac{2}{\sigma^2}\left(1 - \mu\sigma\gamma_1 - \sigma^2\right).   (10)

Bernoulli distribution

The Bernoulli distribution is a discrete distribution having two possible outcomes labelled by $n = 0$ and $n = 1$, in which $n = 1$ ("success") occurs with probability $p$ and $n = 0$ ("failure") occurs with probability $q = 1 - p$, where $0 < p < 1$. It therefore has probability density function

P(n) = \begin{cases} 1 - p & n = 0 \\ p & n = 1, \end{cases}   (1)

which can also be written

P(n) = p^n (1 - p)^{1-n}.   (2)

The corresponding distribution function is

D(n) = \begin{cases} 1 - p & 0 \le n < 1 \\ 1 & n \ge 1. \end{cases}   (3)

The Bernoulli distribution is implemented in the Wolfram Language as BernoulliDistribution[p]. The performance of a fixed number of trials with fixed probability of success on each trial is known as a Bernoulli trial. The distribution of heads and tails in coin tossing is an example of a Bernoulli distribution with $p = q = 1/2$. The Bernoulli distribution is the simplest discrete distribution, and it is the building block for other more complicated discrete distributions. The distributions of a number of variate types defined based on sequences of independent Bernoulli trials that are curtailed in some way…

Pareto distribution

The distribution with probability density function and distribution function

P(x) = \frac{\alpha k^\alpha}{x^{\alpha+1}}   (1)
D(x) = 1 - \left(\frac{k}{x}\right)^\alpha   (2)

defined over the interval $x \ge k$. It is implemented in the Wolfram Language as ParetoDistribution[k, alpha]. The $n$th raw moment is

\mu'_n = \frac{\alpha k^n}{\alpha - n}   (3)

for $\alpha > n$, giving the first few as

\mu'_1 = \frac{\alpha k}{\alpha - 1}   (4)
\mu'_2 = \frac{\alpha k^2}{\alpha - 2}   (5)
\mu'_3 = \frac{\alpha k^3}{\alpha - 3}.   (6)

The first few central moments (which in general involve a gamma function, a regularized hypergeometric function, and a beta function) are

\mu_2 = \frac{\alpha k^2}{(\alpha - 1)^2(\alpha - 2)}   (7)
\mu_3 = \frac{2\alpha(\alpha + 1)k^3}{(\alpha - 1)^3(\alpha - 2)(\alpha - 3)}.   (8)

The mean, variance, skewness, and kurtosis excess are therefore

\mu = \frac{\alpha k}{\alpha - 1}   (9)
\sigma^2 = \frac{\alpha k^2}{(\alpha - 1)^2(\alpha - 2)}   (10)
\gamma_1 = \frac{2(1 + \alpha)}{\alpha - 3}\sqrt{\frac{\alpha - 2}{\alpha}}   (11)
\gamma_2 = \frac{6(\alpha^3 + \alpha^2 - 6\alpha - 2)}{\alpha(\alpha - 3)(\alpha - 4)}.   (12)

Gumbel distribution

There are essentially three types of Fisher-Tippett extreme value distributions. The most common is the type I distribution, which is sometimes referred to as the Gumbel type or just the Gumbel distribution. These are distributions of an extreme order statistic for a distribution of $N$ elements $x_i$. In this work, the term "Gumbel distribution" is used to refer to the distribution corresponding to a minimum extreme value distribution (i.e., the distribution of the minimum $\check x$). The Gumbel distribution with location parameter $\alpha$ and scale parameter $\beta$ is implemented in the Wolfram Language as GumbelDistribution[alpha, beta]. It has probability density function and distribution function

P(x) = \frac{1}{\beta}\, e^{(x - \alpha)/\beta} \exp\!\left[-e^{(x - \alpha)/\beta}\right]   (1)
D(x) = 1 - \exp\!\left[-e^{(x - \alpha)/\beta}\right].   (2)

The mean, variance, skewness, and kurtosis excess are

\mu = \alpha - \gamma\beta   (3)
\sigma^2 = \frac{\pi^2\beta^2}{6}   (4)
\gamma_1 = -\frac{12\sqrt 6\,\zeta(3)}{\pi^3}   (5)
\gamma_2 = \frac{12}{5},   (6)

where $\gamma$ is the Euler-Mascheroni constant and $\zeta(3)$ is Apéry's constant. The distribution of the minimum $\check x$ taken from a continuous uniform distribution over the unit interval has probability function

P(\check x) = N(1 - \check x)^{N-1}   (7)

and…

Normal distribution

A normal distribution in a variate $X$ with mean $\mu$ and variance $\sigma^2$ is a statistic distribution with probability density function

P(x) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-(x - \mu)^2/(2\sigma^2)}   (1)

on the domain $x \in (-\infty, \infty)$. While statisticians and mathematicians uniformly use the term "normal distribution" for this distribution, physicists sometimes call it a Gaussian distribution and, because of its curved flaring shape, social scientists refer to it as the "bell curve." Feller (1968) uses the symbol $\phi(x)$ for $P(x)$ in the above equation, but then switches to $n(x)$ in Feller (1971). de Moivre developed the normal distribution as an approximation to the binomial distribution, and it was subsequently used by Laplace in 1783 to study measurement errors and by Gauss in 1809 in the analysis of astronomical data (Havil 2003, p. 157). The normal distribution is implemented in the Wolfram Language as NormalDistribution[mu, sigma]. The so-called "standard normal distribution" is given by taking $\mu = 0$ and…

Geometric distribution

The geometric distribution is a discrete distribution for $n = 0$, 1, 2, ... having probability density function

P(n) = p(1 - p)^n = p q^n,   (1)

where $0 < p < 1$ and $q \equiv 1 - p$, and distribution function

D(n) = \sum_{i=0}^n P(i) = 1 - q^{n+1}.   (2)

The geometric distribution is the only discrete memoryless random distribution. It is a discrete analog of the exponential distribution. Note that some authors (e.g., Beyer 1987, p. 531; Zwillinger 2003, pp. 630-631) prefer to define the distribution instead for $n = 1$, 2, ..., while the form of the distribution given above is implemented in the Wolfram Language as GeometricDistribution[p]. $P(n)$ is normalized, since

\sum_{n=0}^\infty p q^n = \frac{p}{1 - q} = 1.   (3)

The raw moments are given analytically in terms of the polylogarithm function,

\mu'_m = \sum_{n=0}^\infty n^m p q^n = p\,\mathrm{Li}_{-m}(q).   (4)

This gives the first few explicitly as

\mu'_1 = \frac{q}{p}   (5)
\mu'_2 = \frac{q(1 + q)}{p^2}.   (6)

The central moments are given analytically in terms of the Lerch transcendent, the first few being

\mu_2 = \frac{q}{p^2}   (7)
\mu_3 = \frac{q(1 + q)}{p^3},   (8)

so the mean, variance, skewness, and kurtosis excess are given…
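
A minimal sketch of the memoryless property for the form starting at $n = 0$, where $P(X \ge n) = q^n$ (the values of p, s, and t are arbitrary):

```python
# Sketch: P(n >= s + t | n >= t) = P(n >= s) for the geometric
# distribution starting at 0, using the survival function q**n.
p, q = 0.3, 0.7
sf = lambda n: q**n               # P(X >= n)
s, t = 3, 5
print(sf(s + t) / sf(t), sf(s))   # both q**s = 0.343
```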

Gamma distribution

A gamma distribution is a general type of statistical distribution that is related to the beta distribution and arises naturally in processes for which the waiting times between Poisson distributed events are relevant. Gamma distributions have two free parameters, labeled $\alpha$ and $\theta$, a few of which are plotted in the original entry. Consider the distribution function $D(x)$ of waiting times until the $h$th Poisson event given a Poisson distribution with a rate of change $\lambda$,

D(x) = P(X \le x) = 1 - \sum_{k=0}^{h-1} \frac{(\lambda x)^k e^{-\lambda x}}{k!} = 1 - \frac{\Gamma(h, x\lambda)}{\Gamma(h)}   (1)

for $x \in [0, \infty)$, where $\Gamma(h)$ is a complete gamma function, and $\Gamma(h, x\lambda)$ an incomplete gamma function. With $h$ an integer, this distribution is a special case known as the Erlang distribution. The corresponding probability function $P(x)$ of waiting times until the $h$th Poisson event is then obtained by differentiating $D(x)$,

P(x) = D'(x) = \frac{\lambda(\lambda x)^{h-1}}{(h - 1)!}\, e^{-\lambda x}.   (2)

Now let $\alpha \equiv h$ (not necessarily an integer) and define $\theta \equiv 1/\lambda$ to be the time between changes. Then the above equation can be written

P(x) = \frac{x^{\alpha - 1}\, e^{-x/\theta}}{\Gamma(\alpha)\,\theta^\alpha}   (3)

for $x \in [0, \infty)$. This is the probability function for the gamma…
