
The ratio of independent normally distributed variates with zero mean is distributed with a Cauchy distribution. This can be seen as follows. Let X and Y both have mean 0 and standard deviations of σ_x and σ_y, respectively; then the joint probability density function is the bivariate normal distribution with ρ = 0,(1)From ratio distribution, the distribution of U = X/Y is(2)(3)(4)But(5)so(6)(7)(8)which is a Cauchy distribution. A more direct derivation proceeds from integration of(9)(10)where δ(x) is a delta function.
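As a quick numerical illustration (a Monte Carlo sketch added here, not part of the original derivation; the sample size, seed, and standard deviations are arbitrary choices), the ratio of two independent zero-mean normals can be compared against the quartiles of the corresponding Cauchy distribution, which lie at ±σ_x/σ_y:

```python
import numpy as np

# Ratio of independent zero-mean normals: should follow a Cauchy
# distribution with scale sx/sy.  The mean diverges, so we check the
# median and quartiles instead (Cauchy quartiles lie at +/- scale).
rng = np.random.default_rng(0)
sx, sy = 2.0, 1.0
u = rng.normal(0.0, sx, 500_000) / rng.normal(0.0, sy, 500_000)
q1, med, q3 = np.quantile(u, [0.25, 0.5, 0.75])
print(med, q1, q3)  # median near 0, quartiles near -2 and +2
```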

The distribution of a product of two normally distributed variates X and Y with zero means and variances σ_x^2 and σ_y^2 is given by(1)(2)where δ(x) is a delta function and K_0(z) is a modified Bessel function of the second kind. This distribution is plotted above in red. The analogous expression for a product of three normal variates can be given in terms of Meijer G-functions as(3)plotted above in blue.

A normalized form of the cumulative normal distribution function giving the probability that a variate assumes a value in the range [0, z],(1)It is related to the probability integral(2)by(3)Let u = z/√2, so du = dz/√2. Then(4)Here, erf is a function sometimes called the error function. The probability that a normal variate assumes a value in the range [z_1, z_2] is therefore given by(5)Neither Φ(z) nor erf can be expressed in terms of finite additions, subtractions, multiplications, and root extractions, and so both must be computed numerically or otherwise approximated. Note that a function different from Φ(z) is sometimes defined as "the" normal distribution function(6)(7)(8)(9)(Feller 1968; Beyer 1987, p. 551), although this function is less widely encountered than the usual Φ(z). The notation Φ(z) is due to Feller (1971). The value of z for which Φ(z) falls within the interval with a given probability is a related quantity called the confidence interval.
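Since Φ(z) must be computed numerically, the error function in the Python standard library makes this a one-liner; the sketch below (evaluation points chosen only for illustration) checks the familiar two-sided 95% point:

```python
import math

# Cumulative normal distribution function expressed through erf:
# Phi(z) = (1 + erf(z / sqrt(2))) / 2.
def Phi(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

print(Phi(0.0))             # 0.5 by symmetry
print(round(Phi(1.96), 4))  # about 0.975, the usual two-sided 95% point
```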

A continuous distribution defined on the range with probability density function(1)where is a modified Bessel function of the first kind of order 0, and distribution function(2)which cannot be done in closed form. Here, is the mean direction and is a concentration parameter. The von Mises distribution is the circular analog of the normal distribution on a line.The mean is(3)and the circular variance is(4)

Amazingly, the distribution of a difference of two normally distributed variates X and Y with means and variances (μ_x, σ_x^2) and (μ_y, σ_y^2), respectively, is given by(1)(2)where δ(x) is a delta function. The result is another normal distribution having mean(3)and variance(4)

The distribution for the sum of n uniform variates on the interval [0, 1] can be found directly as(1)where δ(x) is a delta function. A more elegant approach uses the characteristic function to obtain(2)where the Fourier parameters are taken as (1, 1). The first few values are then given by(3)(4)(5)(6)illustrated above. Interestingly, the expected number of picks of a number from a uniform distribution on [0, 1] so that the sum exceeds 1 is e (Derbyshire 2004, pp. 366-367). This can be demonstrated by noting that the probability that the sum of n variates is greater than 1 while the sum of n - 1 variates is less than 1 is(7)(8)(9)The values for n = 1, 2, ... are 0, 1/2, 1/3, 1/8, 1/30, 1/144, 1/840, 1/5760, 1/45360, ... (OEIS A001048). The expected number of picks needed to first exceed 1 is then simply(10)It is more complicated to compute the expected number of picks needed for the sum to first exceed 2. In this case,(11)(12)The first few terms are therefore 0, 0, 1/6, ...
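The claim that the expected number of picks is e admits an exact check in a few lines (a sketch added here, not in the original): since the sum of n uniform variates is less than 1 with probability 1/n!, the number of picks N needed to exceed 1 satisfies P(N = n) = 1/(n-1)! - 1/n!, and the expectation telescopes to e.

```python
import math

# Expected number of uniform picks on [0, 1] for the running sum to
# first exceed 1.  P(sum of n picks < 1) = 1/n!, so
# P(N = n) = 1/(n-1)! - 1/n! and E[N] = sum of n * P(N = n) = e.
expected = sum(
    n * (1.0 / math.factorial(n - 1) - 1.0 / math.factorial(n))
    for n in range(1, 60)
)
print(expected)  # 2.718281828..., i.e. e
```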

The ratio X/Y of uniform variates X and Y on the interval [0, 1] can be found directly as(1)(2)where δ(x) is a delta function and H(x) is the Heaviside step function. The distribution is normalized, but its mean and moments diverge.

The distribution of the product of n uniform variates on the interval [0, 1] can be found directly as(1)(2)where δ(x) is a delta function. The distributions are plotted above for n = 1 (red), n = 2 (yellow), and so on.

The difference X - Y of two uniform variates X and Y on the interval [0, 1] can be found as(1)(2)where δ(x) is a delta function and H(x) is the Heaviside step function.

A normal distribution with mean 0,(1)The characteristic function is(2)The mean, variance, skewness, and kurtosis excess are(3)(4)(5)(6)The cumulants are(7)(8)(9)for n > 2.

Given a Poisson distribution with a rate of change , the distribution function giving the waiting times until the th Poisson event is(1)(2)for , where is a complete gamma function, and an incomplete gamma function. With explicitly an integer, this distribution is known as the Erlang distribution, and has probability function(3)It is closely related to the gamma distribution, which is obtained by letting (not necessarily an integer) and defining . When , it simplifies to the exponential distribution.Evans et al. (2000, p. 71) write the distribution using the variables and .
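A small numeric sketch (the density below is the standard Erlang form with integer shape n and rate lam; the parameter values and integration grid are arbitrary choices) confirms the normalization and the mean n/lam:

```python
import math

# Erlang density f(x) = lam**n * x**(n-1) * exp(-lam*x) / (n-1)!.
# A crude Riemann sum should recover total mass 1 and mean n/lam.
def erlang_pdf(x, n, lam):
    return lam**n * x ** (n - 1) * math.exp(-lam * x) / math.factorial(n - 1)

n, lam, dx = 3, 2.0, 1e-3
xs = [i * dx for i in range(1, 40_000)]
mass = sum(erlang_pdf(x, n, lam) for x in xs) * dx
mean = sum(x * erlang_pdf(x, n, lam) for x in xs) * dx
print(round(mass, 3), round(mean, 3))  # close to 1.0 and n/lam = 1.5
```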

A theorem proved by Doob (1942) which states that any random process which is both normal and Markov has the following forms for its correlation function , spectral density , and probability densities and :(1)(2)(3)(4)where is the mean, the standard deviation, and the relaxation time.

If and are the observed proportions from standard normally distributed samples with proportion of success , then the probability that(1)will be as great as observed is(2)where(3)(4)(5)Here, is the unbiased estimator. The skewness and kurtosis excess of this distribution are(6)(7)

A standard normal distribution is a normal distribution with zero mean (μ = 0) and unit variance (σ^2 = 1), given by the probability density function and distribution function(1)(2)over the domain x ∈ (-∞, ∞). It has mean, variance, skewness, and kurtosis excess given by(3)(4)(5)(6)The first quartile of the standard normal distribution occurs when , which is(7)(8)(OEIS A092678; Kenney and Keeping 1962, p. 134), where is the inverse erf function. The absolute value of this is known as the probable error.
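The first quartile can be recovered numerically; the sketch below (the bisection bracket and iteration count are arbitrary choices) solves Φ(z) = 1/4 using the standard-library error function:

```python
import math

# First quartile of the standard normal: the z with Phi(z) = 0.25.
# Its absolute value, about 0.6745, is the probable error.
def Phi(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

lo, hi = -2.0, 0.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if Phi(mid) < 0.25:
        lo = mid
    else:
        hi = mid
q = 0.5 * (lo + hi)
print(q)  # about -0.67449
```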

The logarithmic distribution is a continuous distribution for a variate with probability function(1)and distribution function(2)It therefore applies to a variable distributed as , and has appropriate normalization.Note that the log-series distribution is sometimes also known as the logarithmic distribution, and the distribution arising in Benford's law is also "a" logarithmic distribution.The raw moments are given by(3)The mean is therefore(4)The variance, skewness, and kurtosis excess are slightly complicated expressions.

The S distribution is defined in terms of its distribution function as the solution to the initial value problem where (Savageau 1982, Aksenov and Savageau 2001). It has four free parameters: , , , and .The distribution is capable of approximating many central and noncentral unimodal univariate distributions rather well (Voit 1991), but also includes the exponential, logistic, uniform and linear distributions as special cases. The S distribution derives its name from the fact that it is based on the theory of S-systems (Savageau 1976, Voit 1991, Aksenov and Savageau 2001).

A correction to a discrete binomial distribution to approximate a continuous distribution, where the continuous variate has a normal distribution and the discrete variate follows a binomial distribution.

The probability that a random sample from an infinite normally distributed universe will have a mean within a distance of the mean of the universe is(1)where is the normal distribution function and is the observed value of(2)The probable error is then defined as the value of such that , i.e.,(3)which is given by(4)(5)(OEIS A092678; Kenney and Keeping 1962, p. 134). Here, is the inverse erf function. The probability of a deviation from the true population value at least as great as the probable error is therefore 1/2.

Consider a bivariate normal distribution in variables and with covariance(1)and an arbitrary function . Then the expected value of the random variable (2)satisfies(3)

Gibrat's distribution is a continuous distribution in which the logarithm of a variable has a normal distribution,(1)defined over the interval [0, ∞). It is a special case of the log normal distribution(2)with μ = 0 and σ = 1, and so has distribution function(3)The mean, variance, skewness, and kurtosis excess are then given by(4)(5)(6)(7)

A skewed distribution which is similar to the binomial distribution when (Abramowitz and Stegun 1972, p. 930).(1)for where(2)(3) is the gamma function, and is a standardized variate. Another form is(4)For this distribution, the characteristic function is(5)and the mean, variance, skewness, and kurtosis excess are(6)(7)(8)(9)

A system of equation types obtained by generalizing the differential equation for the normal distribution(1)which has solution(2)to(3)which has solution(4)Let , be the roots of . Then the possible types of curves are 0. , . E.g., normal distribution. I. , . E.g., beta distribution. II. , , where . III. , , where . E.g., gamma distribution. This case is intermediate to cases I and VI. IV. , . V. , where . Intermediate to cases IV and VI. VI. , where is the larger root. E.g., beta prime distribution. VII. , , . E.g., Student's t-distribution. Classes IX-XII are discussed in Pearson (1916). See also Craig (in Kenney and Keeping 1951).If a Pearson curve possesses a mode, it will be at . Let at and , where these may be or . If also vanishes at , , then the th moment and th moments exist.(5)giving(6)(7)Now define the raw th moment by(8)so combining (7) with (8) gives(9)For ,(10)so(11)and for ,(12)so(13)Combining (11), (13), and the definitions(14)(15)obtained..

The Gaussian joint variable theorem, also called the multivariate theorem, states that given an even number of variates from a normal distribution with means all 0,(1)etc. Given an odd number of variates,(2)(3)etc.

A distribution with probability function where B(a, b) is a beta function. The mode of a variate distributed as is. If is a variate, then is a variate. If is a variate, then and are and variates. If and are and variates, then is a variate. If and are variates, then is a variate.

Amazingly, the distribution of a sum of two normally distributed independent variates and with means and variances and , respectively is another normal distribution(1)which has mean(2)and variance(3)By induction, analogous results hold for the sum of normally distributed variates.An alternate derivation proceeds by noting that(4)(5)where is the characteristic function and is the inverse Fourier transform, taken with parameters .More generally, if is normally distributed with mean and variance , then a linear function of ,(6)is also normally distributed. The new distribution has mean and variance , as can be derived using the moment-generating function(7)(8)(9)(10)(11)which is of the standard form with(12)(13)For a weighted sum of independent variables(14)the expectation is given by(15)(16)(17)(18)(19)Setting this equal to(20)gives(21)(22)Therefore, the mean and variance of the weighted sums of random variables..
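A seeded Monte Carlo sketch (the sample size, seed, and parameter values are arbitrary choices, not from the original) illustrates that means and variances add for the sum of independent normals:

```python
import numpy as np

# Sum of independent N(mu1, s1^2) and N(mu2, s2^2) samples: the result
# should look normal with mean mu1 + mu2 and variance s1^2 + s2^2.
rng = np.random.default_rng(1)
mu1, s1, mu2, s2 = 1.0, 2.0, -3.0, 1.5
z = rng.normal(mu1, s1, 400_000) + rng.normal(mu2, s2, 400_000)
print(z.mean(), z.var())  # near -2.0 and 6.25
```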

The Laplace distribution, also called the double exponential distribution, is the distribution of differences between two independent variates with identical exponential distributions (Abramowitz and Stegun 1972, p. 930). It has probability density function and cumulative distribution functions given by(1)(2)It is implemented in the Wolfram Language as LaplaceDistribution[mu, beta].The moments about the mean are related to the moments about 0 by(3)where is a binomial coefficient, so(4)(5)where is the floor function and is the gamma function.The moments can also be computed using the characteristic function,(6)Using the Fourier transform of the exponential function(7)gives(8)(Abramowitz and Stegun 1972, p. 930). The moments are therefore(9)The mean, variance, skewness, and kurtosis excess are(10)(11)(12)(13)..
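As a sanity check (a numeric sketch; the density below is the standard Laplace parameterization with location mu and scale b, and the parameter values and grid limits are arbitrary assumptions), integration recovers unit mass and the variance 2b^2:

```python
import math

# Laplace density f(x) = exp(-|x - mu| / b) / (2 b): check that it
# integrates to 1 and that the variance equals 2 * b**2.
mu, b, dx = 0.5, 1.3, 1e-3
xs = [mu + (i - 30_000) * dx for i in range(60_001)]
f = [math.exp(-abs(x - mu) / b) / (2.0 * b) for x in xs]
mass = sum(f) * dx
var = sum((x - mu) ** 2 * fi for x, fi in zip(xs, f)) * dx
print(round(mass, 3), round(var, 3))  # close to 1.0 and 2*b**2 = 3.38
```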

A general type of statistical distribution which is related to the gamma distribution. Beta distributions have two free parameters, which are labeled according to one of two notational conventions. The usual definition calls these and , and the other uses and (Beyer 1987, p. 534). The beta distribution is used as a prior distribution for binomial proportions in Bayesian analysis (Evans et al. 2000, p. 34). The above plots are for various values of with and ranging from 0.25 to 3.00.The domain is , and the probability function and distribution function are given by(1)(2)(3)where is the beta function, is the regularized beta function, and . The beta distribution is implemented in the Wolfram Language as BetaDistribution[alpha, beta].The distribution is normalized since(4)The characteristic function is(5)(6)where is a confluent hypergeometric function of the first kind.The raw moments are given by(7)(8)(Papoulis 1984,..

Planck's radiation function is the function(1)which is normalized so that(2)However, the function is sometimes also defined without the numerical normalization factor of (e.g., Abramowitz and Stegun 1972, p. 999).The first and second raw moments are(3)(4)where is Apéry's constant, but higher order raw moments do not exist since the corresponding integrals do not converge.It has a maximum at (OEIS A133838), where(5)and inflection points at (OEIS A133839) and (OEIS A133840), where(6)

where I_0(z) is a modified Bessel function of the first kind and x ≥ 0. For a derivation, see Papoulis (1962). For a vanishing noncentrality parameter, this reduces to the Rayleigh distribution.

If for , ..., has a multivariate normal distribution with mean vector and covariance matrix , and denotes the matrix composed of the row vectors , then the matrix has a Wishart distribution with scale matrix and degrees of freedom parameter . The Wishart distribution is most typically used when describing the covariance matrix of multinormal samples. The Wishart distribution is implemented as WishartDistribution[sigma, m] in the Wolfram Language package MultivariateStatistics` .

The Weibull distribution is given by(1)(2)for , and is implemented in the Wolfram Language as WeibullDistribution[alpha, beta]. The raw moments of the distribution are(3)(4)(5)(6)and the mean, variance, skewness, and kurtosis excess of are(7)(8)(9)(10)where is the gamma function and(11)A slightly different form of the distribution is defined by(12)(13)(Mendenhall and Sincich 1995). This has raw moments(14)(15)(16)(17)so the mean and variance for this form are(18)(19)The Weibull distribution gives the distribution of lifetimes of objects. It was originally proposed to quantify fatigue data, but it is also used in analysis of systems involving a "weakest link."
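The raw moments can be spot-checked numerically; this sketch (the density form is the standard Weibull parameterization with shape a and scale b, and the specific values are arbitrary assumptions) compares a Riemann-sum mean against b Γ(1 + 1/a):

```python
import math

# Weibull density f(x) = (a/b) * (x/b)**(a-1) * exp(-(x/b)**a) for x > 0,
# whose raw moments are E[X^n] = b**n * Gamma(1 + n/a).
a, b, dx = 2.5, 1.7, 1e-3
mean_numeric = sum(
    x * (a / b) * (x / b) ** (a - 1) * math.exp(-((x / b) ** a)) * dx
    for x in (i * dx for i in range(1, 20_000))
)
mean_exact = b * math.gamma(1.0 + 1.0 / a)
print(mean_numeric, mean_exact)  # both near 1.508
```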

The Cauchy distribution, also called the Lorentzian distribution or Lorentz distribution, is a continuous distribution describing resonance behavior. It also describes the distribution of horizontal distances at which a line segment tilted at a random angle cuts the x-axis.Let represent the angle that a line, with fixed point of rotation, makes with the vertical axis, as shown above. Then(1)(2)(3)(4)so the distribution of angle is given by(5)This is normalized over all angles, since(6)and(7)(8)(9)The general Cauchy distribution and its cumulative distribution can be written as(10)(11)where Γ/2 is the half width at half maximum and m is the statistical median. In the illustration above, .The Cauchy distribution is implemented in the Wolfram Language as CauchyDistribution[m, Gamma/2].The characteristic function is(12)(13)The moments of the distribution are undefined since the integrals(14)diverge for .If and are variates with..

A multivariate normal distribution in three variables. It has probability density function(1)where(2)The standardized trivariate normal distribution takes unit variances and . The quadrant probability in this special case is then given analytically by(3)(Rose and Smith 1996; Stuart and Ord 1998; Rose and Smith 2002, p. 231).

A -variate multivariate normal distribution (also called a multinormal distribution) is a generalization of the bivariate normal distribution. The -multivariate distribution with mean vector and covariance matrix is denoted . The multivariate normal distribution is implemented as MultinormalDistribution[mu1, mu2, ..., sigma11, sigma12, ..., sigma12, sigma22, ..., ..., x1, x2, ...] in the Wolfram Language package MultivariateStatistics` (where the matrix must be symmetric since ).In the case of nonzero correlations, there is in general no closed-form solution for the distribution function of a multivariate normal distribution. As a result, such computations must be done numerically.

The bivariate normal distribution is the statistical distribution with probability density function(1)where(2)and(3)is the correlation of and (Kenney and Keeping 1951, pp. 92 and 202-205; Whittaker and Robinson 1967, p. 329) and is the covariance.The probability density function of the bivariate normal distribution is implemented as MultinormalDistribution[mu1, mu2, sigma11, sigma12, sigma12, sigma22] in the Wolfram Language package MultivariateStatistics`.The marginal probabilities are then(4)(5)and(6)(7)(Kenney and Keeping 1951, p. 202).Let and be two independent normal variates with means and for , 2. Then the variables and defined below are normal bivariates with unit variance and correlation coefficient :(8)(9)To derive the bivariate normal probability function, let and be normally and independently distributed variates with mean 0 and variance 1, then define(10)(11)(Kenney and Keeping..

The Maxwell (or Maxwell-Boltzmann) distribution gives the distribution of speeds of molecules in thermal equilibrium as given by statistical mechanics. Defining , where is the Boltzmann constant, is the temperature, is the mass of a molecule, and letting denote the speed of a molecule, the probability and cumulative distributions over the range are(1)(2)(3)using the form of Papoulis (1984), where is an incomplete gamma function and is erf. Spiegel (1992) and von Seggern (1993) each use slightly different definitions of the constant .It is implemented in the Wolfram Language as MaxwellDistribution[sigma].The th raw moment is(4)giving the first few as(5)(6)(7)(8)(Papoulis 1984, p. 149).The mean, variance, skewness, and kurtosis excess are therefore given by(9)(10)(11)(12)The characteristic function is(13)where is the erfi function...

A uniform distribution, sometimes also known as a rectangular distribution, is a distribution that has constant probability.The probability density function and cumulative distribution function for a continuous uniform distribution on the interval [a, b] are(1)(2)These can be written in terms of the Heaviside step function H(x) as(3)(4)the latter of which simplifies to the expected form for a ≤ x ≤ b.The continuous distribution is implemented as UniformDistribution[a, b].For a continuous uniform distribution, the characteristic function is(5)If and , the characteristic function simplifies to(6)(7)The moment-generating function is(8)(9)(10)and(11)(12)The moment-generating function is not differentiable at zero, but the moments can be calculated by differentiating and then taking . The raw moments are given analytically by(13)(14)(15)The first few are therefore given explicitly by(16)(17)(18)(19)The central moments are given analytically..
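The raw-moment formula yields the familiar mean and variance directly; a minimal check (the endpoints are arbitrary example values):

```python
# Uniform distribution on [a, b]: raw moments are
# m_n = (b**(n+1) - a**(n+1)) / ((n + 1) * (b - a)), so the mean is
# (a + b) / 2 and the variance m2 - m1**2 = (b - a)**2 / 12.
a, b = 2.0, 5.0
m1 = (b**2 - a**2) / (2.0 * (b - a))
m2 = (b**3 - a**3) / (3.0 * (b - a))
var = m2 - m1**2
print(m1, var)  # 3.5 and 0.75
```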

There are essentially three types of Fisher-Tippett extreme value distributions. The most common is the type I distribution, which is sometimes referred to as the Gumbel type or just the Gumbel distribution. These are distributions of an extreme order statistic for a distribution of elements .The Fisher-Tippett distribution corresponding to a maximum extreme value distribution (i.e., the distribution of the maximum ), sometimes known as the log-Weibull distribution, with location parameter and scale parameter is implemented in the Wolfram Language as ExtremeValueDistribution[alpha, beta].It has probability density function and distribution function(1)(2)The moments can be computed directly by defining(3)(4)(5)Then the raw moments are(6)(7)(8)(9)(10)(11)where are Euler-Mascheroni integrals. Plugging in the Euler-Mascheroni integrals gives(12)(13)(14)(15)(16)where is the Euler-Mascheroni constant and is Apéry's..

The triangular distribution is a continuous distribution defined on the range with probability density function(1)and distribution function(2)where is the mode.The symmetric triangular distribution on is implemented in the Wolfram Language as TriangularDistribution[a, b], and the triangular distribution on with mode as TriangularDistribution[a, b, c].The mean is(3)the raw moments are(4)(5)and the central moments are(6)(7)(8)It has skewness and kurtosis excess given by(9)(10)

Given a Poisson distribution with rate of change , the distribution of waiting times between successive changes (with ) is(1)(2)(3)and the probability distribution function is(4)It is implemented in the Wolfram Language as ExponentialDistribution[lambda].The exponential distribution is the only continuous memoryless random distribution. It is a continuous analog of the geometric distribution.This distribution is properly normalized since(5)The raw moments are given by(6)the first few of which are therefore 1, , , , , .... Similarly, the central moments are(7)(8)where is an incomplete gamma function and is a subfactorial, giving the first few as 1, 0, , , , , ... (OEIS A000166).The mean, variance, skewness, and kurtosis excess are therefore(9)(10)(11)(12)The characteristic function is(13)(14)where H(x) is the Heaviside step function and is the Fourier transform with parameters .If a generalized exponential probability function..
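The memoryless property mentioned above follows from the exponential survival function; a tiny check (the rate and times are arbitrary example values):

```python
import math

# Exponential survival function S(t) = exp(-lam * t) is memoryless:
# P(X > s + t | X > s) = S(s + t) / S(s) = S(t).
lam, s, t = 0.7, 2.0, 3.0

def S(u):
    return math.exp(-lam * u)

print(S(s + t) / S(s), S(t))  # the two values agree
```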

The continuous distribution with parameters and having probability and distribution functions(1)(2)(correcting the sign error in von Seggern 1993, p. 250). The distribution function is similar in form to the solution to the continuous logistic equation(3)giving the distribution its name.The logistic distribution is implemented in the Wolfram Language as LogisticDistribution[mu, beta].The mean, variance, skewness, and kurtosis excess are(4)(5)(6)(7)

A continuous distribution in which the logarithm of a variable has a normal distribution. It is a general case of Gibrat's distribution, to which the log normal distribution reduces with and . A log normal distribution results if the variable is the product of a large number of independent, identically-distributed variables in the same way that a normal distribution results if the variable is the sum of a large number of independent, identically-distributed variables.The probability density and cumulative distribution functions for the log normal distribution are(1)(2)where is the erf function.It is implemented in the Wolfram Language as LogNormalDistribution[mu, sigma].This distribution is normalized, since letting gives and , so(3)The raw moments are(4)(5)(6)(7)and the central moments are(8)(9)(10)Therefore, the mean, variance, skewness, and kurtosis excess are given by(11)(12)(13)(14)These can be found by direct integration(15)(16)(17)and..

The distribution with probability density function and distribution function(1)(2)for and parameter .It is implemented in the Wolfram Language as RayleighDistribution[s].The raw moments are given by(3)where is the gamma function, giving the first few as(4)(5)(6)(7)(8)The central moments are therefore(9)(10)(11)The mean, variance, skewness, and kurtosis excess are(12)(13)(14)(15)The characteristic function is(16)

The inverse Gaussian distribution, also known as the Wald distribution, is the distribution over with probability density function and distribution function given by(1)(2)where is the mean and is a scaling parameter.The inverse Gaussian distribution is implemented in the Wolfram Language as InverseGaussianDistribution[mu, lambda].The th raw moment is given by(3)where is a modified Bessel function of the second kind, giving the first few as(4)(5)(6)Using gives a recursion relation for the raw moments as(7)The first few central moments are(8)(9)(10)The cumulants are given by(11)The variance, skewness, and kurtosis excess are given by(12)(13)(14)

The chi distribution with degrees of freedom is the distribution followed by the square root of a chi-squared random variable. For , the distribution is a half-normal distribution with . For , it is a Rayleigh distribution with . The chi distribution is implemented in the Wolfram Language as ChiDistribution[n].The probability density function and distribution function for this distribution are(1)(2)where is a regularized gamma function.The th raw moment is(3)(Johnson et al. 1994, p. 421; Evans et al. 2000, p. 57; typo corrected), giving the first few as(4)(5)(6)(7)The mean, variance, skewness, and kurtosis excess are given by(8)(9)(10)(11)

The distribution with probability density function and distribution function(1)(2)defined over the interval .It is implemented in the Wolfram Language as ParetoDistribution[k, alpha].The th raw moment is(3)for , giving the first few as(4)(5)(6)(7)The th central moment is(8)(9)for and where is a gamma function, is a regularized hypergeometric function, and is a beta function, giving the first few as(10)(11)(12)The mean, variance, skewness, and kurtosis excess are therefore(13)(14)(15)(16)
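For the standard Pareto form f(x) = a k^a / x^(a+1) on x ≥ k (an assumed parameterization consistent with ParetoDistribution[k, alpha]; the specific values, grid, and cutoff below are arbitrary), the mean a k/(a - 1) can be confirmed by truncated numeric integration:

```python
# Pareto density f(x) = a * k**a / x**(a + 1) for x >= k; the nth raw
# moment is a * k**n / (a - n) for n < a, so the mean is a*k/(a-1).
a, k, dx = 3.0, 2.0, 1e-3
mean_exact = a * k / (a - 1.0)
mean_numeric = sum(
    x * a * k**a / x ** (a + 1.0) * dx
    for x in (k + i * dx for i in range(400_000))
)
print(mean_exact, round(mean_numeric, 2))  # both near 3.0
```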

There are essentially three types of Fisher-Tippett extreme value distributions. The most common is the type I distribution, which is sometimes referred to as the Gumbel type or just the Gumbel distribution. These are distributions of an extreme order statistic for a distribution of elements . In this work, the term "Gumbel distribution" is used to refer to the distribution corresponding to a minimum extreme value distribution (i.e., the distribution of the minimum ).The Gumbel distribution with location parameter and scale parameter is implemented in the Wolfram Language as GumbelDistribution[alpha, beta].It has probability density function and distribution function(1)(2)The mean, variance, skewness, and kurtosis excess are(3)(4)(5)(6)where is the Euler-Mascheroni constant and is Apéry's constant.The distribution of taken from a continuous uniform distribution over the unit interval has probability function(7)and..

A normal distribution in a variate with mean and variance is a statistical distribution with probability density function(1)on the domain . While statisticians and mathematicians uniformly use the term "normal distribution" for this distribution, physicists sometimes call it a Gaussian distribution and, because of its curved flaring shape, social scientists refer to it as the "bell curve." Feller (1968) uses the symbol for in the above equation, but then switches to in Feller (1971). de Moivre developed the normal distribution as an approximation to the binomial distribution, and it was subsequently used by Laplace in 1783 to study measurement errors and by Gauss in 1809 in the analysis of astronomical data (Havil 2003, p. 157).The normal distribution is implemented in the Wolfram Language as NormalDistribution[mu, sigma].The so-called "standard normal distribution" is given by taking μ = 0 and σ^2 = 1..

A gamma distribution is a general type of statistical distribution that is related to the beta distribution and arises naturally in processes for which the waiting times between Poisson distributed events are relevant. Gamma distributions have two free parameters, labeled and , a few of which are illustrated above.Consider the distribution function of waiting times until the th Poisson event given a Poisson distribution with a rate of change ,(1)(2)(3)(4)(5)for , where is a complete gamma function, and an incomplete gamma function. With an integer, this distribution is a special case known as the Erlang distribution.The corresponding probability function of waiting times until the th Poisson event is then obtained by differentiating ,(6)(7)(8)(9)(10)(11)(12)Now let (not necessarily an integer) and define to be the time between changes. Then the above equation can be written(13)for . This is the probability function for the gamma..
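A numeric sketch of the gamma density with shape alpha and scale theta (the parameterization and the specific values below are assumptions for illustration, not from the original entry):

```python
import math

# Gamma density f(x) = x**(alpha - 1) * exp(-x / theta)
#                      / (Gamma(alpha) * theta**alpha) for x > 0.
# Riemann integration should give total mass ~1 and mean ~alpha * theta.
alpha, theta, dx = 2.5, 1.2, 1e-3

def f(x):
    return x ** (alpha - 1) * math.exp(-x / theta) / (math.gamma(alpha) * theta**alpha)

xs = [i * dx for i in range(1, 40_000)]
mass = sum(f(x) for x in xs) * dx
mean = sum(x * f(x) for x in xs) * dx
print(round(mass, 3), round(mean, 3))  # close to 1.0 and alpha*theta = 3.0
```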
