A binomial distribution is a frequency distribution of the possible numbers of successful outcomes in a given number of trials, each of which has the same probability of success. In probability theory and statistics, the binomial distribution with parameters *n* and *p* is the discrete probability distribution of the number of successes in a sequence of *n* independent yes/no experiments, each of which yields a success with probability *p*.

A success-or-failure experiment is also referred to as a Bernoulli experiment or trial; when n = 1, the binomial distribution is a Bernoulli distribution. The binomial distribution is the basis for the popular binomial test of statistical significance.

The binomial distribution can be used to model the number of successes in a sample of size *n* drawn with replacement from a population of size *N*. If the draws are carried out without replacement, they are not independent.

In that case the resulting distribution is a hypergeometric distribution rather than a binomial distribution. However, for *N* much greater than *n*, the binomial distribution remains a good approximation and is widely used.

In addition, the binomial distribution can be used to describe the behavior of a count variable *X* if all of the following conditions apply:

- The number of observations *n* is fixed.
- Each observation is independent.
- Each observation represents one of two outcomes: a success or a failure.
- The probability of success *p* is the same for each observation.

If all of the conditions above are met, then *X* has a binomial distribution with parameters *n* and *p*, written B(n, p).

A binomial random variable *X* with parameters *n* and *p* is the sum of *n* independent variables *Z*, each of which takes the value 0 or 1. If the probability that each *Z* takes the value 1 is *p*, then the mean of each *Z* is 1·p + 0·(1 − p) = p, and its variance is p(1 − p). By the additivity of means and variances of independent random variables, the mean and variance of the binomial distribution equal the sums of the means and variances of the *n* independent *Z* variables:

- µ_X = np
- σ_X² = np(1 − p)

These formulas make intuitive sense. Imagine, for instance, 8 flips of a coin. If the coin is fair, then p = 0.5. You would expect the mean number of heads to be half the flips: np = 8 × 0.5 = 4. The variance is np(1 − p) = 8 × 0.5 × 0.5 = 2.
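The coin-flip figures above can be checked empirically. The sketch below (the rep count and seed are arbitrary choices, not from the text) simulates many runs of 8 fair flips and compares the sample mean and variance of the head counts against np and np(1 − p):

```python
import random

# Empirically check the binomial mean np and variance np(1-p)
# for n = 8 fair-coin flips (p = 0.5), repeated many times.
n, p, reps = 8, 0.5, 100_000
random.seed(0)  # arbitrary seed for reproducibility

counts = [sum(random.random() < p for _ in range(n)) for _ in range(reps)]
mean = sum(counts) / reps
var = sum((c - mean) ** 2 for c in counts) / reps

print(f"theoretical mean np      = {n * p}")            # 4.0
print(f"theoretical var  np(1-p) = {n * p * (1 - p)}")  # 2.0
print(f"simulated mean           = {mean:.2f}")
print(f"simulated variance       = {var:.2f}")
```

With 100,000 repetitions the simulated values land within a few hundredths of the theoretical 4 and 2.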

Since the count *X* of successes in a group of *n* observations with success probability *p* has a binomial distribution with mean np and variance np(1 − p), we can derive the distribution of the sample proportion: the count of successes *X* divided by the number of observations *n*. By the multiplicative property of the mean, the mean of the distribution of X/n equals the mean of *X* divided by *n*: np/n = p. The variance of X/n equals the variance of X divided by n²: (np(1 − p))/n² = (p(1 − p))/n. This formula shows that the variance decreases as the sample size increases.

In the example of rolling a six-sided die 20 times, the probability p of rolling a six on any roll is 1/6, and the count X of sixes has a B(20, 1/6) distribution. The mean of this distribution is 20/6 ≈ 3.33, and the variance is 20 × 1/6 × 5/6 = 100/36 ≈ 2.78. The mean of the proportion of sixes in 20 rolls, X/20, is p = 1/6 ≈ 0.167, and the variance of that proportion is (1/6 × 5/6)/20 ≈ 0.007.
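The die-rolling numbers above follow directly from the formulas for the count and the sample proportion; a minimal sketch computing them:

```python
# The count X of sixes in n = 20 rolls of a fair die is B(20, 1/6).
# Compute the mean/variance of X and of the sample proportion X/n.
n, p = 20, 1 / 6

mean_X = n * p              # 20/6 ≈ 3.33
var_X = n * p * (1 - p)     # 100/36 ≈ 2.78
mean_prop = p               # ≈ 0.167
var_prop = p * (1 - p) / n  # ≈ 0.007

print(f"mean of X       : {mean_X:.2f}")
print(f"variance of X   : {var_X:.2f}")
print(f"mean of X/n     : {mean_prop:.3f}")
print(f"variance of X/n : {var_prop:.4f}")
```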

For large values of n, the distributions of the count X and the sample proportion are approximately normal. This result follows from the central limit theorem. The mean and variance of the approximately normal distribution of X are np and np(1 − p), the same as the mean and variance of the binomial distribution. Similarly, the mean and variance of the approximately normal distribution of the sample proportion are p and p(1 − p)/n. Because the normal approximation is not accurate for small values of n, it is advisable to use it only when np > 10 and n(1 − p) > 10.

For example, consider the voters in a particular state, and suppose the true proportion of voters who favor candidate A is 0.40. In a sample of 200 voters, what is the probability that more than half of them support candidate A?

- The count X of voters in the sample of 200 who support candidate A is distributed B(200, 0.4). The mean of the distribution is 200 × 0.4 = 80, and the variance is 200 × 0.4 × 0.6 = 48, so the standard deviation is the square root of the variance, 6.93. The probability that more than half of the voters support candidate A is the probability that X is greater than 100, which equals 1 − P(X ≤ 100)
- To compute this probability with the normal approximation, we must first account for the continuous nature of the normal distribution by applying a continuity correction. This means that the probability of a single discrete value, such as 100, is spread over the interval (99.5, 100.5). Because we are interested in the probability that X is less than or equal to 100, the normal approximation uses the upper end of the interval, 100.5. If we were interested in the probability that X is strictly less than 100, we would use the lower end of the interval, 99.5
- Therefore, applying the continuity correction and standardizing X gives z = (100.5 − 80)/6.93 ≈ 2.96, so P(X > 100) ≈ 1 − Φ(2.96) ≈ 0.0015
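The voter example can be verified end to end. This sketch computes the normal approximation with the continuity correction, and also sums the exact binomial tail for comparison (the helper `normal_cdf` is built from the standard error function, not part of the text):

```python
from math import comb, erf, sqrt

# Voter example: X ~ B(200, 0.4); find P(X > 100) = 1 - P(X <= 100).
n, p = 200, 0.4
mean = n * p                # 80
sd = sqrt(n * p * (1 - p))  # sqrt(48) ≈ 6.93

def normal_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + erf(z / sqrt(2)))

# Normal approximation with continuity correction at 100.5.
z = (100.5 - mean) / sd
approx = 1 - normal_cdf(z)

# Exact binomial tail P(X >= 101) for comparison.
exact = sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(101, n + 1))

print(f"z-score           : {z:.2f}")  # ≈ 2.96
print(f"normal approx     : {approx:.4f}")
print(f"exact probability : {exact:.4f}")
```

Both values are small (on the order of 0.002), confirming that a majority for candidate A in this sample would be very unlikely.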

The binomial distribution can also be used when an experiment has exactly two mutually exclusive outcomes, conventionally labeled success and failure. It gives the probability of observing x successes in n trials, where the probability of success on an individual trial is p. The binomial distribution assumes that p is fixed for all trials. The formula for the binomial probability mass function is:

- P(x; p, n) = C(n, x) p^x (1 − p)^(n − x) for x = 0, 1, 2, …, n, where C(n, x) = n! / (x!(n − x)!)

The binomial percent point function does not exist in closed form; it is computed numerically. Note that because this is a discrete distribution, defined only for integer values of x, the percent point function is not smooth in the way it is for a continuous distribution. The formula for the binomial cumulative distribution function is:

- F(x; p, n) = Σ (i = 0 to x) C(n, i) p^i (1 − p)^(n − i)
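The PMF and CDF above translate directly into code. A minimal sketch (function names are illustrative, not from the text):

```python
from math import comb

def binom_pmf(x, p, n):
    """P(X = x) = C(n, x) * p^x * (1 - p)^(n - x)."""
    return comb(n, x) * p**x * (1 - p) ** (n - x)

def binom_cdf(x, p, n):
    """F(x; p, n): sum of the PMF over i = 0..x."""
    return sum(binom_pmf(i, p, n) for i in range(x + 1))

# Sanity checks against the coin example: 8 fair flips.
print(binom_pmf(4, 0.5, 8))  # P(exactly 4 heads) = 70/256 ≈ 0.2734
print(binom_cdf(8, 0.5, 8))  # total probability = 1.0
```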
