An estimate is an educated guess for an unknown quantity or outcome based on known information. The making of estimates is an important part of statistics, since care is needed to provide as accurate an estimate as possible using as little input data as possible. Often, an estimate for the uncertainty of an estimate can also be determined statistically. A rule that tells how to calculate an estimate based on the measurements contained in a sample is called an estimator.
In most computer programs and computing environments, the precision of any calculation (even including addition) is limited by the word size of the computer, that is, by the largest number that can be stored in one of the processor's registers. As of mid-2002, the most common processor word size is 32 bits, corresponding to integers up to $2^{32}-1=4294967295$. General integer arithmetic on a 32-bit machine therefore allows addition of two 32-bit numbers to get 33 bits (one word plus an overflow bit), multiplication of two 32-bit numbers to get 64 bits (although the most prevalent programming language, C, cannot access the higher word directly and depends on the programmer either to create a machine language function or to write a much slower function in C at a final overhead of about nine multiplies more), and division of a 64-bit number by a 32-bit number, creating a 32-bit quotient and a 32-bit remainder/modulus. Arbitrary-precision arithmetic consists of a set of algorithms, functions, and data structures designed to compute with numbers whose size is limited only by available memory.
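The contrast between fixed word size and arbitrary precision can be sketched in Python, whose built-in integers are arbitrary-precision; the high-word/low-word split that a C programmer must perform by hand is shown with masks and shifts:

```python
# Python integers grow as needed, so multiplying two 32-bit values
# simply produces a 64-bit result with no overflow.
a = 0xFFFFFFFF               # largest unsigned 32-bit integer, 2**32 - 1
b = 0xFFFFFFFF

product = a * b              # needs 64 bits
print(product)               # 18446744065119617025
print(product.bit_length())  # 64

# Emulating 32-bit machine arithmetic: split the 64-bit product into a
# high word and a low word, as C code must do to recover the upper half.
lo = product & 0xFFFFFFFF
hi = product >> 32
print(hex(hi), hex(lo))      # 0xfffffffe 0x1
```
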
When a number is expressed in scientific notation, the number of significant digits (or significant figures) is the number of digits needed to express the number to within the uncertainty of calculation. For example, if a quantity is known to be $1.234\pm 0.002$, four figures would be significant. The number of significant figures of a multiplication or division of two or more quantities is equal to the smallest number of significant figures for the quantities involved. For addition or subtraction, the result is significant only to the decimal place of the least accurate quantity involved. For example, the sum $110.5574+5.2=115.7574$, but should be written $115.8$ (with rounding), since the quantity $5.2$ is significant only to $\pm 0.05$.
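Both rules can be illustrated in Python; `round_sig` is a small hypothetical helper for rounding to a given number of significant figures, not a library function:

```python
import math

def round_sig(x, n):
    """Round x to n significant figures (illustrative helper)."""
    if x == 0:
        return 0.0
    exponent = math.floor(math.log10(abs(x)))
    return round(x, n - 1 - exponent)

# Multiplication rule: the result keeps the smallest number of
# significant figures among the operands (5.2 has two).
product = 3.14159 * 5.2
print(round_sig(product, 2))     # 16.0

# Addition rule: the result is limited by the least precise decimal
# place; 5.2 is only known to the tenths place.
print(round(110.5574 + 5.2, 1))  # 115.8
```
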
Significance arithmetic is the arithmetic of approximate numerical quantities that not only keeps track of numerical results, but also uses error propagation to track their accuracy. In this way, numerical computations can carry accuracy and precision information with them, returning in the end a numerical quantity together with its estimated (or worst-case) uncertainty. Significance arithmetic is an extremely powerful technique that offers many advantages over fixed precision (most commonly, floating-point or integer arithmetic), and is implemented in high-end mathematical software and languages such as the Wolfram Language.
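A minimal sketch of the idea in Python: a value paired with an uncertainty, propagated through arithmetic. The `Approx` class is hypothetical and assumes uncorrelated errors combined in quadrature; it is not the Wolfram Language's actual implementation, which tracks precision more carefully:

```python
import math

class Approx:
    """A value with an uncertainty, propagated through arithmetic
    (uncorrelated errors assumed; illustrative sketch only)."""
    def __init__(self, value, err):
        self.value, self.err = value, err

    def __add__(self, other):
        # absolute errors add in quadrature for uncorrelated quantities
        return Approx(self.value + other.value,
                      math.hypot(self.err, other.err))

    def __mul__(self, other):
        value = self.value * other.value
        # relative errors add in quadrature
        rel = math.hypot(self.err / self.value, other.err / other.value)
        return Approx(value, abs(value) * rel)

    def __repr__(self):
        return f"{self.value} ± {self.err:.4g}"

x = Approx(10.0, 0.1)   # 1% relative error
y = Approx(20.0, 0.2)   # 1% relative error
print(x + y)            # 30.0 ± 0.2236
print(x * y)            # 200.0 ± 2.828
```

The payoff is exactly what the text describes: the final numerical quantity arrives together with its estimated uncertainty.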
Let the true value of a quantity be $x$ and the measured or inferred value $x_0$. Then the relative error is defined by

$$\delta x \equiv \frac{\Delta x}{x} = \frac{x_0 - x}{x} = \frac{x_0}{x} - 1,$$

where $\Delta x \equiv x_0 - x$ is the absolute error. The relative error of the quotient or product of a number of quantities is less than or equal to the sum of their relative errors. The percentage error is 100% times the relative error.
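The definitions translate directly into Python; the measured value 49.0 against a true value of 50.0 is a hypothetical example:

```python
def absolute_error(measured, true):
    # Delta x = x_0 - x
    return measured - true

def relative_error(measured, true):
    # delta x = (x_0 - x) / x
    return (measured - true) / true

true_value = 50.0
measured = 49.0   # hypothetical measurement
print(absolute_error(measured, true_value))        # -1.0
print(relative_error(measured, true_value))        # -0.02
print(100 * relative_error(measured, true_value))  # -2.0 (percent)
```
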
The number of digits used to perform a given computation. The concepts of accuracy and precision are both closely related and often confused. While the accuracy of a number $x$ is given by the number of significant decimal (or other) digits to the right of the decimal point in $x$, the precision of $x$ is the total number of significant decimal (or other) digits. In many programming languages, numerical computations are done with some fixed precision.
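Python's floats show what fixed precision means in practice: they are IEEE 754 doubles with 53 significand bits, roughly 15-16 significant decimal digits regardless of magnitude:

```python
import sys

print(sys.float_info.dig)       # 15 decimal digits guaranteed
print(sys.float_info.mant_dig)  # 53 bits in the significand

# Digits beyond that precision are silently lost:
print(0.1 + 0.2)                # 0.30000000000000004
print(1e16 + 1 == 1e16)         # True: the added 1 falls below the precision
```
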
The margin of error is an estimate of a confidence interval for a given measurement, result, etc., and is frequently cited in statistics. While phrases such as "The poll has a margin of error of plus or minus 3.1 percentage points" are commonly heard, an additional qualification such as "at a 95 percent confidence level" is also needed in order to precisely indicate what the error refers to. For a given confidence level $\alpha$, standard deviation $\sigma$, and sample size $n$, the margin of error (for a normal distribution) is

$$\text{margin of error} = \frac{\sqrt{2}\,\sigma\,\operatorname{erf}^{-1}(\alpha)}{\sqrt{n}},$$

where $\operatorname{erf}^{-1}$ is the inverse erf function.
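The formula can be evaluated with the Python standard library. Since `erfinv` is not in `math`, the sketch below uses `statistics.NormalDist().inv_cdf` as an equivalent stand-in, using the identity $\sqrt{2}\,\operatorname{erf}^{-1}(\alpha) = \Phi^{-1}\!\big(\tfrac{1+\alpha}{2}\big)$:

```python
from statistics import NormalDist
from math import sqrt

def margin_of_error(confidence, sigma, n):
    """sqrt(2) * sigma * erfinv(confidence) / sqrt(n), for a normal
    distribution; inv_cdf stands in for the inverse erf function."""
    z = NormalDist().inv_cdf((1 + confidence) / 2)  # = sqrt(2)*erfinv(confidence)
    return z * sigma / sqrt(n)

# "Plus or minus 3.1 percentage points at 95 percent confidence" for a
# proportion near 0.5 corresponds to a sample of about 1000 people:
print(round(margin_of_error(0.95, 0.5, 1000), 3))  # 0.031
```
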
Given a formula $y=f(x)$ with an absolute error in $x$ of $dx$, the absolute error in $y$ is $dy=f'(x)\,dx$. The relative error is $dy/y$. If $x=f(u,v)$, then

$$x_i-\bar x=(u_i-\bar u)\frac{\partial x}{\partial u}+(v_i-\bar v)\frac{\partial x}{\partial v}+\cdots, \tag{1}$$

where $\bar x$ denotes the mean, so the sample variance is given by

$$s_x^2=\frac{1}{N-1}\sum_{i=1}^N (x_i-\bar x)^2 \tag{2}$$
$$=\frac{1}{N-1}\sum_{i=1}^N\left[(u_i-\bar u)^2\left(\frac{\partial x}{\partial u}\right)^2+(v_i-\bar v)^2\left(\frac{\partial x}{\partial v}\right)^2+2(u_i-\bar u)(v_i-\bar v)\frac{\partial x}{\partial u}\frac{\partial x}{\partial v}\right]. \tag{3}$$

The definitions of variance and covariance then give

$$s_u^2=\frac{1}{N-1}\sum_i (u_i-\bar u)^2 \tag{4}$$
$$s_v^2=\frac{1}{N-1}\sum_i (v_i-\bar v)^2 \tag{5}$$
$$s_{uv}=\frac{1}{N-1}\sum_i (u_i-\bar u)(v_i-\bar v) \tag{6}$$

(where $s_{uv}=s_{vu}$), so

$$s_x^2=s_u^2\left(\frac{\partial x}{\partial u}\right)^2+s_v^2\left(\frac{\partial x}{\partial v}\right)^2+2s_{uv}\frac{\partial x}{\partial u}\frac{\partial x}{\partial v}. \tag{7}$$

If $u$ and $v$ are uncorrelated, then $s_{uv}=0$, so

$$s_x^2=s_u^2\left(\frac{\partial x}{\partial u}\right)^2+s_v^2\left(\frac{\partial x}{\partial v}\right)^2. \tag{8}$$

Now consider addition of quantities with errors. For $x=au+bv$, $\partial x/\partial u=a$ and $\partial x/\partial v=b$, so

$$s_x^2=a^2s_u^2+b^2s_v^2+2ab\,s_{uv}. \tag{9}$$

For division of quantities with $x=u/v$, $\partial x/\partial u=1/v$ and $\partial x/\partial v=-u/v^2$, so

$$s_x^2=\frac{s_u^2}{v^2}+\frac{u^2}{v^4}s_v^2-2\frac{u}{v^3}s_{uv}. \tag{10}$$

Dividing through by $x^2=u^2/v^2$ and rearranging then gives

$$\frac{s_x^2}{x^2}=\frac{s_u^2}{u^2}+\frac{s_v^2}{v^2}-\frac{2s_{uv}}{uv}. \tag{11}$$

For exponentiation of quantities with $x=e^{au}$,

$$\frac{\partial x}{\partial u}=ae^{au}=ax \tag{12}$$

and

$$\frac{dx}{x}=a\,du, \tag{13}$$

so

$$s_x=ax\,s_u \tag{14}$$
$$\frac{s_x}{x}=a\,s_u. \tag{15}$$

If $x=a^u$, then

$$\frac{s_x}{x}=(\ln a)\,s_u. \tag{16}$$

For logarithms of quantities with $x=a\ln(bu)$, $\partial x/\partial u=a/u$, so

$$s_x^2=s_u^2\left(\frac{a}{u}\right)^2 \tag{17}$$
$$s_x=a\frac{s_u}{u}. \tag{18}$$

For multiplication with $x=auv$, $\partial x/\partial u=av$ and $\partial x/\partial v=au$, so

$$s_x^2=a^2v^2s_u^2+a^2u^2s_v^2+2a^2uv\,s_{uv}, \tag{19}$$

and dividing through by $x^2=a^2u^2v^2$,

$$\frac{s_x^2}{x^2}=\frac{s_u^2}{u^2}+\frac{s_v^2}{v^2}+\frac{2s_{uv}}{uv}. \tag{20}$$

If $u$ and $v$ are uncorrelated,

$$\frac{s_x^2}{x^2}=\frac{s_u^2}{u^2}+\frac{s_v^2}{v^2}. \tag{21}$$

For powers, with $x=au^b$, $\partial x/\partial u=abu^{b-1}=bx/u$, so

$$s_x=b\frac{x}{u}s_u \tag{22}$$
$$\frac{s_x}{x}=b\frac{s_u}{u}. \tag{23}$$
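The uncorrelated special cases of these rules (errors adding in quadrature) can be sketched in Python; the function names are illustrative, not from any library:

```python
from math import hypot

# Uncorrelated error propagation (s_uv = 0).

def sum_error(a, s_u, b, s_v):
    """s_x for x = a*u + b*v: s_x^2 = a^2 s_u^2 + b^2 s_v^2."""
    return hypot(a * s_u, b * s_v)

def quotient_relative_error(u, s_u, v, s_v):
    """s_x/x for x = u/v: relative errors add in quadrature."""
    return hypot(s_u / u, s_v / v)

print(sum_error(1, 3.0, 1, 4.0))                   # 5.0
print(quotient_relative_error(100, 3.0, 50, 2.0))  # 0.05 (3% and 4% combine)
```
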
A confidence interval is an interval in which a measurement or trial falls corresponding to a given probability. Usually, the confidence interval of interest is symmetrically placed around the mean, so a 50% confidence interval for a symmetric probability density function $P(x)$ would be the interval $[-a,a]$ such that

$$\frac{1}{2}=\int_{-a}^a P(x)\,dx. \tag{1}$$

For a normal distribution, the probability that a measurement falls within $n$ standard deviations ($n\sigma$) of the mean $\mu$ (i.e., within the interval $[\mu-n\sigma,\mu+n\sigma]$) is given by

$$P=\frac{1}{\sigma\sqrt{2\pi}}\int_{\mu-n\sigma}^{\mu+n\sigma}e^{-(x-\mu)^2/(2\sigma^2)}\,dx \tag{2}$$
$$=\frac{1}{\sigma\sqrt{2\pi}}\int_{-n\sigma}^{n\sigma}e^{-x^2/(2\sigma^2)}\,dx. \tag{3}$$

Now let $u=x/(\sqrt{2}\sigma)$, so $du=dx/(\sqrt{2}\sigma)$. Then

$$P=\frac{1}{\sqrt{\pi}}\int_{-n/\sqrt{2}}^{n/\sqrt{2}}e^{-u^2}\,du \tag{4}$$
$$=\frac{2}{\sqrt{\pi}}\int_0^{n/\sqrt{2}}e^{-u^2}\,du \tag{5}$$
$$=\operatorname{erf}\!\left(\frac{n}{\sqrt{2}}\right), \tag{6}$$

where $\operatorname{erf}$ is the so-called erf function. The following table summarizes the probabilities that measurements from a normal distribution fall within $[\mu-n\sigma,\mu+n\sigma]$ for small values of $n$.

n   P
1   0.6826895
2   0.9544997
3   0.9973002
4   0.9999366
5   0.9999994

Conversely, to find the probability-$P$ confidence interval centered about the mean for a normal distribution in units of $\sigma$, solve equation (6) for $n$ to obtain

$$n=\sqrt{2}\,\operatorname{erf}^{-1}(P), \tag{7}$$

where $\operatorname{erf}^{-1}$ is the inverse erf function. The following table then gives the values of $n$ corresponding to several common confidence levels $P$.

P      n
0.800  1.28155
0.900  1.64485
0.950  1.95996
0.990  2.57583
0.995  2.80703
0.999  3.29053
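Both directions of this calculation are available in the Python standard library: `math.erf` evaluates equation (6) directly, and since `erfinv` is not in `math`, `statistics.NormalDist().inv_cdf` serves as a stand-in for $\sqrt{2}\,\operatorname{erf}^{-1}(P)$:

```python
from math import erf, sqrt
from statistics import NormalDist

# Forward direction, P = erf(n/sqrt(2)): probability of falling within
# n standard deviations of the mean, matching the table above.
for n in range(1, 6):
    print(n, f"{erf(n / sqrt(2)):.7f}")

# Reverse direction, n = sqrt(2) * erfinv(P): inv_cdf((1+P)/2) is the
# same quantity, here for the 95% confidence interval.
n_95 = NormalDist().inv_cdf((1 + 0.95) / 2)
print(round(n_95, 5))  # 1.95996
```
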
The degree to which a given quantity is correct and free from error. For example, a quantity specified as $100\pm 1$ has an (absolute) accuracy of $\pm 1$ (meaning its true value can fall in the range 99-101), while a quantity specified as $100\pm 2\%$ has a (relative) accuracy of $\pm 2\%$ (meaning its true value can fall in the range 98-102). The concepts of accuracy and precision are both closely related and often confused. While the accuracy of a number $x$ is given by the number of significant decimal (or other) digits to the right of the decimal point in $x$, the precision of $x$ is the total number of significant decimal (or other) digits.
The difference between the measured or inferred value $x_0$ of a quantity and its actual value $x$, given by

$$\Delta x \equiv x_0 - x$$

(sometimes with the absolute value taken), is called the absolute error. The absolute error of the sum or difference of a number of quantities is less than or equal to the sum of their absolute errors.