

The generalized minimal residual (GMRES) method (Saad and Schultz 1986) is an extension of the minimal residual method (MINRES), which is only applicable to symmetric systems, to unsymmetric systems. Like MINRES, it generates a sequence of orthogonal vectors, but in the absence of symmetry this can no longer be done with short recurrences; instead, all previously computed vectors in the orthogonal sequence have to be retained. For this reason, "restarted" versions of the method are used. In the conjugate gradient method, the residuals form an orthogonal basis for the space span{r^(0), Ar^(0), A^2 r^(0), ...}. In GMRES, this basis is formed explicitly: each new vector w = Av^(i) is orthogonalized against the previous basis vectors v^(1), ..., v^(i) and then normalized to give v^(i+1). The reader may recognize this as a modified Gram-Schmidt orthonormalization. Applied to the Krylov sequence this orthogonalization is called the "Arnoldi method" (Arnoldi 1951). The inner product coefficients (w, v^(k)) and the normalization constants ||w|| are stored in an upper Hessenberg matrix. The GMRES iterates are constructed as

x^(i) = x^(0) + y_1 v^(1) + ... + y_i v^(i),

where the coefficients y_k are chosen to minimize the residual norm ||b - Ax^(i)||.
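The orthogonalization step can be sketched in a few lines. The following is a minimal pure-Python Arnoldi process with modified Gram-Schmidt; the 3x3 nonsymmetric matrix A is an arbitrary illustration, not from any particular source.

```python
# Minimal sketch of the Arnoldi process (modified Gram-Schmidt), the
# orthogonalization GMRES uses to build its Krylov basis.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def matvec(M, v):
    return [dot(row, v) for row in M]

def arnoldi(A, b, m):
    """Return an orthonormal basis V and the upper Hessenberg
    coefficients H, with H[k][i] holding h_(k+1, i+1)."""
    norm = dot(b, b) ** 0.5
    V = [[x / norm for x in b]]
    H = [[0.0] * m for _ in range(m + 1)]
    for i in range(m):
        w = matvec(A, V[i])
        for k in range(i + 1):            # modified Gram-Schmidt sweep
            H[k][i] = dot(w, V[k])
            w = [wj - H[k][i] * vj for wj, vj in zip(w, V[k])]
        H[i + 1][i] = dot(w, w) ** 0.5
        V.append([x / H[i + 1][i] for x in w])
    return V, H

A = [[2.0, 1.0, 0.0],
     [0.5, 3.0, 1.0],
     [0.0, 0.2, 4.0]]
b = [1.0, 0.0, 0.0]
V, H = arnoldi(A, b, 2)
```

The basis vectors come out orthonormal, and A applied to each basis vector is a combination of at most the next basis vector, which is exactly the upper Hessenberg structure mentioned above.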

The biconjugate gradient stabilized (BCGSTAB) method was developed to solve nonsymmetric linear systems while avoiding the often irregular convergence patterns of the conjugate gradient squared method (van der Vorst 1992). Instead of computing the conjugate gradient squared method sequence i -> P_i^2(A) r^(0), BCGSTAB computes i -> Q_i(A) P_i(A) r^(0), where Q_i is an ith degree polynomial describing a steepest descent update. BCGSTAB often converges about as fast as the conjugate gradient squared method (CGS), sometimes faster and sometimes not. CGS can be viewed as a method in which the biconjugate gradient method (BCG) "contraction" operator is applied twice. BCGSTAB can be interpreted as the product of BCG and repeated application of the generalized minimal residual method. At least locally, a residual vector is minimized, which leads to a considerably smoother convergence behavior. On the other hand, if the local generalized minimal residual method step stagnates, ...

The conjugate gradient method is not suitable for nonsymmetric systems because the residual vectors cannot be made orthogonal with short recurrences, as proved in Voevodin (1983) and Faber and Manteuffel (1984). The generalized minimal residual method retains orthogonality of the residuals by using long recurrences, at the cost of a larger storage demand. The biconjugate gradient method (BCG) takes another approach, replacing the orthogonal sequence of residuals by two mutually orthogonal sequences, at the price of no longer providing a minimization. The update relations for residuals in the conjugate gradient method are augmented in the biconjugate gradient method by relations that are similar but based on A^T instead of A. Thus we update two sequences of residuals

r^(i) = r^(i-1) - alpha_i A p^(i),
rt^(i) = rt^(i-1) - alpha_i A^T pt^(i),

and two sequences of search directions

p^(i) = r^(i-1) + beta_(i-1) p^(i-1),
pt^(i) = rt^(i-1) + beta_(i-1) pt^(i-1).

The choices

alpha_i = (rt^(i-1), r^(i-1))/(pt^(i), A p^(i)),
beta_i = (rt^(i), r^(i))/(rt^(i-1), r^(i-1))

ensure the orthogonality relations

(rt^(i), r^(j)) = (pt^(i), A p^(j)) = 0

if i != j. Few theoretical results are known about the convergence ...

In the biconjugate gradient method, the residual vector can be regarded as the product of r^(0) and an ith degree polynomial in A, i.e.,

r^(i) = P_i(A) r^(0).

This same polynomial satisfies

rt^(i) = P_i(A^T) rt^(0),

so that

rho_i = (rt^(i), r^(i)) = (P_i(A^T) rt^(0), P_i(A) r^(0)) = (rt^(0), P_i^2(A) r^(0)).

This suggests that if P_i(A) reduces r^(0) to a smaller vector r^(i), then it might be advantageous to apply this "contraction" operator twice, and compute P_i^2(A) r^(0). The iteration coefficients can still be recovered from these vectors (as shown above), and it turns out to be easy to find the corresponding approximations for x. This approach is the conjugate gradient squared (CGS) method (Sonneveld 1989). Often one observes a speed of convergence for CGS that is about twice as fast as for the biconjugate gradient method, which is in agreement with the observation that the same "contraction" operator is applied twice. However, there is no reason that the contraction operator, even if it really reduces the initial residual r^(0), should also reduce the once reduced vector P_i(A) r^(0). This ...

The conjugate gradient method can be applied to the normal equations. The CGNE and CGNR methods are variants of this approach that are the simplest methods for nonsymmetric or indefinite systems. Since other methods for such systems are in general rather more complicated than the conjugate gradient method, transforming the system to a symmetric definite one and then applying the conjugate gradient method is attractive for its coding simplicity. CGNE solves the system

(A A^T) y = b

for y and then computes the solution

x = A^T y.

CGNR solves

(A^T A) x = bt

for the solution vector x, where

bt = A^T b.

If a system of linear equations Ax = b has a nonsymmetric, possibly indefinite (but nonsingular) coefficient matrix, one obvious attempt at a solution is to apply the conjugate gradient method to the related symmetric positive definite system A^T A x = A^T b. While this approach is easy to understand and code, the convergence speed of the conjugate gradient method now depends on the square of the condition number ...
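The CGNR idea fits in a few lines: run ordinary conjugate gradient on A^T A x = A^T b, applying A and A^T separately rather than forming A^T A. A hedged pure-Python sketch; the 2x2 nonsymmetric matrix is an arbitrary example.

```python
# CGNR sketch: conjugate gradient on the normal equations A^T A x = A^T b.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def matvec(M, v):
    return [dot(row, v) for row in M]

def transpose(M):
    return [list(col) for col in zip(*M)]

def cgnr(A, b, iters=25):
    At = transpose(A)
    x = [0.0] * len(b)
    r = matvec(At, b)                 # normal-equation residual at x = 0
    p = r[:]
    rs = dot(r, r)
    for _ in range(iters):
        Ap = matvec(At, matvec(A, p))  # (A^T A) p without forming A^T A
        alpha = rs / dot(p, Ap)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * qi for ri, qi in zip(r, Ap)]
        rs_new = dot(r, r)
        if rs_new < 1e-28:
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x

A = [[3.0, 1.0], [0.0, 2.0]]
b = [5.0, 4.0]
x = cgnr(A, b)                         # exact solution is (1, 2)
```

On a 2x2 system, conjugate gradient converges in at most two steps in exact arithmetic, so the answer here is accurate to rounding.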

The conjugate gradient method can be viewed as a special variant of the Lanczos method for positive definite symmetric systems. The minimal residual method (MINRES) and symmetric LQ method (SYMMLQ) are variants that can be applied to symmetric indefinite systems. The vector sequences in the conjugate gradient method correspond to a factorization of a tridiagonal matrix similar to the coefficient matrix. Therefore, a breakdown of the algorithm can occur corresponding to a zero pivot if the matrix is indefinite. Furthermore, for indefinite matrices the minimization property of the conjugate gradient method is no longer well-defined. The MINRES method is a variant of the conjugate gradient method that avoids the LU decomposition and does not suffer from breakdown. MINRES minimizes the residual in the 2-norm. The convergence behavior of the conjugate gradient and MINRES methods for indefinite systems was analyzed by Paige et al. ...

Chebyshev iteration is a method for solving nonsymmetric problems (Golub and van Loan 1996, §10.1.5; Varga 1962, Ch. 5). Chebyshev iteration avoids the computation of inner products, which is necessary for the other nonstationary methods. For some distributed memory architectures these inner products are a bottleneck with respect to efficiency. The price one pays for avoiding inner products is that the method requires enough knowledge about the spectrum of the coefficient matrix that an ellipse enveloping the spectrum can be identified; this difficulty can be overcome, however, via an adaptive construction developed by Manteuffel (1977) and implemented by Ashby (1985). Chebyshev iteration is suitable for any nonsymmetric linear system for which the enveloping ellipse does not include the origin. Chebyshev iteration is similar to the conjugate gradient method except that no inner products are computed. Scalars c and d, which describe the enveloping ellipse, must ...
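The no-inner-products property can be illustrated in the simplest (real, symmetric positive definite) setting, where the ellipse degenerates to an interval [lmin, lmax]. The sketch below uses the cyclic form: Richardson iteration whose m step sizes are reciprocals of the Chebyshev points of that interval, which realizes the same degree-m Chebyshev residual polynomial as the recurrence form described in the entry. The diagonal test matrix and the bounds are arbitrary illustrations.

```python
# Cyclic Chebyshev (Richardson) iteration sketch for an SPD system:
# only the spectral bounds lmin, lmax are needed; no inner products.
import math

def matvec(M, v):
    return [sum(a * x for a, x in zip(row, v)) for row in M]

def chebyshev_richardson(A, b, lmin, lmax, m):
    d = 0.5 * (lmax + lmin)       # center of the spectral interval
    c = 0.5 * (lmax - lmin)       # half-width
    x = [0.0] * len(b)
    for k in range(m):
        r = [bi - axi for bi, axi in zip(b, matvec(A, x))]
        # step size = 1 / (k-th Chebyshev point of [lmin, lmax])
        alpha = 1.0 / (d + c * math.cos(math.pi * (2 * k + 1) / (2 * m)))
        x = [xi + alpha * ri for xi, ri in zip(x, r)]
    return x

A = [[1.0, 0.0, 0.0], [0.0, 2.0, 0.0], [0.0, 0.0, 10.0]]
b = [1.0, 2.0, 10.0]              # exact solution (1, 1, 1)
x = chebyshev_richardson(A, b, lmin=1.0, lmax=10.0, m=20)
```

After m = 20 sweeps the error is damped by roughly 1/T_20(d/c), several orders of magnitude here, even though the iteration never computed an inner product.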

The conjugate gradient method can be viewed as a special variant of the Lanczos method for positive definite symmetric systems. The minimal residual method and symmetric LQ method (SYMMLQ) are variants that can be applied to symmetric indefinite systems. The vector sequences in the conjugate gradient method correspond to a factorization of a tridiagonal matrix similar to the coefficient matrix. Therefore, a breakdown of the algorithm can occur corresponding to a zero pivot if the matrix is indefinite. Furthermore, for indefinite matrices the minimization property of the conjugate gradient method is no longer well-defined. The MINRES and SYMMLQ methods are variants of the CG method that avoid the LU decomposition and do not suffer from breakdown. SYMMLQ solves the projected system, but does not minimize anything (it keeps the residual orthogonal to all previous ones). When A is not positive definite, but symmetric, we can still construct ...

The Galton board, also known as a quincunx or bean machine, is a device for statistical experiments named after English scientist Sir Francis Galton. It consists of an upright board with evenly spaced nails (or pegs) driven into its upper half, where the nails are arranged in staggered order, and a lower half divided into a number of evenly-spaced rectangular slots. The front of the device is covered with a glass cover to allow viewing of both nails and slots. In the middle of the upper edge, there is a funnel into which balls can be poured, where the diameter of the balls must be much smaller than the distance between the nails. The funnel is located precisely above the central nail of the second row so that each ball, if perfectly centered, would fall vertically and directly onto the uppermost point of this nail's surface (Kozlov and Mitrofanova 2002). The figure above shows a variant of the board in which only the nails that can potentially be hit by a ball..

The approximation of a piecewise monotonic function by a polynomial with the same monotonicity. Such comonotonic approximations can always be accomplished with nth degree polynomials, and the error can be bounded in terms of the degree n (Passow and Raymon 1974, Passow et al. 1974, Newman 1979).

French curves are plastic (or wooden) templates having an edge composed of several different curves. French curves are used in drafting (or were before computer-aided design) to draw smooth curves of almost any desired curvature in mechanical drawings. Several typical French curves are illustrated above.While an undergraduate at MIT, Feynman (1997, p. 23) used a French curve to illustrate the fallacy of learning without understanding. When he pointed out to his colleagues in a mechanical drawing class the "amazing" fact that the tangent at the lowest (or highest) point on the curve was horizontal, none of his classmates realized that this was trivially true, since the derivative (tangent) at an extremum (lowest or highest point) of any curve is zero (horizontal), as they had already learned in calculus class...

A linkage with six rods which draws the inverse of a given curve. When a pencil is placed at one of two reciprocal points, the inverse is drawn at the other (or vice versa). If a seventh rod (dashed) is added (with an additional pivot), the first point is kept on a circle and the locus traced out by the second point is a straight line. It therefore converts circular motion to linear motion without sliding, and was discovered in 1864. Another linkage which performs this feat using hinged squares had been published by Sarrus in 1853 but ignored. Coxeter (1969, p. 428) shows that ...

A linkage invented in 1630 by Christoph Scheiner for making a scaled copy of a given figure. The linkage is pivoted at a fixed point, with the remaining joints hinged. By placing a pencil at one of two marked points, a dilated image is obtained at the other.

A linkage which draws the inverse of a given curve. It can also convert circular to linear motion. Opposite rods of the linkage have equal lengths, the three marked points remain collinear, and one rod is kept parallel to another; this condition holds automatically for a suitable choice of the pivot. Coxeter (1969, p. 428) shows that ...

A mechanical device consisting of a sliding portion and a fixed case, each marked with logarithmic axes. By lining up the ticks, it is possible to do multiplication by taking advantage of the additive property of logarithms. More complicated slide rules also allow the extraction of roots and computation of trigonometric functions. According to Steinhaus (1999, p. 301), the principle of the slide rule was first enunciated by E. Gunter in 1623, and in 1671, S. Partridge constructed an instrument similar to the modern slide rule. The Oughtred Society, a group of slide rule collectors, claims that W. Oughtred invented the first slide rule in 1622. The slide rule was an indispensable tool for scientists and engineers through the 1960s, but the development of the desk calculator (and subsequently pocket calculator) rendered slide rules largely obsolete beginning in the early 1970s.

The Hilbert curve is a Lindenmayer system invented by Hilbert (1891) whose limit is a plane-filling function which fills a square. Traversing the polyhedron vertices of an n-dimensional hypercube in Gray code order produces a generator for the n-dimensional Hilbert curve. The Hilbert curve can be simply encoded with initial string "L", string rewriting rules "L" -> "+RF-LFL-FR+", "R" -> "-LF+RFR+FL-", and angle 90 degrees (Peitgen and Saupe 1988, p. 278). The nth iteration of this Hilbert curve is implemented in the Wolfram Language as HilbertCurve[n]. A related curve is the Hilbert II curve, shown above (Peitgen and Saupe 1988, p. 284). It is also a Lindenmayer system and the curve can be encoded with initial string "X", string rewriting rules "X" -> "XFYFX+F+YFXFY-F-XFYFX", "Y" -> "YFXFY-F-XFYFX+F+YFXFY", ...
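The quoted rewriting rules can be expanded and walked directly. The sketch below applies the rules, then interprets "F" as a unit step and "+"/"-" as 90-degree turns with a tiny turtle; the order-3 curve should visit every cell of an 8 x 8 grid.

```python
# Expand the Hilbert-curve L-system quoted above and trace it.

RULES = {"L": "+RF-LFL-FR+", "R": "-LF+RFR+FL-"}

def expand(s, n):
    for _ in range(n):
        s = "".join(RULES.get(ch, ch) for ch in s)
    return s

def walk(s):
    x, y, dx, dy = 0, 0, 1, 0
    visited = {(0, 0)}
    for ch in s:
        if ch == "F":
            x, y = x + dx, y + dy
            visited.add((x, y))
        elif ch == "+":
            dx, dy = -dy, dx      # turn left 90 degrees
        elif ch == "-":
            dx, dy = dy, -dx      # turn right 90 degrees
    return visited

curve = expand("L", 3)
cells = walk(curve)
```

The order-n string contains 4^n - 1 "F" symbols (63 here), so the walk touches 4^n = 64 distinct cells spanning an 8 x 8 block.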

Given a set of n + 1 control points P_0, P_1, ..., P_n, the corresponding Bézier curve (or Bernstein-Bézier curve) is given by

C(t) = sum_(i=0)^n P_i B_(i,n)(t),

where B_(i,n)(t) is a Bernstein polynomial and t in [0, 1]. Bézier splines are implemented in the Wolfram Language as BezierCurve[pts]. A "rational" Bézier curve is defined by

C(t) = [sum_(i=0)^n B_(i,p)(t) w_i P_i]/[sum_(i=0)^n B_(i,p)(t) w_i],

where p is the order, B_(i,p) are the Bernstein polynomials, P_i are control points, and the weight w_i of P_i is the last ordinate of the homogeneous point. These curves are closed under perspective transformations, and can represent conic sections exactly. The Bézier curve always passes through the first and last control points and lies within the convex hull of the control points. The curve is tangent to the segments P_0 P_1 and P_(n-1) P_n at the endpoints. The "variation diminishing property" of these curves is that no line can have more intersections with a Bézier curve than with the curve obtained by joining consecutive points with straight line segments.
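The Bernstein sum can be evaluated directly, or by repeated linear interpolation (de Casteljau's algorithm); both give the same point, and the endpoint property is immediate. A small sketch with an arbitrary cubic control polygon:

```python
# Evaluate a Bezier curve two ways and check the endpoint property.
from math import comb

def bezier_bernstein(pts, t):
    n = len(pts) - 1
    return tuple(
        sum(comb(n, i) * t**i * (1 - t)**(n - i) * p[d]
            for i, p in enumerate(pts))
        for d in range(2)
    )

def de_casteljau(pts, t):
    pts = [tuple(p) for p in pts]
    while len(pts) > 1:           # repeated linear interpolation
        pts = [tuple((1 - t) * a[d] + t * b[d] for d in range(2))
               for a, b in zip(pts, pts[1:])]
    return pts[0]

ctrl = [(0.0, 0.0), (1.0, 2.0), (3.0, 3.0), (4.0, 0.0)]
```

At t = 0 and t = 1 de Casteljau reproduces the first and last control points exactly, matching the pass-through property stated above.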

Let alpha(G) denote the independence number of a graph G. Then the Shannon capacity Theta(G), sometimes also denoted c(G), of G is defined as

Theta(G) = lim_(k->infinity) [alpha(G^(box-times k))]^(1/k),

where box-times denotes the graph strong product (Shannon 1956, Alon and Lubetzky 2006). The Shannon capacity is an important information theoretical parameter because it represents the effective size of an alphabet in a communication model represented by a graph (Alon 1998). Theta(G) satisfies Theta(G) >= alpha(G). The Shannon capacity is in general very difficult to calculate (Brimkov et al. 2000). In fact, the Shannon capacity of the cycle graph C_5 was not determined as sqrt(5) until 1979 (Lovász 1979), and the Shannon capacity of C_7 is perhaps one of the most notorious open problems in extremal combinatorics (Bohman 2003). Lovász (1979) determined the Shannon capacity of Kneser graphs, showed that a vertex-transitive self-complementary graph on n vertices (which includes all Paley graphs) has capacity sqrt(n), and showed that the Petersen graph has capacity 4. All graphs whose Shannon capacity is known ...

In machine learning theory and artificial intelligence, a concept c over a domain X is a Boolean function c: X -> {0, 1}. A collection of concepts is called a concept class. In context-specific applications, concepts are usually thought to assign either a "positive" or "negative" outcome (corresponding to range values of 1 or 0, respectively) to each element of the domain X. In that way, concepts are the fundamental component of learning theory.

In order for a band-limited (i.e., one with a zero power spectrum for frequencies nu > B) baseband (nu > 0) signal to be reconstructed fully, it must be sampled at a rate nu >= 2B. A signal sampled at the critical rate nu = 2B is said to be Nyquist sampled, and nu = 2B is called the Nyquist frequency. No information is lost if a signal is sampled at the Nyquist frequency, and no additional information is gained by sampling faster than this rate.
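The flip side of the sampling condition is aliasing: sampled below its Nyquist rate, a sinusoid is indistinguishable from a slower one. A small numeric check (frequencies chosen arbitrarily so that 6 = 1 + 5):

```python
# A 6 Hz sine sampled at 5 Hz yields exactly the samples of a 1 Hz sine,
# since the two frequencies differ by the sampling rate.
import math

fs = 5.0                            # sampling rate, Hz (below 2 * 6 Hz)
t = [k / fs for k in range(20)]     # sample instants
hi = [math.sin(2 * math.pi * 6.0 * tk) for tk in t]
lo = [math.sin(2 * math.pi * 1.0 * tk) for tk in t]
```

The two sample sequences agree to rounding error, so no sampled-data processing can tell the 6 Hz signal from its 1 Hz alias.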

The mutual information between two discrete random variables X and Y is defined to be

I(X; Y) = sum_(x, y) P(x, y) log_2 [P(x, y)/(P(x) P(y))]

bits. Additional properties are

I(X; Y) = I(Y; X),
I(X; Y) >= 0,

and

I(X; Y) = H(X) + H(Y) - H(X, Y),

where H(X) is the entropy of the random variable X and H(X, Y) is the joint entropy of these variables.
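The entropy identity gives a direct way to compute mutual information from a joint distribution table. A sketch with two arbitrary 2 x 2 examples, a perfectly correlated pair and an independent pair:

```python
# Mutual information via I(X;Y) = H(X) + H(Y) - H(X,Y), in bits.
from math import log2

def entropy(ps):
    return -sum(p * log2(p) for p in ps if p > 0)

def mutual_information(joint):
    px = [sum(row) for row in joint]             # marginal of X
    py = [sum(col) for col in zip(*joint)]       # marginal of Y
    hxy = entropy([p for row in joint for p in row])
    return entropy(px) + entropy(py) - hxy

copy_channel = [[0.5, 0.0], [0.0, 0.5]]   # Y = X: one shared fair bit
independent  = [[0.25, 0.25], [0.25, 0.25]]
```

The copy channel yields exactly 1 bit of mutual information, and the independent table yields 0, matching the nonnegativity property above.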

A theorem from information theory that is a simple consequence of the weak law of large numbers. It states that if a set of values x_1, x_2, ..., x_n is drawn independently from a random variable X distributed according to P(x), then the joint probability satisfies

-(1/n) log_2 P(x_1, x_2, ..., x_n) -> H(X),

where H(X) is the entropy of the random variable X.
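For i.i.d. draws the quantity -(1/n) log_2 P(x_1, ..., x_n) depends only on the empirical symbol frequencies, and it equals H(X) exactly whenever those frequencies match the true distribution. A sketch for a biased coin (bias 0.25 chosen arbitrarily):

```python
# -(1/n) log2 of a sequence's probability versus the entropy H.
from math import log2

p = 0.25                              # P(heads) for a biased coin
H = -(p * log2(p) + (1 - p) * log2(1 - p))

def sample_entropy_rate(k, n):
    """-(1/n) log2 probability of any particular sequence with k heads."""
    return -(k * log2(p) + (n - k) * log2(1 - p)) / n

typical = sample_entropy_rate(25, 100)     # empirical frequency equals p
atypical = sample_entropy_rate(100, 100)   # all heads
```

The "typical" sequence's rate coincides with H, while the all-heads sequence gives -log2(0.25) = 2 bits per symbol, far from H; the theorem says the typical case dominates as n grows.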

In statistics, sampling is the selection and implementation of statistical observations in order to estimate properties of an underlying population. Sampling is a vital part of modern polling, market research, and manufacturing, and its proper use is vital in the functioning of modern economies. The portion of a population selected for analysis is known as a sample, and the number of members in the sample is called the sample size. The term "sampling" is also used in signal processing to refer to measurement of a signal at discrete times, usually with the intention of reconstructing the original signal. For infinite-precision sampling of a band-limited signal at the Nyquist frequency, the signal-to-noise ratio after N samples can be expressed in terms of the normalized correlation coefficient of the signal; an identical result is obtained for oversampling, while for undersampling the signal-to-noise ratio decreases (Thompson et al. 1986).

A game played with two heaps of counters in which a player may take any number from either heap or the same number from both. The player taking the last counter wins. The nth safe combination is (x, x + n), where x = floor(n phi), with phi the golden ratio and floor() the floor function. It is also true that x + n = floor(n phi^2). The first few safe combinations are (1, 2), (3, 5), (4, 7), (6, 10), ... (OEIS A000201 and A001950), which are the pairs of elements from the complementary Beatty sequences for phi and phi^2 (Wells 1986, p. 40).
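The safe pairs can be generated straight from the golden-ratio formula, and the Beatty-sequence identity x + n = floor(n phi^2) can be checked alongside:

```python
# Wythoff's safe combinations from the golden ratio.
import math

phi = (1 + math.sqrt(5)) / 2

def safe_pair(n):
    x = math.floor(n * phi)
    return (x, x + n)

pairs = [safe_pair(n) for n in range(1, 5)]
```

This reproduces the (1, 2), (3, 5), (4, 7), (6, 10) pairs quoted above; floating point is safe here because n*phi is never close enough to an integer for small n to misround the floor.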

An equilibrium point in game theory is a set of strategies such that the ith payoff function is at least as large as it would be under any other choice of the ith strategy, i.e., ...

A Nash equilibrium of a strategic game is a profile of strategies (s_1^*, ..., s_n^*), where s_i^* in S_i (S_i is the strategy set of player i), such that for each player i,

u_i(s_i^*, s_(-i)^*) >= u_i(s_i, s_(-i)^*) for all s_i in S_i,

where s_(-i) denotes the strategies of all players other than i and u_i is player i's payoff function. Another way to state the Nash equilibrium condition is that s_i^* solves

max_(s_i in S_i) u_i(s_i, s_(-i)^*)

for each i. In words, in a Nash equilibrium, no player has an incentive to deviate from the strategy chosen, since no player can choose a better strategy given the choices of the other players. The Season 1 episode "Dirty Bomb" (2005) of the television crime drama NUMB3RS mentions Nash equilibrium.

The Monty Hall problem is named for its similarity to the Let's Make a Deal television game show hosted by Monty Hall. The problem is stated as follows. Assume that a room is equipped with three doors. Behind two are goats, and behind the third is a shiny new car. You are asked to pick a door, and will win whatever is behind it. Let's say you pick door 1. Before the door is opened, however, someone who knows what's behind the doors (Monty Hall) opens one of the other two doors, revealing a goat, and asks you if you wish to change your selection to the third door (i.e., the door which neither you picked nor he opened). The Monty Hall problem is deciding whether to switch. The correct answer is that you do want to switch. If you do not switch, you have the expected 1/3 chance of winning the car, since no matter whether you initially picked the correct door, Monty will show you a door with a goat. But after Monty has eliminated one of the doors for you, you obviously do not improve ...
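The 1/3-versus-2/3 claim can be verified by exhaustive enumeration rather than simulation: for every car position and every initial pick, the switcher wins exactly when the initial pick was wrong.

```python
# Exhaustive enumeration of the Monty Hall problem.
from fractions import Fraction

stick_wins = switch_wins = total = 0
for car in range(3):
    for pick in range(3):
        # Monty opens a goat door that is neither the pick nor the car;
        # the switcher then takes the remaining closed door.
        total += 1
        if pick == car:
            stick_wins += 1      # switching away from the car loses
        else:
            switch_wins += 1     # the remaining closed door hides the car

p_stick = Fraction(stick_wins, total)
p_switch = Fraction(switch_wins, total)
```

All nine (car, pick) cases are equally likely, so the exact probabilities fall out as fractions: 1/3 for sticking, 2/3 for switching.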

Let Jones and Smith be the only two contestants in an election that will end in a deadlock when all votes for Jones (J) and Smith (S) are counted. What is the expectation value of |J - S| after m of a total of 2n votes are counted? The solution is ...

The fundamental theorem of game theory which states that every finite, zero-sum, two-person game has optimal mixed strategies. It was proved by John von Neumann in 1928. Formally, let x and y be mixed strategies for players A and B, and let A be the payoff matrix. Then

max_x min_y x^T A y = min_y max_x x^T A y = v,

where v is called the value of the game and x and y are called the solutions. It also turns out that if there is more than one optimal mixed strategy, there are infinitely many. In the Season 4 opening episode "Trust Metric" (2007) of the television crime drama NUMB3RS, math genius Charlie Eppes mentions that he used the minimax theorem in an attempt to derive an equation describing friendship.
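For a 2 x 2 zero-sum game with no saddle point, the optimal mixed strategy and the value have a standard closed form, which makes a compact worked example of the theorem. The matching-pennies payoff matrix below is an arbitrary illustration.

```python
# Closed-form solution of a 2x2 zero-sum game without a saddle point.
from fractions import Fraction

def solve_2x2(a, b, c, d):
    """Payoff matrix [[a, b], [c, d]] to the row player (no saddle
    point assumed). Returns (p, value): p = probability of row 1."""
    den = Fraction(a - b - c + d)
    p = Fraction(d - c) / den
    value = Fraction(a * d - b * c) / den
    return p, value

p, v = solve_2x2(1, -1, -1, 1)   # matching pennies
```

For matching pennies the optimal mixture is p = 1/2 with value 0, and the mixture equalizes the payoff against either pure column strategy, which is exactly the guarantee the minimax theorem promises.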

A two-player game, also called crosscram, in which player H has horizontal dominoes and player V has vertical dominoes. The two players alternately place a domino on a board until the other cannot move, in which case the player having made the last move wins (Gardner 1974, Lachmann et al. 2000). Depending on the dimensions of the board, the winner will be H, V, 1 (the player making the first move), or 2 (the player making the second move). Berlekamp (1988) solved the general problem for the 2 x n board for odd n. Lachmann et al. (2000) have solved the game for boards of widths 2, 3, 4, 5, 7, 9, and 11, and tabulated the winners for particular board sizes.

The theory of analyzing a decision between a collection of alternatives made by a collection of voters with separate opinions. Any choice for the entire group should reflect the desires of the individual voters to the extent possible. Fair choice procedures usually satisfy anonymity (invariance under permutation of voters), duality (each alternative receives equal weight for a single vote), and monotonicity (a change favorable for a given alternative does not hurt it). Simple majority vote is anonymous, dual, and monotone. May's theorem states a stronger result.

Conway games were introduced by J. H. Conway in 1976 to provide a formal structure for analyzing games satisfying certain requirements: 1. There are two players, Left and Right (L and R), who move alternately. 2. The first player unable to move loses. 3. Both players have complete information about the state of the game. 4. There is no element of chance. For example, nim is a Conway game, but chess is not (due to the possibility of draws and stalemate). Note that Conway's "game of life" is (somewhat confusingly) not a Conway game. A Conway game is either: 1. The zero game, denoted 0 or { | }, or 2. An object (an ordered pair) of the form {G^L | G^R}, where G^L and G^R are sets of Conway games. The elements of G^L and G^R are called the Left and Right options respectively, and are the moves available to Left and Right. For example, in a game whose set of Right options is empty, if it is Left's move, he may move to any of his Left options, whereas if it is Right's move, he has no options and loses immediately. A game in which both players have ...

A problem also known as the points problem or unfinished game. Consider a tournament involving k players playing the same game repetitively. Each game has a single winner, and denote the number of games won by player i at some juncture by w_i. The games are independent, and the probability of the ith player winning a game is p_i. The tournament is specified to continue until one player has won n games. If the tournament is discontinued before any player has won n games, so that w_i < n for i = 1, ..., k, how should the prize money be shared in order to distribute it proportionally to the players' chances of winning? For player i, call the number of games left to win, n - w_i, the "quota." For two players, let p and q = 1 - p be the probabilities of winning a single game, and a and b be the number of games needed by each player to win the tournament. Then the stakes should be divided in the ratio P : (1 - P), where

P = sum_(k=a)^(a+b-1) C(a+b-1, k) p^k q^(a+b-1-k)

(Kraitchik 1942). If players have equal probability of winning ("cell probability"), ...

A predictor-corrector method for solution of ordinary differential equations. The third-order equations for predictor and corrector are

y_(n+1) = y_n + (h/12)(23 y'_n - 16 y'_(n-1) + 5 y'_(n-2)) + O(h^4),
y_(n+1) = y_n + (h/12)(5 y'_(n+1) + 8 y'_n - y'_(n-1)) + O(h^4).

Abramowitz and Stegun (1972) also give the fifth-order equations and formulas involving higher derivatives.
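A minimal sketch of one predictor-corrector (PECE) sweep of this type, applied to the arbitrary test problem y' = y, y(0) = 1, integrated to x = 1; the two extra starting values the multistep formulas require are taken from the exact solution here, an assumption made purely for illustration.

```python
# Third-order Adams-Bashforth predictor + one Adams-Moulton correction.
import math

h = 0.1
f = lambda x, y: y                      # right-hand side of y' = y
xs = [0.0, 0.1, 0.2]
ys = [math.exp(x) for x in xs]          # exact starting history

while xs[-1] < 1.0 - 1e-12:
    fm2, fm1, fm0 = (f(x, y) for x, y in zip(xs[-3:], ys[-3:]))
    x_new = xs[-1] + h
    # predictor (explicit, uses only known history):
    y_pred = ys[-1] + h / 12 * (23 * fm0 - 16 * fm1 + 5 * fm2)
    # corrector (implicit formula evaluated at the predicted point):
    y_new = ys[-1] + h / 12 * (5 * f(x_new, y_pred) + 8 * fm0 - fm1)
    xs.append(x_new)
    ys.append(y_new)
```

With h = 0.1 the computed y(1) agrees with e to roughly the O(h^3) global accuracy the third-order pair promises.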

The term isocline derives from the Greek words for "same slope." For a first-order ordinary differential equation y' = f(x, y), a curve with equation f(x, y) = c for some constant c is known as an isocline. In other words, all the solutions of the ordinary differential equation intersecting that curve have the same slope c. Isoclines can be used as a graphical method of solving an ordinary differential equation. The term is also used to refer to points on maps of the world having identical magnetic inclinations.

A method of determining coefficients alpha_k in a power series solution

y(x) = y_0(x) + sum_(k=1)^n alpha_k y_k(x)

of the ordinary differential equation L[y(x)] = 0 so that L[y(x)], the result of applying the ordinary differential operator L to y, is orthogonal to every y_k(x) for k = 1, ..., n (Itô 1980). Galerkin methods are equally ubiquitous in the solution of partial differential equations, and in fact form the basis for the finite element method.

A method of determining coefficients alpha_k in an expansion

y(x) = y_0(x) + sum_(k=1)^n alpha_k y_k(x)

so as to nullify the values of an ordinary differential equation L[y(x)] = 0 at prescribed points.

Adams' method is a numerical method for solving linear first-order ordinary differential equations of the form

dy/dx = f(x, y).

Let

h = x_(n+1) - x_n

be the step interval, and consider the Maclaurin series of y about x_n. Here, the derivatives of y are given by the backward differences

nabla f_n = f_n - f_(n-1),
nabla^2 f_n = nabla f_n - nabla f_(n-1),

etc. Note that nabla^0 f_n is just the value of f_n. For first-order interpolation, the method proceeds by iterating the expression

y_(n+1) = y_n + h f_n,

where f_n = f(x_n, y_n). The method can then be extended to arbitrary order using the finite difference integration formula from Beyer (1987) to obtain

y_(n+1) = y_n + h (1 + (1/2) nabla + (5/12) nabla^2 + (3/8) nabla^3 + ...) f_n.

Note that von Kármán and Biot (1940) confusingly use the symbol normally used for forward differences (Delta) to denote backward differences (nabla).

Using a Chebyshev polynomial of the first kind T_n(x), define

c_j = (2/N) sum_(k=1)^N f(x_k) T_j(x_k),

where the x_k = cos[pi (k - 1/2)/N] are the N zeros of T_N(x). Then

f(x) approx sum_(j=0)^(N-1) c_j T_j(x) - (1/2) c_0.

It is exact for the N zeros of T_N(x). This type of approximation is important because, when truncated, the error is spread smoothly over [-1, 1]. The Chebyshev approximation formula is very close to the minimax polynomial.
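The formula translates directly into code: sample f at the Chebyshev zeros, form the coefficients, and evaluate the truncated expansion with the T_j recurrence. The choice f = exp and N = 10 below is an arbitrary test case.

```python
# Chebyshev approximation: coefficients from values at the zeros of
# T_N, then the truncated expansion f(x) ~ sum c_j T_j(x) - c_0/2.
import math

def cheb_coeffs(f, N):
    return [
        2.0 / N * sum(
            f(math.cos(math.pi * (k + 0.5) / N)) *
            math.cos(math.pi * j * (k + 0.5) / N)
            for k in range(N))
        for j in range(N)
    ]

def cheb_eval(c, x):
    t_prev, t_cur = 1.0, x            # T_0(x), T_1(x)
    total = c[0] * t_prev - 0.5 * c[0]
    for cj in c[1:]:
        total += cj * t_cur
        t_prev, t_cur = t_cur, 2 * x * t_cur - t_prev  # T recurrence
    return total

c = cheb_coeffs(math.exp, 10)
```

Because the Chebyshev coefficients of exp decay extremely fast, ten terms already approximate it to well below 1e-6 across [-1, 1], and the approximation reproduces f at the sampling zeros to rounding error.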

Let rho denote a reciprocal difference. Then Thiele's interpolation formula is the continued fraction

f(x) = f(x_1) + (x - x_1)/(rho(x_1, x_2) + (x - x_2)/(rho_2(x_1, x_2, x_3) - f(x_1) + (x - x_3)/(rho_3(x_1, x_2, x_3, x_4) - rho(x_1, x_2) + ...))).

A piecewise polynomial function that can have a locally very simple form, yet at the same time be globally flexible and smooth. Splines are very useful for modeling arbitrary functions, and are used extensively in computer graphics.Cubic splines are implemented in the Wolfram Language as BSplineCurve[pts, SplineDegree -> 3] (red), Bézier curves as BezierCurve[pts] (blue), and B-splines as BSplineCurve[pts].

The Lagrange interpolating polynomial is the polynomial P(x) of degree <= n - 1 that passes through the n points (x_1, y_1), (x_2, y_2), ..., (x_n, y_n), and is given by

P(x) = sum_(j=1)^n P_j(x),

where

P_j(x) = y_j product_(k=1, k!=j)^n (x - x_k)/(x_j - x_k).

The formula was first published by Waring (1779), rediscovered by Euler in 1783, and published by Lagrange in 1795 (Jeffreys and Jeffreys 1988). Lagrange interpolating polynomials are implemented in the Wolfram Language as InterpolatingPolynomial[data, var]. They are used, for example, in the construction of Newton-Cotes formulas. When constructing interpolating polynomials, there is a tradeoff between having a better fit and having a smooth well-behaved fitting function. The more data points that are used in the interpolation, the higher the degree of the resulting polynomial, and therefore the greater oscillation it will exhibit between the data points. Therefore, a high-degree interpolation may be a poor predictor of the function between points, although the accuracy ...
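The product formula is short enough to code directly. Since the interpolant through n points is the unique polynomial of degree <= n - 1 through them, sampling an arbitrary cubic at four nodes must reproduce it exactly (up to rounding):

```python
# Direct evaluation of the Lagrange interpolating polynomial.

def lagrange(points, x):
    total = 0.0
    for j, (xj, yj) in enumerate(points):
        term = yj
        for k, (xk, _) in enumerate(points):
            if k != j:
                term *= (x - xk) / (xj - xk)
        total += term
    return total

f = lambda x: x**3 - 2 * x + 1         # arbitrary cubic test function
pts = [(x, f(x)) for x in (-1.0, 0.0, 1.0, 2.0)]
```

Evaluating between the nodes recovers the cubic's values, illustrating exact reproduction; the oscillation warning above applies when the degree grows with the number of points.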

A bicubic spline is a special case of bicubic interpolation which uses an interpolation function of the form

y(t, u) = sum_(i=0)^3 sum_(j=0)^3 c_(ij) t^i u^j,

where c_(ij) are constants and t and u are parameters ranging from 0 to 1. For a bicubic spline, however, the partial derivatives at the grid points are determined globally by one-dimensional splines.

The computation of points or values between ones that are known or tabulated using the surrounding points or values. In particular, given a univariate function f = f(x), interpolation is the process of using known values f(x_0), f(x_1), ..., f(x_n) to find values for f(x) at points x != x_i, i = 0, 1, ..., n. In general, this technique involves the construction of a function L(x) called the interpolant which agrees with f at the points x = x_i and which is then used to compute the desired values. Unsurprisingly, one can talk about interpolation methods for multivariate functions as well, though these tend to be substantially more involved than their univariate counterparts.

In univariate interpolation, an interpolant is a function L = L(x) which agrees with a particular function f at a set of known points x_0, x_1, ..., x_n and which is used to compute values for f at points x != x_i, i = 0, 1, ..., n. Modulo a change of notation, the above definition translates verbatim to multivariate interpolation models as well. Generally speaking, the properties required of the interpolant are the most fundamental distinctions between various interpolation models. For example, the main difference between the linear and spline interpolation models is that the interpolant of the former is required merely to be piecewise linear, whereas spline interpolants are assumed to be piecewise polynomial and globally smooth.

A nonuniform rational B-spline curve defined by

C(t) = [sum_(i=0)^n N_(i,p)(t) w_i P_i]/[sum_(i=0)^n N_(i,p)(t) w_i],

where p is the order, N_(i,p) are the B-spline basis functions, P_i are control points, and the weight w_i of P_i is the last ordinate of the homogeneous point. These curves are closed under perspective transformations and can represent conic sections exactly.

One of the "knots" t_0, ..., t_m of a B-spline with control points P_0, ..., P_n and knot vector

T = {t_0, t_1, ..., t_m},

where m = n + p + 1 with p the degree of the B-spline.

Let

pi_n(x) = product_(k=0)^n (x - x_k),

then

f(x) = f_0 + sum_(k=1)^n f[x_0, ..., x_k] pi_(k-1)(x) + R_n,

where f[x_0, ..., x_k] is a divided difference, and the remainder is

R_n = pi_n(x) f^((n+1))(xi)/(n+1)!

for x_0 < xi < x_n.

Let l(x) be an nth degree polynomial with zeros at x_1, ..., x_n. Then the fundamental Hermite interpolating polynomials of the first and second kinds are defined by

h_nu(x) = [1 - 2 l'_nu(x_nu)(x - x_nu)] l_nu^2(x)

and

hbar_nu(x) = (x - x_nu) l_nu^2(x)

for nu = 1, 2, ..., n, where the fundamental polynomials of Lagrange interpolation are defined by

l_nu(x) = l(x)/[l'(x_nu)(x - x_nu)].

They are denoted h_nu(x) and hbar_nu(x), respectively, by Szegö (1975, p. 330). These polynomials have the properties

h_nu(x_mu) = delta_(nu mu),  h'_nu(x_mu) = 0,
hbar_nu(x_mu) = 0,  hbar'_nu(x_mu) = delta_(nu mu)

for mu, nu = 1, 2, ..., n. Now let f_1, ..., f_n and f'_1, ..., f'_n be values. Then the expansion

W(x) = sum_(nu=1)^n f_nu h_nu(x) + sum_(nu=1)^n f'_nu hbar_nu(x)

gives the unique Hermite interpolating fundamental polynomial for which

W(x_nu) = f_nu,  W'(x_nu) = f'_nu.

If f'_nu = 0, these are called Hermite's interpolating polynomials. The fundamental polynomials satisfy further identities, and if d alpha(x) is an arbitrary distribution on the interval, integrals of h_nu and hbar_nu can be expressed in terms of the Christoffel numbers lambda_nu (Szegö 1975).

where t(x) is a trigonometric polynomial of degree n such that t(x_k) = f_k for the prescribed points x_k, k = 1, ..., 2n + 1, and ...

Generalizes the secant method of root finding by using quadratic 3-point interpolation. From the three preceding iterates x_0, x_1, and x_2 with function values y_0, y_1, and y_2, form

q = (x_2 - x_1)/(x_1 - x_0).

Then define

A = q y_2 - q(1 + q) y_1 + q^2 y_0,
B = (2q + 1) y_2 - (1 + q)^2 y_1 + q^2 y_0,
C = (1 + q) y_2,

and the next iteration is

x_3 = x_2 - (x_2 - x_1) (2C)/(B ± sqrt(B^2 - 4AC)),

where the sign is chosen to make the magnitude of the denominator as large as possible. This method can also be used to find complex zeros of analytic functions.
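The update formula can be coded directly; taking the square root with cmath lets the iterates (and hence the roots found) go complex, as the last sentence notes. The cubic x^3 - x - 1 and the starting triple below are arbitrary test choices.

```python
# Muller's method using the A, B, C update quoted above.
import cmath

def muller(f, x0, x1, x2, tol=1e-12, itmax=50):
    for _ in range(itmax):
        q = (x2 - x1) / (x1 - x0)
        A = q * f(x2) - q * (1 + q) * f(x1) + q * q * f(x0)
        B = (2 * q + 1) * f(x2) - (1 + q) ** 2 * f(x1) + q * q * f(x0)
        C = (1 + q) * f(x2)
        disc = cmath.sqrt(B * B - 4 * A * C)
        # choose the sign giving the larger-magnitude denominator
        den = B + disc if abs(B + disc) > abs(B - disc) else B - disc
        x3 = x2 - (x2 - x1) * 2 * C / den
        if abs(x3 - x2) < tol:
            return x3
        x0, x1, x2 = x1, x2, x3
    return x2

root = muller(lambda x: x**3 - x - 1, 1.0, 1.5, 2.0)
```

Starting from the real triple (1, 1.5, 2), the iterates stay essentially real and converge to the real root of x^3 - x - 1 near 1.3247.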

A cubic spline is a spline constructed of piecewise third-order polynomials which pass through a set of control points. The second derivative of each polynomial is commonly set to zero at the endpoints, since this provides a boundary condition that completes the system of equations. This produces a so-called "natural" cubic spline and leads to a simple tridiagonal system which can be solved easily to give the coefficients of the polynomials. However, this choice is not the only one possible, and other boundary conditions can be used instead. Cubic splines are implemented in the Wolfram Language as BSplineCurve[pts, SplineDegree -> 3]. Consider a 1-dimensional spline for a set of n + 1 points (y_0, y_1, ..., y_n). Following Bartels et al. (1998, pp. 10-13), let the ith piece of the spline be represented by

Y_i(t) = a_i + b_i t + c_i t^2 + d_i t^3,

where t is a parameter in [0, 1] and i = 0, ..., n - 1. Then

Y_i(0) = y_i = a_i,
Y_i(1) = y_(i+1) = a_i + b_i + c_i + d_i.

Taking the derivative of Y_i(t) in each interval, with endpoint slopes D_i, then gives

Y'_i(0) = D_i = b_i,
Y'_i(1) = D_(i+1) = b_i + 2 c_i + 3 d_i.

Solving these equations for a_i, b_i, c_i, and d_i then gives

a_i = y_i,
b_i = D_i,
c_i = 3(y_(i+1) - y_i) - 2 D_i - D_(i+1),
d_i = 2(y_i - y_(i+1)) + D_i + D_(i+1).

Now ...

An algorithm similar to Neville's algorithm for constructing the Lagrange interpolating polynomial. Let f(x | x_0, x_1, ..., x_k) denote the unique polynomial of kth polynomial order coinciding with f(x) at x_0, ..., x_k. Then

f(x | x_0, x_1) = [(x - x_0) f(x | x_1) - (x - x_1) f(x | x_0)]/(x_1 - x_0),

f(x | x_0, x_1, x_2) = [(x - x_0) f(x | x_1, x_2) - (x - x_2) f(x | x_0, x_1)]/(x_2 - x_0),

and in general

f(x | x_0, ..., x_k) = [(x - x_0) f(x | x_1, ..., x_k) - (x - x_k) f(x | x_0, ..., x_(k-1))]/(x_k - x_0).
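The recurrence builds a triangular tableau, combining interpolants on overlapping node subsets one level at a time. A minimal sketch, tested on samples of an arbitrary cubic so the final entry must equal the cubic's value:

```python
# Neville-style tableau for evaluating the interpolating polynomial.

def neville(points, x):
    xs = [p[0] for p in points]
    p = [p[1] for p in points]          # level-0 column: the data values
    n = len(points)
    for level in range(1, n):           # combine adjacent interpolants
        p = [((x - xs[i]) * p[i + 1] - (x - xs[i + level]) * p[i])
             / (xs[i + level] - xs[i])
             for i in range(n - level)]
    return p[0]

f = lambda x: x**3 - 2 * x + 1          # arbitrary cubic test function
pts = [(-1.0, 2.0), (0.0, 1.0), (1.0, 0.0), (2.0, 5.0)]
```

Unlike the explicit Lagrange product formula, this evaluates the interpolant at a single x with O(n^2) arithmetic and no polynomial manipulation.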

If f is a continuous real-valued function on [a, b] and if any epsilon > 0 is given, then there exists a polynomial P on [a, b] such that

|f(x) - P(x)| < epsilon

for all x in [a, b]. In words, any continuous function on a closed and bounded interval can be uniformly approximated on that interval by polynomials to any degree of accuracy.

Let K be compact, let f be analytic on a neighborhood of K, and let P contain at least one point from each connected component of the complement of K. Then for any epsilon > 0, there is a rational function R(z) with poles in P such that

max_(z in K) |R(z) - f(z)| < epsilon

(Krantz 1999, p. 143). A polynomial version holds when the complement of K is connected. Let f be an analytic function which is regular in the interior of a Jordan curve C and continuous in the closed domain bounded by C. Then f can be approximated with arbitrary accuracy by polynomials (Szegö 1975, p. 5; Krantz 1999, p. 144).

Jackson's theorem is a statement about the error of the best uniform approximation to a real function on [-1, 1] by real polynomials of degree at most n. Let f(x) be of bounded variation in [-1, 1] and let M and V denote the least upper bound of |f(x)| and the total variation of f(x) in [-1, 1], respectively. Then the coefficients of the Fourier-Legendre series of the associated function, where P_n(x) is a Legendre polynomial, satisfy inequalities bounded in terms of M and V. Moreover, the Fourier-Legendre series converges uniformly and absolutely in [-1, 1]. Bernstein (1913) strengthened Jackson's theorem. A specific application of Jackson's theorem shows that the error of the best degree-n polynomial approximation to f(x) = |x| decays like 1/n.

Let be a Padé approximant. Then(1)(2)(3)(4)where(5)and is the C-determinant.

Guilloché patterns are spirograph-like curves that frame a curve within an inner and outer envelope curve. They are used on banknotes, securities, and passports worldwide for added security against counterfeiting. For currency, the precise techniques used by the governments of Russia, the United States, Brazil, the European Union, Madagascar, Egypt, and all other countries are likely quite different. The figures above show the same guilloché pattern plotted in polar and Cartesian coordinates generated by a series of nested additions and multiplications of sinusoids of various periods.Guilloché machines (alternately called geometric lathes, rose machines, engine-turners, and cycloidal engines) were first used for a watch casing dated 1624, and consist of myriad gears and settings that can produce many different patterns. Many goldsmiths, including Fabergé, employed guilloché machines.The..

Napier's bones, also called Napier's rods, are numbered rods which can be used to perform multiplication of any number by a number 2-9. By placing "bones" corresponding to the multiplier on the left side and the bones corresponding to the digits of the multiplicand next to it to the right, the product can be read off simply by adding pairs of numbers (with appropriate carries as needed) in the row determined by the multiplier. This process was published by Napier in 1617 in a book titled Rabdologia, so the process is also called rabdology.There are ten bones corresponding to the digits 0-9, and a special eleventh bone that is used to represent the multiplier. The multiplier bone is simply a list of the digits 1-9 arranged vertically downward. The remainder of the bones each have a digit written in the top square, with the multiplication table for that digit written downward, with the digits split by a diagonal line going from the lower left..

The thin plate spline is the two-dimensional analog of the cubic spline in one dimension. It is the fundamental solution to the biharmonic equation, and has the formGiven a set of data points, a weighted combination of thin plate splines centered about each data point gives the interpolation function that passes through the points exactly while minimizing the so-called "bending energy." Bending energy is defined here as the integral over of the squares of the second derivatives,Regularization may be used to relax the requirement that the interpolant pass through the data points exactly.The name "thin plate spline" refers to a physical analogy involving the bending of a thin sheet of metal. In the physical setting, the deflection is in the direction, orthogonal to the plane. In order to apply this idea to the problem of coordinate transformation, one interprets the lifting of the plate as a displacement of the or coordinates..

Let be a function and let , and define the cardinal series of with respect to the interval as the formal serieswhere is the sinc function. If this series converges, it is known as the cardinal function (or Whittaker cardinal function) of , denoted (McNamee et al. 1971).

A nonuniform rational B-spline surface of degree is defined bywhere and are the B-spline basis functions, are control points, and the weight of is the last ordinate of the homogeneous point .NURBS surfaces are implemented in the Wolfram Language as BSplineSurface[array].

Given a sequence , an -moving average is a new sequence defined from the by taking the arithmetic mean of subsequences of terms,(1)So the sequences giving -moving averages are(2)(3)and so on. The plot above shows the 2- (red), 4- (yellow), 6- (green), and 8- (blue) moving averages for a set of 100 data points.Moving averages are implemented in the Wolfram Language as MovingAverage[data, n].
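A minimal sketch of the n-moving average defined above, as plain arithmetic means of successive length-n windows (the function name is illustrative):

```python
def moving_average(data, n):
    """Return the n-moving averages: the mean of each length-n window,
    giving len(data) - n + 1 values."""
    return [sum(data[i:i + n]) / n for i in range(len(data) - n + 1)]

avgs = moving_average([1, 2, 3, 4], 2)  # [1.5, 2.5, 3.5]
```

This is equivalent to the Wolfram Language's MovingAverage[data, n] mentioned in the article.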

Gregory's formula is a formula that allows a definite integral of a function to be expressed by its sum and differences, or its sum by its integral and difference (Jordan 1965, p. 284). It is given by the equationdiscovered by Gregory in 1670 and reported to be the earliest formula in numerical integration (Jordan 1965, Roman 1984).

The curlicue fractal is a figure obtained by the following procedure. Let be an irrational number. Begin with a line segment of unit length, which makes an angle to the horizontal. Then define iteratively by with . To the end of the previous line segment, draw a line segment of unit length which makes an angle to the horizontal (Pickover 1995ab). The result is a fractal, and the above figures correspond to the curlicue fractals with points for the golden ratio , , , , the Euler-Mascheroni constant , , and the Feigenbaum constant .The temperature of these curves is given in the following table.

constant temperature
golden ratio 46
 51
 58
 58
Euler-Mascheroni constant 63
 90
Feigenbaum constant 92

A finite set of contraction maps for , 2, ..., , each with a contractivity factor , which map a compact metric space onto itself. It is the basis for fractal image compression techniques.

A puzzle involving disentangling a set of rings from a looped double rod, originally used by French peasants to lock chests (Steinhaus 1999). The word "baguenaudier" means "time-waster" in French, and the puzzle is also called the Chinese rings or Devil's needle puzzle. ("Bague" also means "ring," but this appears to be an etymological coincidence. Interestingly, the bladder-senna tree is also known as "baguenaudier" in French.) Culin (1965) attributes the puzzle to Chinese general Hung Ming (A.D. 181-234), who gave it to his wife as a present to occupy her while he was away at the wars.The solution of the baguenaudier is intimately related to the theory of Gray codes.The minimum number of moves needed for rings is(1)(2)where is the ceiling function, giving 1, 2, 5, 10, 21, 42, 85, 170, 341, 682, ... (OEIS A000975). The generating function for these numbers is(3)They are also given by..

In accounting, the principal is the original amount borrowed or lent on which interest is then paid or given.The word "principal" is also used in many areas of mathematics to denote one particular object chosen from many possible ones. For example, the principal value of a multivalued function is the canonical value chosen to associate with that function, for convenience or by convention, e.g., 1 for (even though ).

The Remez algorithm (Remez 1934), also called the Remez exchange algorithm, is an application of the Chebyshev alternation theorem that constructs the polynomial of best approximation to certain functions under a number of conditions. The Remez algorithm in effect goes a step beyond the minimax approximation algorithm to give a slightly finer solution to an approximation problem.Parks and McClellan (1972) observed that a filter of a given length with minimal ripple would have a response with the same relationship to the ideal filter that a polynomial of degree of best approximation has to a certain function, and so the Remez algorithm could be used to generate the coefficients.In this application, the algorithm is an iterative procedure consisting of two steps. One step is the determination of candidate filter coefficients from candidate "alternation frequencies," which involves solving a set of linear equations. The other..

Given a set of nonnegative integers, the number partitioning problem requires the division of into two subsets such that the sums of the numbers in each subset are as close as possible. This problem is known to be NP-complete, but is perhaps "the easiest hard problem" (Hayes 2002; Borwein and Bailey 2003, pp. 38-39).

A branch of mathematics which encompasses many diverse areas of minimization and optimization. Optimization theory is the more modern term for operations research. Optimization theory includes the calculus of variations, control theory, convex optimization theory, decision theory, game theory, linear programming, Markov chains, network analysis, queuing systems, etc.

A generalization of simple majority voting in which a list of quotas specifies, according to the number of votes, how many votes an alternative needs to win (Taylor 1995). The quota system declares a tie unless for some , there are exactly tie votes in the profile and one of the alternatives has at least votes, in which case the alternative is the choice.Let be the number of quota systems for voters and the number of quota systems for which , so(1)where is the floor function. This produces the sequence of central binomial coefficients 1, 2, 3, 6, 10, 20, 35, 70, 126, ... (OEIS A001405). It may be defined recursively by and(2)where is a Catalan number (Young et al. 1995). The function satisfies(3)for (Young et al. 1995). satisfies the quota rule.

The solution to a game in game theory. When a game saddle point is present, is the value for pure strategies.

A problem in game theory first discussed by A. Tucker. Suppose each of two prisoners and , who are not allowed to communicate with each other, is offered to be set free if he implicates the other. If neither implicates the other, both will receive the usual sentence. However, if the prisoners implicate each other, then both are presumed guilty and granted harsh sentences.A dilemma arises in deciding the best course of action in the absence of knowledge of the other prisoner's decision. Each prisoner's best strategy would appear to be to turn the other in (since if makes the worst-case assumption that will turn him in, then will walk free and will be stuck in jail if he remains silent). However, if the prisoners turn each other in, they obtain the worst possible outcome for both.Mosteller (1987) describes a different problem he terms "the prisoner's dilemma." In this problem, three prisoners , , and with apparently equally good records..

A Poisson process is a process satisfying the following properties: 1. The numbers of changes in nonoverlapping intervals are independent for all intervals. 2. The probability of exactly one change in a sufficiently small interval is , where is the probability of one change and is the number of trials. 3. The probability of two or more changes in a sufficiently small interval is essentially 0. In the limit of the number of trials becoming large, the resulting distribution is called a Poisson distribution.

A root-finding algorithm also known as the tangent hyperbolas method or Halley's rational formula. As in Halley's irrational formula, take the second-order Taylor series(1)A root of satisfies , so(2)Now write(3)giving(4)Using the result from Newton's method,(5)gives(6)so the iteration function is(7)This satisfies where is a root, so it is third order for simple zeros. Curiously, the third derivative(8)is the Schwarzian derivative. Halley's method may also be derived by applying Newton's method to . It may also be derived by using an osculating curve of the form(9)Taking derivatives,(10)(11)(12)which has solutions(13)(14)(15)so at a root, and(16)which is Halley's method.
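A minimal sketch of the iteration described above, in the standard closed form x ← x − 2 f f′ / (2 f′² − f f″), with the derivatives supplied by the caller (the function names are illustrative):

```python
def halley(f, df, d2f, x, tol=1e-12, maxiter=50):
    """Halley's method: x <- x - 2 f f' / (2 f'^2 - f f'').
    Third-order convergence for simple zeros."""
    for _ in range(maxiter):
        fx = f(x)
        if abs(fx) < tol:
            break
        x = x - 2 * fx * df(x) / (2 * df(x) ** 2 - fx * d2f(x))
    return x

# Cube root of 2 via f(x) = x^3 - 2, starting from x = 1.
root = halley(lambda x: x**3 - 2, lambda x: 3 * x**2, lambda x: 6 * x, 1.0)
```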

An algorithm for finding roots which retains that prior estimate for which the function value has opposite sign from the function value at the current best estimate of the root. In this way, the method of false position keeps the root bracketed (Press et al. 1992).Using the two-point form of the linewith , using , and solving for therefore gives the iteration
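A minimal sketch of the method of false position under the bracketing rule described above: the secant through the bracket endpoints gives the next estimate, and the endpoint whose function value shares its sign is discarded (names are illustrative):

```python
def false_position(f, a, b, tol=1e-12, maxiter=100):
    """Method of false position: keep the prior endpoint whose function value
    has opposite sign to that at the current estimate, so the root stays
    bracketed."""
    fa, fb = f(a), f(b)
    assert fa * fb < 0, "root must be bracketed"
    x = a
    for _ in range(maxiter):
        x = b - fb * (b - a) / (fb - fa)   # secant through (a, fa), (b, fb)
        fx = f(x)
        if abs(fx) < tol:
            break
        if fa * fx < 0:
            b, fb = x, fx                  # root lies in [a, x]
        else:
            a, fa = x, fx                  # root lies in [x, b]
    return x

root = false_position(lambda x: x**3 - x - 2, 1.0, 2.0)
```

Unlike bisection, one endpoint may stall on convex functions, so convergence is linear rather than guaranteed halving.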

A root-finding algorithm which makes use of a third-order Taylor series(1)A root of satisfies , so(2)Using the quadratic equation then gives(3)Picking the plus sign gives the iteration function(4)This equation can be used as a starting point for deriving Halley's method.If the alternate form of the quadratic equation is used instead in solving (◇), the iteration function becomes instead(5)This form can also be derived by setting in Laguerre's method. Numerically, the sign in the denominator is chosen to maximize its absolute value. Note that in the above equation, if , then Newton's method is recovered. This form of Halley's irrational formula has cubic convergence, and is usually found to be substantially more stable than Newton's method. However, it does run into difficulty when both and or and are simultaneously near zero...

Given a function , write and define the Sturm functions by(1)where is a polynomial quotient. Then construct the following chain of Sturm functions,(2)(3)(4)(5)(6)known as a Sturm chain. The chain is terminated when a constant is obtained.Sturm functions provide a convenient way for finding the number of real roots of an algebraic equation with real coefficients over a given interval. Specifically, the difference in the number of sign changes between the Sturm functions evaluated at two points and gives the number of real roots in the interval . This powerful result is known as the Sturm theorem. However, when the method is applied numerically, care must be taken when computing the polynomial quotients to avoid spurious results due to roundoff error.As a specific application of Sturm functions toward finding polynomial roots, consider the function , plotted above, which has roots , , , and 1.38879 (three of which are real). The derivative..
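A minimal sketch of the Sturm-chain root count for a polynomial given by its coefficient list (highest degree first). It builds the chain by repeated polynomial division, then applies Sturm's theorem by differencing sign changes at the interval endpoints; it counts distinct real roots and assumes a square-free polynomial (names are illustrative):

```python
import numpy as np

def sturm_chain(p):
    """Sturm chain of a polynomial p (coefficients, highest degree first):
    p, p', then successive negated polynomial-division remainders."""
    chain = [np.array(p, float), np.polyder(np.array(p, float))]
    while len(chain[-1]) > 1:
        _, r = np.polydiv(chain[-2], chain[-1])
        r = np.trim_zeros(r, 'f')      # drop leading zero coefficients
        if len(r) == 0:
            break
        chain.append(-r)
    return chain

def sign_changes(chain, x):
    signs = [np.sign(np.polyval(c, x)) for c in chain]
    signs = [s for s in signs if s != 0]   # zeros are skipped by convention
    return sum(1 for a, b in zip(signs, signs[1:]) if a != b)

def real_roots_in(p, a, b):
    """Number of distinct real roots of p in (a, b] by Sturm's theorem."""
    chain = sturm_chain(p)
    return sign_changes(chain, a) - sign_changes(chain, b)
```

For example, x^3 - x has the chain x^3 - x, 3x^2 - 1, 2x/3, 1, and three real roots in (-2, 2).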

A method for finding roots which defines(1)so the derivative is(2)One step of Newton's method can then be writtenas(3)

A root-finding method which was among the most popular methods for finding roots of univariate polynomials in the 19th and 20th centuries. It was invented independently by Graeffe, Dandelin, and Lobachevsky (Householder 1959, Malajovich and Zubelli 2001). Graeffe's method has a number of drawbacks, among which are that its usual formulation leads to exponents exceeding the maximum allowed by floating-point arithmetic and also that it can map well-conditioned polynomials into ill-conditioned ones. However, these limitations are avoided in an efficient implementation by Malajovich and Zubelli (2001).The method proceeds by multiplying a polynomial by and noting that(1)(2)so the result is(3)repeat times, then write this in the form(4)where . Since the coefficients are given by Vieta's formulas(5)(6)(7)and since the squaring procedure has separated the roots, the first term is larger than the rest. Therefore,(8)(9)(10)giving(11)(12)(13)Solving..

For(1)polynomial of degree , the Schur transform is defined by the -degree polynomial(2)(3)where is the reciprocal polynomial.

A root-finding algorithm also called Bailey's method and Hutton's method. For a function of the form , Lambert's method gives an iteration functionso

A root-finding algorithm used in LU decomposition. It solves the equationsfor the unknowns and .

A root-finding algorithm which converges to a complex root from any starting position. To motivate the formula, consider an th order polynomial and its derivatives,(1)(2)(3)(4)Now consider the logarithm and logarithmic derivatives of (5)(6)(7)(8)(9)(10)Now make "a rather drastic set of assumptions" that the root being sought is a distance from the current best guess, so(11)while all other roots are at the same distance , so(12)for , 3, ..., (Acton 1990; Press et al. 1992, p. 365). This allows and to be expressed in terms of and as(13)(14)Solving these simultaneously for gives(15)where the sign is taken to give the largest magnitude for the denominator.To apply the method, calculate for a trial value , then use as the next trial value, and iterate until becomes sufficiently small. For example, for the polynomial with starting point , the algorithm converges to the real root very quickly as (, , ).Setting gives Halley's..
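A minimal sketch of the Laguerre step described above, in the standard form with G = p′/p and H = G² − p″/p, the update x ← x − n / (G ± √((n−1)(nH − G²))), and the sign chosen to maximize the denominator's magnitude (names are illustrative):

```python
import numpy as np

def laguerre(p, x, tol=1e-12, maxiter=100):
    """Laguerre's method for one root of polynomial p (coefficients, highest
    degree first); works in complex arithmetic throughout."""
    n = len(p) - 1                       # polynomial degree
    dp, d2p = np.polyder(p), np.polyder(p, 2)
    x = complex(x)
    for _ in range(maxiter):
        px = np.polyval(p, x)
        if abs(px) < tol:
            break
        G = np.polyval(dp, x) / px
        H = G * G - np.polyval(d2p, x) / px
        sq = np.sqrt(complex((n - 1) * (n * H - G * G)))
        # Sign chosen to give the largest-magnitude denominator.
        denom = G + sq if abs(G + sq) > abs(G - sq) else G - sq
        x = x - n / denom
    return x

root = laguerre([1, 0, 0, -1], 0.5)      # a root of x^3 - 1
```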

Two families of equations used to find roots of nonlinear functions of a single variable. The "B" family is more robust and can be used in the neighborhood of degenerate multiple roots while still providing a guaranteed convergence rate. Almost all other root-finding methods can be considered as special cases of Schröder's method. Householder humorously claimed that papers on root-finding could be evaluated quickly by looking for a citation of Schröder's paper; if the reference were missing, the paper probably consisted of a rediscovery of a result due to Schröder (Stewart 1993).One version of the "A" method is obtained by applying Newton's method to ,(Scavo and Thoo 1995).

Brent's method is a root-finding algorithm which combines root bracketing, bisection, and inverse quadratic interpolation. It is sometimes known as the van Wijngaarden-Deker-Brent method. Brent's method is implemented in the Wolfram Language as the undocumented option Method -> Brent in FindRoot[eqn, x, x0, x1].Brent's method uses a Lagrange interpolating polynomial of degree 2. Brent (1973) claims that this method will always converge as long as the values of the function are computable within a given region containing a root. Given three points , , and , Brent's method fits as a quadratic function of , then uses the interpolation formula(1)Subsequent root estimates are obtained by setting , giving(2)where(3)(4)with(5)(6)(7)(Press et al. 1992).

The substitution of for in a polynomial . is then plotted as a function of for a given in the complex plane. By varying so that the curve passes through the origin, it is possible to determine a value for one root of the polynomial.

Bisection is the division of a given curve, figure, or interval into two equal parts (halves).A simple bisection procedure for iteratively converging on a solution which is known to lie inside some interval proceeds by evaluating the function in question at the midpoint of the original interval and testing to see in which of the subintervals or the solution lies. The procedure is then repeated with the new interval as often as needed to locate the solution to the desired accuracy.Let and be the endpoints at the th iteration (with and ) and let be the th approximate solution. Then the number of iterations required to obtain an error smaller than is found by noting that(1)and that is defined by(2)In order for the error to be smaller than ,(3)Taking the natural logarithm of both sides then gives(4)so(5)..
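The procedure above can be sketched directly; since the bracket halves each step, the iteration count matches the logarithmic bound derived in the article (names are illustrative):

```python
import math

def bisect(f, a, b, eps=1e-6):
    """Bisection: halve the bracketing interval until its half-width is
    at most eps; returns (midpoint estimate, iteration count)."""
    assert f(a) * f(b) < 0, "root must be bracketed"
    iterations = 0
    while (b - a) / 2 > eps:
        m = (a + b) / 2
        if f(a) * f(m) <= 0:
            b = m                        # solution lies in [a, m]
        else:
            a = m                        # solution lies in [m, b]
        iterations += 1
    return (a + b) / 2, iterations

root, n = bisect(lambda x: x * x - 2, 1.0, 2.0)
```

Here the initial interval has width 1 and eps = 1e-6, so roughly log2(1/eps) ≈ 20 halvings are needed.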

A procedure for finding the quadratic factors for the complex conjugate roots of a polynomial with real coefficients.(1)Now write the original polynomial as (2)(3)(4)(5)(6)(7)(8)Now use the two-dimensional Newton's method to find the simultaneous solutions.

A method for finding roots of a polynomial equation . Now find an equation whose roots are the roots of this equation diminished by , so(1)The expressions for , , ... are then found as in the following example, where(2)Write the coefficients , , ..., in a horizontal row, and let a new letter shown as a denominator stand for the sum immediately above it so, in the following example, . The result is the following table.Solving for the quantities , , , , and gives(3)(4)(5)(6)(7)so the equation whose roots are the roots of , each diminished by , is(8)(Whittaker and Robinson 1967).To apply the procedure, first determine the integer part of the root through whatever means are needed, then reduce the equation by this amount. This gives the second digit, by which the equation is once again reduced (after suitable multiplication by 10) to find the third digit, and so on.To see the method applied, consider the problem of finding the smallest positive root of(9)This..

A theory of constructing initial conditions that provides safe convergence of a numerical root-finding algorithm for an equation . Point estimation theory treats convergence conditions and the domain of convergence using only information about at the initial point (Petković et al. 1997, p. 1). An initial point that provides safe convergence of Newton's method is called an approximate zero.Point estimation theory should not be confused with point estimators of probability theory.

Wynn's -method is a method for numerical evaluation of sums and products that samples a number of additional terms in the series and then tries to extrapolate them by fitting them to a polynomial multiplied by a decaying exponential.In particular, the method provides an efficient algorithm for implementing transformations of the form(1)where(2)is the th partial sum of a sequence , which are useful for yielding series convergence improvement (Hamming 1986, p. 205). In particular, letting , , and(3)for , 2, ... (correcting the typo of Hamming 1986, p. 206). The values of are then equivalent to the results of applying transformations to the sequence (Hamming 1986, p. 206).Wynn's epsilon method can be applied to the terms of a series using the Wolfram Language command SequenceLimit[l]. Wynn's method may also be invoked in numerical summation and multiplication using Method -> Fit in the Wolfram Language's NSum and NProduct..
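A minimal sketch of the epsilon table in its standard cross-rule form, eps_{k+1}(n) = eps_{k-1}(n+1) + 1/(eps_k(n+1) - eps_k(n)), applied to a list of partial sums (names are illustrative):

```python
import numpy as np

def wynn_epsilon(S):
    """Wynn's epsilon algorithm on a list of partial sums S; returns the
    highest-order epsilon estimate of the limit."""
    n = len(S)
    e = np.zeros((n + 1, n + 1))
    e[1:, 1] = S                         # column 1 holds the partial sums
    for k in range(2, n + 1):
        for i in range(k, n + 1):
            e[i, k] = e[i - 1, k - 2] + 1.0 / (e[i, k - 1] - e[i - 1, k - 1])
    # Estimates of the limit sit in the odd-numbered columns.
    return e[n, n] if n % 2 == 1 else e[n, n - 1]

# Geometric series 1 + 1/2 + 1/4 + ...: the acceleration is exact.
limit = wynn_epsilon([1.0, 1.5, 1.75])   # 2.0
```

On slowly convergent alternating series (e.g. the series for ln 2), a handful of partial sums already gives several extra digits.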

The improvement of the convergence properties of a series, also called convergence acceleration or accelerated convergence, such that a series reaches its limit to within some accuracy with fewer terms than required before. Convergence improvement can be effected by forming a linear combination with a series whose sum is known. Useful sums include(1)(2)(3)(4)Kummer's transformation takes a convergent series(5)and another convergent series(6)with known such that(7)Then a series with more rapid convergence to the same value is given by(8)(Abramowitz and Stegun 1972).The Euler transform takes a convergent alternating series(9)into a series with more rapid convergence to the same value to(10)where(11)(Abramowitz and Stegun 1972; Beeler et al. 1972).A general technique that can be used to accelerate convergence of series is to expand them in a Taylor series about infinity and interchange the order of summation. In cases where a symbolic..

The downward Clenshaw recurrence formula evaluates a sum of products of indexed coefficients by functions which obey a recurrence relation. If(1)and(2)where the s are known, then define(3)(4)for and solve backwards to obtain and .(5)(6)(7)(8)(9)(10)The upward Clenshaw recurrence formula is(11)(12)for .(13)
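A minimal sketch of the downward recurrence for the common case of a Chebyshev sum, where the three-term relation gives b_k = c_k + 2x b_{k+1} − b_{k+2} and the sum collapses to c_0 + x b_1 − b_2 (names are illustrative):

```python
def clenshaw_chebyshev(c, x):
    """Evaluate sum_k c[k] T_k(x) by the downward Clenshaw recurrence,
    solving backwards from b_{N+1} = b_{N+2} = 0."""
    b1 = b2 = 0.0
    for ck in c[:0:-1]:                  # coefficients c[N], ..., c[1]
        b1, b2 = ck + 2 * x * b1 - b2, b1
    return c[0] + x * b1 - b2

val = clenshaw_chebyshev([1.0, 2.0, 3.0, 4.0], 0.3)
# equals 1 + 2 T_1 + 3 T_2 + 4 T_3 at x = 0.3, i.e. -4.028
```

Only two working values are kept, so the full sequence of b_k never needs to be stored.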

Let the values of a function be tabulated at points equally spaced by , so , , ..., . Then Woolhouse's formulas approximating the integral of are given by the Newton-Cotes-like formulas(1)(2)

Numerical integration is the approximate computation of an integral using numerical techniques. The numerical computation of an integral is sometimes called quadrature. Ueberhuber (1997, p. 71) uses the word "quadrature" to mean numerical computation of a univariate integral, and "cubature" to mean numerical computation of a multiple integral.There are a wide range of methods available for numerical integration. A good source for such techniques is Press et al. (1992). Numerical integration is implemented in the Wolfram Language as NIntegrate[f, x, xmin, xmax].The most straightforward numerical integration technique uses the Newton-Cotes formulas (also called quadrature formulas), which approximate a function tabulated at a sequence of regularly spaced intervals by various degree polynomials. If the endpoints are tabulated, then the 2- and 3-point formulas are called the trapezoidal rule and..

Let the values of a function be tabulated at points equally spaced by , so , , .... Then Weddle's rule approximating the integral of is given by the Newton-Cotes-like formula

A formula for numerical integration,(1)where(2)(3)(4)(5)(6)(7)and the remainder term is(8)

In order to integrate a function over a complicated domain , Monte Carlo integration picks random points over some simple domain which is a superset of , checks whether each point is within , and estimates the area of (volume, -dimensional content, etc.) as the area of multiplied by the fraction of points falling within . Monte Carlo integration is implemented in the Wolfram Language as NIntegrate[f, ..., Method -> MonteCarlo].Picking randomly distributed points , , ..., in a multidimensional volume to determine the integral of a function in this volume gives a result(1)where(2)(3)(Press et al. 1992, p. 295).
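The hit-or-miss scheme described in the first sentence can be sketched directly: sample uniformly over a bounding box and scale the hit fraction by the box area (the names and the quarter-disk example are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def monte_carlo_area(inside, xlim, ylim, n=100_000):
    """Estimate the area of a region by sampling uniformly over a bounding
    rectangle and multiplying its area by the fraction of points that land
    inside the region."""
    x = rng.uniform(*xlim, n)
    y = rng.uniform(*ylim, n)
    hits = inside(x, y)
    box_area = (xlim[1] - xlim[0]) * (ylim[1] - ylim[0])
    return box_area * hits.mean()

# Quarter disk x^2 + y^2 <= 1 inside the unit square: area pi/4.
est = monte_carlo_area(lambda x, y: x * x + y * y <= 1, (0, 1), (0, 1))
```

The statistical error shrinks like 1/sqrt(n), independent of dimension, which is why the approach pays off for high-dimensional domains.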

Let the values of a function be tabulated at points equally spaced by , so , , ..., . Then Durand's rule approximating the integral of is given by the Newton-Cotes-like formula

The 2-point Newton-Cotes formulawhere , is the separation between the points, and is a point satisfying . Picking to maximize gives an upper bound for the error in the trapezoidal approximation to the integral.

Also called Radau quadrature (Chandrasekhar 1960). A Gaussian quadrature with weighting function in which the endpoints of the interval are included in a total of abscissas, giving free abscissas. Abscissas are symmetrical about the origin, and the general formula is(1)The free abscissas for , ..., are the roots of the polynomial , where is a Legendre polynomial. The weights of the free abscissas are(2)(3)and of the endpoints are(4)The error term is given by(5)for . Beyer (1987) gives a table of parameters up to and Chandrasekhar (1960) up to (although Chandrasekhar's for is incorrect).

n abscissa weight
3 0.00000 1.333333
3 (endpoints) 0.333333
4 (free) 0.833333
4 (endpoints) 0.166667
5 0.000000 0.711111
5 (free) 0.544444
5 (endpoints) 0.100000
6 (free) 0.554858
6 (free) 0.378475
6 (endpoints) 0.066667

Ueberhuber (1997, p. 71) and Krommer and Ueberhuber (1998, pp. 49 and 155-165) use the word "quadrature" to mean numerical computation of a univariate integral, and "cubature" to mean numerical computation of a multiple integral.Cubature techniques available in the Wolfram Language include Monte Carlo integration, implemented as NIntegrate[f, ..., Method -> MonteCarlo] or NIntegrate[f, ..., Method -> QuasiMonteCarlo], and the adaptive Genz-Malik algorithm, implemented as NIntegrate[f, ..., Method -> MultiDimensional].

Simpson's rule is a Newton-Cotes formula for approximating the integral of a function using quadratic polynomials (i.e., parabolic arcs instead of the straight line segments used in the trapezoidal rule). Simpson's rule can be derived by integrating a second-order Lagrange interpolating polynomial fit to the function at three equally spaced points. In particular, let the function be tabulated at points , , and equally spaced by distance , and denote . Then Simpson's rule states that(1)(2)Since it uses quadratic polynomials to approximate functions, Simpson's rule actually gives exact results when approximating integrals of polynomials up to cubic degree.For example, consider (black curve) on the interval , so that , , and . Then Simpson's rule (which corresponds to the area under the blue curve obtained from the third-order interpolating polynomial) gives(3)(4)(5)whereas the trapezoidal rule (area under the red curve) gives and the..
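A minimal sketch of the basic three-point rule stated above, evaluated at the endpoints and midpoint of [a, b] (names are illustrative); the exactness for cubics is easy to check:

```python
def simpson(f, a, b):
    """Basic three-point Simpson's rule on [a, b]:
    (h/3) (f(a) + 4 f(a + h) + f(b)) with h = (b - a)/2."""
    h = (b - a) / 2
    return h / 3 * (f(a) + 4 * f(a + h) + f(b))

# Exact for cubics: the integral of x^3 on [0, 2] is 4.
val = simpson(lambda x: x**3, 0.0, 2.0)
```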

Let the values of a function be tabulated at points equally spaced by , so , , ..., . Then Simpson's 3/8 rule approximating the integral of is given by the Newton-Cotes-like formula

One of the quantities appearing in the Gauss-Jacobi mechanical quadrature. They satisfy(1)(2)and are given by(3)(4)(5)(6)where is the highest coefficient of .

Let the values of a function be tabulated at points equally spaced by , so , , ..., . Then Shovelton's rule approximating the integral of is given by the Newton-Cotes-like formula

Let the values of a function be tabulated at points equally spaced by , so , , ..., . Then Hardy's rule approximating the integral of is given by the Newton-Cotes-like formula

A Gaussian quadrature-like formula for numerical estimation of integrals. It uses weighting function in the interval and forces all the weights to be equal. The general formula is(1)where the abscissas are found by taking terms up to in the Maclaurin series of(2)and then defining(3)The roots of then give the abscissas. The first few values are(4)(5)(6)(7)(8)(9)(10)(11)(12)(13)(OEIS A002680 and A101270).Because the roots are all real for and only (Hildebrand 1956), these are the only permissible orders for Chebyshev quadrature. The error term is(14)where(15)The first few values of are 2/3, 8/45, 1/15, 32/945, 13/756, and 16/1575 (Hildebrand 1956). Beyer (1987) gives abscissas up to and Hildebrand (1956) up to . The abscissas and weights can be computed analytically for small .

A Gaussian quadrature-like formula for numerical estimation of integrals. It requires points and fits all polynomials to degree , so it effectively fits exactly all polynomials of degree . It uses a weighting function in which the endpoint in the interval is included in a total of abscissas, giving free abscissas. The general formula is(1)The free abscissas for , ..., are the roots of the polynomial(2)where is a Legendre polynomial. The weights of the free abscissas are(3)(4)and of the endpoint(5)The error term is given by(6)for .

n abscissa weight
2 (endpoint) 0.5
2 0.333333 1.5
3 (endpoint) 0.222222
3  1.02497
3 0.689898 0.752806
4 (endpoint) 0.125
4  0.657689
4 0.181066 0.776387
4 0.822824 0.440924
5 (endpoint) 0.08
5  0.446208
5  0.623653
5 0.446314 0.562712
5 0.885792 0.287427

The abscissas and weights can be computed analytically for small .

Seeks to obtain the best numerical estimate of an integral by picking optimal abscissas at which to evaluate the function . The fundamental theorem of Gaussian quadrature states that the optimal abscissas of the -point Gaussian quadrature formulas are precisely the roots of the orthogonal polynomial for the same interval and weighting function. Gaussian quadrature is optimal because it fits all polynomials up to degree exactly. Slightly less optimal fits are obtained from Radau quadrature and Laguerre-Gauss quadrature.To determine the weights corresponding to the Gaussian abscissas , compute a Lagrange interpolating polynomial for by letting(1)(where Chandrasekhar 1967 uses instead of ), so(2)Then fitting a Lagrange interpolating polynomial through the points gives(3)for arbitrary points . We are therefore looking for a set of points and weights such that for a weighting function ,(4)(5)with weight(6)The..
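For the Legendre case (unit weight on [-1, 1]), ready-made nodes and weights are available in NumPy, which makes the "exact up to degree 2n - 1" property easy to verify; this uses NumPy's leggauss rather than anything from the article itself:

```python
import numpy as np

# 3-point Gauss-Legendre quadrature on [-1, 1]: exact for degree <= 5.
nodes, weights = np.polynomial.legendre.leggauss(3)

# Integral of x^4 over [-1, 1] is 2/5; three points suffice.
integral = np.sum(weights * nodes**4)
```

The nodes are the roots of the degree-3 Legendre polynomial (0 and ±sqrt(3/5)), in line with the fundamental theorem quoted above.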

Let the values of a function be tabulated at points equally spaced by , so , , ..., . Then Boole's rule approximating the integral of is given by the Newton-Cotes-like formulaThis formula is frequently and mistakenly known as Bode's rule (Abramowitz and Stegun 1972, p. 886) as a result of a typo in an early reference, but is actually due to Boole (Boole and Moulton 1960).
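A minimal sketch of Boole's rule in its standard five-point form, 2h/45 (7f0 + 32f1 + 12f2 + 32f3 + 7f4), which is exact for polynomials up to degree five (names are illustrative):

```python
def boole(f, a, b):
    """Boole's rule on five equally spaced points spanning [a, b]:
    2h/45 * (7 f0 + 32 f1 + 12 f2 + 32 f3 + 7 f4), h = (b - a)/4."""
    h = (b - a) / 4
    x = [a + i * h for i in range(5)]
    return 2 * h / 45 * (7 * f(x[0]) + 32 * f(x[1]) + 12 * f(x[2])
                         + 32 * f(x[3]) + 7 * f(x[4]))

# Exact for quintics: the integral of x^4 on [0, 4] is 1024/5 = 204.8.
val = boole(lambda x: x**4, 0.0, 4.0)
```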

Summation by parts for discrete variables is the equivalent of integration by parts for continuous variables(1)or(2)where is the indefinite summation operator and the -operator is defined by(3)where is any constant.

Formulas obtained from differentiating Newton's forward difference formula,where is a binomial coefficient, and . Abramowitz and Stegun (1972) and Beyer (1987) give derivatives in terms of and derivatives in terms of and .

The divided difference , sometimes also denoted (Abramowitz and Stegun 1972), on points , , ..., of a function is defined by and(1)for . The first few differences are(2)(3)(4)Defining(5)and taking the derivative(6)gives the identity(7)Consider the following question: does the property(8)for and a given function guarantee that is a polynomial of degree ? Aczél (1985) showed that the answer is "yes" for , and Bailey (1992) showed it to be true for with differentiable . Schwaiger (1994) and Andersen (1996) subsequently showed the answer to be "yes" for all with restrictions on or .
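A minimal sketch of the recursive definition above, computing the top edge of the triangular divided-difference table (the coefficients of the Newton form of the interpolating polynomial; names are illustrative):

```python
def divided_differences(xs, ys):
    """Return [f[x0], f[x0,x1], f[x0,x1,x2], ...], the top edge of the
    triangular table of divided differences."""
    coeffs = list(ys)
    table = [coeffs[0]]
    for j in range(1, len(xs)):
        coeffs = [(coeffs[i + 1] - coeffs[i]) / (xs[i + j] - xs[i])
                  for i in range(len(coeffs) - 1)]
        table.append(coeffs[0])
    return table

# For f(x) = x^2 at 0, 1, 2: f[0] = 0, f[0,1] = 1, f[0,1,2] = 1.
dd = divided_differences([0, 1, 2], [0, 1, 4])
```

The highest-order entry of a degree-n polynomial's table is constant, which is the content of the "does this property force a polynomial" question discussed above.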

(1)for , where is the central difference and(2)(3)with a binomial coefficient.

If, after constructing a difference table, no clear pattern emerges, turn the paper through an angle of and compute a new table. If necessary, repeat the process. Each rotation reduces powers by 1, so the sequence multiplied by any polynomial in is reduced to 0s by a -fold difference fan.Call Jackson's difference fan sequence transform the -transform, and define as the -th -transform of the sequence , where and are complex numbers. This is denotedWhen , this is known as the binomial transform of the sequence. Greater values of give greater depths of this fanning process.The inverse -transform of the sequence is given byWhen , this gives the inverse binomial transform of .

(1)for , where is the central difference and(2)(3)(4)(5)where is a binomial coefficient.

The difference quotient gives the slope of the secant line passing through and . In the limit , the difference quotient becomes the partial derivative

The reciprocal differences are closely related to the divided difference. The first few are explicitly given by (1), (2), (3), (4).

Gauss's forward formula is (1) for , where is the central difference and (2), (3), where is a binomial coefficient.

This is sometimes known as the "stars and bars" method. Suppose a recipe called for 5 pinches of spice, out of 9 spices. Each possibility is an arrangement of 5 pinches (stars) and 8 dividers between the 9 categories (bars). The number of possibilities is . means you use spices 1, 1, 5, 6, and 9.

(1) for , where is the central difference and (2), (3), where is a binomial coefficient.

Clairaut's difference equation is a special case of Lagrange's equation (Sokolnikoff and Redheffer 1958) defined by (1), or in " notation," (2) (Spiegel 1970). It is so named by analogy with Clairaut's differential equation (3).

The forward difference is a finite difference defined by (1). Higher order differences are obtained by repeated operations of the forward difference operator, (2), so (3), (4), (5), (6), (7). In general, (8), where is a binomial coefficient (Sloane and Plouffe 1995, p. 10). The forward finite difference is implemented in the Wolfram Language as DifferenceDelta[f, i]. Newton's forward difference formula expresses as the sum of the th forward differences (9), where is the first th difference computed from the difference table. Furthermore, if the differences , , , ..., are known for some fixed value of , then a formula for the th term is given by (10) (Sloane and Plouffe 1995, p. 10).
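The repeated application of the forward difference operator in equations (2)-(7) amounts to building a difference table row by row, which is easy to sketch in code (Python, used here for illustration):

```python
def forward_differences(seq):
    """One application of the forward difference operator: Δa_k = a_{k+1} - a_k."""
    return [b - a for a, b in zip(seq, seq[1:])]

def difference_table(seq):
    """Repeatedly apply Δ, returning the rows Δ^0, Δ^1, Δ^2, ... of the table."""
    table = [list(seq)]
    while len(table[-1]) > 1:
        table.append(forward_differences(table[-1]))
    return table
```

Applied to the cubes 0, 1, 8, 27, 64, the third row is the constant 6, illustrating that an nth power has a constant nth finite difference.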

The central difference for a function tabulated at equal intervals is defined by (1). First and higher order central differences arranged so as to involve integer indices are then given by (2), (3), (4), (5), (6), (7) (Abramowitz and Stegun 1972, p. 877). Higher order differences may be computed for even and odd powers, (8), (9) (Abramowitz and Stegun 1972, p. 877).

Newton's forward difference formula is a finite difference identity giving an interpolated value between tabulated points in terms of the first value and the powers of the forward difference . For , the formula states (1). When written in the form (2) with the falling factorial, the formula looks suspiciously like a finite analog of a Taylor series expansion. This correspondence was one of the motivating forces for the development of umbral calculus. An alternate form of this equation using binomial coefficients is (3), where the binomial coefficient represents a polynomial of degree in . The derivative of Newton's forward difference formula gives Markoff's formulas.
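As a sketch (Python, for illustration), the binomial-coefficient form (3) can be evaluated directly from the leading forward differences, with C(p, n) built up incrementally so that noninteger p is allowed:

```python
def newton_forward(ys, p):
    """Interpolate at x = x_0 + p*h from equally spaced samples
    ys = f(x_0), f(x_0 + h), ... using f_p ≈ sum_n C(p, n) Δ^n f_0."""
    # Leading forward differences Δ^n f_0 from the difference table.
    diffs, deltas = list(ys), []
    while diffs:
        deltas.append(diffs[0])
        diffs = [b - a for a, b in zip(diffs, diffs[1:])]
    total, binom = 0.0, 1.0          # binom tracks C(p, n), starting at C(p, 0) = 1
    for n, d in enumerate(deltas):
        total += binom * d
        binom *= (p - n) / (n + 1)   # C(p, n+1) = C(p, n) * (p - n) / (n + 1)
    return total
```

With the squares 0, 1, 4, 9 tabulated at x = 0, 1, 2, 3, interpolating at p = 1.5 returns 2.25, exactly (1.5)^2, since the data come from a polynomial.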

The finite difference is the discrete analog of the derivative. The finite forward difference of a function is defined as (1) and the finite backward difference as (2). The forward finite difference is implemented in the Wolfram Language as DifferenceDelta[f, i]. If the values are tabulated at spacings , then the notation (3) is used. The th forward difference would then be written as , and similarly, the th backward difference as . However, when is viewed as a discretization of the continuous function , then the finite difference is sometimes written (4), (5), where denotes convolution and is the odd impulse pair. The finite difference operator can therefore be written (6). An th power has a constant th finite difference. For example, take and make a difference table, (7). The column is the constant 6. Finite difference formulas can be very useful for extrapolating a finite amount of data in an attempt to find the general term. Specifically, if a function..

An interpolation formula, sometimes known as the Newton-Bessel formula, given by (1) for , where is the central difference and (2), (3), (4), (5), (6), (7), (8), (9), where are the coefficients from Gauss's backward formula and Gauss's forward formula and and are the coefficients from Everett's formula. The s also satisfy (10), (11) for (12).

(1) for , where is the central difference and (2), (3), (4), (5), where are the coefficients from Gauss's backward formula and Gauss's forward formula and are the coefficients from Bessel's finite difference formula. The s and s also satisfy (6), (7) for (8).

The backward difference is a finite difference defined by (1). Higher order differences are obtained by repeated operations of the backward difference operator, so (2), (3), (4). In general, (5), where is a binomial coefficient. The backward finite difference is implemented in the Wolfram Language as DifferenceDelta[f, i]. Newton's backward difference formula expresses as the sum of the th backward differences (6), where is the first th difference computed from the difference table.

A graphical plot with abscissa given by the number of consecutive numbers constituting a single period and ordinate given by the correlation ratio . The equation of the periodogram is , where each of the terms of the sequence consists of a simple periodic part of period , together with a part which does not involve this periodicity , so is the standard deviation of the s, is the standard deviation of the s, and is the number of periods covered by the observations.

The finite volume method is a numerical method for solving partial differential equations that calculates the values of the conserved variables averaged across the volume. One advantage of the finite volume method over finite difference methods is that it does not require a structured mesh (although a structured mesh can also be used). Furthermore, the finite volume method is preferable to other methods as a result of the fact that boundary conditions can be applied noninvasively. This is true because the values of the conserved variables are located within the volume element, and not at nodes or surfaces. Finite volume methods are especially powerful on coarse nonuniform grids and in calculations where the mesh moves to track interfaces or shocks.Hyman et al. (1992) have derived local, accurate, reliable, and efficient finite volume methods that mimic symmetry, conservation, stability, and the duality relationships between the gradient,..

Consider a network of resistors so that may be connected in series or parallel with , may be connected in series or parallel with the network consisting of and , and so on. The resistance of two resistors in series is given by (1) and of two resistors in parallel by (2). The possible values for two resistors with resistances and are therefore (3), for three resistances , , and are (4), and so on. These are obviously all rational numbers, and the numbers of distinct arrangements for , 2, ..., are 1, 2, 8, 46, 332, 2874, ... (OEIS A005840), which also arises in a completely different context (Stanley 1991). If the values are restricted to , then there are possible resistances for 1- resistors, ranging from a minimum of to a maximum of . Amazingly, the largest denominators for , 2, ... are 1, 2, 3, 5, 8, 13, 21, ..., which are immediately recognizable as the Fibonacci numbers (OEIS A000045). The following table gives the values possible for small .
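The series and parallel rules (1) and (2) can be applied recursively to enumerate the possible resistances as exact rationals. The sketch below (Python, for illustration) counts distinct resistance values, which is a different count from the 1, 2, 8, 46, ... arrangements above (distinct values grow as 1, 2, 4, 9, ...); the Fibonacci bound on the largest denominator can be checked from its output.

```python
from fractions import Fraction

def resistor_values(n):
    """All distinct resistances obtainable by combining n unit resistors,
    where each combination joins two smaller networks in series or parallel."""
    if n == 1:
        return {Fraction(1)}
    vals = set()
    for k in range(1, n):
        for a in resistor_values(k):
            for b in resistor_values(n - k):
                vals.add(a + b)             # series: R = R1 + R2
                vals.add(a * b / (a + b))   # parallel: R = R1 R2 / (R1 + R2)
    return vals
```

For n = 4 unit resistors this gives 9 distinct values, and the largest denominator appearing is 5, the Fibonacci number F_5, in line with the pattern noted above.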

Let be the resistance distance matrix of a connected graph on nodes. Then Foster's theorems state that , where is the edge set of , and , where the latter sum runs over all pairs of adjacent edges and is the vertex degree of the vertex common to those edges (Palacios 2001).

A fractal curve created from the base curve and motif illustrated above (Lauwerier 1991, p. 37). As illustrated above, the number of segments after the th iteration is (1) and the length of each segment is given by (2), so the capacity dimension is (3), (4), (5), (6) (Mandelbrot 1983, p. 48). The term Minkowski sausage is also used to refer to the Minkowski cover of a curve.

A compact set with area created by punching a square hole of length in the center of a square. In each of the eight squares remaining, punch out another hole of length , and so on.

The Menger sponge is a fractal which is the three-dimensional analog of the Sierpiński carpet. The th iteration of the Menger sponge is implemented in the Wolfram Language as MengerMesh[n, 3]. Let be the number of filled boxes, the length of a side of a hole, and the fractional volume after the th iteration, then (1), (2), (3). The capacity dimension is therefore (4), (5), (6), (7) (OEIS A102447). The Menger sponge, in addition to being a fractal, is also a super-object for all compact one-dimensional objects, i.e., the topological equivalent of all one-dimensional objects can be found in a Menger sponge (Peitgen et al. 1992). The image above shows a metal print of the Menger sponge created by digital sculptor Bathsheba Grossman (https://www.bathsheba.com/).

The tetrix is the three-dimensional analog of the Sierpiński sieve illustrated above, also called the Sierpiński sponge or Sierpiński tetrahedron. The th iteration of the tetrix is implemented in the Wolfram Language as SierpinskiMesh[n, 3]. Let be the number of tetrahedra, the length of a side, and the fractional volume of tetrahedra after the th iteration. Then (1), (2), (3). The capacity dimension is therefore (4), (5), so the tetrix has an integer capacity dimension (which is one less than the dimension of the three-dimensional tetrahedra from which it is built), despite the fact that it is a fractal. The following illustrations demonstrate how the dimension of the tetrix can be the same as that of the plane by showing three stages of the rotation of a tetrix, viewed along one of its edges. In the last frame, the tetrix "looks" like the two-dimensional plane.

A curve on which points of a map (such as the Mandelbrot set) diverge to a given value at the same rate. A common method of obtaining lemniscates is to define an integer called the count which is the largest such that , where is usually taken as . Successive counts then define a series of lemniscates, which are called equipotential curves by Peitgen and Saupe (1988).

The term Mandelbrot set is used to refer both to a general class of fractal sets and to a particular instance of such a set. In general, a Mandelbrot set marks the set of points in the complex plane such that the corresponding Julia set is connected and not computable. "The" Mandelbrot set is the set obtained from the quadratic recurrence equation (1) with , where points in the complex plane for which the orbit of does not tend to infinity are in the set. Setting equal to any point in the set that is not a periodic point gives the same result. The Mandelbrot set was originally called a molecule by Mandelbrot. J. Hubbard and A. Douady proved that the Mandelbrot set is connected. A plot of the Mandelbrot set is shown above in which values of in the complex plane are colored according to the number of steps required to reach . The kidney bean-shaped portion of the Mandelbrot set turns out to be bordered by a cardioid with equations (2), (3). The..
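The coloring by the "number of steps required to reach" the escape radius is the standard escape-time computation, which can be sketched as follows (Python, for illustration; the iteration cap and escape radius are conventional choices):

```python
def escape_count(c, max_iter=100, radius=2.0):
    """Number of iterations of z -> z^2 + c (starting from z = 0) before |z|
    exceeds radius; returns max_iter if the orbit stays bounded, in which
    case c is treated as belonging to the set."""
    z = 0j
    for n in range(max_iter):
        if abs(z) > radius:
            return n
        z = z * z + c
    return max_iter
```

For example, c = 0 and c = -1 give bounded orbits (fixed point and 2-cycle respectively), while c = 1 escapes after only a few steps.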

A root-finding algorithm which assumes a function to be approximately linear in the region of interest. Each improvement is taken as the point where the approximating line crosses the axis. The secant method retains only the most recent estimate, so the root does not necessarily remain bracketed. The secant method is implemented in the Wolfram Language as the undocumented option Method -> Secant in FindRoot[eqn, x, x0, x1]. When the algorithm does converge, its order of convergence is (1), where is a constant and is the golden ratio. (2), (3), (4), so (5). The secant method can be implemented in the Wolfram Language as SecantMethodList[f_, {x_, x0_, x1_}, n_] := NestList[Last[#] - {0, (Function[x, f][Last[#]]* Subtract @@ #)/Subtract @@ Function[x, f] /@ #}&, {x0, x1}, n]

The Feigenbaum constant is a universal constant for functions approaching chaos via period doubling. It was discovered by Feigenbaum in 1975 (Feigenbaum 1979) while studying the fixed points of the iterated function (1) and characterizes the geometric approach of the bifurcation parameter to its limiting value as the parameter is increased for fixed . The plot above is made by iterating equation (1) with several hundred times for a series of discrete but closely spaced values of , discarding the first hundred or so points before the iteration has settled down to its fixed points, and then plotting the points remaining. A similar plot that more directly shows the cycle may be constructed by plotting as a function of . The plot above (Trott, pers. comm.) shows the resulting curves for , 2, and 4. Let be the point at which a period -cycle appears, and denote the converged value by . Assuming geometric convergence, the difference between this value and..

Newton's method, also called the Newton-Raphson method, is a root-finding algorithm that uses the first few terms of the Taylor series of a function in the vicinity of a suspected root. Newton's method is sometimes also known as Newton's iteration, although in this work the latter term is reserved for the application of Newton's method to computing square roots. For a polynomial, Newton's method is essentially the same as Horner's method. The Taylor series of about the point is given by (1). Keeping terms only to first order, (2). Equation (2) is the equation of the tangent line to the curve at , so is the place where that tangent line intersects the -axis. A graph can therefore give a good intuitive idea of why Newton's method works at a well-chosen starting point and why it might diverge with a poorly-chosen starting point. The expression above can be used to estimate the amount of offset needed to land closer to the root starting from an initial guess..
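A minimal sketch of the iteration x_{n+1} = x_n - f(x_n)/f'(x_n) (Python, for illustration), here applied to computing sqrt(2) as a root of x^2 - 2, in the spirit of the "Newton's iteration" for square roots mentioned above:

```python
def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    """Newton-Raphson iteration: repeatedly move to where the tangent line
    at the current estimate crosses the x-axis."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:   # step size is a proxy for the remaining error
            return x
    return x

# sqrt(2) as the positive root of f(x) = x^2 - 2
root = newton(lambda x: x * x - 2, lambda x: 2 * x, 1.0)
```

Starting from x0 = 1, the iterates 1, 1.5, 1.41666..., 1.41421... converge quadratically, roughly doubling the number of correct digits per step.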

Game theory is a branch of mathematics that deals with the analysis of games (i.e., situations involving parties with conflicting interests). In addition to the mathematical elegance and complete "solution" which is possible for simple games, the principles of game theory also find applications to complicated games such as cards, checkers, and chess, as well as real-world problems as diverse as economics, property division, politics, and warfare.Game theory has two distinct branches: combinatorialgame theory and classical game theory.Combinatorial game theory covers two-player games of perfect knowledge such as go, chess, or checkers. Notably, combinatorial games have no chance element, and players take turns.In classical game theory, players move, bet, or strategize simultaneously. Both hidden information and chance elements are frequent features in this branch of game theory, which is also a branch of economics.The..

For a general two-player zero-sum game, the maximin does not exceed the minimax. If the two are equal, then write , where is called the value of the game. In this case, there exist optimal strategies for the first and second players. A necessary and sufficient condition for a saddle point to exist is the presence of a payoff matrix element which is both a minimum of its row and a maximum of its column. A game may have more than one saddle point, but all must have the same value.

An matrix which gives the possible outcome of a two-person zero-sum game when player A has possible moves and player B has possible moves. The analysis of the matrix in order to determine optimal strategies is the aim of game theory. The so-called "augmented" payoff matrix is defined as follows:

Let the elements in a payoff matrix be denoted , where the s are player A's strategies and the s are player B's strategies. Player A can get at least (1) for strategy . Player B can force player A to get no more than for a strategy . The best strategy for player A is therefore (2), and the best strategy for player B is (3). In general, (4). Equality holds only if a game saddle point is present, in which case the quantity is called the value of the game.

Ergodic theory can be described as the statistical and qualitative behavior of measurable group and semigroup actions on measure spaces. The group is most commonly N, R, R^+, or Z. Ergodic theory had its origins in the work of Boltzmann in statistical mechanics problems where time- and space-distribution averages are equal. Steinhaus (1999, pp. 237-239) gives a practical application of ergodic theory to keeping one's feet dry ("in most cases," "stormy weather excepted") when walking along a shoreline without having to constantly turn one's head to anticipate incoming waves. The mathematical origins of ergodic theory are due to von Neumann, Birkhoff, and Koopman in the 1930s. It has since grown to be a huge subject and has applications not only to statistical mechanics, but also to number theory, differential geometry, functional analysis, etc. There are also many internal problems (e.g., ergodic theory..

An endomorphism T is called ergodic if T^(-1)A = A implies m(A) = 0 or 1, where m is the underlying measure. Examples of ergodic endomorphisms include the map x -> 2x (mod 1) on the unit interval with Lebesgue measure, certain automorphisms of the torus, and "Bernoulli shifts" (and more generally "Markov shifts"). Given a map T and a sigma-algebra, there may be many ergodic measures. If there is only one ergodic measure, then T is called uniquely ergodic. An example of a uniquely ergodic transformation is the map x -> x + θ (mod 1) on the unit interval when θ is irrational. Here, the unique ergodic measure is Lebesgue measure.

There are at least two maps known as the Hénon map. The first is the two-dimensional dissipative quadratic map given by the coupled equations
(1) x_(n+1) = 1 - a x_n^2 + y_n
(2) y_(n+1) = b x_n
(Hénon 1976). The strange attractor illustrated above is obtained for a = 1.4 and b = 0.3. The illustration above shows two regions of space for the map with and colored according to the number of iterations required to escape (Michelitsch and Rössler 1989). The plots above show evolution of the point for parameters (left) and (right). The Hénon map has correlation exponent (Grassberger and Procaccia 1983) and capacity dimension (Russell et al. 1980). Hitzl and Zele (1985) give conditions for the existence of periods 1 to 6. A second Hénon map is the quadratic area-preserving map (3), (4) (Hénon 1969), which is one of the simplest two-dimensional invertible maps...
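A minimal sketch of iterating the first (dissipative) Hénon map at the classic attractor parameters a = 1.4, b = 0.3 (Python, for illustration):

```python
def henon_orbit(n, a=1.4, b=0.3, x0=0.0, y0=0.0):
    """Iterate the dissipative Hénon map
       x_{n+1} = 1 - a x_n^2 + y_n,   y_{n+1} = b x_n,
    returning the orbit as a list of (x, y) points."""
    x, y = x0, y0
    pts = [(x, y)]
    for _ in range(n):
        x, y = 1 - a * x * x + y, b * x   # simultaneous update of both coordinates
        pts.append((x, y))
    return pts
```

Plotting a few thousand points of this orbit (after discarding an initial transient) traces out the strange attractor; from the origin the first iterates are (1, 0) and (-0.4, 0.3).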

The conjugate gradient method is an algorithm for finding the nearest local minimum of a function of variables which presupposes that the gradient of the function can be computed. It uses conjugate directions instead of the local gradient for going downhill. If the vicinity of the minimum has the shape of a long, narrow valley, the minimum is reached in far fewer steps than would be the case using the method of steepest descent.For a discussion of the conjugate gradient method on vector and shared memory computers, see Dongarra et al. (1991). For discussions of the method for more general parallel architectures, see Demmel et al. (1993) and Ortega (1988) and the references therein.
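For the closely related problem of solving a symmetric positive definite linear system Ax = b (equivalently, minimizing the associated quadratic form), the method can be written in a few lines. The sketch below (Python, for illustration) is a pedagogical version using plain lists, not a tuned implementation:

```python
def conjugate_gradient(A, b, tol=1e-10, max_iter=None):
    """Conjugate gradient for A x = b, with A symmetric positive definite.
    Successive search directions p are A-conjugate rather than the raw
    (steepest-descent) gradient directions."""
    n = len(b)
    matvec = lambda v: [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
    dot = lambda u, v: sum(ui * vi for ui, vi in zip(u, v))
    x = [0.0] * n
    r = [bi - ci for bi, ci in zip(b, matvec(x))]   # residual b - A x
    p = list(r)
    rs = dot(r, r)
    for _ in range(max_iter or n):
        Ap = matvec(p)
        alpha = rs / dot(p, Ap)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = dot(r, r)
        if rs_new < tol ** 2:
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x
```

In exact arithmetic the method terminates in at most n steps; for the 2x2 system A = [[4, 1], [1, 3]], b = [1, 2], two iterations already recover the solution (1/11, 7/11).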

The Sierpiński sieve is a fractal described by Sierpiński in 1915 and appearing in Italian art from the 13th century (Wolfram 2002, p. 43). It is also called the Sierpiński gasket or Sierpiński triangle. The curve can be written as a Lindenmayer system with initial string "FXF--FF--FF", string rewriting rules "F" -> "FF", "X" -> "--FXF++FXF++FXF--", and angle . The th iteration of the Sierpiński sieve is implemented in the Wolfram Language as SierpinskiMesh[n]. Let be the number of black triangles after iteration , the length of a side of a triangle, and the fractional area which is black after the th iteration. Then (1), (2), (3). The capacity dimension is therefore (4), (5), (6), (7) (OEIS A020857; Wolfram 1984; Borwein and Bailey 2003, p. 46). The Sierpiński sieve is produced by the beautiful recurrence equation (8), where denote bitwise..
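The bitwise recurrence mentioned at the end can be illustrated via Lucas' theorem: C(r, k) is odd exactly when the bits of k form a subset of the bits of r, i.e., r AND k = k. The sketch below (Python, for illustration) generates the sieve as Pascal's triangle mod 2 and checks the counting argument behind the capacity dimension (3^m black cells in 2^m rows):

```python
def sieve_rows(n):
    """First n rows of Pascal's triangle mod 2; cell (r, k) is black iff
    C(r, k) is odd, which by Lucas' theorem holds exactly when r & k == k."""
    return [[1 if (r & k) == k else 0 for k in range(r + 1)] for r in range(n)]
```

Printing the rows with black cells as "*" reproduces the sieve pattern, and the number of black cells in the first 2^m rows is 3^m (9 for m = 2, 27 for m = 3), consistent with the dimension ln 3 / ln 2.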

A measure of a strange attractor which allows the presence of chaos to be distinguished from random noise. It is related to the capacity dimension and information dimension , satisfying (1). It satisfies (2), where is the Kaplan-Yorke dimension. As the cell size goes to zero, (3), where is the correlation dimension.

There are several fractal curves associated with Sierpiński. The area for the first Sierpiński curve illustrated above (Sierpiński curve 1912) is . The curve is called the Sierpiński curve by Cundy and Rollett (1989, pp. 67-68), the Sierpiński's square snowflake by Wells (1991, p. 229), and is pictured but not named by Steinhaus (1999, pp. 102-103). The th iteration of the first Sierpiński curve is implemented in the Wolfram Language as SierpinskiCurve[n]. The limit of the second Sierpiński's curve illustrated above has area . The Sierpiński arrowhead curve is another Sierpiński curve.

The Koch snowflake is a fractal curve, also known as the Koch island, which was first described by Helge von Koch in 1904. It is built by starting with an equilateral triangle, removing the inner third of each side, building another equilateral triangle at the location where the side was removed, and then repeating the process indefinitely. The Koch snowflake can be simply encoded as a Lindenmayer system with initial string "F--F--F", string rewriting rule "F" -> "F+F--F+F", and angle . The zeroth through third iterations of the construction are shown above. Each fractalized side of the triangle is sometimes known as a Koch curve. The fractal can also be constructed using a base curve and motif, illustrated above. The th iteration of the Koch snowflake is implemented in the Wolfram Language as KochCurve[n]. Let be the number of sides, be the length of a single side, be the length of the perimeter, and the snowflake's..
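The quantities named at the end (number of sides, side length, perimeter) follow the elementary formulas N_n = 3·4^n, s_n = 3^(-n), and p_n = N_n s_n = 3·(4/3)^n for a starting triangle of unit side, since each iteration replaces every side by 4 sides of one-third the length. The sketch below (Python, for illustration) tabulates them exactly:

```python
from fractions import Fraction

def koch_stats(n):
    """After n iterations of the Koch construction on a unit-side triangle:
    number of sides N = 3 * 4^n, side length s = 3^(-n),
    perimeter p = N * s = 3 * (4/3)^n (which diverges as n grows)."""
    N = 3 * 4 ** n
    s = Fraction(1, 3 ** n)
    return N, s, N * s
```

The unbounded perimeter alongside the bounded enclosed area is the hallmark of the curve, and the ratio log 4 / log 3 of the per-step growth gives its capacity dimension.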

The Sierpiński carpet is the fractal illustrated above which may be constructed analogously to the Sierpiński sieve, but using squares instead of triangles. It can be constructed using string rewriting beginning with a cell [1] and iterating the rules (1). The th iteration of the Sierpiński carpet is implemented in the Wolfram Language as MengerMesh[n]. Let be the number of black boxes, the length of a side of a white box, and the fractional area of black boxes after the th iteration. Then (2), (3), (4), (5). The numbers of black cells after , 1, 2, ... iterations are therefore 1, 8, 64, 512, 4096, 32768, 262144, ... (OEIS A001018). The capacity dimension is therefore (6), (7), (8), (9) (OEIS A113210).

A fractal derived from the Koch snowflake. The base curve and motif for the fractal are illustrated below. The area enclosed by pieces of the curve after the th iteration is , where is the area of the original equilateral triangle, so from the derivation for the Koch snowflake, the total area enclosed is .

The Cesàro fractal is a fractal also known as the torn square fractal. The base curves and motifs for the two fractals illustrated above are shown below.Starting with a unit square, the area of the interior for bend angle is given by(K. Sinha, pers. comm., Jul. 20, 2005).

Let be a rational function (1), where , is the Riemann sphere , and and are polynomials without common divisors. The "filled-in" Julia set is the set of points which do not approach infinity after is repeatedly applied (corresponding to a strange attractor). The true Julia set is the boundary of the filled-in set (the set of "exceptional points"). There are two types of Julia sets: connected sets (Fatou set) and Cantor sets (Fatou dust). Quadratic Julia sets are generated by the quadratic mapping (2) for fixed . For almost every , this transformation generates a fractal. Examples are shown above for various values of . The resulting object is not a fractal for (Dufner et al. 1998, pp. 224-226) and (Dufner et al. 1998, pp. 125-126), although it does not seem to be known if these two are the only such exceptional values. The special case of on the boundary of the Mandelbrot set is called a dendrite fractal (top left figure),..

A dimension also called the fractal dimension, Hausdorff dimension, and Hausdorff-Besicovitch dimension in which nonintegral values are permitted. Objects whose capacity dimension is different from their Lebesgue covering dimension are called fractals. The capacity dimension of a compact metric space is a real number d such that if N(ε) denotes the minimum number of open sets of diameter less than or equal to ε, then N(ε) is proportional to ε^(-d) as ε -> 0. Explicitly, d = -lim_(ε->0) (ln N)/(ln ε) (if the limit exists), where N is the number of elements forming a finite cover of the relevant metric space and ε is a bound on the diameter of the sets involved (informally, ε is the size of each element used to cover the set, which is taken to approach 0). If each element of a fractal is equally likely to be visited, then the capacity dimension equals the information dimension. The capacity dimension satisfies , where is the correlation dimension (correcting the typo in Baker and Gollub 1996)...

Define the "information function" to be (1), where is the natural measure, or probability that element is populated, normalized such that (2). The information dimension is then defined by (3), (4). If every element is equally likely to be visited, then is independent of , and (5), so (6) and (7), (8), (9), (10), where is the capacity dimension. It satisfies (11), where is the capacity dimension and is the correlation dimension (correcting the typo in Baker and Gollub 1996).

A fractal which can be constructed using string rewriting beginning with a cell [1] and iterating the rules (1). The size of the unit element after the th iteration is (2) and the number of elements is given by the recurrence relation (3), where , and the first few numbers of elements are 5, 65, 665, 6305, ... (OEIS A118004). Expanding out gives (4). The capacity dimension is therefore (5), (6). Since the dimension of the filled part is 2 (i.e., the square is completely filled), Cantor's square fractal is not a true fractal.

The term "ice fractal" refers to a fractal (square, triangle, etc.) that is based on a simple generating motif. The above plots show the ice triangle, antitriangle, square, and antisquare. The base curves and motifs for the fractals illustrated above are shown below.

Consider the sequence defined by and , where denotes the reverse of a sequence . The first few terms are then 01, 010110, 010110010110011010, .... All words are cubefree (Allouche and Shallit 2003, p. 28, Ex. 1.49). Iterating gives the sequence 0, 1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1, ... (OEIS A118006). Plotting (mod 2), where denotes the th digit of the infinitely iterated sequence, gives the beautiful pattern shown above, known as Reverend Back's abbey floor (Wegner 1982; Siromoney and Subramanian 1983; Allouche and Shallit 2003, pp. 410-411). Note that this plot is identical to the recurrence plot (mod 2).

A root-finding algorithm based on the iteration formula . This method, like Newton's method, has poor convergence properties near any point where the derivative vanishes. A fractal is obtained by applying Householder's method to finding a root of . Coloring the basin of attraction (the set of initial points which converge to the same root) for each root a different color then gives the above plots.

The Cantor function is the continuous but not absolutely continuous function on which may be defined as follows. First, express in ternary. If the resulting ternary digit string contains the digit 1, replace every ternary digit following the 1 by a 0. Next, replace all 2's with 1's. Finally, interpret the result as a binary number which then gives .The Cantor function is a particular case of a devil's staircase (Devaney 1987, p. 110), and can be extended to a function for , with corresponding to the usual Cantor function (Gorin and Kukushkin 2004).Chalice (1991) showed that any real-valued function on which is monotone increasing and satisfies 1. , 2. , 3. is the Cantor function (Chalice 1991; Wagon 2000, p. 132).Gorin and Kukushkin (2004) give the remarkable identityfor integer . For and , 2, ..., this gives the first few values as 1/2, 3/10, 1/5, 33/230, 5/46, 75/874, ... (OEIS A095844 and A095845).M. Trott (pers. comm., June..
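The ternary-to-binary construction described in the first paragraph can be sketched directly with exact rational arithmetic (Python, for illustration; the digit cutoff is an assumption controlling precision for inputs whose ternary expansion never contains a 1):

```python
from fractions import Fraction

def cantor_function(x, digits=60):
    """Cantor function via the rule described above: read off ternary digits,
    truncate after the first digit 1 (zeroing everything that follows),
    replace 2s with 1s, and interpret the result in binary."""
    x = Fraction(x)
    value = Fraction(0)
    half = Fraction(1, 2)          # current binary place value
    for _ in range(digits):
        x *= 3
        d = int(x)                 # next ternary digit of the original input
        x -= d
        if d == 1:
            value += half          # the 1 survives as a binary 1; stop here
            break
        if d == 2:
            value += half          # ternary 2 becomes binary 1
        half /= 2
    return value
```

For example, every x in [1/3, 2/3] maps to 1/2, and x = 1/4 (ternary 0.020202...) maps to binary 0.010101... = 1/3, a standard check of the construction.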

Cantor dust is a fractal that can be constructed using string rewriting beginning with a cell [0] and iterating the rules (1). The th iteration of Cantor dust is implemented in the Wolfram Language as CantorMesh[n, 2]. Let be the number of black boxes, the length of a side of a box, and the fractional area of black boxes after the th iteration, then (2), (3), (4), (5). The number of black squares after , 1, 2, ... iterations is therefore 1, 4, 16, 64, 256, 1024, 4096, 16384, ... (OEIS A000302). The capacity dimension is therefore (6), (7), (8), (9).

The fractal-like figure obtained by performing the same iteration as for the Mandelbrot set, but adding a random component . In the above plot, , where .

Informally, self-similar objects with parameters and are described by a power law such as , where is the "dimension" of the scaling law, known as the Hausdorff dimension. Formally, let be a subset of a metric space . Then the Hausdorff dimension of is the infimum of such that the -dimensional Hausdorff measure of is 0 (which need not be an integer). In many cases, the Hausdorff dimension correctly describes the correction term for a resonator with fractal perimeter in Lorentz's conjecture. However, in general, the proper dimension to use turns out to be the Minkowski-Bouligand dimension (Schroeder 1991).

For a fractal process with values and , the correlation between these two values is given by the Brown function , also known as the Bachelier function, Lévy function, or Wiener function.

The Haferman carpet is the beautiful fractal constructed using string rewriting beginning with a cell [1] and iterating the rules (1) (Allouche and Shallit 2003, p. 407). Taking five iterations gives the beautiful pattern illustrated above. This fractal also appears on the cover of Allouche and Shallit (2003). Let be the number of black boxes, the length of a side of a white box, and the fractional area of black boxes after the th iteration. Then (2), (3). The numbers of black cells after , 1, 2, ... iterations are therefore 1, 4, 61, 424, 4441, 36844, ... (OEIS A118005). The capacity dimension is therefore (4), (5).

The box fractal is a fractal, also called the anticross-stitch curve, which can be constructed using string rewriting beginning with a cell [1] and iterating the rules (1). An outline of the box fractal can be encoded as a Lindenmayer system with initial string "F-F-F-F", string rewriting rule "F" -> "F-F+F+F-F", and angle (J. Updike, pers. comm., Oct. 26, 2004). Let be the number of black boxes, the length of a side of a white box, and the fractional area of black boxes after the th iteration. Then (2), (3), (4), (5). The sequence is then 1, 5, 25, 125, 625, 3125, 15625, ... (OEIS A000351). The capacity dimension is therefore (6), (7), (8), (9) (OEIS A113209).

The Gosper island (Mandelbrot 1977), also known as a flowsnake (Gardner 1989, p. 41), is a fractal that is a modification of the Koch snowflake. The term "Gosper island" was used by Mandelbrot (1977) because this curve bounds the space filled by the Peano-Gosper curve. It has fractal dimension (OEIS A113211). Gosper islands can tile the plane (Gardner 1989, p. 41).

A Julia set fractal obtained by iterating the function , where is the sign function and is the real part of . The plot above sets and uses a maximum of 50 iterations with escape radius 2.

The pentaflake is a fractal with 5-fold symmetry. As illustrated above, five pentagons can be arranged around an identical pentagon to form the first iteration of the pentaflake. This cluster of six pentagons has the shape of a pentagon with five triangular wedges removed. This construction was first noticed by Albrecht Dürer (Dixon 1991). For a pentagon of side length 1, the first ring of pentagons has centers at radius (1), where is the golden ratio. The inradius and circumradius are related by (2), and these are related to the side length by (3). The height is (4), giving a radius of the second ring as (5). Continuing, the th pentagon ring is located at (6). Now, the length of the side of the first pentagon compound is given by (7), so the ratio of side lengths of the original pentagon to that of the compound is (8). We can now calculate the dimension of the pentaflake fractal. Let be the number of black pentagons and the length of side of a pentagon after the iteration, (9), (10). The..

A one-dimensional map whose increments are distributed according to a normal distribution. Let and be values, then their correlation is given by the Brown function . When , and the fractal process corresponds to one-dimensional Brownian motion. If , then and the process is called a persistent process. If , then and the process is called an antipersistent process.

The attractor of the iterated function system given by the set of "fern functions"
f_1(x, y) = (0, 0.16 y)   (1)
f_2(x, y) = (0.85 x + 0.04 y, -0.04 x + 0.85 y + 1.6)   (2)
f_3(x, y) = (0.2 x - 0.26 y, 0.23 x + 0.22 y + 1.6)   (3)
f_4(x, y) = (-0.15 x + 0.28 y, 0.26 x + 0.24 y + 0.44)   (4)
(Barnsley 1993, p. 86; Wagon 1991). These affine transformations are contractions. The tip of the fern (which resembles the black spleenwort variety of fern) is the fixed point of f_2, and the tips of the lowest two branches are the images of the main tip under f_3 and f_4 (Wagon 1991).
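The attractor is easy to sample with the random iteration algorithm ("chaos game"); the sketch below uses the standard coefficient values for the four affine maps, together with the customary selection weights 0.01, 0.85, 0.07, 0.07, which are an assumption not stated in the text:

```python
import random

# Each map (a, b, c, d, e, f) acts as (x, y) -> (a*x + b*y + e, c*x + d*y + f).
FERN_MAPS = [
    (0.00,  0.00,  0.00, 0.16, 0.0, 0.00),   # f1: stem
    (0.85,  0.04, -0.04, 0.85, 0.0, 1.60),   # f2: successively smaller leaflets
    (0.20, -0.26,  0.23, 0.22, 0.0, 1.60),   # f3: largest left leaflet
    (-0.15, 0.28,  0.26, 0.24, 0.0, 0.44),   # f4: largest right leaflet
]
WEIGHTS = [0.01, 0.85, 0.07, 0.07]  # customary, not from the text

def fern_points(n, seed=0):
    """Generate n points of the fern attractor by random iteration."""
    rng = random.Random(seed)
    x, y = 0.0, 0.0
    pts = []
    for _ in range(n):
        a, b, c, d, e, f = rng.choices(FERN_MAPS, weights=WEIGHTS)[0]
        x, y = a * x + b * y + e, c * x + d * y + f
        pts.append((x, y))
    return pts
```

Plotting the returned points reproduces the familiar fern; the attractor fits roughly within x in [-2.2, 2.7] and y in [0, 10].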

The term "fractal dimension" is sometimes used to refer to what is more commonly called the capacity dimension of a fractal (which is, roughly speaking, the exponent D in the expression N(epsilon) = epsilon^(-D), where N(epsilon) is the minimum number of open sets of diameter epsilon needed to cover the set). However, it can more generally refer to any of the dimensions commonly used to characterize fractals (e.g., capacity dimension, correlation dimension, information dimension, Lyapunov dimension, Minkowski-Bouligand dimension).
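The covering-number definition translates directly into the box-counting estimate used in practice. A minimal sketch (helper names are mine), which recovers dimension 2 for a filled square:

```python
from math import log

def box_count(points, eps):
    """Number of eps-by-eps grid boxes occupied by the point set."""
    return len({(int(x // eps), int(y // eps)) for x, y in points})

def dimension_estimate(points, eps1, eps2):
    """Slope of ln N(eps) versus ln(1/eps) between two scales."""
    n1, n2 = box_count(points, eps1), box_count(points, eps2)
    return (log(n2) - log(n1)) / (log(1 / eps2) - log(1 / eps1))

# A filled unit square should have capacity dimension 2
square = [(i / 100, j / 100) for i in range(100) for j in range(100)]
print(dimension_estimate(square, 1 / 2, 1 / 4))  # 2.0
```

In practice one fits the slope over many scales rather than just two, since finite samples distort the count at very small eps.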

A fractal is an object or quantity that displays self-similarity, in a somewhat technical sense, on all scales. The object need not exhibit exactly the same structure at all scales, but the same "type" of structures must appear on all scales. A plot of the quantity on a log-log graph versus scale then gives a straight line, whose slope is said to be the fractal dimension. The prototypical example for a fractal is the length of a coastline measured with different length rulers. The shorter the ruler, the longer the length measured, a paradox known as the coastline paradox.Illustrated above are the fractals known as the Gosper island, Koch snowflake, box fractal, Sierpiński sieve, Barnsley's fern, and Mandelbrot set.

A Julia set consisting of a set of isolated points which is formed by taking a point c outside an underlying set M (e.g., the Mandelbrot set). If the point c is outside but near the boundary of M, the Fatou set resembles the Julia set for nearby points within M. As the point moves further away, however, the set becomes thinner and is called Fatou dust.

A fractal based on iterating the map (1) according to (2) and (3). The plots above show iterations of this map for various starting values and parameters.

A two-dimensional piecewise linear map defined by
x_(n+1) = 1 - y_n + |x_n|,   (1)
y_(n+1) = x_n.   (2)
The map is chaotic in the filled region above and stable in the six hexagonal regions. Each point in the interior hexagon defined by the vertices (0, 0), (1, 0), (2, 1), (2, 2), (1, 2), and (0, 1) has an orbit with period six (except the point (1, 1), which has period 1). Orbits in the other five hexagonal regions circulate from one to the other. There is a unique orbit of period five, with all others having period 30. The points having orbits of period five are (-1, 3), (-1, -1), (3, -1), (5, 3), and (3, 5), indicated in the above figure by the black line. However, there are infinitely many distinct periodic orbits which have an arbitrarily long period.
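These period claims are easy to verify by direct iteration; a small sketch (helper names are mine):

```python
def gingerbread(x, y):
    """One step of the piecewise linear map x' = 1 - y + |x|, y' = x."""
    return 1 - y + abs(x), x

def orbit_period(x, y, max_steps=100):
    """Length of the periodic orbit through (x, y), or None if not found."""
    x0, y0 = x, y
    for k in range(1, max_steps + 1):
        x, y = gingerbread(x, y)
        if (x, y) == (x0, y0):
            return k
    return None

print(orbit_period(1, 1))    # 1  (fixed point)
print(orbit_period(0, 0))    # 6  (hexagon vertex)
print(orbit_period(-1, 3))   # 5  (the unique period-five orbit)
```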

The two-dimensional map (1), (2), where (3) (Zaslavskii 1978). It has correlation exponent nu = 1.5 (Grassberger and Procaccia 1983) and capacity dimension 1.39 (Russell et al. 1980).

Consider an arbitrary one-dimensional map
x_(n+1) = F(x_n)   (1)
(with implicit parameter r) at the onset of chaos. After a suitable rescaling, the Feigenbaum function (2) is obtained. This function satisfies
g(x) = -alpha g(g(x/alpha)),   (3)
with alpha = 2.502907875... Proofs for the existence of an even analytic solution to this equation, sometimes called the Feigenbaum-Cvitanović functional equation, have been given by Campanino and Epstein (1981), Campanino et al. (1982), and Lanford (1982, 1984). The picture above illustrates the Feigenbaum function for the logistic map with r = 3.569945672..., (4), along the real axis (M. Trott, pers. comm., Sept. 9, 2003). The images above show two views of a sculpture presented by Stephen Wolfram to Mitchell Feigenbaum on the occasion of his 60th birthday that depicts the Feigenbaum function in the complex plane. The sculpture (photos courtesy of A. Young) was designed by M. Trott and laser-etched into a block of glass by Bathsheba Grossman (https://www.bathsheba.com/)...

The winding number W of a map with initial value theta_0 is defined by
W = lim_(n->infty) (theta_n - theta_0)/n,
which represents the average increase in the angle theta per unit time (average frequency). A system with a rational winding number is mode-locked, whereas a system with an irrational winding number is quasiperiodic. Note that since the rationals are a set of zero measure on any finite interval, almost all winding numbers will be irrational, so almost all maps will be quasiperiodic.

Consider a set of N points on an attractor; then the correlation integral is
C(r) = lim_(N->infty) n(r)/N^2,
where n(r) is the number of pairs of points whose distance is less than r. For small r,
C(r) ~ r^nu,
where nu is the correlation exponent.
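The correlation exponent can be estimated directly from data as the slope of ln C(r) versus ln r; a minimal sketch (helper names are mine; the factor 2 counts ordered pairs, a convention choice):

```python
from itertools import combinations
from math import dist, log

def correlation_integral(points, r):
    """C(r): fraction of ordered pairs of points closer than r."""
    pairs = sum(1 for p, q in combinations(points, 2) if dist(p, q) < r)
    return 2 * pairs / len(points) ** 2

def correlation_exponent(points, r1, r2):
    """Slope of ln C(r) versus ln r between two small scales."""
    c1 = correlation_integral(points, r1)
    c2 = correlation_integral(points, r2)
    return (log(c2) - log(c1)) / (log(r2) - log(r1))

# Points filling a line segment should give an exponent near 1
line = [(i / 1000, 0.0) for i in range(1000)]
print(correlation_exponent(line, 0.02, 0.04))  # close to 1
```

For real attractor data one averages the slope over a range of small r, since edge effects bias any single pair of scales.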

An attracting set that has zero measure in the embedding phase space and has fractal dimension. Trajectories within a strange attractor appear to skip around randomly. A selection of strange attractors for a general quadratic map
x_(n+1) = a_1 + a_2 x_n + a_3 x_n^2 + a_4 x_n y_n + a_5 y_n + a_6 y_n^2   (1)
y_(n+1) = a_7 + a_8 x_n + a_9 x_n^2 + a_10 x_n y_n + a_11 y_n + a_12 y_n^2   (2)
are illustrated above, where the letters A to Y stand for coefficients of the quadratic ranging from -1.2 to 1.2 in steps of 0.1 (Sprott 1993c). These represent a small selection of the approximately 1.6% of all possible such maps that are chaotic (Sprott 1993bc).

A two-dimensional map also called the Taylor-Greene-Chirikov map in some of the older literature and defined by
I_(n+1) = I_n + K sin theta_n   (1)
theta_(n+1) = theta_n + I_(n+1)   (2)
            = theta_n + I_n + K sin theta_n,   (3)
where I and theta are computed mod 2 pi and K is a positive constant. Surfaces of section for various values of the constant K are illustrated above. An analytic estimate of the width of the chaotic zone (Chirikov 1979) finds
delta I = B e^(-A K^(-1/2)).   (4)
Numerical experiments give A approximately 5.26 and B approximately 240. The value of K at which global chaos occurs has been bounded by various authors. Greene's method is the most accurate method so far devised.

author                         bound   exact   approx.
Hermann                        >       1/34    0.029411764
Celletti and Chierchia (1995)  >       —       0.838
Greene                         ≈       —       0.971635406
MacKay and Percival (1985)     <       63/64   0.984375000
Mather                         <       4/3     1.333333333

Fixed points are found by requiring that
I_(n+1) = I_n   (5)
theta_(n+1) = theta_n.   (6)
The first gives K sin theta_n = 0, so sin theta_n = 0 and
theta_n = 0, pi.   (7)
The second requirement gives
I_n + K sin theta_n = I_n = 0.   (8)
The fixed points are therefore (I, theta) = (0, 0) and (0, pi). In order to perform a linear stability analysis, take differentials of the variables
d I_(n+1) = d I_n + K cos theta_n d theta_n   (9)
d theta_(n+1) = d I_n + (1 + K cos theta_n) d theta_n.   (10)
In matrix form,
[d I_(n+1); d theta_(n+1)] = [1, K cos theta_n; 1, 1 + K cos theta_n][d I_n; d theta_n].   (11)
The eigenvalues are found..
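A minimal iteration routine for generating surfaces of section, taking both variables mod 2 pi (function names are mine):

```python
from math import sin, tau  # tau = 2*pi

def standard_map(I, theta, K):
    """One iteration of the standard (Chirikov) map, mod 2*pi."""
    I_next = (I + K * sin(theta)) % tau
    theta_next = (theta + I_next) % tau
    return I_next, theta_next

def orbit(I0, theta0, K, n):
    """Return n successive (I, theta) points for a surface of section."""
    pts, state = [], (I0, theta0)
    for _ in range(n):
        state = standard_map(*state, K)
        pts.append(state)
    return pts

# The origin is a fixed point: sin(0) = 0, so (0, 0) maps to itself
assert standard_map(0.0, 0.0, 0.9) == (0.0, 0.0)
```

Plotting orbits for many initial conditions at a fixed K reproduces the familiar mixture of invariant circles and chaotic bands.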

An algorithm originally described by Barnsley in 1988. Pick a point at random inside a regular n-gon. Then draw the next point a fraction r of the distance between it and a polygon vertex picked at random. Continue the process (after throwing out the first few points). The result of this "chaos game" is sometimes, but not always, a fractal. The results of the chaos game are shown above for several values of n. The above plots show the chaos game for points in the regular 3-, 4-, 5-, and 6-gons with r = 1/2. The case n = 4, r = 1/2 gives the interior of a square with all points visited with equal probability. The above plots show the chaos game for points in the square with various values of r, including 0.4, 0.5, 0.6, 0.75, and 0.9.
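The procedure can be sketched in a few lines; the helper below (names are mine) inscribes the n-gon in the unit circle and discards an initial transient:

```python
import random
from math import cos, sin, pi

def chaos_game(n, r, n_points, seed=0):
    """Play the chaos game in a regular n-gon inscribed in the unit circle,
    moving a fraction r toward a randomly chosen vertex each step."""
    rng = random.Random(seed)
    verts = [(cos(2 * pi * k / n), sin(2 * pi * k / n)) for k in range(n)]
    x, y = 0.0, 0.0
    pts = []
    for i in range(n_points + 100):
        vx, vy = rng.choice(verts)
        x, y = x + r * (vx - x), y + r * (vy - y)
        if i >= 100:  # throw out the first few (transient) points
            pts.append((x, y))
    return pts

# n = 3, r = 1/2 yields the Sierpinski sieve; all points stay in the unit disk
pts = chaos_game(3, 0.5, 5000)
```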

"Chaos" is a tricky thing to define. In fact, it is much easier to list properties that a system described as "chaotic" has rather than to give a precise definition of chaos.Gleick (1988, p. 306) notes that "No one [of the chaos scientists he interviewed] could quite agree on [a definition of] the word itself," and so instead gives descriptions from a number of practitioners in the field. For example, he quotes Philip Holmes (apparently defining "chaotic") as, "The complicated aperiodic attracting orbits of certain, usually low-dimensional dynamical systems." Similarly, he quotes Bai-Lin Hao describing chaos (roughly) as "a kind of order without periodicity."It turns out that even textbooks devoted to chaos do not really define the term. For example, Wiggins (1990, p. 437) says, "A dynamical system displaying sensitive dependence on initial conditions..

A two-dimensional map which is conjugate to the Hénon map in its nondissipative limit. It is given by (1) and (2).

Isolated resonances in a dynamical system can cause considerable distortion of preserved tori in their neighborhood, but they do not introduce any chaos into a system. However, when two or more resonances are simultaneously present, they will render a system nonintegrable. Furthermore, if they are sufficiently "close" to each other, they will result in the appearance of widespread (large-scale) chaos.To investigate this problem, Walker and Ford (1969) took the integrable Hamiltonianand investigated the effect of adding a 2:2 resonance and a 3:2 resonanceAt low energies, the resonant zones are well-separated. As the energy increases, the zones overlap and a "macroscopic zone of instability" appears. When the overlap starts, many higher-order resonances are also involved so fairly large areas of phase space have their tori destroyed and the ensuing chaos is "widespread" since trajectories are now..

An attractor is a set of states (points in the phase space), invariant under the dynamics, towards which neighboring states in a given basin of attraction asymptotically approach in the course of dynamic evolution. An attractor is defined as the smallest unit which cannot itself be decomposed into two or more attractors with distinct basins of attraction. This restriction is necessary since a dynamical system may have multiple attractors, each with its own basin of attraction. Conservative systems do not have attractors, since the motion is periodic. For dissipative dynamical systems, however, volumes shrink exponentially, so attractors have 0 volume in n-dimensional phase space. A stable fixed point surrounded by a dissipative region is an attractor known as a map sink. Regular attractors (corresponding to 0 Lyapunov characteristic exponents) act as limit cycles, in which trajectories circle around a limiting trajectory which they..

A phenomenon in which a system being forced at an irrational period undergoes rational, periodic motion which persists for a finite range of forcing values. It may occur for strong couplings between natural and forcing oscillation frequencies. The phenomenon can be exemplified in the circle map when, after q iterations of the map, the new angle differs from the initial value by a rational number
theta_(n+q) = theta_n + p.
This is the form of the unperturbed circle map with
theta_(n+1) = theta_n + Omega,
the map winding number
W = Omega.
For Omega not a rational number, the trajectory is quasiperiodic.

Consider the circle map. If K is nonzero, then the motion is periodic in some finite region surrounding each rational Omega. This execution of periodic motion in response to an irrational forcing is known as mode locking. If a plot is made of K versus Omega with the regions of periodic mode-locked parameter space plotted around rational Omega values (the map winding numbers), then the regions are seen to widen upward from 0 at K = 0 to some finite width at K = 1. The region surrounding each rational number is known as an Arnold tongue. At K = 0, the Arnold tongues are an isolated set of measure zero. At K = 1, they form a general Cantor set of dimension d approximately 0.8700 (Rasband 1990, p. 131). In general, an Arnold tongue is defined as a resonance zone emanating out from rational numbers in a two-dimensional parameter space of variables...
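The mode-locked and quasiperiodic regimes can be distinguished numerically from the winding number. The sketch below assumes the standard form of the circle map, theta_(n+1) = theta_n + Omega - (K/(2 pi)) sin(2 pi theta_n); the function name is mine:

```python
from math import sin, pi

def winding_number(Omega, K, n=10000):
    """Average angular increase per iteration of the circle map
    theta_{n+1} = theta_n + Omega - (K/(2*pi)) * sin(2*pi*theta_n)."""
    theta = 0.0
    for _ in range(n):
        theta += Omega - K / (2 * pi) * sin(2 * pi * theta)
    return theta / n

# At K = 0 the winding number is simply Omega (quasiperiodic for irrational Omega)
print(winding_number(0.404, 0.0))  # = Omega = 0.404

# At K = 1 the mode-locked tongues cover almost all of the Omega axis,
# so nearby Omega values tend to share the same rational winding number
print(winding_number(0.51, 1.0))
```

Sweeping Omega at fixed K and plotting W against Omega produces the devil's-staircase picture of the tongues.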

The Sharpe ratio is a risk-adjusted financial measure developed by Nobel laureate William Sharpe. It uses a fund's standard deviation and excess return to determine the reward per unit of risk. The higher a fund's Sharpe ratio, the better the fund's "risk-adjusted" performance. It is given by
S = (R_p - R_f) / sigma_p,
where R_p is the return on the portfolio, R_f is the risk-free return, and sigma_p is the standard deviation of the fund's returns (i.e., the portfolio risk).
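The formula is a one-liner; the sketch below uses hypothetical return figures (the numbers and function name are mine, not from the text) and the sample standard deviation:

```python
from statistics import mean, stdev

def sharpe_ratio(returns, risk_free_rate):
    """S = (mean portfolio return - risk-free rate) / std dev of returns."""
    return (mean(returns) - risk_free_rate) / stdev(returns)

# Hypothetical annual returns of 8%, 12%, 6%, 14% against a 3% risk-free rate
S = sharpe_ratio([0.08, 0.12, 0.06, 0.14], 0.03)
print(round(S, 3))  # 1.917
```

Whether to use the sample or population standard deviation (and whether to annualize) is a convention choice; the sample version is used here.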

Let P be the principal (initial investment), r be the annual compounded rate, the "nominal rate," n be the number of times interest is compounded per year (i.e., the year is divided into n conversion periods), and t be the number of years (the "term"). The interest rate per conversion period is then
i = r/n.   (1)
If interest is compounded n times at an annual rate of r (where, for example, 10% corresponds to r = 0.10), then the effective rate over the time (what an investor would earn if he did not redeposit his interest after each compounding) is
r_eff = (1 + r/n)^n - 1.   (2)
The total amount of holdings A after a time t when interest is re-invested is then
A = P (1 + r/n)^(nt).   (3)
Note that even if interest is compounded continuously, the return is still finite since
lim_(n->infty) (1 + r/n)^(nt) = e^(rt),   (4)
where e is the base of the natural logarithm. The time t required for a given principal to double (assuming n = 1 conversion period) is given by solving
2P = P (1 + r)^t,   (5)
or
t = ln 2 / ln(1 + r),   (6)
where ln is the natural logarithm. This function can be approximated by the so-called..
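These formulas can be checked directly (function names are mine):

```python
from math import log

def compound_amount(P, r, n, t):
    """Holdings after t years at nominal annual rate r, compounded n times/year."""
    return P * (1 + r / n) ** (n * t)

def effective_rate(r, n):
    """Effective annual rate for nominal rate r compounded n times per year."""
    return (1 + r / n) ** n - 1

def doubling_time(r):
    """Years to double at annual rate r, compounded once per year."""
    return log(2) / log(1 + r)

print(compound_amount(1000, 0.10, 12, 1))  # monthly compounding, ~1104.71
print(effective_rate(0.10, 12))            # ~0.10471
print(doubling_time(0.10))                 # ~7.27 years
```

Note that monthly compounding at a 10% nominal rate already earns nearly half a percent more per year than annual compounding, and the continuous limit e^r - 1 is only slightly higher still.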

The simple first-order difference equation (1), where (2) and (3) and (4) and (5) are the price-demand and price-supply curves, where m and b represent the slope and y-intercept, respectively, for the demand curve, with the analogous constants for the supply curve (Ezekiel 1938, Goldberg 1986). A class of behaviors related to this equation is known as "cobweb phenomena" in economics.

Actuarial science is the study of risk through the use of mathematics, probability, and statistics. A person who performs risk assessment is known as an actuary. Actuaries typically are employed in financial, insurance, pensions, and other related sectors.Actuarial science is similar to medicine in that a lot of time must be taken for schooling and taking examinations, but salaries are typically rather high.The Season 1 episode "Sacrifice" (2005) of the television crime drama NUMB3RS mentions actuarial science.
