
A set of n vectors can be tested to see if they span n-dimensional space using the following Wolfram Language function: SpanningVectorsQ[m_List?MatrixQ] := (NullSpace[m] == {})

A lossless data compression algorithm which uses a small number of bits to encode common characters. Huffman coding approximates the probability for each character as a power of 1/2 to avoid complications associated with using a nonintegral number of bits to encode characters using their actual probabilities. Huffman coding works on a list of weights by building an extended binary tree with minimum weighted external path length and proceeds by finding the two smallest weights w_1 and w_2, viewed as external nodes, and replacing them with an internal node of weight w_1 + w_2. The procedure is then repeated stepwise until the root node is reached. An individual external node can then be encoded by a binary string of 0s (for left branches) and 1s (for right branches). The procedure is summarized below for the weights 2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, and 41 given by the first 13 primes, and the resulting tree is shown above (Knuth 1997, pp. 402-403). As is clear..
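The merge-two-smallest procedure described above can be sketched in Python using a priority queue; `huffman_codes` is a hypothetical helper name, and this is an illustrative sketch rather than the Wolfram Language implementation the entry assumes.

```python
import heapq

def huffman_codes(weights):
    """Build a Huffman code for a dict {symbol: weight} and return
    {symbol: bitstring}. Repeatedly merges the two smallest weights
    into an internal node; left branches get '0', right branches '1'."""
    # counter i breaks ties so symbols themselves are never compared
    heap = [(w, i, sym) for i, (sym, w) in enumerate(weights.items())]
    heapq.heapify(heap)
    if len(heap) == 1:
        return {heap[0][2]: "0"}
    count = len(heap)
    while len(heap) > 1:
        w1, _, left = heapq.heappop(heap)    # smallest weight
        w2, _, right = heapq.heappop(heap)   # second smallest
        heapq.heappush(heap, (w1 + w2, count, (left, right)))
        count += 1
    codes = {}
    def walk(node, prefix):
        if isinstance(node, tuple):          # internal node
            walk(node[0], prefix + "0")
            walk(node[1], prefix + "1")
        else:                                # external (leaf) node
            codes[node] = prefix
    walk(heap[0][2], "")
    return codes
```

With weights {a: 1, b: 1, c: 2}, the two weight-1 leaves merge first, so c receives a 1-bit code and a, b receive 2-bit codes.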

A random permutation is a permutation containing a fixed number n of elements taken as a random selection from a given set of elements. There are two main algorithms for constructing random permutations. The first constructs a vector of random real numbers and uses them as keys to records containing the integers 1 to n. The second starts with an arbitrary permutation and then exchanges the ith element with a randomly selected one from the first i elements for i = n, n - 1, ..., 2 (Skiena 1990). A random permutation on the integers 1 to n can be implemented in the Wolfram Language as RandomSample[Range[n]]. A random permutation in the permutation group pg can be computed using RandomPermutation[pg], and n such random permutations by RandomPermutation[pg, n]. Random permutations in the symmetric group of degree d can be computed using RandomPermutation[d, n]. There are an average of n(n - 1)/4 permutation inversions in a permutation on n elements (Skiena 1990, p. 29). The expected number of permutation..
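The second (exchange) algorithm above is the classic Fisher-Yates shuffle; a minimal Python sketch, with the hypothetical name `random_permutation`:

```python
import random

def random_permutation(n):
    """Return a random permutation of 1..n by the exchange method:
    start from the identity and swap the element at position i with a
    randomly chosen one among the first i+1 positions, working downward."""
    p = list(range(1, n + 1))
    for i in range(n - 1, 0, -1):    # 0-based positions n-1 .. 1
        j = random.randint(0, i)     # uniform over positions 0..i
        p[i], p[j] = p[j], p[i]
    return p
```

Each of the n! permutations is produced with equal probability, since the ith step makes an independent uniform choice among i+1 alternatives.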

A generalized Vandermonde matrix of two sequences a and b, where a is an increasing sequence of positive integers and b is an increasing sequence of nonnegative integers of the same length, is the outer product of a and b with multiplication operation given by the power function. The generalized Vandermonde matrix can be implemented in the Wolfram Language as Vandermonde[a_List?VectorQ, b_List?VectorQ] := Outer[Power, a, b] /; Equal @@ Length /@ {a, b}A generalized Vandermonde matrix is a minor of a Vandermonde matrix. Alternatively, it has the same form as a Vandermonde matrix, where a is an increasing sequence of positive integers, except now b is any increasing sequence of nonnegative integers. In the special case of a Vandermonde matrix, b = (0, 1, ..., n - 1). While there is no general formula for the determinant of a generalized Vandermonde matrix, its determinant is always positive. Since any minor of a generalized Vandermonde matrix is also a generalized Vandermonde..

A random matrix is a matrix of given type and size whose entries consist of random numbers from some specified distribution. Random matrix theory is cited as one of the "modern tools" used in Catherine's proof of an important result in prime number theory in the 2005 film Proof. For a real matrix with elements having a standard normal distribution, the expected number of real eigenvalues is given by(1)(2)where the first factor is a hypergeometric function and the second a beta function (Edelman et al. 1994, Edelman and Kostlan 1994). The expected number has asymptotic behavior(3)Let p_(n,k) be the probability that there are exactly k real eigenvalues in the complex spectrum of the matrix. Edelman (1997) showed that(4)which is the smallest probability of all the p_(n,k)s. The entire probability function of the number of expected real eigenvalues in the spectrum of a Gaussian real random matrix was derived by Kanzieper and Akemann (2005) as(5)where(6)(7)In (6), the summation runs over all partitions..

Gaussian elimination is a method for solving matrix equations of the form(1)To perform Gaussian elimination starting with the system of equations(2)compose the "augmented matrix equation"(3)Here, the column vector in the variables is carried along for labeling the matrix rows. Now, perform elementary row operations to put the augmented matrix into the upper triangular form(4)Solve the equation of the nth row for x_n, then substitute back into the equation of the (n-1)st row to obtain a solution for x_(n-1), etc., according to the formula(5)In the Wolfram Language, RowReduce performs a version of Gaussian elimination, with the equation being solved by GaussianElimination[m_?MatrixQ, v_?VectorQ] := Last /@ RowReduce[Flatten /@ Transpose[{m, v}]]LU decomposition of a matrix is frequently used as part of a Gaussian elimination process for solving a matrix equation. A matrix that has undergone Gaussian elimination is said to be in echelon form. For..
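The forward-elimination and back-substitution steps described above can be sketched directly in Python (with partial pivoting added for numerical stability; `gaussian_elimination` is a hypothetical helper name, not the Wolfram Language routine quoted in the text):

```python
def gaussian_elimination(a, b):
    """Solve a x = b for a square system: forward elimination with
    partial pivoting, then back substitution from the nth row upward."""
    n = len(a)
    # work on an augmented copy [a | b] so the inputs are not modified
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for col in range(n):
        # partial pivoting: swap up the row with the largest entry
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        # eliminate entries below the pivot
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= f * m[col][c]
    # back substitution: solve row n for x_n, substitute upward
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = m[r][n] - sum(m[r][c] * x[c] for c in range(r + 1, n))
        x[r] = s / m[r][r]
    return x
```

For the system 2x + y = 3, x + 3y = 5 this returns x = 0.8, y = 1.4.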

A square matrix U is a unitary matrix if(1)where U^H denotes the conjugate transpose and U^(-1) is the matrix inverse. For example,(2)is a unitary matrix. Unitary matrices leave the length of a complex vector unchanged. For real matrices, unitary is the same as orthogonal. In fact, there are some similarities between orthogonal matrices and unitary matrices. The rows of a unitary matrix are a unitary basis. That is, each row has length one, and their Hermitian inner product is zero. Similarly, the columns are also a unitary basis. In fact, given any unitary basis, the matrix whose rows are that basis is a unitary matrix. It is automatically the case that the columns are another unitary basis. A matrix can be tested to see if it is unitary using the Wolfram Language function: UnitaryQ[m_List?MatrixQ] := (Conjugate @ Transpose @ m . m == IdentityMatrix @ Length @ m)The definition of a unitary matrix guarantees that(3)where I is the identity matrix. In particular,..

The permanent is an analog of a determinant where all the signs in the expansion by minors are taken as positive. The permanent of a matrix m is the coefficient of x_1 x_2 ... x_n in(1)(Vardi 1991). Another equation is the Ryser formula(2)where the sum is over all subsets s of {1, ..., n}, and |s| is the number of elements in s (Vardi 1991). Muir (1960, p. 19) uses the notation to denote a permanent. The permanent can be implemented in the Wolfram Language as Permanent[m_List] := With[{v = Array[x, Length[m]]}, Coefficient[Times @@ (m.v), Times @@ v] ]The computation of permanents has been studied fairly extensively in algebraic complexity theory. The complexity of the best-known algorithms grows exponentially with the matrix size (Knuth 1998, p. 499), which would appear to be very surprising, given the permanent's similarity to the tractable determinant. Computation of the permanent is #P-complete (i.e., sharp-P complete; Valiant 1979). If m is a unitary matrix, then(3)(Minc..
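The Ryser formula mentioned above can be written out in Python; this is an illustrative sketch (with the hypothetical name `permanent_ryser`) of the inclusion-exclusion sum per(m) = (-1)^n Σ_S (-1)^|S| Π_i Σ_{j∈S} m[i][j] over nonempty column subsets S:

```python
from itertools import combinations

def permanent_ryser(m):
    """Permanent of a square matrix via the Ryser formula:
    sum over subsets of columns, with sign (-1)^|S|, of the product
    over rows of the row-sums restricted to those columns."""
    n = len(m)
    total = 0
    for k in range(1, n + 1):                # subset sizes (empty set gives 0)
        for cols in combinations(range(n), k):
            prod = 1
            for row in m:
                prod *= sum(row[j] for j in cols)
            total += (-1) ** k * prod
    return (-1) ** n * total
```

For [[1, 2], [3, 4]] this gives 1·4 + 2·3 = 10, i.e. the determinant expansion with all signs positive.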

Trigonometric functions of n pi/7 for an integer n cannot be expressed in terms of sums, products, and finite root extractions on real rational numbers because 7 is not a Fermat prime. This also means that the heptagon is not a constructible polygon. However, exact expressions involving roots of complex numbers can still be derived either using the trigonometric identity(1)with n = 7 or by expressing the trigonometric functions in terms of complex exponentials and simplifying the resulting expression. Letting denote the th root of the polynomial using the ordering of the Wolfram Language's Root function gives the following algebraic root representations for trigonometric functions with argument pi/7,(2)(3)(4)(5)(6)(7)with argument 2 pi/7,(8)(9)(10)(11)(12)(13)and with argument 3 pi/7,(14)(15)(16)(17)(18)(19)Root and Galois-minimal expressions can be obtained using Wolfram Language code such as the following: RootReduce[TrigToRadicals[Sin[Pi/7]]] Developer`TrigToRadicals[Sin[Pi/7]]Combinations..

A matrix A is an orthogonal matrix if(1)where A^T is the transpose of A and I is the identity matrix. In particular, an orthogonal matrix is always invertible, and(2)In component form,(3)This relation makes orthogonal matrices particularly easy to compute with, since the transpose operation is much simpler than computing an inverse. For example,(4)(5)are orthogonal matrices. A matrix can be tested to see if it is orthogonal using the Wolfram Language code: OrthogonalMatrixQ[m_List?MatrixQ] := (Transpose[m].m == IdentityMatrix @ Length @ m)The rows of an orthogonal matrix are an orthonormal basis. That is, each row has length one, and the rows are mutually perpendicular. Similarly, the columns are also an orthonormal basis. In fact, given any orthonormal basis, the matrix whose rows are that basis is an orthogonal matrix. It is automatically the case that the columns are another orthonormal basis. The orthogonal matrices are precisely those matrices..
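The defining condition A^T A = I can be checked numerically; a Python sketch (hypothetical name `is_orthogonal`, with a tolerance since floating-point entries rarely satisfy the identity exactly):

```python
def is_orthogonal(m, tol=1e-10):
    """Check that m^T m is the identity matrix, i.e. the columns
    (equivalently, the rows) form an orthonormal basis."""
    n = len(m)
    for i in range(n):
        for j in range(n):
            # (m^T m)[i][j] = dot product of columns i and j
            dot = sum(m[k][i] * m[k][j] for k in range(n))
            target = 1.0 if i == j else 0.0
            if abs(dot - target) > tol:
                return False
    return True
```

A rotation matrix such as [[0.6, -0.8], [0.8, 0.6]] passes, while a shear like [[1, 1], [0, 1]] does not.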

A square matrix A is a normal matrix if [A, A^H] = 0, where [a, b] is the commutator and ^H denotes the conjugate transpose. For example, the matrix(2)is a normal matrix, but is not a Hermitian matrix. A matrix can be tested to see if it is normal using the Wolfram Language function: NormalMatrixQ[a_List?MatrixQ] := Module[ {b = Conjugate @ Transpose @ a}, a.b === b.a ]Normal matrices arise, for example, from a normal equation. The normal matrices are the matrices which are unitarily diagonalizable, i.e., A is a normal matrix iff there exists a unitary matrix U such that U^H A U is a diagonal matrix. All Hermitian matrices are normal and have real eigenvalues, whereas a general normal matrix has no such restriction on its eigenvalues. All normal matrices are diagonalizable, but not all diagonalizable matrices are normal. The following table gives the number of normal square matrices of given types for orders n = 1, 2, ....
type  OEIS     counts
      A055547  2, 8, 68, 1124, ...
      A055548  2, 12, 80, 2096, ...
      A055549  3,..

For every even dimension 2n, the symplectic group is the group of 2n×2n matrices which preserve a nondegenerate antisymmetric bilinear form omega, i.e., a symplectic form. Every symplectic form can be put into a canonical form by finding a symplectic basis. So, up to conjugation, there is only one symplectic group, in contrast to the orthogonal group which preserves a nondegenerate symmetric bilinear form. As with the orthogonal group, the columns of a symplectic matrix form a symplectic basis. Since the top power of omega is a volume form, the symplectic group preserves volume and vector space orientation. Hence, the symplectic group is a subgroup of the special linear group. In fact, in dimension 2 the symplectic group is just the group of 2×2 matrices with determinant 1. The three symplectic (0,1)-matrices are therefore(1)The matrices(2)and(3)are in , where(4)In fact, both of these examples are 1-parameter subgroups. A matrix can be tested to see if it is symplectic using the Wolfram Language code: SymplecticForm[n_Integer] := Join[PadLeft[IdentityMatrix[n], {n,..

A symmetric matrix is a square matrix A that satisfies(1)where A^T denotes the transpose, so a_ij = a_ji. This also implies(2)where I is the identity matrix. For example,(3)is a symmetric matrix. Hermitian matrices are a useful generalization of symmetric matrices for complex matrices. A matrix can be tested to see if it is symmetric using the Wolfram Language code: SymmetricQ[m_List?MatrixQ] := (m === Transpose[m])Written explicitly, the elements of a symmetric matrix have the form(4)The symmetric part of any matrix may be obtained from(5)A matrix A is symmetric if it can be expressed in the form(6)where Q is an orthogonal matrix and D is a diagonal matrix. This is equivalent to the matrix equation(7)which is equivalent to(8)for all , where . Therefore, the diagonal elements of D are the eigenvalues of A, and the columns of Q are the corresponding eigenvectors. The numbers of symmetric matrices of order n on s symbols are , , , , ..., . Therefore, for (0,1)-matrices, the..

A minor is the reduced determinant of a determinant expansion that is formed by omitting the ith row and jth column of a matrix A. So, for example, the minor of the above matrix is given byThe (i,j)th minor can be computed in the Wolfram Language using Minor[m_List?MatrixQ, {i_Integer, j_Integer}] := Det[Drop[Transpose[Drop[Transpose[m], {j}]], {i}]]The Wolfram Language's built-in Minors[m] command instead gives the minors of a matrix obtained by deleting the (n-i+1)st row and (n-j+1)st column of m, while Minors[m, k] gives the kth minors of m. The Minor code above therefore corresponds to the (i,j)th entry of MinorMatrix[m_List?MatrixQ] := Map[Reverse, Minors[m], {0, 1}]i.e., the definition Minor[m, {i, j}] is equivalent to MinorMatrix[m][[i, j]].
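The delete-row-and-column construction can be sketched in Python; `minor` and `det` are hypothetical helper names, with the determinant computed by Laplace expansion for illustration:

```python
def det(m):
    """Determinant by Laplace (cofactor) expansion along the first row."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] *
               det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

def minor(m, i, j):
    """(i, j) minor, 1-indexed: the determinant of m with
    row i and column j omitted."""
    reduced = [row[:j - 1] + row[j:]
               for r, row in enumerate(m, start=1) if r != i]
    return det(reduced)
```

For m = [[1, 2, 3], [4, 5, 6], [7, 8, 10]], minor(m, 1, 1) is det([[5, 6], [8, 10]]) = 2.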

Pairs of partitions for a single number whose Ferrers diagrams transform into each other when reflected about the line y = -x, with the coordinates of the upper left dot taken as (0, 0), are called conjugate (or transpose) partitions. For example, the conjugate partitions illustrated above correspond to the partitions and of 15. A partition that is conjugate to itself is said to be a self-conjugate partition. The conjugate partition of a given partition can be implemented in the Wolfram Language as follows: ConjugatePartition[l_List] := Module[ {i, r = Reverse[l], n = Length[l]}, Table[ n + 1 - Position[r, _?(# >= i&), Infinity, 1][[1, 1]], {i, l[[1]]} ] ]
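The reflection of the Ferrers diagram amounts to counting, for each k, how many parts are at least k; a Python sketch with the hypothetical name `conjugate_partition`:

```python
def conjugate_partition(p):
    """Conjugate (transpose) of a partition given as a weakly
    decreasing list of parts: the kth part of the conjugate is the
    number of parts of p of size >= k (the kth column of the
    Ferrers diagram becomes the kth row)."""
    return [sum(1 for part in p if part >= k)
            for k in range(1, p[0] + 1)]
```

For example, the conjugate of [4, 3, 1] is [3, 2, 2, 1], conjugating twice recovers the original partition, and [3, 2, 1] is self-conjugate.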

The minimal polynomial of a matrix A is the monic polynomial in A of smallest degree such that(1)The minimal polynomial divides any polynomial p with p(A) = 0 and, in particular, it divides the characteristic polynomial. If the characteristic polynomial factors as(2)then its minimal polynomial is given by(3)for some positive integers b_k, where the b_k satisfy 1 <= b_k <= a_k. For example, the characteristic polynomial of the n×n zero matrix is x^n, while its minimal polynomial is x. However, the characteristic polynomial and minimal polynomial of(4)are both . The following Wolfram Language code will find the minimal polynomial for the square matrix a in the variable x. MatrixMinimalPolynomial[a_List?MatrixQ,x_]:=Module[ { i, n=1, qu={}, mnm={Flatten[IdentityMatrix[Length[a]]]} }, While[Length[qu]==0, AppendTo[mnm,Flatten[MatrixPower[a,n]]]; qu=NullSpace[Transpose[mnm]]; n++ ]; First[qu].Table[x^i,{i,0,n-1}] ]..

The companion matrix to a monic polynomial(1)is the square matrix(2)with ones on the subdiagonal and the last column given by the coefficients of . Note that in the literature, the companion matrix is sometimes defined with the rows and columns switched, i.e., the transpose of the above matrix.When is the standard basis, a companion matrix satisfies(3)for , as well as(4)including(5)The matrix minimal polynomial of the companion matrix is therefore , which is also its characteristic polynomial.Companion matrices are used to write a matrix in rational canonical form. In fact, any matrix whose matrix minimal polynomial has polynomial degree is similar to the companion matrix for . The rational canonical form is more interesting when the degree of is less than .The following Wolfram Language command gives the companion matrix for a polynomial in the variable . CompanionMatrix[p_, x_] := Module[ {n, w = CoefficientList[p, x]}, w = -w/Last[w];..
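The ones-on-the-subdiagonal construction can be sketched in Python (hypothetical name `companion_matrix`, taking the lower coefficients of a monic polynomial):

```python
def companion_matrix(coeffs):
    """Companion matrix of the monic polynomial
    p(x) = x^n + c[n-1] x^(n-1) + ... + c[1] x + c[0],
    given coeffs = [c0, c1, ..., c(n-1)]: ones on the subdiagonal,
    last column equal to the negated coefficients."""
    n = len(coeffs)
    m = [[0] * n for _ in range(n)]
    for i in range(1, n):
        m[i][i - 1] = 1          # subdiagonal of ones
    for i in range(n):
        m[i][n - 1] = -coeffs[i]  # last column: -c_i
    return m
```

For p(x) = x^2 - 3x + 2 the companion matrix is [[0, -2], [1, 3]]; its trace 3 and determinant 2 match the coefficient relations for the roots 1 and 2, consistent with p being both its minimal and characteristic polynomial.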

Given a set of equations in variables x_1, ..., x_n, written explicitly as(1)or more explicitly as(2)the Jacobian matrix, sometimes simply called "the Jacobian" (Simon and Blume 1994), is defined by(3)The determinant of J is the Jacobian determinant (confusingly, often called "the Jacobian" as well) and is denoted(4)The Jacobian matrix and determinant can be computed in the Wolfram Language using JacobianMatrix[f_List?VectorQ, x_List] := Outer[D, f, x] /; Equal @@ (Dimensions /@ {f, x}) JacobianDeterminant[f_List?VectorQ, x_List] := Det[JacobianMatrix[f, x]] /; Equal @@ (Dimensions /@ {f, x})Taking the differential(5)shows that J is the determinant of the matrix , and therefore gives the ratios of n-dimensional volumes (contents) in and ,(6)It therefore appears, for example, in the change of variables theorem. The concept of the Jacobian can also be applied to n functions in more than n variables. For example, considering..

An axiom proposed by Huntington (1933) as part of his definition of a Boolean algebra,(1)where ! denotes NOT and \[Or] denotes OR. Taken together, the three axioms consisting of (1), commutativity(2)and associativity(3)are equivalent to the axioms of Boolean algebra. The Huntington operator can be defined in the Wolfram Language by: Huntington := Function[{x, y}, ! (! x \[Or] y) \[Or] ! (! x \[Or] ! y)]That the Huntington axiom is a true statement in Boolean algebra can be verified by examining its truth table:
x  y  !(!x \[Or] y) \[Or] !(!x \[Or] !y)
T  T  T
T  F  T
F  T  F
F  F  F
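The truth table above can be verified mechanically; a Python sketch (hypothetical name `huntington`) showing that the Huntington expression reduces to x for every assignment, which is exactly what the axiom asserts:

```python
def huntington(x, y):
    """Huntington operator built from NOT and OR only:
    h(x, y) = !(!x | y) | !(!x | !y).
    In Boolean algebra this simplifies to x."""
    return (not ((not x) or y)) or (not ((not x) or (not y)))

# exhaustive check over all four truth assignments
table = [(x, y, huntington(x, y))
         for x in (True, False) for y in (True, False)]
```

Enumerating all four rows reproduces the table: the result column equals the x column.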

A square matrix U is a special unitary matrix if(1)where I is the identity matrix and U^H is the conjugate transpose matrix, and the determinant is(2)The first condition means that U is a unitary matrix, and the second condition provides a restriction beyond a general unitary matrix, which may have determinant e^(i theta) for any real theta. For example,(3)is a special unitary matrix. A matrix can be tested to see if it is a special unitary matrix using the Wolfram Language function SpecialUnitaryQ[m_List?MatrixQ] := (Conjugate @ Transpose @ m . m == IdentityMatrix @ Length @ m && Det[m] == 1)The special unitary matrices are closed under multiplication and the inverse operation, and therefore form a matrix group called the special unitary group.

A block diagonal matrix, also called a diagonal block matrix, is a square diagonal matrix in which the diagonal elements are square matrices of any size (possibly even 1×1), and the off-diagonal elements are 0. A block diagonal matrix is therefore a block matrix in which the blocks off the diagonal are the zero matrices, and the diagonal matrices are square. Block diagonal matrices can be constructed out of submatrices in the Wolfram Language using the following code snippet: BlockDiagonalMatrix[b : {__?MatrixQ}] := Module[{r, c, n = Length[b], i, j}, {r, c} = Transpose[Dimensions /@ b]; ArrayFlatten[ Table[If[i == j, b[[i]], ConstantArray[0, {r[[i]], c[[j]]}]], {i, n}, {j, n} ] ] ]

A square matrix A is a special orthogonal matrix if(1)where I is the identity matrix, and the determinant satisfies(2)The first condition means that A is an orthogonal matrix, and the second restricts the determinant to +1 (while a general orthogonal matrix may have determinant +1 or -1). For example,(3)is a special orthogonal matrix since(4)and its determinant is +1. A matrix can be tested to see if it is a special orthogonal matrix using the Wolfram Language code SpecialOrthogonalQ[m_List?MatrixQ] := (Transpose[m] . m == IdentityMatrix @ Length @ m && Det[m] == 1)The special orthogonal matrices are closed under multiplication and the inverse operation, and therefore form a matrix group called the special orthogonal group.

Householder (1953) first considered the matrix that now bears his name in the first couple of pages of his book. A Householder matrix for a real vector can be implemented in the Wolfram Language as: HouseholderMatrix[v_?VectorQ] := IdentityMatrix[Length[v]] - 2 Transpose[{v}] . {v} / (v.v)Trefethen and Bau (1997) gave an incorrect version of the formula for complex . D. Laurie gave a correct version by interpreting reflection along a given direction not as(1)where(2)is the projection onto the hyperplane orthogonal to (since this is in general not a unitary transformation), but as(3)Lehoucq (1996) independently gave an interpretation that still uses the formula , but choosing to be unitary.

An array is a "list of lists" with the length of each level of list the same. The size (sometimes called the "shape") of a d-dimensional array is then indicated as . The most common type of array encountered is the two-dimensional rectangular array having m columns and n rows. If m = n, a square array results. Sometimes, the order of the elements in an array is significant (as in a matrix), whereas at other times, arrays which are equivalent modulo reflections (and rotations, in the case of a square array) are considered identical (as in a magic square or prime array). In the Wolfram Language, an array of depth d is represented using nested lists, and can be generated using the command Array[a, i, j, ...]. Similarly, the dimensions of an array can be found using Dimensions[t], and the command ArrayQ[expr] tests if an expression is a full array. Taking for example t=Array[a,{2,2,2,3}]gives the depth-4 list {{{{a[1,1,1,1],a[1,1,1,2],a[1,1,1,3]},..

An antisymmetric matrix A is a square matrix that satisfies the identity(1)where A^T is the matrix transpose. For example,(2)is antisymmetric. Antisymmetric matrices are commonly called "skew symmetric matrices" by mathematicians. A matrix may be tested to see if it is antisymmetric using the Wolfram Language function AntisymmetricQ[m_List?MatrixQ] := (m === -Transpose[m])In component notation, this becomes(3)Letting i = j, the requirement becomes(4)so an antisymmetric matrix must have zeros on its diagonal. The general antisymmetric matrix is of the form(5)Applying to both sides of the antisymmetry condition gives(6)Any square matrix can be expressed as the sum of symmetric and antisymmetric parts. Write(7)But(8)(9)so(10)which is symmetric, and(11)which is antisymmetric. All antisymmetric matrices of odd dimension are singular. This follows from the fact that(12)So, by the properties of determinants,(13)(14)Therefore,..
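The symmetric-plus-antisymmetric decomposition described above is (A + A^T)/2 + (A - A^T)/2; a Python sketch with the hypothetical name `sym_antisym_parts`:

```python
def sym_antisym_parts(m):
    """Split a square matrix into its symmetric part (m + m^T)/2 and
    antisymmetric part (m - m^T)/2; the two parts always sum to m,
    and the antisymmetric part has zeros on its diagonal."""
    n = len(m)
    sym = [[(m[i][j] + m[j][i]) / 2 for j in range(n)] for i in range(n)]
    anti = [[(m[i][j] - m[j][i]) / 2 for j in range(n)] for i in range(n)]
    return sym, anti
```

For [[1, 2], [4, 3]] the symmetric part is [[1, 3], [3, 3]] and the antisymmetric part is [[0, -1], [1, 0]].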

A square matrix is called Hermitian if it is self-adjoint. Therefore, a Hermitian matrix A is defined as one for which(1)where A^H denotes the conjugate transpose. This is equivalent to the condition(2)where the bar denotes the complex conjugate. As a result of this definition, the diagonal elements of a Hermitian matrix are real numbers (since each must equal its own conjugate), while other elements may be complex. Examples of Hermitian matrices include(3)and the Pauli matrices(4)(5)(6)Examples of Hermitian matrices include(7)An integer or real matrix is Hermitian iff it is symmetric. A matrix can be tested to see if it is Hermitian using the Wolfram Language function HermitianQ[m_List?MatrixQ] := (m === Conjugate@Transpose@m)Hermitian matrices have real eigenvalues whose eigenvectors form a unitary basis. For real matrices, Hermitian is the same as symmetric. Any matrix which is not Hermitian can be expressed as the sum of a Hermitian matrix and an antihermitian matrix using(8)Let..

A square matrix is antihermitian if it satisfies(1)where is the adjoint. For example, the matrix(2)is an antihermitian matrix. Antihermitian matrices are often called "skew Hermitian matrices" by mathematicians.A matrix can be tested to see if it is antihermitian using the Wolfram Language function AntihermitianQ[m_List?MatrixQ] := (m === -Conjugate[Transpose[m]])The set of antihermitian matrices is a vector space, and the commutator(3)of two antihermitian matrices is antihermitian. Hence, the antihermitian matrices are a Lie algebra, which is related to the Lie group of unitary matrices. In particular, suppose is a path of unitary matrices through , i.e.,(4)for all , where is the adjoint and is the identity matrix. The derivative at of both sides must be equal so(5)That is, the derivative of at the identity must be antihermitian.The matrix exponential map of an antihermitianmatrix is a unitary matrix...

A formula which transforms a given coordinate system by rotating it through a counterclockwise angle phi about an axis n. Referring to the above figure (Goldstein 1980), the equation for the "fixed" vector in the transformed coordinate system (i.e., the above figure corresponds to an alias transformation), is(1)(2)(3)(Goldstein 1980; Varshalovich et al. 1988, p. 24). The angle phi and unit normal n may also be expressed as Euler angles. In terms of the Euler parameters,(4)The rotation matrix can be calculated in the Wolfram Language as follows: With[{n = {nx, ny, nz}}, Cos[phi] IdentityMatrix[3] + (1 - Cos[phi]) Outer[Times, n, n] + Sin[phi] {{0, n[[3]], -n[[2]]}, {-n[[3]], 0, n[[1]]}, {n[[2]], -n[[1]], 0}} ]

A sequence a_1, a_2, ..., a_n forms a (binary) heap if it satisfies a_(floor(i/2)) <= a_i for 2 <= i <= n, where floor(x) is the floor function, which is equivalent to a_i <= a_(2i) and a_i <= a_(2i+1) for 1 <= i <= floor(n/2). The first member must therefore be the smallest. A heap can be viewed as a labeled binary tree in which the label of the ith node is smaller than the labels of any of its descendents (Skiena 1990, p. 35). Heaps support arbitrary insertion and seeking/deletion of the minimum value in O(lg n) time per update (Skiena 1990, p. 38). A list can be converted to a heap in O(n) time using an algorithm due to Floyd (1964). A binary heap can be generated from a permutation using Heapify[p] in the Wolfram Language package Combinatorica` . For example, given the random permutation , Floyd's algorithm gives the heap (left figure). The right figure shows a heap containing 30 elements. A permutation can be tested to see if it is a heap using the following Wolfram Language functions. HeapQ[a_List] := Module[{i, n = Length[a]}, And @@ Table[a[[Floor[i/2]]]..
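The defining inequality a_(floor(i/2)) <= a_i can be checked directly; a Python sketch (hypothetical name `is_heap`) that converts the 1-based indices of the definition to Python's 0-based lists:

```python
def is_heap(a):
    """Test the binary (min-)heap property: in 1-based indexing,
    a[floor(i/2)] <= a[i] for every 2 <= i <= n.  Element i - 1 of
    the Python list holds the 1-indexed element a_i."""
    n = len(a)
    return all(a[i // 2 - 1] <= a[i - 1] for i in range(2, n + 1))
```

For example, [1, 3, 2, 7, 11, 5, 13] satisfies the property (each node is at most its two children), while [2, 1, 3] fails because the first member is not the smallest.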

An affine variety is an algebraic variety contained in affine space. For example,(1)is the cone, and(2)is a conic section, which is a subvariety of the cone. The cone can be written to indicate that it is the variety corresponding to . Naturally, many other polynomials vanish on , in fact all polynomials in . The set is an ideal in the polynomial ring . Note also, that the ideal of polynomials vanishing on the conic section is the ideal generated by and .A morphism between two affine varieties is given by polynomial coordinate functions. For example, the map is a morphism from to . Two affine varieties are isomorphic if there is a morphism which has an inverse morphism. For example, the affine variety is isomorphic to the cone via the coordinate change .Many polynomials may be factored, for instance , and then . Consequently, only irreducible polynomials, and more generally only prime ideals are used in the definition of a variety. An affine variety is..

The adjacency list representation of a graph consists of n lists, one for each vertex v_i, 1 <= i <= n, which gives the vertices to which v_i is adjacent. The adjacency lists of a graph g may be computed in the Wolfram Language using AdjacencyList[g, #]& /@ VertexList[g]and a graph may be constructed from adjacency lists l using Graph[UndirectedEdge @@@ Union[ Sort /@ Flatten[ MapIndexed[{#, #2[[1]]}&, l, {2}], 1]]]

The game of life is the best-known two-dimensional cellular automaton, invented by John H. Conway and popularized in Martin Gardner's Scientific American column starting in October 1970. The game of life was originally played (i.e., successive generations were produced) by hand with counters, but implementation on a computer greatly increased the ease of exploring patterns.The life cellular automaton is run by placing a number of filled cells on a two-dimensional grid. Each generation then switches cells on or off depending on the state of the cells that surround it. The rules are defined as follows. All eight of the cells surrounding the current one are checked to see if they are on or not. Any cells that are on are counted, and this count is then used to determine what will happen to the current cell. 1. Death: if the count is less than 2 or greater than 3, the current cell is switched off. 2. Survival: if (a) the count is exactly 2, or (b) the count..

A two-dimensional binary () totalistic cellular automaton with a von Neumann neighborhood of range . It has a birth rule that at least 2 of its 4 neighbors are alive, and a survival rule that all cells survive. steps of bootstrap percolation on an grid with random initial condition of density can be implemented in the Wolfram Language asWith[{n = 10, p = 0.1, s = 20}, CellularAutomaton[ {1018, {2, {{0, 2, 0}, {2, 1, 2}, {0, 2, 0}}}, {1, 1}}, Table[If[Random[Real] < p, 1, 0], {s}, {s}], n ]]If the initial condition consists of a random sparse arrangement of cells with density , then the system seems to quickly converge to a steady state of rectangular islands of live cells surrounded by a sea of dead cells. However, as crosses some threshold on finite-sized grids, the behavior appears to change so that every cell becomes live. Several examples are shown above on three grids with random initial conditions and different starting densities.However, this..

The tensor product of two vector spaces V and W, denoted and also called the tensor direct product, is a way of creating a new vector space analogous to multiplication of integers. For instance,(1)In particular,(2)Also, the tensor product obeys a distributive law with the direct sum operation:(3)The analogy with an algebra is the motivation behind K-theory. The tensor product of two tensors a and b can be implemented in the Wolfram Language as: TensorProduct[a_List, b_List] := Outer[List, a, b]Algebraically, the vector space is spanned by elements of the form , and the following rules are satisfied, for any scalar . The definition is the same no matter which scalar field is used.(4)(5)(6)One basic consequence of these formulas is that(7)A vector basis of V and of W gives a basis for the tensor product, namely , for all pairs . An arbitrary element of the tensor product can be written uniquely as , where are scalars. If V is m dimensional and W is n dimensional, then the tensor product has dimension mn. Using tensor products,..

Lagrange's identity is the algebraic identity(1)(Mitrinović 1970, p. 41; Marsden and Tromba 1981, p. 57; Gradshteyn and Ryzhik 2000, p. 1049). Lagrange's identity is a special case of the Binet-Cauchy identity, and Cauchy's inequality in n dimensions follows from it. It can be coded in the Wolfram Language as follows. LagrangesIdentity[n_] := Module[ {aa = Array[a, n], bb = Array[b, n]}, Total[(aa^2) Plus @@ (bb^2)] == Total[(a[#1]b[#2] - a[#2]b[#1])^2& @@@ Subsets[Range[n], {2}]] + (aa.bb)^2 ]Plugging in gives the and identities (2)(3)(4)A vector quadruple product formula known as Lagrange's identity is given by(5)(Bronshtein and Semendyayev 2004, p. 185). A related identity also known as Lagrange's identity is given by defining a_i and b_i to be n-dimensional vectors for i = 1, ..., n. Then(6)(Greub 1978, p. 155), where denotes a cross product, denotes a dot product, and is the determinant of the matrix..

An integer d is a fundamental discriminant if it is not equal to 1, not divisible by any square of any odd prime, and satisfies d = 1 (mod 4) or d = 8, 12 (mod 16). The function FundamentalDiscriminantQ[d] in the Wolfram Language version 5.2 add-on package NumberTheory`NumberTheoryFunctions` tests if an integer is a fundamental discriminant. It can be implemented as: FundamentalDiscriminantQ[n_Integer] := n != 1&& (Mod[n, 4] == 1 \[Or] ! Unequal[Mod[n, 16], 8, 12])&& SquareFreeQ[n/2^IntegerExponent[n, 2]]The first few positive fundamental discriminants are 5, 8, 12, 13, 17, 21, 24, 28, 29, 33, ... (OEIS A003658). Similarly, the first few negative fundamental discriminants are -3, -4, -7, -8, -11, -15, -19, -20, -23, -24, -31, ... (OEIS A003657).
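The same test can be sketched in Python using the standard case split (d = 1 mod 4 and squarefree, or d = 0 mod 4 with d/4 squarefree and congruent to 2 or 3 mod 4, which matches the mod-16 condition above); `is_fundamental_discriminant` and `squarefree` are hypothetical helper names:

```python
def squarefree(n):
    """True if no square of a prime (equivalently, of any k >= 2)
    divides |n|."""
    n = abs(n)
    k = 2
    while k * k <= n:
        if n % (k * k) == 0:
            return False
        k += 1
    return True

def is_fundamental_discriminant(d):
    """d != 1, and either d = 1 (mod 4) with d squarefree, or
    d = 0 (mod 4) with d/4 squarefree and d/4 = 2 or 3 (mod 4).
    (Python's % always returns a nonnegative residue, so negative
    d are handled uniformly.)"""
    if d == 1:
        return False
    if d % 4 == 1:
        return squarefree(d)
    if d % 4 == 0:
        q = d // 4
        return q % 4 in (2, 3) and squarefree(q)
    return False
```

Filtering small integers through this test reproduces the positive sequence 5, 8, 12, 13, ... and the negative sequence -3, -4, -7, -8, ... quoted from the OEIS entries.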

The factorization of a number into its constituent primes, also called prime decomposition. Given a positive integer n, the prime factorization is written n = p_1^a_1 p_2^a_2 ... p_k^a_k, where the p_i are the k prime factors, each of order a_i. Each factor p_i^a_i is called a primary. Prime factorization can be performed in the Wolfram Language using the command FactorInteger[n], which returns a list of {p_i, a_i} pairs. Through his invention of the Pratt certificate, Pratt (1975) became the first to establish that prime factorization lies in the complexity class NP. The following Wolfram Language code can be used to give a nicely typeset form of a number n: FactorForm[n_?NumberQ, fac_:Automatic] := Times @@ (HoldForm[Power[##]]& @@@ FactorInteger[n, fac])The first few prime factorizations (the number 1, by definition, has a prime factorization of "1") are given in the following table.
n   prime factorization    n   prime factorization
1   1                      11  11
2   2                      12  2^2·3
3   3                      13  13
4   2^2                    14  2·7
5   5                      15  3·5
6   2·3                    16  2^4
7   7                      17  17
8   2^3                    18  2·3^2
9   3^2                    19  19
10  2·5                    20  2^2·5
The..

The power tower a^^k of order k is defined as(1)where ^^ is Knuth up-arrow notation (Knuth 1976), which in turn is defined by(2)together with(3)(4)Rucker (1995, p. 74) uses the notation(5)and refers to this operation as "tetration." A power tower can be implemented in the Wolfram Language as PowerTower[a_, k_Integer] := Nest[Power[a, #]&, 1, k]or PowerTower[a_, k_Integer] := Power @@ Table[a, {k}]The following table gives values of a^^k for a = 1, 2, ... for small k.
k  OEIS     a^^k for a = 1, 2, ...
1  A000027  1, 2, 3, 4, 5, 6, 7, 8, 9, 10, ...
2  A000312  1, 4, 27, 256, 3125, 46656, ...
3  A002488  1, 16, 7625597484987, ...
4           1, 65536, ...
The following table gives a^^k for k = 1, 2, ... for small a.
a  OEIS     a^^k for k = 1, 2, ...
1  A000012  1, 1, 1, 1, 1, 1, ...
2  A014221  2, 4, 16, 65536, ...
3  A014222  3, 27, 7625597484987, ...
4           4, 256, ...
Consider and let be defined as(6)(Galidakis 2004). Then for , is entire with series expansion:(7)Similarly, for , is analytic for in the domain of the principal branch of , with series expansion:(8)For..
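The right-associative iteration behind the Nest-based definition above can be sketched in Python (hypothetical name `power_tower`, with the convention a^^0 = 1 matching the Nest starting value):

```python
def power_tower(a, k):
    """a^^k: a^(a^(...^a)) with k copies of a, evaluated
    right-associatively by iterating result -> a**result,
    starting from a^^0 = 1."""
    result = 1
    for _ in range(k):
        result = a ** result
    return result
```

This reproduces the table entries, e.g. 2^^4 = 2^(2^(2^2)) = 65536 and 3^^3 = 3^27 = 7625597484987.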

Given a set , the power set of , sometimes also called the powerset, is the set of all subsets of . The order of a power set of a set of order is . Power sets are larger than the sets associated with them. The power set of is variously denoted or .The power set of a given set can be found in the Wolfram Language using Subsets[s].
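In Python, the same construction is a short itertools sketch; for a set of order n it yields all 2^n subsets:

```python
from itertools import chain, combinations

def power_set(s):
    # All subsets of s, from the empty set up to s itself,
    # as tuples grouped by size
    s = list(s)
    return list(chain.from_iterable(
        combinations(s, r) for r in range(len(s) + 1)))
```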

According to Hardy and Wright (1979), the 44-digit Ferrier's prime, determined to be prime using only a mechanical calculator, is the largest prime found before the days of electronic computers. The Wolfram Language can verify primality of this number in a (small) fraction of a second, showing how far the art of numerical computation has advanced in the intervening years. It can be shown to be a probable prime almost instantaneously:

 In[1]:= FerrierPrime = (2^148 + 1)/17;
 In[2]:= PrimeQ[FerrierPrime] // Timing
 Out[2]= {0.01 Second, True}

and verified to be an actual prime complete with primality certificate almost as quickly:

 In[3]:= <<PrimalityProving`
 In[4]:= ProvablePrimeQ[FerrierPrime, "Certificate" -> True] // Timing
 Out[4]= {0.04 Second, {True, {20988936657440586486151264256610222593863921, 17, {2, {3,2,{2}}, {5,2,{2}}, {7,3,{2,{3,2,{2}}}}, {13,2,{2,{3,2,{2}}}}, {19,2,{2,{3,2,{2}}}}, {37,2,{2,{3,2,{2}}}}, {73,5,{..

The unsorted union of a list is a list containing the same elements as but with the second and subsequent occurrences of any given element removed. For example, the unsorted union of the set is . The unsorted union differs from the usual union in that the elements of the unsorted union are not necessarily ordered. The unsorted union is implemented in the Wolfram Language as DeleteDuplicates[list]. It can be implemented in Wolfram Language top-level code as:

 UnsortedUnion1[x_] := Tally[x][[All, 1]]

or

 UnsortedUnion2[x_] := Reap[Sow[1, x], _, #1&][[2]]

or

 UnsortedUnion3[x_] := Module[{f}, f[y_] := (f[y] = Sequence[]; y); f /@ x]

Depending on the nature of the list to be unioned, different implementations above may be more efficient, although in general, the first is the fastest.
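The same operation has a one-line Python sketch, relying on the fact that dicts preserve insertion order (Python 3.7+):

```python
def unsorted_union(lst):
    # Keep the first occurrence of each element, in original order
    return list(dict.fromkeys(lst))
```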

The factorial is defined for a positive integer as(1)So, for example, . An older notation for the factorial was written (Mellin 1909; Lewin 1958, p. 19; Dudeney 1970; Gardner 1978; Conway and Guy 1996).The special case is defined to have value , consistent with the combinatorial interpretation of there being exactly one way to arrange zero objects (i.e., there is a single permutation of zero elements, namely the empty set ).The factorial is implemented in the Wolfram Language as Factorial[n] or n!.The triangular number can be regarded as the additive analog of the factorial . Another relationship between factorials and triangular numbers is given by the identity(2)(K. MacMillan, pers. comm., Jan. 21, 2008).The factorial gives the number of ways in which objects can be permuted. For example, , since the six possible permutations of are , , , , , . The first few factorials for , 1, 2, ... are 1, 1, 2, 6, 24, 120, ... (OEIS A000142).The..

The even part of a positive integer is defined bywhere is the exponent of the exact power of 2 dividing . The values for , 2, ..., are 1, 2, 1, 4, 1, 2, 1, 8, 1, 2, 1, ... (OEIS A006519). The even part function can be implemented in the Wolfram Language as EvenPart[0]:=1 EvenPart[n_Integer]:=2^IntegerExponent[n,2]
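Since the even part is the lowest set bit of n, a Python sketch can use the two's-complement identity n & -n in place of the IntegerExponent computation:

```python
def even_part(n):
    # Largest power of 2 dividing n; by the convention above,
    # even_part(0) = 1. In two's complement, n & -n isolates
    # the lowest set bit of n.
    return n & -n if n else 1
```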

The unitary divisor function is the analog of the divisor function for unitary divisors and denotes the sum-of-th-powers-of-the-unitary divisors function. As in the case of the usual divisor function, is commonly written . The number of unitary divisors is the same as the number of squarefree divisors of , as well as , where is the number of different primes dividing . If is squarefree, then . can be computed using the formula, which can be computed in the Wolfram Language as

 UnitaryDivisorSigma[k_, n_Integer] := Times @@ (1 + (Power @@@ FactorInteger[n])^k)

The following table gives for , 2, ... and small .

 OEIS for , 2, ...
 0 A034444 1, 2, 2, 2, 2, 4, 2, 2, 2, 4, 2, 4, 2, 4, 4, 2, 2, 4, 2, 4, ...
 1 A034448 1, 3, 4, 5, 6, 12, 8, 9, 10, 18, 12, 20, 14, 24, 24, ...
 2 A034676 1, 5, 10, 17, 26, 50, 50, 65, 82, 130, 122, 170, 170, 250, 260, ...
 3 A034677 1, 9, 28, 65, 126, 252, 344, 513, 730, 1134, 1332, 1820, ...
 4 A034678 1, 17, 82, 257, 626, 1394, 2402, 4097, 6562, 10642, ...

The Euclidean algorithm, also called Euclid's algorithm, is an algorithm for finding the greatest common divisor of two numbers and . The algorithm can also be defined for more general rings than just the integers . There are even principal rings which are not Euclidean but where the equivalent of the Euclidean algorithm can be defined. The algorithm for rational numbers was given in Book VII of Euclid's Elements. The algorithm for reals appeared in Book X, making it the earliest example of an integer relation algorithm (Ferguson et al. 1999).The Euclidean algorithm is an example of a P-problem whose time complexity is bounded by a quadratic function of the length of the input values (Bach and Shallit 1996).Let , then find a number which divides both and (so that and ), then also divides since(1)Similarly, find a number which divides and (so that and ), then divides since(2)Therefore, every common divisor of and is a common divisor of and , so the procedure..
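The division step described above (replace the pair by the smaller number and the remainder until the remainder vanishes) can be sketched in a few lines of Python:

```python
def gcd(a, b):
    # Euclid's algorithm: replace (a, b) by (b, a mod b)
    # until b = 0; the surviving a is the greatest common divisor
    while b:
        a, b = b, a % b
    return a
```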

A divisor of for which(1)where is the greatest common divisor. For example, the divisors of 12 are , so the unitary divisors are . A list of unitary divisors of a number can be computed in the Wolfram Language using:

 UnitaryDivisors[n_Integer] := Sort[Flatten[Outer[Times, Sequence @@ ({1, #}& /@ Power @@@ FactorInteger[n])]]]

The following table gives the unitary divisors for the first few integers (OEIS A077610).

 1  1
 2  1, 2
 3  1, 3
 4  1, 4
 5  1, 5
 6  1, 2, 3, 6
 7  1, 7
 8  1, 8
 9  1, 9
 10 1, 2, 5, 10
 11 1, 11
 12 1, 3, 4, 12
 13 1, 13
 14 1, 2, 7, 14
 15 1, 3, 5, 15

Given the prime factorization(2)then(3)is a unitary divisor of if each is 0 or . For a prime power , the unitary divisors are 1 and (Cohen 1990). The symbol is used to denote the unitary divisor function. The numbers of unitary divisors of , 2, ... are 1, 2, 2, 2, 2, 4, 2, 2, 2, 4, 2, 4, 2, 4, 4, 2, 2, 4, 2, 4, ... (OEIS A034444). These numbers are also the numbers of squarefree divisors of . The number of unitary divisors of is also given..
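The defining condition gcd(d, n/d) = 1 translates directly into a brute-force Python sketch (fine for small n; the Wolfram code above is faster since it works from the factorization):

```python
from math import gcd

def unitary_divisors(n):
    # Divisors d of n that are coprime to their cofactor n/d
    return [d for d in range(1, n + 1)
            if n % d == 0 and gcd(d, n // d) == 1]
```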

A truth table is a two-dimensional array with columns. The first columns correspond to the possible values of the inputs, and the last column to the operation being performed. The rows list all possible combinations of inputs together with the corresponding outputs. For example, the following truth table shows the result of the binary AND operator acting on two inputs and , each of which may be true or false.

 F F F
 F T F
 T F F
 T T T

The following Wolfram Language code can be used to generate a truth table for n levels of operator op.

 TruthTable[op_, n_] := Module[{l = Flatten[Outer[List, Sequence @@ Table[{True, False}, {n}]], n - 1], a = Array[A, n]}, DisplayForm[GridBox[Prepend[Append[#, op @@ #]& /@ l, Append[a, op @@ a]], RowLines -> True, ColumnLines -> True]]]
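A plain-data version of the same idea can be sketched in Python with itertools.product enumerating the input combinations:

```python
from itertools import product

def truth_table(op, n):
    # One row per input combination: the n inputs followed by op's output
    return [inputs + (op(*inputs),)
            for inputs in product([False, True], repeat=n)]
```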

A dot plot, also called a dot chart, is a type of simple histogram-like chart used in statistics for relatively small data sets where values fall into a number of discrete bins. To draw a dot plot, count the number of data points falling in each bin and draw a stack of dots that number high for each bin. The illustration above shows such a plot for a random sample of 100 integers chosen between 1 and 25 inclusively. Simple code for drawing a dot plot in the Wolfram Language with some appropriate labeling of bin heights can be given as

 DotPlot[data_] := Module[{m = Tally[Sort[data]]}, ListPlot[Flatten[Table[{#1, n}, {n, #2}]& @@@ m, 1], Ticks -> {Automatic, Range[0, Max[m[[All, 2]]]]}]]

If, for and integers, the ratio is itself an integer, then is said to divide . This relationship is written , read " divides ." In this case, is also said to be divisible by and is called a divisor of .Clearly, and . By convention, for every except 0 (Hardy and Wright 1979, p. 1).The function can be implemented in the Wolfram Language as Divides[a_, b_] := Mod[b, a] == 0The function Divisible[n, d] returns True if an integer is divisible by an integer .

A digit sum is a sum of the base- digits of , which can be implemented in the Wolfram Language as

 DigitSum[n_, b_:10] := Total[IntegerDigits[n, b]]

The following table gives for , 2, ... and small .

 OEIS for , 2, ...
 2  A000120 1, 1, 2, 1, 2, 2, 3, 1, 2, 2, 3, 2, 3, 3, 4, ...
 3  A053735 1, 2, 1, 2, 3, 2, 3, 4, 1, 2, 3, 2, 3, 4, 3, ...
 4  A053737 1, 2, 3, 1, 2, 3, 4, 2, 3, 4, 5, 3, 4, 5, 6, ...
 5  A053824 1, 2, 3, 4, 1, 2, 3, 4, 5, 2, 3, 4, 5, 6, 3, ...
 6  A053827 1, 2, 3, 4, 5, 1, 2, 3, 4, 5, 6, 2, 3, 4, 5, ...
 7  A053828 1, 2, 3, 4, 5, 6, 1, 2, 3, 4, 5, 6, 7, 2, 3, ...
 8  A053829 1, 2, 3, 4, 5, 6, 7, 1, 2, 3, 4, 5, 6, 7, 8, ...
 9  A053830 1, 2, 3, 4, 5, 6, 7, 8, 1, 2, 3, 4, 5, 6, 7, ...
 10 A007953 1, 2, 3, 4, 5, 6, 7, 8, 9, 1, 2, 3, 4, 5, 6, ...

Plotting versus and gives the plot shown above. The digit sum satisfies the congruence(1)In base 10, this congruence is the basis of casting out nines and of fast divisibility tests such as those for 3 and 9. satisfies the following unexpected identity(2)the case of which was given in the 1981 Putnam competition..
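A Python sketch of the digit sum, peeling off one base-b digit per step rather than building the digit list first:

```python
def digit_sum(n, b=10):
    # Sum of the base-b digits of n
    s = 0
    while n:
        s += n % b
        n //= b
    return s
```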

Abstractly, the tensor direct product is the same as the vector space tensor product. However, it reflects an approach toward calculation using coordinates, and indices in particular. The notion of tensor product is more algebraic, intrinsic, and abstract. For instance, up to isomorphism, the tensor product is commutative because . Note this does not mean that the tensor product is symmetric.For two first-tensor rank tensors (i.e., vectors), the tensor direct product is defined as(1)which is a second-tensor rank tensor. The tensor contraction of a direct product of first-tensor rank tensors is the scalar(2)For second-tensor rank tensors,(3)(4)In general, the direct product of two tensors is a tensor of rank equal to the sum of the two initial ranks. The direct product is associative, but not commutative.The tensor direct product of two tensors and can be implemented in the Wolfram Language as TensorDirectProduct[a_List, b_List] :=..

Deck transformations, also called covering transformations, are defined for any cover . They act on by homeomorphisms which preserve the projection . Deck transformations can be defined by lifting paths from a space to its universal cover , which is a simply connected space and is a cover of . Every loop in , say a function on the unit interval with , lifts to a path , which only depends on the choice of , i.e., the starting point in the preimage of . Moreover, the endpoint depends only on the homotopy class of and . Given a point , and , a member of the fundamental group of , a point is defined to be the endpoint of a lift of a path which represents .The deck transformations of a universal cover form a group , which is the fundamental group of the quotient spaceFor example, when is the square torus then is the plane and the preimage is a translation of the integer lattice . Any loop in the torus lifts to a path in the plane, with the endpoints lying in the integer lattice...

The negadecimal representation of a number is its representation in base (i.e., base negative 10). It is therefore given by the coefficients in(1)(2)where , 1, ..., 9. The negadecimal digits may be obtained with the Wolfram Language code

 Negadecimal[0] := {0}
 Negadecimal[i_] := Rest @ Reverse @ Mod[NestWhileList[(# - Mod[#, 10])/-10&, i, # != 0&], 10]

The following table gives the negadecimal representations for the first few integers (A039723). For example, 1 through 9 are their own negadecimal representations, while 10 is written 190, 11 is 191, ..., 19 is 199, 20 is 180, 21 is 181, ..., and 30 is 170. The numbers having the same decimal and negadecimal representations are those which are sums of distinct powers of 100: 1, 2, 3, 4, 5, 6, 7, 8, 9, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 200, ... (OEIS A051022)...
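The digit-extraction step (# - Mod[#, 10])/-10 from the Wolfram code above carries over directly to a Python sketch:

```python
def negadecimal(n):
    # Base -10 representation of an integer n, as a digit string
    if n == 0:
        return "0"
    digits = []
    while n != 0:
        r = n % 10              # nonnegative remainder, like Mod[]
        digits.append(str(r))
        n = (n - r) // -10      # (# - Mod[#, 10])/-10
    return "".join(reversed(digits))
```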

The cubefree part is that part of a positive integer left after all cubic factors are divided out. For example, the cubefree part of is 3. For , 2, ..., the first few are 1, 2, 3, 4, 5, 6, 7, 1, 9, 10, 11, 12, 13, 14, 15, 2, ... (OEIS A050985). The sequence of cubefree parts of positive integers has Dirichlet generating function where is the Riemann zeta function. The cubefree part function can be implemented in the Wolfram Language as:

 CubefreePart[n_Integer?Positive] := Times @@ Power @@@ ({#[[1]], Mod[#[[2]], 3]}& /@ FactorInteger[n])

The set of elements belonging to one but not both of two given sets. It is therefore the union of the complement of with respect to and with respect to , and corresponds to the XOR operation in Boolean logic. The symmetric difference can be implemented in the Wolfram Language as: SymmetricDifference[a_, b_] := Union[Complement[a, b], Complement[b, a]]The symmetric difference of sets and is variously written as , , (Borowski and Borwein 1991) or (Harris and Stocker 1998, p. 3). All but the first notation should probably be deprecated since each of the other symbols has a common meaning in other areas of mathematics.For example, for and , , since 2, 3, and 5 are each in one, but not both, sets.

The negabinary representation of a number is its representation in base (i.e., base negative 2). It is therefore given by the coefficients in(1)(2)where . Conversion of to negabinary can be done using the Wolfram Language code

 Negabinary[n_Integer] := Module[{t = (2/3)(4^Floor[Log[4, Abs[n] + 1] + 2] - 1)}, IntegerDigits[BitXor[n + t, t], 2]]

due to D. Librik (Szudzik). The bitwise XOR portion is originally due to Schroeppel (1972), who noted that the sequence of bits in is given by . The following table gives the negabinary representations for the first few integers (OEIS A039724). For example, 2 is written 110, 3 is 111, 4 is 100, 5 is 101, 6 is 11010, 7 is 11011, 8 is 11000, and so on. If these numbers are interpreted as binary numbers and converted to decimal, their values are 1, 6, 7, 4, 5, 26, 27, 24, 25, 30, 31, 28, 29, 18, 19, 16, ... (OEIS A005351). The numbers having the same..
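Schroeppel's XOR trick is short enough to sketch in Python: with a mask holding 1-bits at the odd (negative-weight) positions, (n + mask) XOR mask yields the negabinary digits.

```python
def negabinary(n):
    # Schroeppel's trick for nonnegative n: the mask 0b1010...10
    # has 1s at the odd bit positions, whose weights are negative
    # in base -2; (n + mask) ^ mask then reads off the digits.
    mask = int("10" * 32, 2)    # wide enough for 64-bit inputs
    return format((n + mask) ^ mask, "b")
```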

A statement is in conjunctive normal form if it is a conjunction (sequence of ANDs) consisting of one or more conjuncts, each of which is a disjunction (OR) of one or more literals (i.e., statement letters and negations of statement letters; Mendelson 1997, p. 30). Examples of conjunctive normal forms include (1)(2)(3)(4)where denotes OR, denotes AND, and denotes NOT (Mendelson 1997, p. 30). Every statement in logic consisting of a combination of multiple , , and s can be written in conjunctive normal form. An expression can be put in conjunctive normal form in the Wolfram Language using the following code:

 ConjunctiveNormalForm[f_] := Not[LogicalExpand[Not[f]]] //. {Not[a_Or] :> And @@ (Not /@ List @@ a), Not[a_And] :> Or @@ (Not /@ List @@ a)}

A number such thatwhere is the divisor function is called a superperfect number. Even superperfect numbers are just , where is a Mersenne prime. If any odd superperfect numbers exist, they are square numbers and either or is divisible by at least three distinct primes.More generally, an -superperfect (or (, 2)-superperfect) number is a number for which , and an -perfect number is a number for which . A number can be tested to see if it is -perfect using the following Wolfram Language code: SuperperfectQ[m_, n_, k_:2] := Nest[DivisorSigma[1, #]&, n, m] == k nThe first few (2, 2)-perfect numbers are 2, 4, 16, 64, 4096, 65536, 262144, ... (OEIS A019279; Cohen and te Riele 1996). For , there are no even -superperfect numbers (Guy 1994, p. 65). On the basis of computer searches, J. McCranie has shown that there are no -superperfect numbers less than for any (McCranie, pers. comm., Nov. 11, 2001). McCranie further believes that there..
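The superperfect condition sigma(sigma(n)) = 2n is easy to check by brute force; the following Python sketch reproduces the start of the (2, 2)-perfect sequence quoted above (it is quadratic in n, so only suitable for small searches):

```python
def sigma(n):
    # Sum of the divisors of n
    return sum(d for d in range(1, n + 1) if n % d == 0)

def is_superperfect(n):
    # n is superperfect when sigma applied twice doubles n
    return sigma(sigma(n)) == 2 * n
```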

If two numbers and have the property that their difference is integrally divisible by a number (i.e., is an integer), then and are said to be "congruent modulo ." The number is called the modulus, and the statement " is congruent to (modulo )" is written mathematically as(1)If is not integrally divisible by , then it is said that " is not congruent to (modulo )," which is written(2)The explicit "(mod )" is sometimes omitted when the modulus is understood by context, so in such cases, care must be taken not to confuse the symbol with the equivalence sign.The quantity is sometimes called the "base," and the quantity is called the residue or remainder. There are several types of residues. The common residue defined to be nonnegative and smaller than , while the minimal residue is or , whichever is smaller in absolute value.Congruence arithmetic is perhaps most familiar as a generalization of the..

The number of representations of by squares, allowing zeros and distinguishing signs and order, is denoted . The special case corresponding to two squares is often denoted simply (e.g., Hardy and Wright 1979, p. 241; Shanks 1993, p. 162).For example, consider the number of ways of representing 5 as the sum of two squares:(1)(2)(3)(4)(5)(6)(7)(8)so . Similarly,(9)(10)(11)(12)(13)(14)so .The Wolfram Language function SquaresR[k, n] gives . In contrast, the function PowersRepresentations[n, k, 2] gives a list of unordered unsigned representations of as a list of squares, e.g., giving the as the only "unique" representation of 5.The function is intimately connected with the Leibniz series and with Gauss's circle problem (Hilbert and Cohn-Vossen 1999, pp. 27-39). It is also given by the inverse Möbius transform of the sequence and (Sloane and Plouffe 1995, p. 22). The average order of is , but..
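For the two-square case, the signed-and-ordered count can be sketched by brute force in Python (SquaresR handles the general case far more efficiently):

```python
from math import isqrt

def r2(n):
    # Number of representations n = x^2 + y^2, with signs
    # and order distinguished, as in the 8 representations of 5
    count = 0
    m = isqrt(n)
    for x in range(-m, m + 1):
        y2 = n - x * x
        y = isqrt(y2)
        if y * y == y2:
            count += 1 if y == 0 else 2   # y and -y, unless y = 0
    return count
```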

The longest increasing (contiguous) subsequence of a given sequence is the subsequence of increasing terms containing the largest number of elements. For example, the longest increasing subsequence of the permutation is . It can be coded in the Wolfram Language as follows.

 <<Combinatorica`
 LongestContiguousIncreasingSubsequence[p_] := Last[Split[Sort[Runs[p]], Length[#1] >= Length[#2]&]]
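A single left-to-right scan suffices, since a contiguous run either extends or restarts at each element; a Python sketch:

```python
def longest_increasing_run(seq):
    # Longest contiguous strictly increasing run in seq
    best = cur = [seq[0]] if seq else []
    for x in seq[1:]:
        cur = cur + [x] if x > cur[-1] else [x]   # extend or restart
        if len(cur) > len(best):
            best = cur
    return best
```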

A composite number is a positive integer which is not prime (i.e., which has factors other than 1 and itself). The first few composite numbers (sometimes called "composites" for short) are 4, 6, 8, 9, 10, 12, 14, 15, 16, ... (OEIS A002808), whose prime decompositions are summarized in the following table. Note that the number 1 is a special case which is considered to be neither composite nor prime. [The two-column table of the composites 4-18 and 20-32 with their prime factorizations was garbled in transcription and is omitted.] The th composite number can be generated using the Wolfram Language code

 Composite[n_Integer] := FixedPoint[n + PrimePi[#] + 1&, n]

The Dirichlet generating function of the characteristic function of the composite numbers is given by(1)(2)(3)where is the Riemann zeta function, is the prime zeta function, and is an Iverson bracket. There are an infinite number of composite numbers. The composite number problem asks if there exist positive integers..

The process of finding a reduced set of basis vectors for a given lattice having certain special properties. Lattice reduction algorithms are used in a number of modern number theoretical applications, including in the discovery of a spigot algorithm for pi. Although determining the shortest basis is possibly an NP-complete problem, algorithms such as the LLL algorithm can find a short basis in polynomial time with guaranteed worst-case performance. The LLL algorithm of lattice reduction is implemented in the Wolfram Language using the function LatticeReduce. RootApproximant[x, n] also calls this routine in order to find an algebraic number of degree at most such that is an approximate zero of the number. When used to find integer relations, a typical input to the algorithm consists of an augmented identity matrix with the entries in the last column consisting of the elements (multiplied by a large positive constant to penalize vectors that..

The multiplicative suborder of a number (mod ) is the least exponent such that (mod ), or zero if no such exists. An always exists if and . This function is denoted and can be implemented in the Wolfram Language as:

 Suborder[a_, n_] := If[n > 1 && GCD[a, n] == 1, Min[MultiplicativeOrder[a, n, {-1, 1}]], 0]

The following table summarizes for small values of and .

 OEIS for , 1, ...
 2         0, 0, 0, 1, 0, 2, 0, 3, 0, 3, 0, 5, 0, 6, 0, ...
 3 A103489 0, 0, 1, 0, 1, 2, 0, 3, 2, 0, 2, 5, 0, 3, 3, ...
 4         0, 0, 0, 1, 0, 1, 0, 3, 0, 3, 0, 5, 0, 3, 0, ...
 5 A103491 0, 0, 1, 1, 1, 0, 1, 3, 2, 3, 0, 5, 2, 2, 3, ...
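A direct Python sketch searches exponents until a^e hits +1 or -1 modulo n, mirroring the convention of the Wolfram code above (0 when n <= 1 or gcd(a, n) > 1):

```python
from math import gcd

def suborder(a, n):
    # Least e > 0 with a^e = +1 or -1 (mod n); 0 if undefined
    if n <= 1 or gcd(a, n) != 1:
        return 0
    e, x = 1, a % n
    while x not in (1, n - 1):   # n - 1 is -1 (mod n)
        x = (x * a) % n
        e += 1
    return e
```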

Given a factor of a number , the cofactor of is . A different type of cofactor, sometimes called a cofactor matrix, is a signed version of a minor defined by and used in the computation of the determinant of a matrix according to . The cofactor can be computed in the Wolfram Language using

 Cofactor[m_List?MatrixQ, {i_Integer, j_Integer}] := (-1)^(i+j) Det[Drop[Transpose[Drop[Transpose[m], {j}]], {i}]]

which is the equivalent of the th component of the CofactorMatrix defined below.

 MinorMatrix[m_List?MatrixQ] := Map[Reverse, Minors[m], {0, 1}]
 CofactorMatrix[m_List?MatrixQ] := MapIndexed[#1 (-1)^(Plus @@ #2)&, MinorMatrix[m], {2}]

Cofactors can be computed using Cofactor[m, i, j] in the Wolfram Language package Combinatorica`.

That part of a positive integer left after all square factors are divided out. For example, the squarefree part of is 6, since . For , 2, ..., the first few are 1, 2, 3, 1, 5, 6, 7, 2, 1, 10, ... (OEIS A007913). The squarefree part function can be implemented in the Wolfram Language as SquarefreePart[n_Integer?Positive] := Times @@ Power @@@ ({#[[1]], Mod[#[[2]], 2]}& /@ FactorInteger[n])

The word "base" in mathematics is used to refer to a particular mathematical object that is used as a building block. The most common uses are the related concepts of the number system whose digits are used to represent numbers and the number system in which logarithms are defined. It can also be used to refer to the bottom edge or surface of a geometric figure.A real number can be represented using any integer number as a base (sometimes also called a radix or scale). The choice of a base yields a representation of numbers known as a number system. In base , the digits 0, 1, ..., are used (where, by convention, for bases larger than 10, the symbols A, B, C, ... are generally used as symbols representing the decimal numbers 10, 11, 12, ...). The digits of a number in base (for integer ) can be obtained in the Wolfram Language using IntegerDigits[x, b]. Let the base representation of a number be written(1)(e.g., ). Then, for example, the number 10 is..

Series reversion is the computation of the coefficients of the inverse function given those of the forward function. For a function expressed in a series with no constant term (i.e., ) as(1)the series expansion of the inverse series is given by(2)By plugging (2) into (1), the following equation is obtained(3)Equating coefficients then gives(4)(5)(6)(7)(8)(9)(10)(Dwight 1961, Abramowitz and Stegun 1972, p. 16). Series reversion is implemented in the Wolfram Language as InverseSeries[s, x], where is given as a SeriesData object. For example, to obtain the terms shown above,

 With[{n = 7}, CoefficientList[InverseSeries[SeriesData[x, 0, Array[a, n], 1, n + 1, 1]], x]]

A derivation of the explicit formula for the th term is given by Morse and Feshbach (1953),(11)where(12)

A root-finding algorithm which assumes a function to be approximately linear in the region of interest. Each improvement is taken as the point where the approximating line crosses the axis. The secant method retains only the most recent estimate, so the root does not necessarily remain bracketed. The secant method is implemented in the Wolfram Language as the undocumented option Method -> Secant in FindRoot[eqn, x, x0, x1]. When the algorithm does converge, its order of convergence is(1)where is a constant and is the golden ratio.(2)(3)(4)so(5)The secant method can be implemented in the Wolfram Language as

 SecantMethodList[f_, {x_, x0_, x1_}, n_] := NestList[Last[#] - {0, (Function[x, f][Last[#]]* Subtract @@ #)/Subtract @@ Function[x, f] /@ #}&, {x0, x1}, n]

The logical axiom where denotes NOT and denotes OR, that, when taken together with associativity and commutativity, is equivalent to the axioms of Boolean algebra. The Robbins operator can be defined in the Wolfram Language by

 Robbins := Function[{x, y}, ! (! (! y \[Or] x) \[Or] ! (x \[Or] y))]

That the Robbins axiom is a true statement in Boolean algebra can be verified by examining its truth table.

 T T T
 T F T
 F T F
 F F F
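The same check can be sketched in Python: evaluating the displayed combination over all four truth assignments shows it always reduces to its first argument, which is exactly what a truth table verifies.

```python
from itertools import product

def robbins(x, y):
    # !(!(!y | x) | !(x | y)), matching the Wolfram definition above
    return not ((not ((not y) or x)) or (not (x or y)))

# Full truth table: (x, y, result) for all four assignments
table = [(x, y, robbins(x, y)) for x, y in product([True, False], repeat=2)]
```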

Newton's method, also called the Newton-Raphson method, is a root-finding algorithm that uses the first few terms of the Taylor series of a function in the vicinity of a suspected root. Newton's method is sometimes also known as Newton's iteration, although in this work the latter term is reserved to the application of Newton's method for computing square roots.For a polynomial, Newton's method is essentially the same as Horner's method.The Taylor series of about the point is given by(1)Keeping terms only to first order,(2)Equation (2) is the equation of the tangent line to the curve at , so is the place where that tangent line intersects the -axis. A graph can therefore give a good intuitive idea of why Newton's method works at a well-chosen starting point and why it might diverge with a poorly-chosen starting point.This expression above can be used to estimate the amount of offset needed to land closer to the root starting from an initial guess..
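The first-order update described above, x -> x - f(x)/f'(x), can be sketched in a few lines of Python (the stopping rule and iteration cap are my choices, not part of the method itself):

```python
def newton(f, df, x0, tol=1e-12, max_iter=50):
    # Newton's method: follow the tangent line at x to the axis,
    # i.e. iterate x -> x - f(x)/f'(x) from the starting guess x0
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x
```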

The wedge product is the product in an exterior algebra. If and are differential k-forms of degrees and , respectively, then(1)It is not (in general) commutative, but it is associative,(2)and bilinear(3)(4)(Spivak 1999, p. 203), where and are constants. The exterior algebra is generated by elements of degree one, and so the wedge product can be defined using a basis for :(5)when the indices are distinct, and the product is zero otherwise.While the formula holds when has degree one, it does not hold in general. For example, consider :(6)(7)(8)If have degree one, then they are linearly independent iff .The wedge product is the "correct" type of product to use in computinga volume element(9)The wedge product can therefore be used to calculate determinants and volumes of parallelepipeds. For example, write where are the columns of . Then(10)and is the volume of the parallelepiped spanned by ...

An matrix whose rows are composed of cyclically shifted versions of a length- list . For example, the circulant matrix on the list is given by(1)Circulant matrices are very useful in digital image processing, and the circulant matrix is implemented as CirculantMatrix[l, n] in the Mathematica application package Digital Image Processing. Circulant matrices can be implemented in the Wolfram Language as follows.

 CirculantMatrix[l_List?VectorQ] := NestList[RotateRight, RotateRight[l], Length[l] - 1]
 CirculantMatrix[l_List?VectorQ, n_Integer] := NestList[RotateRight, RotateRight[Join[Table[0, {n - Length[l]}], l]], n - 1] /; n >= Length[l]

where the first input creates a matrix with dimensions equal to the length of and the second pads with zeros to give an matrix. A special type of circulant matrix is defined as(2)where is a binomial coefficient. The determinant of is given by the beautiful formula(3)where , , ..., are the th..
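A Python sketch of the square case, using one common convention (first row is the list itself, each later row shifted one place right; other conventions start from a shifted row, as the Wolfram code above does):

```python
def circulant(l):
    # Row i is l cyclically shifted i places to the right
    n = len(l)
    return [l[n - i:] + l[:n - i] for i in range(n)]
```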

A Pólya plot is a plot of the vector field of of a complex function . Several examples are shown above. Pólya plots can be created in the Wolfram Language using the following code:

 PolyaFieldPlot[f_, {x_, xmin_, xmax_}, {y_, ymin_, ymax_}, opts : OptionsPattern[]] := VectorPlot[Evaluate @ {Re[f], -Im[f]}, {x, xmin, xmax}, {y, ymin, ymax}, VectorScale -> {Automatic, Automatic, Log[#5 + 1]&}, opts]

A cumulative product is a sequence of partial products of a given sequence. For example, the cumulative products of the sequence , are , , , .... Cumulative products can be implemented in the Wolfram Language as Rest[FoldList[Times, 1, list]]
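In Python the same sequence of partial products falls out of itertools.accumulate with multiplication as the combining operation:

```python
from itertools import accumulate
from operator import mul

def cumulative_products(seq):
    # Partial products: a1, a1*a2, a1*a2*a3, ...
    return list(accumulate(seq, mul))
```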

A Gröbner basis for a system of polynomials is an equivalence system that possesses useful properties, for example, that another polynomial is a combination of those in iff the remainder of with respect to is 0. (Here, the division algorithm requires an order of a certain type on the monomials.) Furthermore, the set of polynomials in a Gröbner basis have the same collection of roots as the original polynomials. For linear functions in any number of variables, a Gröbner basis is equivalent to Gaussian elimination.The algorithm for computing Gröbner bases is known as Buchberger's algorithm. Calculating a Gröbner basis is typically a very time-consuming process for large polynomial systems (Trott 2006, p. 37).Gröbner bases are pervasive in the construction of symbolic algebra algorithms, and Gröbner bases with respect to lexicographic order are very useful for solving equations and for elimination..

For two polynomials and of degrees and , respectively, the Sylvester matrix is an matrix formed by filling the matrix beginning with the upper left corner with the coefficients of , then shifting down one row and one column to the right and filling in the coefficients starting there until they hit the right side. The process is then repeated for the coefficients of . The Sylvester matrix can be implemented in the Wolfram Language as:

 SylvesterMatrix1[poly1_, poly2_, var_] := Function[{coeffs1, coeffs2}, With[{l1 = Length[coeffs1], l2 = Length[coeffs2]}, Join[NestList[RotateRight, PadRight[coeffs1, l1 + l2 - 2], l2 - 2], NestList[RotateRight, PadRight[coeffs2, l1 + l2 - 2], l1 - 2]]]][Reverse[CoefficientList[poly1, var]], Reverse[CoefficientList[poly2, var]]]

For example, the Sylvester matrix for and is . The determinant of the Sylvester matrix of two polynomials is the resultant of the polynomials. SylvesterMatrix is an (undocumented)..
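The shift-and-fill construction can be sketched directly in Python on coefficient lists (highest degree first); for degrees m and n it stacks n shifted copies of the first polynomial's coefficients over m shifted copies of the second's:

```python
def sylvester_matrix(p, q):
    # p, q: coefficient lists, highest degree first.
    # Result is an (m+n) x (m+n) matrix, m = deg p, n = deg q.
    m, n = len(p) - 1, len(q) - 1
    size = m + n
    rows = [[0] * i + p + [0] * (size - m - 1 - i) for i in range(n)]
    rows += [[0] * i + q + [0] * (size - n - 1 - i) for i in range(m)]
    return rows
```

Its determinant is the resultant; e.g. for x^2 + 3x + 2 and x + 1 the resultant is 0, since x + 1 divides the quadratic.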

A polynomial is a mathematical expression involving a sum of powers in one or more variables multiplied by coefficients. A polynomial in one variable (i.e., a univariate polynomial) with constant coefficients is given by(1)The individual summands with the coefficients (usually) included are called monomials (Becker and Weispfenning 1993, p. 191), whereas the products of the form in the multivariate case, i.e., with the coefficients omitted, are called terms (Becker and Weispfenning 1993, p. 188). However, the term "monomial" is sometimes also used to mean polynomial summands without their coefficients, and in some older works, the definitions of monomial and term are reversed. Care is therefore needed in attempting to distinguish these conflicting usages.The highest power in a univariate polynomial is calledits order, or sometimes its degree.Any polynomial with can be expressed as(2)where the product..

A real polynomial is said to be stable if all its roots lie in the left half-plane. The term "stable" is used to describe such a polynomial because, in the theory of linear servomechanisms, a system exhibits unforced time-dependent motion of the form , where is the root of a certain real polynomial . A system is therefore mechanically stable iff is a stable polynomial.The polynomial is stable iff , and the irreducible polynomial is stable iff both and are greater than zero. The Routh-Hurwitz theorem can be used to determine if a polynomial is stable.Given two real polynomials and , if and are stable, then so is their product , and vice versa (Séroul 2000, p. 280). It therefore follows that the coefficients of stable real polynomials are either all positive or all negative (although this is not a sufficient condition, as shown with the counterexample ). Furthermore, the values of a stable polynomial are never zero for and have..
