
### Minor

A minor is the reduced determinant of a determinant expansion, formed by omitting the $i$th row and $j$th column of a matrix $A$. The $(i,j)$th minor can be computed in the Wolfram Language using

```
Minor[m_List?MatrixQ, {i_Integer, j_Integer}] :=
  Det[Drop[Transpose[Drop[Transpose[m], {j}]], {i}]]
```

The Wolfram Language's built-in Minors[m] command instead gives the minors of a matrix obtained by deleting the $(n-i+1)$st row and $(n-j+1)$st column of $m$, while Minors[m, k] gives the $k$th minors of $m$. The Minor code above therefore corresponds to the $(i,j)$th entry of

```
MinorMatrix[m_List?MatrixQ] := Map[Reverse, Minors[m], {0, 1}]
```

i.e., Minor[m, {i, j}] is equivalent to MinorMatrix[m][[i, j]].
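For readers outside the Wolfram Language, the same computation can be sketched in plain Python (an illustrative translation, not MathWorld's code; the helper names `det` and `minor` are made up here):

```python
def det(m):
    """Determinant via Laplace expansion along the first row
    (fine for the small matrices used in examples)."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** c * m[0][c] *
               det([row[:c] + row[c + 1:] for row in m[1:]])
               for c in range(len(m)))

def minor(m, i, j):
    """(i, j)th minor: determinant of m with row i and column j
    deleted (1-indexed, mirroring Minor[m, {i, j}] above)."""
    sub = [row[:j - 1] + row[j:]
           for r, row in enumerate(m, start=1) if r != i]
    return det(sub)
```

For example, the (1,1) minor of [[1,2,3],[4,5,6],[7,8,10]] is the determinant of [[5,6],[8,10]].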

### Jacobian

Given a set of $m$ equations in $n$ variables $x_1$, ..., $x_n$, written explicitly as

$$\mathbf{y} = \mathbf{f}(\mathbf{x}) \tag{1}$$

or more explicitly as

$$y_1 = f_1(x_1, \ldots, x_n), \quad \ldots, \quad y_m = f_m(x_1, \ldots, x_n), \tag{2}$$

the Jacobian matrix, sometimes simply called "the Jacobian" (Simon and Blume 1994), is defined by

$$J(x_1, \ldots, x_n) = \begin{bmatrix} \dfrac{\partial y_1}{\partial x_1} & \cdots & \dfrac{\partial y_1}{\partial x_n} \\ \vdots & \ddots & \vdots \\ \dfrac{\partial y_m}{\partial x_1} & \cdots & \dfrac{\partial y_m}{\partial x_n} \end{bmatrix}. \tag{3}$$

The determinant of $J$ (when $m = n$) is the Jacobian determinant (confusingly, often called "the Jacobian" as well) and is denoted

$$J = \left| \frac{\partial (y_1, \ldots, y_n)}{\partial (x_1, \ldots, x_n)} \right|. \tag{4}$$

The Jacobian matrix and determinant can be computed in the Wolfram Language using

```
JacobianMatrix[f_List?VectorQ, x_List] := Outer[D, f, x] /;
  Equal @@ (Dimensions /@ {f, x})
JacobianDeterminant[f_List?VectorQ, x_List] :=
  Det[JacobianMatrix[f, x]] /; Equal @@ (Dimensions /@ {f, x})
```

Taking the differential

$$d\mathbf{y} = J \, d\mathbf{x} \tag{5}$$

shows that $J$ is the determinant of the matrix $\partial \mathbf{y} / \partial \mathbf{x}$, and therefore gives the ratio of $n$-dimensional volumes (contents) in $\mathbf{y}$ and $\mathbf{x}$,

$$dy_1 \cdots dy_n = \left| \frac{\partial (y_1, \ldots, y_n)}{\partial (x_1, \ldots, x_n)} \right| dx_1 \cdots dx_n. \tag{6}$$

It therefore appears, for example, in the change of variables theorem. The concept of the Jacobian can also be applied to functions in more than $n$ variables. For example, considering…
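As a language-neutral sketch, the Jacobian matrix can also be approximated numerically by central differences (illustrative code; `jacobian` is a hypothetical helper, not a standard API):

```python
import math

def jacobian(f, x, h=1e-6):
    """Approximate J[i][j] = dy_i/dx_j for f: R^n -> R^m at the
    point x, using central differences."""
    fx = f(x)
    J = []
    for i in range(len(fx)):
        row = []
        for j in range(len(x)):
            xp, xm = list(x), list(x)
            xp[j] += h
            xm[j] -= h
            row.append((f(xp)[i] - f(xm)[i]) / (2 * h))
        J.append(row)
    return J

# Polar coordinates (x, y) = (r cos t, r sin t): the Jacobian
# determinant should be r, the familiar factor in dx dy = r dr dt.
polar = lambda p: [p[0] * math.cos(p[1]), p[0] * math.sin(p[1])]
J = jacobian(polar, [2.0, 0.5])
detJ = J[0][0] * J[1][1] - J[0][1] * J[1][0]
```

At $r = 2$ the approximate Jacobian determinant comes out near 2, matching the volume-ratio interpretation of equation (6).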

### Jacobi's theorem

Let $M_r$ be an $r$-rowed minor of the $n$th-order determinant $|A|$ associated with an $n \times n$ matrix $A = (a_{ij})$, in which the rows $i_1$, $i_2$, ..., $i_r$ are represented with columns $k_1$, $k_2$, ..., $k_r$. Define the complementary minor to $M_r$ as the $(n-r)$-rowed minor obtained from $|A|$ by deleting all the rows and columns associated with $M_r$, and the signed complementary minor $M^{(r)}$ to $M_r$ to be

$$M^{(r)} = (-1)^{i_1 + i_2 + \cdots + i_r + k_1 + k_2 + \cdots + k_r} \times [\text{complementary minor to } M_r]. \tag{1}$$

Let the matrix of cofactors be given by

$$A' = \begin{bmatrix} A_{11} & \cdots & A_{1n} \\ \vdots & \ddots & \vdots \\ A_{n1} & \cdots & A_{nn} \end{bmatrix}, \tag{2}$$

with $M_r'$ the $r$-rowed minor of $|A'|$ corresponding to $M_r$; then it is true that

$$M_r' = |A|^{r-1} M^{(r)}. \tag{3}$$

### Determinant theorem

Given a square $n \times n$ matrix $A$, the following are equivalent:

1. $\det A \neq 0$.
2. The columns of $A$ are linearly independent.
3. The rows of $A$ are linearly independent.
4. Range($A$) = $\mathbb{R}^n$.
5. Null($A$) = $\{\mathbf{0}\}$.
6. $A$ has a matrix inverse.

### Jacobi's determinant identity

Let

$$B = \begin{bmatrix} A_1 & A_2 \\ A_3 & A_4 \end{bmatrix} \tag{1}$$

$$B^{-1} = \begin{bmatrix} C_1 & C_2 \\ C_3 & C_4 \end{bmatrix} \tag{2}$$

where $A_1$ and $C_4$ are square matrices of the same size. Then

$$\det C_4 = \frac{\det A_1}{\det B}. \tag{3}$$

The proof follows from equating determinants on the two sides of the block matrices

$$\begin{bmatrix} A_1 & A_2 \\ A_3 & A_4 \end{bmatrix} \begin{bmatrix} I & C_2 \\ 0 & C_4 \end{bmatrix} = \begin{bmatrix} A_1 & 0 \\ A_3 & I \end{bmatrix}, \tag{4}$$

where $I$ is the identity matrix and $0$ is the zero matrix.

### Determinant identities

A useful determinant identity allows the following $3 \times 3$ determinant to be expressed using vector operations as the scalar triple product of its rows,

$$\begin{vmatrix} \mathbf{a} \\ \mathbf{b} \\ \mathbf{c} \end{vmatrix} = \mathbf{a} \cdot (\mathbf{b} \times \mathbf{c}). \tag{1}$$

Additional interesting determinant identities include (2) (Muir 1960, p. 39), (3) (Muir 1960, p. 41), (4) (Muir 1960, p. 42), (5) (Muir 1960, p. 47), (6) (Muir 1960, p. 42), (7) (Muir 1960, p. 44), and the Cayley-Menger determinant (8) (Muir 1960, p. 46), which is closely related to Heron's formula.

### Vandermonde determinant

$$\Delta(x_1, \ldots, x_n) = \begin{vmatrix} 1 & x_1 & x_1^2 & \cdots & x_1^{n-1} \\ 1 & x_2 & x_2^2 & \cdots & x_2^{n-1} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 1 & x_n & x_n^2 & \cdots & x_n^{n-1} \end{vmatrix} \tag{1}$$

$$= \prod_{1 \le i < j \le n} (x_j - x_i) \tag{2}$$

(Sharpe 1987). For integers $a_1$, ..., $a_n$, $\Delta(a_1, \ldots, a_n)$ is divisible by $\prod_{k=1}^{n-1} k!$ (Chapman 1996), the first few values of which are the superfactorials 1, 1, 2, 12, 288, 34560, 24883200, 125411328000, ... (OEIS A000178).
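The product formula makes the Vandermonde determinant cheap to evaluate without ever forming the matrix; a small sketch (helper names are illustrative):

```python
from math import prod

def vandermonde_det(xs):
    """prod_{i<j} (x_j - x_i): the Vandermonde determinant of x_1..x_n."""
    return prod(xs[j] - xs[i]
                for j in range(len(xs)) for i in range(j))

def superfactorial(n):
    """prod_{k=1}^{n-1} k!, which divides every integer Vandermonde
    determinant (Chapman 1996)."""
    fact, out = 1, 1
    for k in range(1, n):
        fact *= k
        out *= fact
    return out
```

For instance, the divisibility claim can be spot-checked: any Vandermonde determinant of 4 integers is a multiple of superfactorial(4) = 12.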

### Determinant expansion by minors

Also known as "Laplacian" determinant expansion by minors, expansion by minors is a technique for computing the determinant of a given square matrix $A$. Although efficient for small matrices, techniques such as Gaussian elimination are much more efficient when the matrix size becomes large. Let $|A|$ denote the determinant of an $n \times n$ matrix $A$; then, for any fixed column $j$,

$$|A| = \sum_{i=1}^{n} (-1)^{i+j} a_{ij} M_{ij}, \tag{1}$$

where $M_{ij}$ is a so-called minor of $A$, obtained by taking the determinant of $A$ with row $i$ and column $j$ "crossed out." For example, for a $3 \times 3$ matrix, the above formula gives

$$|A| = a_{11} \begin{vmatrix} a_{22} & a_{23} \\ a_{32} & a_{33} \end{vmatrix} - a_{21} \begin{vmatrix} a_{12} & a_{13} \\ a_{32} & a_{33} \end{vmatrix} + a_{31} \begin{vmatrix} a_{12} & a_{13} \\ a_{22} & a_{23} \end{vmatrix}. \tag{2}$$

The procedure can then be iteratively applied to calculate the minors in terms of subminors, etc. The factor $(-1)^{i+j}$ is sometimes absorbed into the minor as

$$C_{ij} = (-1)^{i+j} M_{ij}, \tag{3}$$

in which case $C_{ij}$ is called a cofactor. The equation for the determinant can also be formally written as

$$|A| = \sum_{\sigma} (-1)^{i(\sigma)} a_{1\sigma_1} a_{2\sigma_2} \cdots a_{n\sigma_n}, \tag{4}$$

where $\sigma$ ranges over all permutations of $\{1, \ldots, n\}$ and $i(\sigma)$ is the inversion number of $\sigma$ (Bressoud and Propp 1999)…
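The recursion in equation (1) translates directly into code; a minimal sketch expanding down the first column (function name illustrative):

```python
def det_by_minors(a):
    """Laplace expansion down the first column:
    |A| = sum_i (-1)^(i+1) a_{i1} M_{i1}."""
    n = len(a)
    if n == 1:
        return a[0][0]
    total = 0
    for i in range(n):
        # minor M_{i1}: cross out row i and the first column
        sub = [row[1:] for k, row in enumerate(a) if k != i]
        total += (-1) ** i * a[i][0] * det_by_minors(sub)
    return total
```

As the text notes, this is exponential-time and only sensible for small matrices; Gaussian elimination is the practical choice for large ones.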

### Sylvester's determinant identity

Given a matrix $A$, let $|A|$ denote its determinant. Then (1) holds, where $A[\mathcal{R}; \mathcal{C}]$ is the submatrix of $A$ formed by the intersection of the subset $\mathcal{C}$ of columns and $\mathcal{R}$ of rows. Bareiss (1968) writes the identity as (2), where (3) for each elimination step $k$. When the pivot block reduces to a single entry, this identity gives the Chió pivotal condensation method.
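A concrete instance that is easy to check numerically is the Desnanot-Jacobi identity, the special case of Sylvester's identity underlying Dodgson condensation: det(A) times the determinant of the interior (first and last rows and columns deleted) equals det(A minus first row and column) times det(A minus last row and column), minus det(A minus first row and last column) times det(A minus last row and first column). A sketch (helper names illustrative):

```python
def det(a):
    """Laplace expansion; adequate for tiny matrices."""
    if len(a) == 1:
        return a[0][0]
    return sum((-1) ** j * a[0][j] *
               det([r[:j] + r[j + 1:] for r in a[1:]])
               for j in range(len(a)))

def strike(a, rows, cols):
    """Copy of a with the given 0-indexed rows and columns deleted."""
    return [[v for j, v in enumerate(r) if j not in cols]
            for i, r in enumerate(a) if i not in rows]

a = [[2, 1, 3], [1, 4, 1], [3, 1, 2]]
n = len(a)
lhs = det(a) * det(strike(a, {0, n - 1}, {0, n - 1}))
rhs = (det(strike(a, {0}, {0})) * det(strike(a, {n - 1}, {n - 1}))
       - det(strike(a, {0}, {n - 1})) * det(strike(a, {n - 1}, {0})))
```

For this 3x3 example both sides evaluate to the same integer, as the identity requires.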

### Hyperdeterminant

A technically defined extension of the ordinary determinant to "higher-dimensional" hypermatrices. Cayley (1845) originally coined the term, but subsequently used it to refer to an algebraic invariant of a multilinear form. The hyperdeterminant of the $2 \times 2 \times 2$ hypermatrix $A = (a_{ijk})$ (for $i, j, k = 0, 1$) is given by

$$\begin{aligned} \det(A) = {} & a_{000}^2 a_{111}^2 + a_{001}^2 a_{110}^2 + a_{010}^2 a_{101}^2 + a_{011}^2 a_{100}^2 \\ & - 2 \left( a_{000} a_{001} a_{110} a_{111} + a_{000} a_{010} a_{101} a_{111} + a_{000} a_{011} a_{100} a_{111} \right. \\ & \left. \quad + a_{001} a_{010} a_{101} a_{110} + a_{001} a_{011} a_{100} a_{110} + a_{010} a_{011} a_{100} a_{101} \right) \\ & + 4 \left( a_{000} a_{011} a_{101} a_{110} + a_{001} a_{010} a_{100} a_{111} \right). \end{aligned} \tag{1}$$

The above hyperdeterminant vanishes iff the following system of equations (2)-(7) in six unknowns has a nontrivial solution. Glynn (1998) has found the only known multiplicative hyperdeterminant in dimension larger than two.
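Formula (1) can be evaluated term by term; a sketch of the $2 \times 2 \times 2$ case (function name illustrative), with two sanity checks: a "diagonal" hypermatrix with $a_{000} = a_{111} = 1$ and all other entries 0 has hyperdeterminant 1, while a rank-one hypermatrix such as the all-ones array has hyperdeterminant 0:

```python
def hyperdet_222(a):
    """Cayley's 2x2x2 hyperdeterminant of a[i][j][k], i,j,k in {0,1}."""
    a000, a001 = a[0][0][0], a[0][0][1]
    a010, a011 = a[0][1][0], a[0][1][1]
    a100, a101 = a[1][0][0], a[1][0][1]
    a110, a111 = a[1][1][0], a[1][1][1]
    squares = (a000**2 * a111**2 + a001**2 * a110**2
               + a010**2 * a101**2 + a011**2 * a100**2)
    # six products of two distinct complementary index pairs
    pairs = (a000*a001*a110*a111 + a000*a010*a101*a111
             + a000*a011*a100*a111 + a001*a010*a101*a110
             + a001*a011*a100*a110 + a010*a011*a100*a101)
    quads = a000*a011*a101*a110 + a001*a010*a100*a111
    return squares - 2 * pairs + 4 * quads
```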

### Determinant

Determinants are mathematical objects that are very useful in the analysis and solution of systems of linear equations. As shown by Cramer's rule, a nonhomogeneous system of linear equations has a unique solution iff the determinant of the system's matrix is nonzero (i.e., the matrix is nonsingular). For example, eliminating $x$, $y$, and $z$ from the equations

$$a_1 x + b_1 y + c_1 z = 0 \tag{1}$$
$$a_2 x + b_2 y + c_2 z = 0 \tag{2}$$
$$a_3 x + b_3 y + c_3 z = 0 \tag{3}$$

gives the expression

$$\begin{vmatrix} a_1 & b_1 & c_1 \\ a_2 & b_2 & c_2 \\ a_3 & b_3 & c_3 \end{vmatrix} = 0, \tag{4}$$

which is called the determinant for this system of equations. Determinants are defined only for square matrices. If the determinant of a matrix is 0, the matrix is said to be singular, and if the determinant is 1, the matrix is said to be unimodular. The determinant of a matrix $A$,

$$\begin{vmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{vmatrix}, \tag{5}$$

is commonly denoted $\det(A)$, $|A|$, or in component notation as $|a_{ij}|$ (Muir 1960, p. 17). Note that the notation $\det(A)$ may be more convenient when indicating the absolute value of a determinant, i.e., $|\det(A)|$ instead of $||A||$. The determinant is implemented in the Wolfram Language as Det[m]. A determinant is defined…
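Cramer's rule, mentioned above, expresses each unknown of a nonsingular system as a ratio of determinants; a minimal exact-arithmetic sketch (helper names illustrative):

```python
from fractions import Fraction

def det(a):
    """Laplace expansion along the first row; fine for small systems."""
    if len(a) == 1:
        return a[0][0]
    return sum((-1) ** j * a[0][j] *
               det([r[:j] + r[j + 1:] for r in a[1:]])
               for j in range(len(a)))

def cramer_solve(a, b):
    """Solve A x = b via x_i = det(A_i)/det(A), where A_i is A with
    column i replaced by b; requires det(A) != 0."""
    d = det(a)
    sol = []
    for i in range(len(a)):
        a_i = [row[:i] + [b[k]] + row[i + 1:]
               for k, row in enumerate(a)]
        sol.append(Fraction(det(a_i), d))
    return sol
```

For example, the system 2x + y = 5, x + 3y = 10 has determinant 5 and solution (x, y) = (1, 3).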

### Stäckel determinant

A determinant used to determine in which coordinate systems the Helmholtz differential equation is separable (Morse and Feshbach 1953). A determinant(1)in which are functions of alone is called a Stäckel determinant. A coordinate system is separable if it obeys the Robertson condition, namely that the scale factors in the Laplacian(2)can be rewritten in terms of functions defined by(3)such that can be written(4)When this is true, the separated equations are of the form(5)The s obey the minor equations(6)(7)(8)which are equivalent to(9)(10)(11)(Morse and Feshbach 1953, p. 509). This gives a total of four equations in nine unknowns. Morse and Feshbach (1953, pp. 655-666) give not only the Stäckel determinants for common coordinate systems, but also the elements of the determinant (although it is not clear how these are derived)...

### Hill determinant

A determinant which arises in the solution of the second-order ordinary differential equation(1)Writing the solution as a power series(2)gives a recurrence relation(3)The value of can be computed using the Hill determinant(4)where(5)(6)(7)and is the variable to solve for. The determinant can be given explicitly by the amazing formula(8)where(9)leading to the implicit equation for ,(10)

### Condensation

A method of computing the determinant of a square matrix due to Charles Dodgson (1866) (who is more famous under his pseudonym Lewis Carroll). The method is useful for hand calculations because, for an integer matrix, all entries in submatrices computed along the way must also be integers. The method is also implemented efficiently in parallel computation. Condensation is also known as the method of contractants (Macmillan 1955, Lotkin 1959). Given an $n \times n$ matrix, condensation successively computes an $(n-1) \times (n-1)$ matrix, an $(n-2) \times (n-2)$ matrix, and so on, until arriving at a $1 \times 1$ matrix whose only entry ends up being the determinant of the original matrix. To compute the $k \times k$ matrix ($k = n-1$, $n-2$, ..., 1), take the $2 \times 2$ connected subdeterminants of the $(k+1) \times (k+1)$ matrix and divide them by the central entries of the $(k+2) \times (k+2)$ matrix, with no divisions performed for $k = n-1$. The matrices arrived at in this manner are the matrices of determinants of the connected square submatrices of the original matrix. For example, the first condensation…
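The procedure above can be sketched for integer matrices (illustrative code; it assumes the interior entries encountered are nonzero, which the classical method requires or works around with row operations):

```python
def dodgson_det(a):
    """Dodgson condensation: repeatedly replace the matrix by its
    2x2 connected subdeterminants, dividing entrywise by the interior
    of the matrix from two steps earlier (no division on the first step)."""
    prev = None
    cur = [row[:] for row in a]
    while len(cur) > 1:
        k = len(cur) - 1
        nxt = [[cur[i][j] * cur[i+1][j+1] - cur[i][j+1] * cur[i+1][j]
                for j in range(k)] for i in range(k)]
        if prev is not None:
            # division is exact for integer input, as the text notes
            nxt = [[nxt[i][j] // prev[i+1][j+1] for j in range(k)]
                   for i in range(k)]
        prev, cur = cur, nxt
    return cur[0][0]
```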

### Schweins's theorem

If we expand the determinant of a matrix using determinant expansion by minors, first in terms of the minors of order $p$ formed from any $p$ rows, with their complementaries, and second in terms of the minors of order $p$ formed from any $p$ columns, with their complementaries; then the sum of the terms of the second expansion which have in common the elements in the intersection of the selected rows and columns is equal to the sum of the terms of the first expansion which have for one factor the minors of the $p$th order formed from the elements in the intersection of the selected rows and columns.

### Hessian

The Jacobian of the derivatives $\partial f / \partial x_1$, $\partial f / \partial x_2$, ..., $\partial f / \partial x_n$ of a function $f(x_1, x_2, \ldots, x_n)$ with respect to $x_1$, $x_2$, ..., $x_n$ is called the Hessian (or Hessian matrix) $H$ of $f$, i.e.,

$$H f(x_1, \ldots, x_n) = \begin{bmatrix} \dfrac{\partial^2 f}{\partial x_1^2} & \cdots & \dfrac{\partial^2 f}{\partial x_1 \, \partial x_n} \\ \vdots & \ddots & \vdots \\ \dfrac{\partial^2 f}{\partial x_n \, \partial x_1} & \cdots & \dfrac{\partial^2 f}{\partial x_n^2} \end{bmatrix}.$$

As in the case of the Jacobian, the term "Hessian" unfortunately appears to be used both to refer to this matrix and to the determinant of this matrix (Gradshteyn and Ryzhik 2000, p. 1069). In the second derivative test for determining extrema of a function $f(x, y)$, the discriminant is given by

$$D = \frac{\partial^2 f}{\partial x^2} \frac{\partial^2 f}{\partial y^2} - \left( \frac{\partial^2 f}{\partial x \, \partial y} \right)^2.$$

The Hessian can be implemented in the Wolfram Language as

```
HessianH[f_, x_List?VectorQ] := D[f, {x, 2}]
```
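Outside the Wolfram Language, the Hessian and the second-derivative-test discriminant can be approximated numerically; an illustrative sketch (`hessian` is a hypothetical helper):

```python
def hessian(f, x, h=1e-4):
    """Central-difference estimate of H[i][j] = d2f/(dx_i dx_j) at x."""
    n = len(x)
    H = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            xpp, xpm, xmp, xmm = (list(x) for _ in range(4))
            xpp[i] += h; xpp[j] += h
            xpm[i] += h; xpm[j] -= h
            xmp[i] -= h; xmp[j] += h
            xmm[i] -= h; xmm[j] -= h
            H[i][j] = (f(xpp) - f(xpm) - f(xmp) + f(xmm)) / (4 * h * h)
    return H

# f(x, y) = x^2 + 3 y^2 has Hessian [[2, 0], [0, 6]]; the discriminant
# f_xx f_yy - f_xy^2 = 12 > 0 with f_xx > 0, so (0, 0) is a minimum.
H = hessian(lambda p: p[0]**2 + 3 * p[1]**2, [0.0, 0.0])
D = H[0][0] * H[1][1] - H[0][1]**2
```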

### Circulant determinant

Gradshteyn and Ryzhik (2000) define the circulant determinant by

$$\begin{vmatrix} x_1 & x_2 & \cdots & x_n \\ x_n & x_1 & \cdots & x_{n-1} \\ \vdots & \vdots & \ddots & \vdots \\ x_2 & x_3 & \cdots & x_1 \end{vmatrix} = \prod_{j=1}^{n} \left( x_1 + x_2 \omega_j + x_3 \omega_j^2 + \cdots + x_n \omega_j^{n-1} \right), \tag{1}$$

where $\omega_j$ is the $j$th $n$th root of unity. The second-order circulant determinant is

$$\begin{vmatrix} x_1 & x_2 \\ x_2 & x_1 \end{vmatrix} = (x_1 + x_2)(x_1 - x_2), \tag{2}$$

and the third order is

$$\begin{vmatrix} x_1 & x_2 & x_3 \\ x_3 & x_1 & x_2 \\ x_2 & x_3 & x_1 \end{vmatrix} = (x_1 + x_2 + x_3)(x_1 + \omega x_2 + \omega^2 x_3)(x_1 + \omega^2 x_2 + \omega x_3), \tag{3}$$

where $\omega$ and $\omega^2$ are the complex cube roots of unity. The eigenvalues $\lambda_j$ of the corresponding $n \times n$ circulant matrix are

$$\lambda_j = x_1 + x_2 \omega_j + x_3 \omega_j^2 + \cdots + x_n \omega_j^{n-1}. \tag{4}$$
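Formula (1) evaluates the circulant determinant as a product over the $n$th roots of unity; a sketch (function name illustrative):

```python
import cmath

def circulant_det(x):
    """prod_j (x_1 + x_2 w_j + ... + x_n w_j^(n-1)) over the
    nth roots of unity w_j = exp(2 pi i j / n)."""
    n = len(x)
    out = 1 + 0j
    for j in range(n):
        w = cmath.exp(2j * cmath.pi * j / n)
        out *= sum(xk * w**k for k, xk in enumerate(x))
    return out
```

For [3, 1] this gives (3 + 1)(3 - 1) = 8, matching the second-order formula (2), up to floating-point rounding.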

### Hadamard's inequality

Let $D = \det(a_{ij})$ be an $n \times n$ determinant with complex (or real) elements $a_{ij}$; then $|D| \le n^{n/2}$ if $|a_{ij}| \le 1$.

### Chió pivotal condensation

Chió pivotal condensation is a method for evaluating an $n \times n$ determinant in terms of $(n-1) \times (n-1)$ determinants. It also leads to some remarkable determinant identities (Eves 1996, p. 130). Chió's pivotal condensation is a special case of Sylvester's determinant identity. Chió's condensation is carried out on an $n \times n$ matrix $A = (a_{ij})$ with $a_{11} \neq 0$ by forming the $(n-1) \times (n-1)$ matrix $B$ such that

$$b_{ij} = \begin{vmatrix} a_{11} & a_{1,j+1} \\ a_{i+1,1} & a_{i+1,j+1} \end{vmatrix}. \tag{1}$$

Then

$$\det(A) = \frac{\det(B)}{a_{11}^{n-2}}. \tag{2}$$

Explicitly,

$$\det(A) = \frac{1}{a_{11}^{n-2}} \begin{vmatrix} \begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{vmatrix} & \cdots & \begin{vmatrix} a_{11} & a_{1n} \\ a_{21} & a_{2n} \end{vmatrix} \\ \vdots & \ddots & \vdots \\ \begin{vmatrix} a_{11} & a_{12} \\ a_{n1} & a_{n2} \end{vmatrix} & \cdots & \begin{vmatrix} a_{11} & a_{1n} \\ a_{n1} & a_{nn} \end{vmatrix} \end{vmatrix} \tag{3}$$

(Eves 1996, pp. 129-134).
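Applying (1)-(2) recursively yields a complete determinant algorithm; an exact-arithmetic sketch (function name illustrative; it requires a nonzero leading entry at every stage):

```python
from fractions import Fraction

def chio_det(a):
    """Chio condensation: b_ij = a11*a_(i+1)(j+1) - a_1(j+1)*a_(i+1)1,
    then det(A) = det(B) / a11^(n-2), applied recursively."""
    n = len(a)
    if n == 1:
        return a[0][0]
    b = [[a[0][0] * a[i][j] - a[0][j] * a[i][0]
          for j in range(1, n)] for i in range(1, n)]
    return chio_det(b) / Fraction(a[0][0]) ** (n - 2)
```

Using Fraction keeps the divisions exact for integer input.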

### Hadamard's maximum determinant problem

Hadamard's maximum determinant problem asks to find the largest possible determinant (in absolute value) for any $n \times n$ matrix whose elements are taken from some set. Hadamard (1893) proved that the determinant of any $n \times n$ complex matrix $A$ with entries in the closed unit disk $|a_{ij}| \le 1$ satisfies

$$|\det A| \le n^{n/2}, \tag{1}$$

with equality attained by the Vandermonde matrix of the $n$th roots of unity (Faddeev and Sominskii 1965, p. 331; Brenner and Cummings 1972). The first few values of $n^{n/2}$ for $n = 1$, 2, ... are 1, 2, $3\sqrt{3}$, 16, $25\sqrt{5}$, 216, ..., and the squares of these are 1, 4, 27, 256, 3125, ... (OEIS A000312). A $(-1,1)$-matrix having a maximal determinant is known as a Hadamard matrix (Brenner and Cummings 1972). The same bound of $n^{n/2}$ applies to such matrices, and sharper bounds are known when the size of the matrix is not a multiple of 4. A summary of what is known about such bounds is given by Orrick and Solomon. For a $(0,1)$-matrix, Hadamard's bound can be improved to (2) (Faddeev and Sominskii 1965, problem 523; Brenner…
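For very small $n$ the maximal determinant over $(-1,1)$-matrices can be found by brute force, which also illustrates that Hadamard's bound $3^{3/2} \approx 5.196$ is not attained at $n = 3$ (sketch, hard-coded to the 3x3 case):

```python
from itertools import product

def det3(m):
    """Direct 3x3 determinant."""
    return (m[0][0] * (m[1][1]*m[2][2] - m[1][2]*m[2][1])
            - m[0][1] * (m[1][0]*m[2][2] - m[1][2]*m[2][0])
            + m[0][2] * (m[1][0]*m[2][1] - m[1][1]*m[2][0]))

def max_det_3x3_pm1():
    """Largest |det| over all 512 3x3 matrices with entries in {-1, 1}."""
    best = 0
    for e in product((-1, 1), repeat=9):
        m = [list(e[0:3]), list(e[3:6]), list(e[6:9])]
        best = max(best, abs(det3(m)))
    return best
```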

### Gram's inequality

Let $f_1(x)$, ..., $f_n(x)$ be real integrable functions over the closed interval $[a, b]$; then the determinant of their pairwise integrals satisfies

$$0 \le \begin{vmatrix} \int_a^b f_1^2 \, dx & \int_a^b f_1 f_2 \, dx & \cdots & \int_a^b f_1 f_n \, dx \\ \int_a^b f_2 f_1 \, dx & \int_a^b f_2^2 \, dx & \cdots & \int_a^b f_2 f_n \, dx \\ \vdots & \vdots & \ddots & \vdots \\ \int_a^b f_n f_1 \, dx & \int_a^b f_n f_2 \, dx & \cdots & \int_a^b f_n^2 \, dx \end{vmatrix} \le \prod_{k=1}^{n} \int_a^b f_k^2 \, dx.$$
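A tiny exact check using $f_1(x) = 1$ and $f_2(x) = x$ on $[0, 1]$, whose matrix of pairwise integrals is the $2 \times 2$ Hilbert matrix (illustrative sketch):

```python
from fractions import Fraction

def det2(g):
    return g[0][0] * g[1][1] - g[0][1] * g[1][0]

# integral over [0, 1] of x^i * x^j dx = 1/(i + j + 1)
g = [[Fraction(1, i + j + 1) for j in range(2)] for i in range(2)]
gram = det2(g)   # 1/3 - 1/4 = 1/12
```

The value 1/12 is nonnegative and no larger than the product of the diagonal entries, as the inequality requires.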

### Cauchy's determinant theorem

Any row and column of a determinant being selected, if the element common to them be multiplied by its cofactor in the determinant, and every product of another element of the row by another element of the column be multiplied by its cofactor, the sum of the results is equal to the given determinant. Symbolically, (1) (2), where the sign before each term is determined by the parity of the total number of permutation inversions in its suffix.

### Cofactor

Given a factor $a$ of a number $n = ab$, the cofactor of $a$ is $b = n/a$. A different type of cofactor, sometimes called a cofactor matrix, is a signed version of a minor $M_{ij}$ defined by

$$C_{ij} = (-1)^{i+j} M_{ij},$$

and used in the computation of the determinant of a matrix $A$ according to

$$|A| = \sum_{j=1}^{n} a_{ij} C_{ij}.$$

The cofactor can be computed in the Wolfram Language using

```
Cofactor[m_List?MatrixQ, {i_Integer, j_Integer}] :=
  (-1)^(i+j) Det[Drop[Transpose[Drop[Transpose[m], {j}]], {i}]]
```

which is the equivalent of the $(i, j)$th component of the CofactorMatrix defined below.

```
MinorMatrix[m_List?MatrixQ] := Map[Reverse, Minors[m], {0, 1}]
CofactorMatrix[m_List?MatrixQ] :=
  MapIndexed[#1 (-1)^(Plus @@ #2)&, MinorMatrix[m], {2}]
```

Cofactors can be computed using Cofactor[m, {i, j}] in the Wolfram Language package Combinatorica`.
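The defining relations can be mirrored in plain Python; a sketch (0-indexed, helper names illustrative) that also checks the "alien cofactor" property, namely that expanding one row against another row's cofactors gives 0:

```python
def det(a):
    """Laplace expansion along the first row."""
    if len(a) == 1:
        return a[0][0]
    return sum((-1) ** j * a[0][j] *
               det([r[:j] + r[j + 1:] for r in a[1:]])
               for j in range(len(a)))

def cofactor(m, i, j):
    """C_ij = (-1)^(i+j) * minor M_ij (0-indexed)."""
    sub = [r[:j] + r[j + 1:] for k, r in enumerate(m) if k != i]
    return (-1) ** (i + j) * det(sub)

m = [[2, 1, 3], [1, 4, 1], [3, 1, 2]]
C = [[cofactor(m, i, j) for j in range(3)] for i in range(3)]
row_expansion = sum(m[0][j] * C[0][j] for j in range(3))  # = det(m)
alien = sum(m[0][j] * C[1][j] for j in range(3))          # = 0
```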

### Casoratian

The Casoratian of sequences $y_1$, $y_2$, ..., $y_N$ is defined by the determinant

$$C(y_1, \ldots, y_N)(n) = \begin{vmatrix} y_1(n) & y_2(n) & \cdots & y_N(n) \\ y_1(n+1) & y_2(n+1) & \cdots & y_N(n+1) \\ \vdots & \vdots & \ddots & \vdots \\ y_1(n+N-1) & y_2(n+N-1) & \cdots & y_N(n+N-1) \end{vmatrix}.$$

The Casoratian is implemented in the Wolfram Language as Casoratian[y1, y2, ..., n]. The solutions $y_1$, $y_2$, ..., $y_N$ of a linear difference equation, for $n = 0$, 1, ..., are linearly independent sequences iff their Casoratian is nonzero (Zwillinger 1995).
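The defining determinant is easy to form directly; a sketch (helper names illustrative) checking that $1$ and $2^n$, which both solve $y(n+2) - 3y(n+1) + 2y(n) = 0$, have the nonvanishing Casoratian $2^n$:

```python
def det(a):
    """Laplace expansion along the first row."""
    if len(a) == 1:
        return a[0][0]
    return sum((-1) ** j * a[0][j] *
               det([r[:j] + r[j + 1:] for r in a[1:]])
               for j in range(len(a)))

def casoratian(seqs, n):
    """det of the k x k matrix with (i, j) entry seqs[j](n + i)."""
    k = len(seqs)
    return det([[seqs[j](n + i) for j in range(k)] for i in range(k)])
```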
