Linear algebra

Linear algebra Topics

Infinitesimal matrix change

Let , , and be square matrices with small, and define(1)where is the identity matrix. Then the inverse of is approximately(2)This can be seen by multiplying(3)(4)(5)(6)Note that if we instead let , and look for an inverse of the form , we obtain(7)(8)(9)(10)In order to eliminate the term, we require . However, then , so so there can be no inverse of this form.The exact inverse of can be found as follows.(11)so(12)Using a general matrix inverse identity then gives(13)

Matrix power

The nth power of a square matrix A for n a nonnegative integer is defined as the matrix product of n copies of A, A^n = A A ... A (n factors). A matrix to the zeroth power is defined to be the identity matrix of the same dimensions, A^0 = I. The matrix inverse is commonly denoted A^(-1), which should not be interpreted to mean 1/A.
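As an illustrative sketch (not part of the original entry), the following Python/NumPy snippet checks these conventions numerically; numpy.linalg.matrix_power handles positive, zero, and negative integer powers.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])

# A^3 as an explicit product of three copies of A
print(np.allclose(np.linalg.matrix_power(A, 3), A @ A @ A))          # True

# A^0 is the identity matrix of the same dimensions
print(np.allclose(np.linalg.matrix_power(A, 0), np.eye(2)))          # True

# A^(-1) is the matrix inverse, not elementwise reciprocals
print(np.allclose(np.linalg.matrix_power(A, -1), np.linalg.inv(A)))  # True
```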

Strassen formulas

The usual number of scalar operations (i.e., the total number of additions and multiplications) required to perform matrix multiplication is(1)(i.e., multiplications and additions). However, Strassen (1969) discovered how to multiply two matrices in(2)scalar operations, where is the logarithm to base 2, which is less than for . For a power of two (), the two parts of (2) can be written(3)(4)(5)(6)(7)(8)(9)(10)(11)(12)so (◇) becomes(13)Two matrices can therefore be multiplied(14)(15)with only(16)scalar operations (as it turns out, seven of them are multiplications and 18 are additions). Define the seven products (involving a total of 10 additions) as(17)(18)(19)(20)(21)(22)(23)Then the matrix product is given using the remaining eight additions as(24)(25)(26)(27)(Strassen 1969, Press et al. 1989).Matrix inversion of a matrix to yield can also be done in fewer operations than expected using the formulas(28)(29)(30)(31)(32)(33)(34)(35)(36)(37)(38)(Strassen..
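The following is a minimal recursive sketch of Strassen's seven-product block scheme in Python/NumPy; the function name strassen and the cutoff parameter are my own choices, and the sketch assumes square matrices whose dimension is a power of two.

```python
import numpy as np

def strassen(A, B, cutoff=64):
    """Multiply square matrices (dimension a power of two) with Strassen's 7-product scheme."""
    n = A.shape[0]
    if n <= cutoff:                 # small blocks: ordinary multiplication is faster
        return A @ B
    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]

    # Seven block products (each recursive) replace the usual eight
    M1 = strassen(A11 + A22, B11 + B22, cutoff)
    M2 = strassen(A21 + A22, B11, cutoff)
    M3 = strassen(A11, B12 - B22, cutoff)
    M4 = strassen(A22, B21 - B11, cutoff)
    M5 = strassen(A11 + A12, B22, cutoff)
    M6 = strassen(A21 - A11, B11 + B12, cutoff)
    M7 = strassen(A12 - A22, B21 + B22, cutoff)

    # Recombine the blocks using additions only
    C = np.empty_like(A)
    C[:h, :h] = M1 + M4 - M5 + M7
    C[:h, h:] = M3 + M5
    C[h:, :h] = M2 + M4
    C[h:, h:] = M1 - M2 + M3 + M6
    return C

rng = np.random.default_rng(0)
A = rng.standard_normal((128, 128))
B = rng.standard_normal((128, 128))
print(np.allclose(strassen(A, B), A @ B))   # True (up to rounding)
```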

Matrix multiplication

The product C of two matrices A and B is defined as c_(ik) = a_(ij) b_(jk), (1) where j is summed over for all possible values of i and k and the notation above uses the Einstein summation convention. The implied summation over repeated indices without the presence of an explicit sum sign is called Einstein summation, and is commonly used in both matrix and tensor analysis. Therefore, in order for matrix multiplication to be defined, the dimensions of the matrices must satisfy (n × m)(m × p) = (n × p), (2) where (a × b) denotes a matrix with a rows and b columns. Writing out the product explicitly,(3)where(4)(5)(6)(7)(8)(9)(10)(11)(12)Matrix multiplication is associative, as can be seen by taking(13)where Einstein summation is again used. Now, since , , and are scalars, use the associativity of scalar multiplication to write(14)Since this is true for all and , it must be true that(15)That is, matrix multiplication is associative. Equation (13) can therefore be written(16)without ambiguity. Due to associativity,..

Hermitian part

Every complex matrix A can be broken into a Hermitian part A_H = (1/2)(A + A^H) (i.e., A_H is a Hermitian matrix) and an antihermitian part A_AH = (1/2)(A - A^H) (i.e., A_AH is an antihermitian matrix). Here, A^H denotes the adjoint.
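A short Python/NumPy illustration (my own, not from the entry) of splitting a complex matrix into its Hermitian and antihermitian parts and checking that they sum back to the original:

```python
import numpy as np

A = np.array([[1 + 2j, 3 - 1j],
              [0 + 4j, 2 + 0j]])

AH = A.conj().T            # the adjoint (conjugate transpose)
H  = (A + AH) / 2          # Hermitian part:      H  == H.conj().T
AA = (A - AH) / 2          # antihermitian part:  AA == -AA.conj().T

print(np.allclose(H, H.conj().T))      # True
print(np.allclose(AA, -AA.conj().T))   # True
print(np.allclose(H + AA, A))          # True: the two parts recover A
```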

Square root method

The square root method is an algorithm which solves the matrix equation(1)for , with a symmetric matrix and a given vector. Convert to a triangular matrix such that(2)where is the transpose. Then(3)(4)so(5)giving the equations(6)(7)(8)(9)(10)These give(11)(12)(13)(14)(15)giving from . Now solve for in terms of the s and ,(16)(17)(18)which gives(19)(20)(21)Finally, find from the s and ,(22)(23)(24)giving the desired solution,(25)(26)(27)
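In modern terms the square root method is essentially a Cholesky factorization followed by forward and back substitution. A minimal SciPy sketch under that interpretation (the example matrix is assumed symmetric positive definite):

```python
import numpy as np
from scipy.linalg import cholesky, solve_triangular

A = np.array([[4.0, 2.0, 0.0],
              [2.0, 5.0, 1.0],
              [0.0, 1.0, 3.0]])           # symmetric positive definite
g = np.array([2.0, 1.0, 4.0])

C = cholesky(A, lower=False)              # upper triangular C with C.T @ C == A
y = solve_triangular(C.T, g, lower=True)  # forward substitution: C^T y = g
x = solve_triangular(C, y, lower=False)   # back substitution:    C x = y

print(np.allclose(A @ x, g))              # True
```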

Matrix inverse

The inverse of a square matrix A, sometimes called a reciprocal matrix, is a matrix A^(-1) such that A A^(-1) = I, (1) where I is the identity matrix. Courant and Hilbert (1989, p. 10) use the notation to denote the inverse matrix. A square matrix A has an inverse iff the determinant |A| != 0 (Lipschutz 1991, p. 45). The so-called invertible matrix theorem is a major result in linear algebra which associates the existence of a matrix inverse with a number of other equivalent properties. A matrix possessing an inverse is called nonsingular, or invertible. The matrix inverse of a square matrix may be taken in the Wolfram Language using the function Inverse[m]. For a 2 × 2 matrix(2)the matrix inverse is(3)(4)For a 3 × 3 matrix(5)the matrix inverse is(6)A general n × n matrix can be inverted using methods such as Gauss-Jordan elimination, Gaussian elimination, or LU decomposition. The inverse of a product A B of matrices A and B can be expressed in terms of A^(-1) and B^(-1). Let(7)Then(8)and(9)Therefore,(10)so(11)where..
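As a hedged illustration of the 2 × 2 case, the following NumPy snippet applies the standard closed-form 2 × 2 inverse (swap the diagonal, negate the off-diagonal, divide by the determinant) and checks it against numpy.linalg.inv:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])

# Standard closed form for a 2x2 inverse
a, b, c, d = A.ravel()
det = a * d - b * c
inv_2x2 = np.array([[ d, -b],
                    [-c,  a]]) / det

print(np.allclose(inv_2x2, np.linalg.inv(A)))   # True
print(np.allclose(A @ inv_2x2, np.eye(2)))      # True: A A^(-1) = I
```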

Matrix exponential

The power series that defines the exponential map also defines a map between matrices. In particular,(1)(2)(3)converges for any square matrix A, where I is the identity matrix. The matrix exponential is implemented in the Wolfram Language as MatrixExp[m]. The Kronecker sum satisfies the nice property exp(A)⊗exp(B) = exp(A⊕B) (4) (Horn and Johnson 1994, p. 208). Matrix exponentials are important in the solution of systems of ordinary differential equations (e.g., Bellman 1970). In some cases, it is a simple matter to express the matrix exponential. For example, when A is a diagonal matrix, exponentiation can be performed simply by exponentiating each of the diagonal elements. For example, given a diagonal matrix(5)the matrix exponential is given by(6)Since most matrices are diagonalizable, it is easiest to diagonalize the matrix before exponentiating it. When A is a nilpotent matrix, the exponential is given by a matrix polynomial because some power of A vanishes...
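A brief Python sketch (illustrative only) of the two easy cases mentioned above, the diagonal case and the diagonalizable case, using scipy.linalg.expm as a reference:

```python
import numpy as np
from scipy.linalg import expm

# Diagonal case: exponentiate each diagonal element
D = np.diag([1.0, 2.0, -1.0])
print(np.allclose(expm(D), np.diag(np.exp([1.0, 2.0, -1.0]))))   # True

# Diagonalizable case: exp(A) = P exp(D) P^(-1)
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
w, P = np.linalg.eig(A)
via_eig = P @ np.diag(np.exp(w)) @ np.linalg.inv(P)
print(np.allclose(expm(A), via_eig))                             # True
```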

Matrix equality

Two matrices A and B are said to be equal iff a_(ij) = b_(ij) (1) for all indices i and j. Therefore, two matrices are equal only if they have the same dimensions and all corresponding entries agree.

Matrix direct sum

The matrix direct sum of matrices constructs a block diagonal matrix from a set of square matrices, i.e.,(1)(2)

Matrix addition

Denote the sum of two matrices A and B (of the same dimensions) by C = A + B. The sum is defined by adding entries with the same indices, c_(ij) = a_(ij) + b_(ij), over all i and j. Matrix addition is therefore both commutative and associative.

Kronecker sum

The Kronecker sum is the matrix sum defined by A⊕B = A⊗I_b + I_a⊗B, (1) where A and B are square matrices of order a and b, respectively, I_n is the identity matrix of order n, and ⊗ denotes the Kronecker product. For example, the Kronecker sum of two 2 × 2 matrices A and B is given by(2)The Kronecker sum satisfies the nice property exp(A)⊗exp(B) = exp(A⊕B), (3) where exp(X) denotes a matrix exponential.
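An illustrative NumPy/SciPy sketch (not from the entry) that builds a Kronecker sum with numpy.kron and checks the matrix exponential property numerically:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])          # 2 x 2
B = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 3.0]])      # 3 x 3

# Kronecker sum: A (+) B = A x I_3 + I_2 x B
ksum = np.kron(A, np.eye(3)) + np.kron(np.eye(2), B)

# exp(A (+) B) = exp(A) x exp(B)
print(np.allclose(expm(ksum), np.kron(expm(A), expm(B))))   # True
```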

Antihermitian part

Every complex matrix A can be broken into a Hermitian part A_H = (1/2)(A + A^H) (i.e., A_H is a Hermitian matrix) and an antihermitian part A_AH = (1/2)(A - A^H) (i.e., A_AH is an antihermitian matrix). Here, A^H denotes the conjugate transpose.

Natural norm

Let ||x|| be a vector norm of a vector x such that ||A|| = max_(||x||=1) ||Ax||. Then ||A|| is a matrix norm which is said to be the natural norm induced (or subordinate) to the vector norm ||x||. For any natural norm, ||I|| = 1, where I is the identity matrix. The natural matrix norms induced by the L1-norm, L2-norm, and L-infty-norm are called the maximum absolute column sum norm, spectral norm, and maximum absolute row sum norm, respectively.

Maximum absolute row sum norm

The natural norm induced by the L-infty-norm is called the maximum absolute row sum norm and is defined by ||A||_infty = max_i sum_(j=1)^n |a_(ij)| for a matrix A. This matrix norm is implemented as Norm[m, Infinity].

Maximum absolute column sum norm

The natural norm induced by the L1-norm is called the maximum absolute column sum norm and is defined by ||A||_1 = max_j sum_(i=1)^n |a_(ij)| for a matrix A. This matrix norm is implemented as MatrixNorm[m, 1] in the Wolfram Language package MatrixManipulation` .

Matrix norm

Given a square complex or real matrix A, a matrix norm ||A|| is a nonnegative number associated with A having the properties 1. ||A|| > 0 when A != 0 and ||A|| = 0 iff A = 0, 2. ||kA|| = |k| ||A|| for any scalar k, 3. ||A + B|| <= ||A|| + ||B||, 4. ||AB|| <= ||A|| ||B||. Let lambda_1, ..., lambda_n be the eigenvalues of A, then(1)The matrix p-norm is defined for a real number p >= 1 and a matrix A by(2)where ||x|| is a vector norm. The task of computing a matrix p-norm is difficult for general p since it is a nonlinear optimization problem with constraints. Matrix norms are implemented as Norm[m, p], where p may be 1, 2, Infinity, or "Frobenius". The maximum absolute column sum norm ||A||_1 is defined as(3)The spectral norm ||A||_2, which is the square root of the maximum eigenvalue of A^H A (where A^H is the conjugate transpose),(4)is often referred to as "the" matrix norm. The maximum absolute row sum norm ||A||_infty is defined by(5)||A||_1, ||A||_2, and ||A||_infty satisfy the inequality(6)..
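A small NumPy illustration (my own) comparing the induced 1-, 2-, and infinity-norms with the column-sum, singular-value, and row-sum characterizations described above:

```python
import numpy as np

A = np.array([[ 1.0, -2.0,  3.0],
              [ 4.0,  0.0, -1.0]])

n1   = np.linalg.norm(A, 1)        # maximum absolute column sum norm
n2   = np.linalg.norm(A, 2)        # spectral norm (largest singular value)
ninf = np.linalg.norm(A, np.inf)   # maximum absolute row sum norm
nfro = np.linalg.norm(A, 'fro')    # Frobenius norm

print(np.isclose(n1,   np.abs(A).sum(axis=0).max()))              # True
print(np.isclose(ninf, np.abs(A).sum(axis=1).max()))              # True
print(np.isclose(n2,   np.linalg.svd(A, compute_uv=False)[0]))    # True
print(np.isclose(nfro, np.sqrt((np.abs(A) ** 2).sum())))          # True
```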

Frobenius norm

The Frobenius norm, sometimes also called the Euclidean norm (a term unfortunately also used for the vector L2-norm), is the matrix norm of an m × n matrix A defined as the square root of the sum of the absolute squares of its elements, ||A||_F = sqrt(sum_(i,j) |a_(ij)|^2) (Golub and van Loan 1996, p. 55). The Frobenius norm can also be considered as a vector norm. It is also equal to the square root of the matrix trace of A^H A, where A^H is the conjugate transpose, i.e., ||A||_F = sqrt(Tr(A^H A)). The Frobenius norm of a matrix is implemented as Norm[m, "Frobenius"] and of a vector as Norm[v, "Frobenius"].

Compatible

Let ||A|| be the matrix norm associated with the matrix A and ||x|| be the vector norm associated with a vector x. Let the product Ax be defined, then ||A|| and ||x|| are said to be compatible if ||Ax|| <= ||A|| ||x||.

Eigenspace

If A is an n × n square matrix and lambda is an eigenvalue of A, then the union of the zero vector 0 and the set of all eigenvectors corresponding to the eigenvalue lambda is a subspace of R^n known as the eigenspace of lambda.

Schur's inequalities

Let be an matrix with complex (or real) entries and eigenvalues , , ..., , then(1)(2)(3)where is the complex conjugate.

Lyapunov's first theorem

A necessary and sufficient condition for all the eigenvalues of a real matrix to have negative real parts is that the equationhas as a solution where is an matrix and is a positive definite quadratic form.

Left eigenvector

A left eigenvector is defined as a row vector X_L satisfying X_L A = lambda_L X_L. In many common applications, only right eigenvectors (and not left eigenvectors) need be considered. Hence the unqualified term "eigenvector" can be understood to refer to a right eigenvector.

Perron's theorem

If is an arbitrary set of positive numbers, then all eigenvalues of the matrix lie on the disk , where

Gershgorin circle theorem

The Gershgorin circle theorem (where "Gershgorin" is sometimes also spelled "Gersgorin" or "Gerschgorin") identifies a region in the complex plane that contains all the eigenvalues of a complex square matrix. For an matrix , define(1)Then each eigenvalue of is in at least one of the disks(2)The theorem can be made stronger as follows. Let be an integer with , and let be the sum of the magnitudes of the largest off-diagonal elements in column . Then each eigenvalue of is either in one of the disks(3)or in one of the regions(4)where is any subset of such that (Brualdi and Mellendorf 1994).
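A minimal NumPy sketch (illustrative, not from the entry) that computes the Gershgorin disks of a matrix and confirms that every eigenvalue lies in at least one of them:

```python
import numpy as np

A = np.array([[ 4.0, 1.0, 0.5],
              [ 0.2, 3.0, 0.3],
              [-0.1, 0.4, 1.0]])

# Gershgorin disks: center a_ii, radius = sum of |a_ij| over j != i
centers = np.diag(A)
radii = np.abs(A).sum(axis=1) - np.abs(centers)

for lam in np.linalg.eigvals(A):
    in_some_disk = np.any(np.abs(lam - centers) <= radii)
    print(lam, in_some_disk)    # each eigenvalue lies in at least one disk
```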

Ballieu's theorem

Let the characteristic polynomial of an complex matrix be written in the form(1)(2)Then for any set of positive numbers with and(3)all the eigenvalues (for , ..., ) lie on the closed disk in the complex plane.

Lu decomposition

A procedure for decomposing an N × N matrix A into a product of a lower triangular matrix L and an upper triangular matrix U,(1)LU decomposition is implemented in the Wolfram Language as LUDecomposition[m]. Written explicitly for a 3 × 3 matrix, the decomposition is(2)(3)This gives three types of equations (4)(5)(6)This gives equations for unknowns (the decomposition is not unique), and can be solved using Crout's method. To solve the matrix equation(7)first solve L y = b for y. This can be done by forward substitution(8)(9)for i = 2, ..., N. Then solve U x = y for x. This can be done by back substitution(10)(11)for i = N - 1, ..., 1.
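The following Python sketch illustrates the idea with a Doolittle-style factorization (unit diagonal on L, a close relative of the Crout normalization used in the entry) plus forward and back substitution; it assumes no pivoting is needed.

```python
import numpy as np

def lu_nopivot(A):
    """Doolittle LU factorization A = L U without pivoting (L has unit diagonal)."""
    n = A.shape[0]
    L = np.eye(n)
    U = A.astype(float).copy()
    for k in range(n - 1):
        for i in range(k + 1, n):
            L[i, k] = U[i, k] / U[k, k]        # assumes a nonzero pivot
            U[i, k:] -= L[i, k] * U[k, k:]
    return L, np.triu(U)

def solve_lu(L, U, b):
    n = len(b)
    y = np.zeros(n)
    for i in range(n):                          # forward substitution: L y = b
        y[i] = b[i] - L[i, :i] @ y[:i]
    x = np.zeros(n)
    for i in reversed(range(n)):                # back substitution: U x = y
        x[i] = (y[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x

A = np.array([[4.0, 3.0, 0.0],
              [6.0, 3.0, 1.0],
              [0.0, 2.0, 5.0]])
b = np.array([1.0, 2.0, 3.0])

L, U = lu_nopivot(A)
print(np.allclose(L @ U, A))                    # True
print(np.allclose(A @ solve_lu(L, U, b), b))    # True
```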

Jordan matrix decomposition

The Jordan matrix decomposition is the decomposition of a square matrix into the form(1)where and are similar matrices, is a matrix of Jordan canonical form, and is the matrix inverse of . In other words, is a similarity transformation of a matrix in Jordan canonical form. The proof that any square matrix can be brought into Jordan canonical form is rather complicated (Turnbull and Aitken 1932; Faddeeva 1958, p. 49; Halmos 1958, p. 112).Jordan decomposition is also associated with the matrix equation and the special case .The Jordan matrix decomposition is implemented in the Wolfram Language as JordanDecomposition[m], and returns a list s, j. Note that the Wolfram Language takes the Jordan block in the Jordan canonical form to have 1s along the superdiagonal instead of the subdiagonal. For example, a Jordan decomposition of(2)is given by(3)(4)..

Eigenvector

Eigenvectors are a special set of vectors associated with a linear system of equations (i.e., a matrix equation) that are sometimes also known as characteristic vectors, proper vectors, or latent vectors (Marcus and Minc 1988, p. 144).The determination of the eigenvectors and eigenvalues of a system is extremely important in physics and engineering, where it is equivalent to matrix diagonalization and arises in such common applications as stability analysis, the physics of rotating bodies, and small oscillations of vibrating systems, to name only a few. Each eigenvector is paired with a corresponding so-called eigenvalue. Mathematically, two different kinds of eigenvectors need to be distinguished: left eigenvectors and right eigenvectors. However, for many problems in physics and engineering, it is sufficient to consider only right eigenvectors. The term "eigenvector" used without qualification in such applications..

Rational canonical form

Any square matrix has a canonical form without any need to extend the field of its coefficients. For instance, if the entries of are rational numbers, then so are the entries of its rational canonical form. (The Jordan canonical form may require complex numbers.) There exists a nonsingular matrix such that(1)called the rational canonical form, where is the companion matrix for the monic polynomial(2)The polynomials are called the "invariant factors" of , and satisfy for , ..., (Hartwig 1996). The polynomial is the matrix minimal polynomial and the product is the characteristic polynomial of .The rational canonical form is unique, and shows the extent to which the minimal polynomial characterizes a matrix. For example, there is only one matrix whose matrix minimal polynomial is , which is(3)in rational canonical form.Given a linear transformation , the vector space becomes a -module, that is a module over the ring of polynomials..

Eigenvalue

Eigenvalues are a special set of scalars associated with a linear system of equations (i.e., a matrix equation) that are sometimes also known as characteristic roots, characteristic values (Hoffman and Kunze 1971), proper values, or latent roots (Marcus and Minc 1988, p. 144).The determination of the eigenvalues and eigenvectors of a system is extremely important in physics and engineering, where it is equivalent to matrix diagonalization and arises in such common applications as stability analysis, the physics of rotating bodies, and small oscillations of vibrating systems, to name only a few. Each eigenvalue is paired with a corresponding so-called eigenvector (or, in general, a corresponding right eigenvector and a corresponding left eigenvector; there is no analogous distinction between left and right for eigenvalues).The decomposition of a square matrix into eigenvalues and eigenvectors is known in this work as eigen..

Jordan block

A matrix, also called a canonical box matrix, having zeros everywhere except along the diagonal and superdiagonal, with each element of the diagonal consisting of a single number lambda, and each element of the superdiagonal consisting of a 1. For example,(Ayres 1962, p. 206). Note that the degenerate case of a 1 × 1 matrix is considered a Jordan block even though it lacks a superdiagonal to be filled with 1s (Strang 1988, p. 454). A Jordan canonical form consists of one or more Jordan blocks. The convention that 1s be along the subdiagonal instead of the superdiagonal is sometimes adopted instead (Faddeeva 1958, p. 50).

Eigen decomposition theorem

Let P be a matrix of eigenvectors of a given square matrix A and D be a diagonal matrix with the corresponding eigenvalues on the diagonal. Then, as long as P is a square matrix, A can be written as an eigen decomposition A = P D P^(-1), where D is a diagonal matrix. Furthermore, if A is symmetric, then the columns of P are orthogonal vectors. If P is not a square matrix (for example, the space of eigenvectors of A is one-dimensional), then P cannot have a matrix inverse and A does not have an eigen decomposition. However, if A is rectangular (m × n), then A can be written using a so-called singular value decomposition.

Jordan basis

Given a matrix , a Jordan basis satisfiesandand provides the means by which any complex matrix can be written in Jordan canonical form.

Eigen decomposition

The matrix decomposition of a square matrix into so-called eigenvalues and eigenvectors is an extremely important one. This decomposition generally goes under the name "matrix diagonalization." However, this moniker is less than optimal, since the process being described is really the decomposition of a matrix into a product of three other matrices, only one of which is diagonal, and also because all other standard types of matrix decomposition use the term "decomposition" in their names, e.g., Cholesky decomposition, Hessenberg decomposition, and so on. As a result, the decomposition of a matrix into matrices composed of its eigenvectors and eigenvalues is called eigen decomposition in this work.Assume has nondegenerate eigenvalues and corresponding linearly independent eigenvectors which can be denoted(1)Define the matrices composed of eigenvectors(2)(3)and eigenvalues(4)where is a diagonal matrix...
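A short NumPy illustration (my own) forming P and D from numpy.linalg.eig and reconstructing the matrix as P D P^(-1):

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [0.0, 2.0]])            # nondegenerate eigenvalues 3 and 2

w, P = np.linalg.eig(A)               # columns of P are the eigenvectors
D = np.diag(w)

# Eigen decomposition: A = P D P^(-1)
print(np.allclose(P @ D @ np.linalg.inv(P), A))   # True
```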

Matrix diagonalization

Matrix diagonalization is the process of taking a square matrix and converting it into a special type of matrix--a so-called diagonal matrix--that shares the same fundamental properties of the underlying matrix. Matrix diagonalization is equivalent to transforming the underlying system of equations into a special set of coordinate axes in which the matrix takes this canonical form. Diagonalizing a matrix is also equivalent to finding the matrix's eigenvalues, which turn out to be precisely the entries of the diagonalized matrix. Similarly, the eigenvectors make up the new set of axes corresponding to the diagonal matrix.The remarkable relationship between a diagonalized matrix, eigenvalues, and eigenvectors follows from the beautiful mathematical identity (the eigen decomposition) that a square matrix can be decomposed into the very special form(1)where is a matrix composed of the eigenvectors of , is the diagonal matrix constructed..

Jacobson canonical form

Let be a matrix with the elementary divisors of its characteristic matrix expressed as powers of its irreducible polynomials in the field , and consider an elementary divisor . If , thenwhere is a matrix of the same order as having the element 1 in the lower left-hand corner and zeros everywhere else.Ayres, F. Jr. Schaum's Outline of Theory and Problems of Matrices. New York: Schaum, pp. 205-206, 1962.

Zero matrix

A zero matrix is an m × n matrix consisting of all 0s (MacDuffee 1943, p. 27), denoted 0. Zero matrices are sometimes also known as null matrices (Akivis and Goldberg 1972, p. 71). A zero matrix is the additive identity of the additive group of m × n matrices. The matrix exponential of the zero matrix is given by the identity matrix I. An m × n zero matrix can be generated in the Wolfram Language as ConstantArray[0, {m, n}].

Unit matrix

A unit matrix is an integer matrix consisting of all 1s. The m × n unit matrix is often denoted J_(mn), or J_n if m = n. Square unit matrices J_n have determinant 0 for n >= 2. An m × n unit matrix can be generated in the Wolfram Language as ConstantArray[1, {m, n}]. Let R be a commutative ring with a multiplicative identity. Then the term "unit matrix" is also used to refer to an n × n square matrix A with entries in R for which there exists an n × n square matrix B such that AB = BA = I, where I is the identity matrix (MacDuffee 1943, p. 27; Marcus and Minc 1988, p. 69; Marcus and Minc 1992, p. 42). The term "unit matrix" is sometimes also used as a synonym for identity matrix (Akivis and Goldberg 1972, p. 71).

Stolarsky array

An interspersion array whose first row is the Fibonacci numbers.

Special matrix

An integer matrix whose entries satisfy(1)There are special minimal matrices of size .

Interspersion

An array , of positive integers is called an interspersion if 1. The rows of comprise a partition of the positive integers, 2. Every row of is an increasing sequence, 3. Every column of is a (possibly finite) increasing sequence, 4. If and are distinct rows of and if and are any indices for which , then . If an array is an interspersion, then it is a sequence dispersion. If an array is an interspersion, then the sequence given by for some is a fractal sequence. Examples of interspersion are the Stolarsky array and Wythoff array.

Integer matrix

A matrix whose entries are all integers. Special cases which arise frequently are those having only ±1 as entries (e.g., Hadamard matrix), (0,1)-matrices having only 0 and 1 as entries (e.g., adjacency matrix, Frobenius-König theorem, Gale-Ryser theorem, Hadamard's maximum determinant problem, hard square entropy constant, identity matrix, incidence matrix, Lam's problem), and those having -1, 0, and 1 as entries (e.g., alternating sign matrix, C-matrix). The zero matrix could be considered a degenerate case of an integer matrix. Integer matrices are sometimes also called "integral matrices," although this usage should be deprecated due to its confusing use of the term "integral" as an adjective.

Paley's theorem

Proved in 1933. If is an odd prime or and is any positive integer, then there is a Hadamard matrix of orderwhere is any positive integer such that . If is of this form, the matrix can be constructed with a Paley construction. If is divisible by 4 but not of the form (1), the Paley class is undefined. However, Hadamard matrices have been shown to exist for all for .

Paley construction

Hadamard matrices can be constructed using finite field GF() when and is odd. Pick a representation relatively prime to . Then by coloring white (where is the floor function) distinct equally spaced residues mod (, , , ...; , , , ...; etc.) in addition to 0, a Hadamard matrix is obtained if the powers of (mod ) run through . For example,(1)is of this form with and . Since , we are dealing with GF(11), so pick and compute its residues (mod 11), which are(2)(3)(4)(5)(6)(7)(8)(9)(10)(11)(12)Picking the first residues and adding 0 gives: 0, 1, 2, 4, 5, 8, which should then be colored in the matrix obtained by writing out the residues increasing to the left and up along the border (0 through , followed by ), then adding horizontal and vertical coordinates to get the residue to place in each square. (13)To construct , consider the representations . Only the first form can be used, with and . We therefore use GF(19), and color 9 residues plus 0 white.Now consider a more..

Paley class

The Paley class of a positive integer is defined as the set of all possible quadruples where(1) is an odd prime, and(2)

Le paige's theorem

Let be the matrix whose th entry is 1 if divides and 0 otherwise, let be the diagonal matrix , where is the totient function, and let be the matrix whose th entry is the greatest common divisor . Then Le Paige's theorem states thatwhere denotes the transpose (Le Paige 1878, Johnson 2003).As a corollary,(Smith 1876, Johnson 2003). For , 2, ... the first few values are 1, 1, 2, 4, 16, 32, 192, 768, ... (OEIS A001088).

Completely positive matrix

A completely positive matrix is a real square matrix A that can be factorized as A = B B^T, where B^T stands for the transpose of B and B is any (not necessarily square) matrix with nonnegative elements. The least possible number of columns of B is called the factorization index (or the cp-rank) of A. The study of complete positivity originated in inequality theory and quadratic forms (Diananda 1962, Hall and Newman 1963). There are two basic problems concerning complete positivity. 1. When is a given real matrix completely positive? 2. How can the cp-rank of A be calculated? These two fundamental problems still remain open. The applications of completely positive matrices can be found in block designs (Hall and Newman 1963) and economic modelling (Gray and Wilson 1980).

Hard square entropy constant

Let be the number of (0,1)-matrices with no adjacent 1s (in either columns or rows). For , 2, ..., is given by 2, 7, 63, 1234, ... (OEIS A006506). The hard square entropy constant is defined by(OEIS A085850). It is not known if this constant has an exact representation. The quantity arises in statistical physics (Baxter et al. 1980, Pearce and Seaton 1988), and is known as the entropy per site of hard squares. A related constant known as the hard hexagon entropy constant can also be defined.

Weisstein's conjecture

On July 10, 2003, Eric Weisstein computed the numbers of (0,1)-matrices all of whose eigenvalues are real and positive, obtaining counts for , 2, ... of 1, 3, 25, 543, 29281, .... Based on agreement with OEIS A003024, Weisstein then conjectured that is equal to the number of labeled acyclic digraphs on vertices.This result was subsequently proved by McKay et al. (2003, 2004).

Characteristic polynomial

The characteristic polynomial is the polynomial left-hand side of the characteristic equation det(A - lambda I) = 0, (1) where A is a square matrix and I is the identity matrix of identical dimension. Samuelson's formula allows the characteristic polynomial to be computed recursively without divisions. The characteristic polynomial of a matrix may be computed in the Wolfram Language as CharacteristicPolynomial[m, x]. The characteristic polynomial of a 2 × 2 matrix(2)can be rewritten in the particularly nice form(3)where is the matrix trace of and is its determinant. Similarly, the characteristic polynomial of a 3 × 3 matrix is(4)where Einstein summation has been used, which can also be written explicitly in terms of traces as(5)In general, the characteristic polynomial has the form(6)(7)where is the matrix trace of the matrix , , and is the sum of the -rowed diagonal minors of the matrix (Jacobson 1974, p. 109). Le Verrier's algorithm for computing the characteristic..

Symmetric part

Any square matrix A can be written as a sum A = A_S + A_A, (1) where A_S = (1/2)(A + A^T) (2) is a symmetric matrix known as the symmetric part of A and A_A = (1/2)(A - A^T) (3) is an antisymmetric matrix known as the antisymmetric part of A. Here, A^T is the transpose. The symmetric part of a tensor is denoted using parentheses as(4)(5)Symbols for the symmetric and antisymmetric parts of tensors can be combined, for example(6)(Wald 1984, p. 26).

Antisymmetric part

Any square matrix A can be written as a sum A = A_S + A_A, (1) where A_S = (1/2)(A + A^T) (2) is a symmetric matrix known as the symmetric part of A and A_A = (1/2)(A - A^T) (3) is an antisymmetric matrix known as the antisymmetric part of A. Here, A^T is the transpose. Any rank-2 tensor can be written as a sum of symmetric and antisymmetric parts as(4)The antisymmetric part of a tensor is sometimes denoted using the special notation(5)For a general rank-n tensor,(6)where is the permutation symbol. Symbols for the symmetric and antisymmetric parts of tensors can be combined, for example(7)(Wald 1984, p. 26).

Stability matrix

Given a system of two ordinary differential equations(1)(2)let and denote fixed points with , so(3)(4)Then expand about so(5)(6)To first-order, this gives(7)where the matrix, or its generalization to higher dimension, is called the stability matrix. Analysis of the eigenvalues (and eigenvectors) of the stability matrix characterizes the type of fixed point.

Ryser formula

A formula for the permanent of a matrix, where the sum is over all subsets of , and is the number of elements in . The formula can be optimized by picking the subsets so that only a single element is changed at a time (which is precisely a Gray code), reducing the number of additions from to . It turns out that the number of disks moved after the th step in the tower of Hanoi is the same as the element which needs to be added or deleted in the th addend of the Ryser formula (Gardner 1988, Vardi 1991, p. 111).
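A direct (non-Gray-code) Python implementation of the Ryser sum over column subsets, included as an illustrative sketch; the function name permanent_ryser is my own.

```python
import numpy as np
from itertools import combinations

def permanent_ryser(A):
    """Permanent of a square matrix via the Ryser formula (no Gray-code optimization)."""
    n = A.shape[0]
    total = 0.0
    for k in range(1, n + 1):
        for S in combinations(range(n), k):        # all column subsets of size k
            row_sums = A[:, list(S)].sum(axis=1)   # sum of each row over the subset
            total += (-1) ** k * np.prod(row_sums)
    return (-1) ** n * total

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
print(permanent_ryser(A))        # 10.0  (= 1*4 + 2*3)

B = np.ones((4, 4))
print(permanent_ryser(B))        # 24.0  (permanent of the all-ones 4x4 matrix = 4!)
```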

Symmetric successive overrelaxation method

The symmetric successive overrelaxation (SSOR) method combines two successive overrelaxation method (SOR) sweeps together in such a way that the resulting iteration matrix is similar to a symmetric matrix in the case that the coefficient matrix of the linear system is symmetric. The SSOR is a forward SOR sweep followed by a backward SOR sweep in which the unknowns are updated in the reverse order. The similarity of the SSOR iteration matrix to a symmetric matrix permits the application of SSOR as a preconditioner for other iterative schemes for symmetric matrices. This is the primary motivation for SSOR, since the convergence rate is usually slower than the convergence rate for SOR with optimal omega.

Successive overrelaxation method

The successive overrelaxation method (SOR) is a method of solving a linear system of equations derived by extrapolating the Gauss-Seidel method. This extrapolation takes the form of a weighted average between the previous iterate and the computed Gauss-Seidel iterate successively for each component,where denotes a Gauss-Seidel iterate and is the extrapolation factor. The idea is to choose a value for that will accelerate the rate of convergence of the iterates to the solution.In matrix terms, the SOR algorithm can be written aswhere the matrices , , and represent the diagonal, strictly lower-triangular, and strictly upper-triangular parts of , respectively.If , the SOR method simplifies to the Gauss-Seidel method. A theorem due to Kahan (1958) shows that SOR fails to converge if is outside the interval .In general, it is not possible to compute in advance the value of that will maximize the rate of convergence of SOR. Frequently, some heuristic..

Stationary iterative method

Stationary iterative methods are methods for solving a linear system of equations A x = b, where A is a given matrix and b is a given vector. Stationary iterative methods can be expressed in the simple form x^((k)) = B x^((k-1)) + c, where neither B nor c depends upon the iteration count k. The four main stationary methods are the Jacobi method, Gauss-Seidel method, successive overrelaxation method (SOR), and symmetric successive overrelaxation method (SSOR). The Jacobi method is based on solving for every variable locally with respect to the other variables; one iteration corresponds to solving for every variable once. It is easy to understand and implement, but convergence is slow. The Gauss-Seidel method is similar to the Jacobi method except that it uses updated values as soon as they are available. It generally converges faster than the Jacobi method, although still relatively slowly. The successive overrelaxation method can be derived from the Gauss-Seidel method by introducing..

Standard basis

A standard basis, also called a natural basis, is a special orthonormal vector basis in which each basis vector has a single nonzero entry with value 1. In -dimensional Euclidean space , the vectors are usually denoted (or ) with , ..., , where is the dimension of the vector space that is spanned by this basis according to(1)For example, in the Euclidean plane , the standard basis is(2)(3)Similarly, for Euclidean 3-space , the standard basis is(4)(5)(6)

Linear transformation

A linear transformation between two vector spaces and is a map such that the following hold: 1. for any vectors and in , and 2. for any scalar . A linear transformation may or may not be injective or surjective. When and have the same dimension, it is possible for to be invertible, meaning there exists a such that . It is always the case that . Also, a linear transformation always maps lines to lines (or to zero).The main example of a linear transformation is given by matrix multiplication. Given an matrix , define , where is written as a column vector (with coordinates). For example, consider(1)then is a linear transformation from to , defined by(2)When and are finite dimensional, a general linear transformation can be written as a matrix multiplication only after specifying a vector basis for and . When and have an inner product, and their vector bases, and , are orthonormal, it is easy to write the corresponding matrix . In particular, . Note that when using..

Linear system of equations

A linear system of equations is a set of linear equations in variables (sometimes called "unknowns"). Linear systems can be represented in matrix form as the matrix equation A x = b, (1) where A is the matrix of coefficients, x is the column vector of variables, and b is the column vector of solutions. If , then the system is (in general) overdetermined and there is no solution. If and the matrix is nonsingular, then the system has a unique solution in the variables. In particular, as shown by Cramer's rule, there is a unique solution if A has a matrix inverse A^(-1). In this case,(2)If , then the solution is simply . If A has no matrix inverse, then the solution set is the translate of a subspace of dimension less than or the empty set. If two equations are multiples of each other, solutions are of the form(3)for a real number. More generally, if , then the system is underdetermined. In this case, elementary row and column operations can be used to solve the system as far..

Jacobi method

The Jacobi method is a method of solving a matrix equation on a matrix that has no zeros along its main diagonal (Bronshtein and Semendyayev 1997, p. 892). Each diagonal element is solved for, and an approximate value plugged in. The process is then iterated until it converges. This algorithm is a stripped-down version of the Jacobi transformation method of matrix diagonalization. The Jacobi method is easily derived by examining each of the equations in the linear system of equations in isolation. If, in the th equation(1)solve for the value of while assuming the other entries of remain fixed. This gives(2)which is the Jacobi method. In this method, the order in which the equations are examined is irrelevant, since the Jacobi method treats them independently. The definition of the Jacobi method can be expressed with matrices as(3)where the matrices , , and represent the diagonal, strictly lower triangular, and strictly upper triangular..
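A minimal Python sketch of the Jacobi iteration (illustrative only; convergence is assumed, e.g. for a strictly diagonally dominant matrix):

```python
import numpy as np

def jacobi(A, b, x0=None, iters=50):
    """Jacobi iteration for A x = b; convergence assumed (e.g. A diagonally dominant)."""
    D = np.diag(A)                      # diagonal entries
    R = A - np.diagflat(D)              # off-diagonal part
    x = np.zeros_like(b) if x0 is None else x0.astype(float)
    for _ in range(iters):
        x = (b - R @ x) / D             # solve each equation for its own variable
    return x

A = np.array([[10.0, -1.0,  2.0],
              [-1.0, 11.0, -1.0],
              [ 2.0, -1.0, 10.0]])      # strictly diagonally dominant
b = np.array([6.0, 25.0, -11.0])

x = jacobi(A, b)
print(x)
print(np.allclose(A @ x, b, atol=1e-8))   # True after enough iterations
```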

Cramer's rule

Given a set of linear equations(1)consider the determinant(2)Now multiply by , and use the property of determinants that multiplication by a constant is equivalent to multiplication of each entry in a single column by that constant, so(3)Another property of determinants enables us to add a constant times any column to any column and obtain the same determinant, so add times column 2 and times column 3 to column 1,(4)If , then (4) reduces to , so the system has nondegenerate solutions (i.e., solutions other than (0, 0, 0)) only if (in which case there is a family of solutions). If and , the system has no unique solution. If instead and , then solutions are given by(5)and similarly for(6)(7)This procedure can be generalized to a set of equations so, given a system of linear equations(8)let(9)If , then nondegenerate solutions exist only if . If and , the system has no unique solution. Otherwise, compute(10)Then for . In the three-dimensional case, the..
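As an illustrative sketch, a small Python helper (the name cramer_solve is my own) that applies Cramer's rule by replacing one column at a time with the right-hand side:

```python
import numpy as np

def cramer_solve(A, b):
    """Solve A x = b by Cramer's rule: x_k = det(A_k) / det(A), where A_k has
    column k replaced by b. Practical only for small systems."""
    detA = np.linalg.det(A)
    if np.isclose(detA, 0.0):
        raise ValueError("determinant is zero: no unique solution")
    x = np.empty(len(b))
    for k in range(len(b)):
        Ak = A.copy()
        Ak[:, k] = b                    # replace column k by the right-hand side
        x[k] = np.linalg.det(Ak) / detA
    return x

A = np.array([[2.0, 1.0, -1.0],
              [1.0, 3.0,  2.0],
              [1.0, 0.0,  0.0]])
b = np.array([8.0, 13.0, 1.0])

print(cramer_solve(A, b))
print(np.allclose(cramer_solve(A, b), np.linalg.solve(A, b)))   # True
```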

Change of coordinates matrix

A change of coordinates matrix, also called a transition matrix, specifies the transformation from one vector basis to another under a change of basis. For example, if and are two vector bases in , and let be the coordinates of a vector in basis and its coordinates in basis .Write the basis vectors and for in coordinates relative to basis as(1)(2)This means that(3)(4)giving the change of coordinates matrix(5)which specifies the change of coordinates of a vector under the change of basis from to via(6)

Basis vector

A basis vector in an -dimensional vector space is one of any chosen set of vectors in the space forming a vector basis, i.e., having the property that every vector in the space can be written uniquely as a linear combination of them.For example, in the Euclidean plane, the unit vectors and form a vector basis since for any point ,so for this basis, and are basis vectors

Wronskian

The Wronskian of a set of functions phi_1, phi_2, ... is defined by the determinant whose rows are the functions and their successive derivatives. If the Wronskian is nonzero in some region, the functions are linearly independent. If W = 0 over some range, the functions are linearly dependent somewhere in the range.

Vector basis

A vector basis of a vector space is defined as a subset of vectors in that are linearly independent and span . Consequently, if is a list of vectors in , then these vectors form a vector basis if and only if every can be uniquely written as(1)where , ..., are elements of the base field.When the base field is the reals so that for , the resulting basis vectors are -tuples of reals that span -dimensional Euclidean space . Other possible base fields include the complexes , as well as various fields of positive characteristic considered in algebra, number theory, and algebraic geometry.A vector space has many different vector bases, but there are always the same number of basis vectors in each of them. The number of basis vectors in is called the dimension of . Every spanning list in a vector space can be reduced to a basis of the vector space.The simplest example of a vector basis is the standard basis in Euclidean space , in which the basis vectors lie along each coordinate..

Orthogonality condition

A linear transformation(1)(2)(3)is said to be an orthogonal transformationif it satisfies the orthogonality condition(4)where Einstein summation has been used and is the Kronecker delta.

Orthogonal transformation

An orthogonal transformation is a linear transformation which preserves a symmetric inner product. In particular, an orthogonal transformation (technically, an orthonormal transformation) preserves lengths of vectors and angles between vectors,(1)In addition, an orthogonal transformation is either a rigid rotation or an improper rotation (a rotation followed by a flip). (Flipping and then rotating can be realized by first rotating in the reverse direction and then flipping.) Orthogonal transformations correspond to and may be represented using orthogonal matrices.The set of orthonormal transformations forms the orthogonal group, and an orthonormal transformation can be realized by an orthogonal matrix.Any linear transformation in three dimensions(2)(3)(4)satisfying the orthogonality condition(5)where Einstein summation has been used and is the Kronecker delta, is an orthogonal transformation. If is an orthogonal..

Kronecker product

Given an matrix and a matrix , their Kronecker product , also called their matrix direct product, is an matrix with elements defined by(1)where(2)(3)For example, the matrix direct product of the matrix and the matrix is given by the following matrix,(4)(5)The matrix direct product is implemented in the Wolfram Language as KroneckerProduct[a, b].The matrix direct product gives the matrix of the linear transformation induced by the vector space tensor product of the original vector spaces. More precisely, suppose that(6)and(7)are given by and . Then(8)is determined by(9)
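A short NumPy illustration (not from the entry) of the block structure of numpy.kron, together with the standard mixed-product property (A⊗B)(C⊗D) = (AC)⊗(BD):

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])
B = np.array([[0, 1],
              [1, 0]])

K = np.kron(A, B)          # 4 x 4 block matrix with blocks a_ij * B
print(K)

# Block (0, 1) of the result equals a_01 * B = 2 * B
print(np.array_equal(K[0:2, 2:4], 2 * B))    # True

# Standard mixed-product property: (A x B)(C x D) = (A C) x (B D)
C = np.array([[2, 0],
              [1, 1]])
D = np.array([[1, 1],
              [0, 2]])
print(np.array_equal(np.kron(A, B) @ np.kron(C, D), np.kron(A @ C, B @ D)))  # True
```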

Vandermonde matrix

A Vandermonde matrix is a type of matrix that arises in the polynomial least squares fitting, Lagrange interpolating polynomials (Hoffman and Kunze p. 114), and the reconstruction of a statistical distribution from the distribution's moments (von Mises 1964; Press et al. 1992, p. 83). A Vandermonde matrix of order is of the form(Press et al. 1992; Meyer 2000, p. 185). A Vandermonde matrix is sometimes also called an alternant matrix (Marcus and Minc 1992, p. 15). Note that some authors define the transpose of this matrix as the Vandermonde matrix (Marcus and Minc 1992, p. 15; Golub and Van Loan 1996; Aldrovandi 2001, p. 193).The solution of an Vandermonde matrix equation requires operations. The determinants of Vandermonde matrices have a particularly simple form...

Hamiltonian matrix

A complex matrix is said to be Hamiltonian if(1)where is the matrix of the form(2) is the identity matrix, and denotes the conjugate transpose of a matrix . An analogous definition holds in the case of real matrices by requiring that be symmetric, i.e., by replacing by in (1).Note that this criterion specifies precisely how a Hamiltonian matrix must look. Indeed, every Hamiltonian matrix (here assumed to have complex entries) must have the form(3)where satisfy and . This characterization holds for having strictly real entries as well by replacing all instances of the conjugate transpose operator in (1) by the transpose operator instead.

Upper triangular matrix

A triangular matrix of the form(1)Written explicitly,(2)An upper triangular matrix with elements f[i,j] above the diagonal could be formed in versions of the Wolfram Language prior to 6 using UpperDiagonalMatrix[f, n], which could be run after first loading LinearAlgebra`MatrixManipulation`.A strictly upper triangular matrix is an upper triangular matrix having 0s along the diagonal as well, i.e., for .

Positive matrix

A positive matrix is a real or integer matrix (a_(ij)) for which each matrix element is a positive number, i.e., a_(ij) > 0 for all i, j. Positive matrices are therefore a subset of nonnegative matrices. Note that a positive matrix is not the same as a positive definite matrix.

Gram matrix

Given a set of vectors (points in ), the Gram matrix is the matrix of all possible inner products of , i.e.,where denotes the transpose.The Gram matrix determines the vectors up to isometry.

Unimodular matrix

A unimodular matrix is a real square matrix with determinant (Born and Wolf 1980, p. 55; Goldstein 1980, p. 149). More generally, a matrix with elements in the polynomial domain of a field is called unimodular if it has an inverse whose elements are also in . A matrix is therefore unimodular iff its determinant is a unit of (MacDuffee 1943, p. 137). The matrix inverse of a unimodular real matrix is another unimodular matrix. There are an infinite number of unimodular matrices not containing any 0s or . One parametric family is(1)Specific examples of unimodular matrices having small positive integer entries include(2)(Guy 1989, 1994). The nth power of a unimodular matrix(3)is given by(4)where(5)and the are Chebyshev polynomials of the second kind,(6)(Born and Wolf 1980, p. 67)...

Tridiagonal matrix

A square matrix with nonzero elements only on the diagonal and slots horizontally or vertically adjacent to the diagonal (i.e., along the subdiagonal and superdiagonal). Computing the determinant of such a matrix requires only (as opposed to ) arithmetic operations (Acton 1990, p. 332). Efficient solution of the matrix equation for , where is a tridiagonal matrix, can be performed in the Wolfram Language using LinearSolve on , represented as a SparseArray.

Positive definite matrix

An complex matrix is called positive definite if(1)for all nonzero complex vectors , where denotes the conjugate transpose of the vector . In the case of a real matrix , equation (1) reduces to(2)where denotes the transpose. Positive definite matrices are of both theoretical and computational importance in a wide variety of applications. They are used, for example, in optimization algorithms and in the construction of various linear regression models (Johnson 1970).A linear system of equations with a positive definite matrix can be efficiently solved using the so-called Cholesky decomposition. A positive definite matrix has at least one matrix square root. Furthermore, exactly one of its matrix square roots is itself positive definite.A necessary and sufficient condition for a complex matrix to be positive definite is that the Hermitian part(3)where denotes the conjugate transpose, be positive definite. This means that a real matrix..
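An illustrative Python sketch (my own) that tests positive definiteness of a Hermitian matrix by attempting a Cholesky factorization and then forms the positive definite matrix square root via the eigen decomposition:

```python
import numpy as np

def is_positive_definite(A):
    """Check positive definiteness of a Hermitian matrix by attempting Cholesky."""
    try:
        np.linalg.cholesky(A)
        return True
    except np.linalg.LinAlgError:
        return False

A = np.array([[2.0, -1.0],
              [-1.0, 2.0]])            # symmetric positive definite
print(is_positive_definite(A))          # True

# The positive definite matrix square root via the eigen decomposition
w, Q = np.linalg.eigh(A)
sqrtA = Q @ np.diag(np.sqrt(w)) @ Q.T
print(np.allclose(sqrtA @ sqrtA, A))    # True
print(is_positive_definite(sqrtA))      # True
```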

Triangular matrix

An upper triangular matrix is defined by(1)Written explicitly,(2)A lower triangular matrix is defined by(3)Written explicitly,(4)

Permutation matrix

A permutation matrix is a matrix obtained by permuting the rows of an n × n identity matrix according to some permutation of the numbers 1 to n. Every row and column therefore contains precisely a single 1 with 0s everywhere else, and every permutation corresponds to a unique permutation matrix. There are therefore n! permutation matrices of size n × n, where n! is a factorial. The permutation matrices of order two are given by(1)and of order three are given by(2)A permutation matrix is nonsingular, and the determinant is always ±1. In addition, a permutation matrix P satisfies P P^T = I, (3) where P^T is a transpose and I is the identity matrix. Applied to a matrix A, P A gives A with rows interchanged according to the permutation vector, and A P gives A with the columns interchanged according to the given permutation vector. Interpreting the 1s in an n × n permutation matrix as rooks gives an allowable configuration of n nonattacking rooks on an n × n chessboard. However, the permutation matrices provide..

Toeplitz matrix

Given numbers t_k, where k = -(n-1), ..., 0, 1, ..., n-1, a Toeplitz matrix is a matrix which has constant values along negative-sloping diagonals, i.e., a matrix of the form t_(ij) = t_(i-j). Matrix equations of the form T x = y can be solved with O(n^2) operations. Typical problems modelled by Toeplitz matrices include the numerical solution of certain differential and integral equations (regularization of inverse problems), the computation of splines, time series analysis, signal and image processing, Markov chains, and queuing theory (Bini 1995).

Periodic matrix

A square matrix A such that the matrix power A^(k+1) = A for k a positive integer is called a periodic matrix. If k is the least such integer, then the matrix is said to have period k. If k = 1, then A^2 = A and A is called idempotent.

Equivalent matrix

Two matrices A and B are equal to each other, written A = B, if they have the same dimensions and the same elements a_(ij) = b_(ij) for i = 1, ..., n and j = 1, ..., m. Gradshteyn and Ryzhik (2000) call an m × n matrix A "equivalent" to another m × n matrix B iff B = S A T for S and T any suitable nonsingular m × m and n × n matrices, respectively.

Pauli matrices

The Pauli matrices, also called the Pauli spin matrices, are complex matrices that arise in Pauli's treatment of spin in quantum mechanics. They are defined by(1)(2)(3)(Condon and Morse 1929, p. 213; Gasiorowicz 1974, p. 232; Goldstein 1980, p. 156; Liboff 1980, p. 453; Arfken 1985, p. 211; Griffiths 1987, p. 115; Landau and Lifschitz 1991, p. 204; Landau 1996, p. 224).The Pauli matrices are implemented in the Wolfram Language as PauliMatrix[n], where , 2, or 3.The Pauli spin matrices satisfy the identities(4)(5)(6)where is the identity matrix, is the Kronecker delta, is the permutation symbol, the leading is the imaginary unit (not the index ), and Einstein summation is used in (6) to sum over the index (Arfken 1985, p. 211; Griffiths 1987, p. 139; Landau and Lifschitz 1991, pp. 204-205).The Pauli matrices plus the identity matrix form a complete set, so any matrix..

Elementary matrix

An matrix is an elementary matrix if it differs from the identity by a single elementary row or column operation.

Submatrix

A submatrix of an matrix (with , ) is a matrix formed by taking a block of the entries of this size from the original matrix.

Strictly upper triangular matrix

A strictly upper triangular matrix is an upper triangular matrix having 0s along the diagonal as well as the lower portion, i.e., a matrix such that for . Written explicitly,

Doubly stochastic matrix

A doubly stochastic matrix is a matrix A = (a_(ij)) such that a_(ij) >= 0 and sum_i a_(ij) = sum_j a_(ij) = 1 for all i and j. In other words, both the matrix itself and its transpose are stochastic. The following tables give the number of distinct doubly stochastic matrices (and distinct nonsingular doubly stochastic matrices) over Z_m for small n.

doubly stochastic n × n matrices over Z_m (counts for n = 1, 2, ...):
m = 2: 1, 2, 16, 512, ...
m = 3: 1, 3, 81, ...
m = 4: 1, 4, 256, ...

doubly stochastic nonsingular n × n matrices over Z_m (counts for n = 1, 2, ...):
m = 2: 1, 2, 6, 192, ...
m = 3: 1, 2, 54, ...
m = 4: 1, 4, 192, ...

Horn (1954) proved that if , where and are complex -vectors, is doubly stochastic, and , , ..., are any complex numbers, then lies in the convex hull of all the points , , where is the set of all permutations of . Sherman (1955) also proved the converse. Birkhoff (1946) proved that any doubly stochastic matrix is in the convex hull of permutation matrices for . There are several proofs and extensions of this result (Dulmage and Halperin 1955, Mendelsohn and Dulmage 1958, Mirsky 1958, Marcus..

Strictly lower triangular matrix

A lower triangular matrix having 0s along the diagonal as well as the upper portion, i.e., a matrix such that for . Written explicitly,

Doubly nonnegative matrix

A doubly nonnegative matrix is a real positive semidefinite square matrix with nonnegative entries. Any doubly nonnegative matrix of order can be expressed as a Gram matrix of vectors (where is the rank of ), with each pair of vectors possessing a nonnegative inner product, i.e., . Every completely positive matrix is doubly nonnegative.

Nonsingular matrix

A square matrix that is not singular, i.e., one that has a matrix inverse. Nonsingular matrices are sometimes also called regular matrices. A square matrix is nonsingular iff its determinant is nonzero (Lipschutz 1991, p. 45). For example, there are 6 nonsingular 2 × 2 (0,1)-matrices. The following table gives the numbers of nonsingular n × n matrices for certain matrix classes.

matrix type | OEIS | counts for n = 1, 2, ...
(-1,0,1)-matrices | A056989 | 2, 48, 11808, ...
(-1,1)-matrices | A056990 | 2, 8, 192, 22272, ...
(0,1)-matrices | A055165 | 1, 6, 174, 22560, ...

Dirac matrices

The Dirac matrices are a class of matrices which arise in quantum electrodynamics. There are a variety of different symbols used, and Dirac matrices are also known as gamma matrices or Dirac gamma matrices.The Dirac matrices may be implemented in a future version of the Wolfram Language as DiracGammaMatrix[n], where , 2, 3, 4, or 5.The Dirac matrices are defined as the matrices(1)(2)where are the () Pauli matrices, is the identity matrix, , 2, 3, and is the Kronecker product. Explicitly, this set of Dirac matrices is then given by(3)(4)(5)(6)(7)(8)(9)These matrices satisfy the anticommutation identities(10)(11)where is the Kronecker delta, the commutation identity(12)and are cyclic under permutations of indices(13)(14)A total of 16 Dirac matrices can be defined via(15)for , 1, 2, 3 and where (Arfken 1985, p. 212). These matrices satisfy 1. , where is the determinant, 2. , 3. , where denotes the conjugate transpose, making them Hermitian,..

Nonpositive matrix

A nonpositive matrix is a real or integer matrix for which each matrix element is a nonpositive number, i.e., for all , .Nonpositive matrices are therefore a superset of negative matrices.

Square matrix

A matrix for which horizontal and vertical dimensions are the same (i.e., an matrix). In versions of the Wolfram Language prior to 6, a matrix could be tested to see if it was square using SquareMatrixQ[m] after loading the package LinearAlgebra`MatrixManipulation`.Consider the numbers of matrices on distinct symbols. The number of distinct matrices modulo rotations and reflections for , 2, ... are given by 1, 3, 45360, ... (OEIS A086829).Consider an matrix consisting of the integers 1 to arranged in any order. Then the maximal determinants possible for , 2, ... are 1, 10, 412, 40800, 6839492, ... (OEIS A085000).Consider an matrix with single copies of the digits 1, 2, ..., and the rest of the elements zero. Then the triangle of matrices with digits , 1, ..., that are rotationally and reflectively distinct is 1, 1; 1, 1, 2, 3, 3; 1, 3, 12, 66, 378, 1890, 7560, 22680, 45360, 45360; ... (OEIS A087074)...

Nonnegative matrix

A nonnegative matrix is a real or integer matrix for which each matrix element is a nonnegative number, i.e., for all , .Nonnegative matrices are therefore a superset of positive matrices.Nonnegative matrices are important in a variety of applications and have a number of attractive mathematical properties. Together with positive semidefinite matrices, they therefore serve as a natural generalization of nonnegative real numbers (Johnson 1981). The most fundamental properties of nonnegative matrices require fairly advanced mathematics and were established by Perron (1907) and Frobenius (1912).

Diagonally dominant matrix

A square matrix is called diagonally dominant if for all . is called strictly diagonally dominant if for all .A strictly diagonally dominant matrix is nonsingular. A symmetric diagonally dominant real matrix with nonnegative diagonal entries is positive semidefinite.If a matrix is strictly diagonally dominant and all its diagonal elements are positive, then the real parts of its eigenvalues are positive; if all its diagonal elements are negative, then the real parts of its eigenvalues are negative. These results follow from the Gershgorin circle theorem.

Nilpotent matrix

There are two equivalent definitions for a nilpotent matrix. 1. A square matrix whose eigenvalues are all 0. 2. A square matrix A such that A^n is the zero matrix 0 for some positive integer matrix power n, known as the index (Ayres 1962, p. 11).

Diagonal matrix

A diagonal matrix is a square matrix of the form(1)where is the Kronecker delta, are constants, and , 2, ..., , with no implied summation over indices. The general diagonal matrix is therefore of the form(2)often denoted . The diagonal matrix with elements can be computed in the Wolfram Language using DiagonalMatrix[l]. The determinant of a diagonal matrix given by is . This means that , so for , 2, ..., the first few values are 1, 2, 6, 24, 120, 720, 5040, 40320, ... (OEIS A000142). Given a matrix equation of the form(3)multiply through to obtain(4)Since in general, for , this can be true only if off-diagonal components vanish. Therefore, must be diagonal. Given a diagonal matrix , the matrix power can be computed simply by taking each element to the power in question,(5)(6)Similarly, a matrix exponential can be performed simply by exponentiating each of the diagonal elements,(7)..

Negative matrix

A negative matrix is a real or integer matrix (a_(ij)) for which each matrix element is a negative number, i.e., a_(ij) < 0 for all i, j. Negative matrices are therefore a subset of nonpositive matrices.

Coxeter matrix

An square matrix with(1)(2)for all , ..., .

Covariance matrix

Given n sets of variates denoted x_1, ..., x_n, the first-order covariance matrix is defined by V_(ij) = cov(x_i, x_j) = <(x_i - mu_i)(x_j - mu_j)>, where mu_i is the mean. Higher order matrices are given by V_(ij)^(mn) = <(x_i - mu_i)^m (x_j - mu_j)^n>. An individual matrix element V_(ij) = cov(x_i, x_j) is called the covariance of x_i and x_j.

Monotonic matrix

A monotonic matrix of order is an matrix in which every element is either 0 or contains a number from the set subject to the conditions 1. The filled-in elements in each row are strictly increasing, 2. The filled-in elements in each column are strictly decreasing, and 3. Positive slope condition: for two filled-in cells with same element, the one further right is in an earlier row. The numbers of distinct monotonic matrices of orders , 2, ... are 2, 19, 712, ... (OEIS A086976). For example, the monotonic matrices of order 2 areThe maximum numbers of cells occupied in an matrix for , 2, ... are given by 1, 2, 5, 8, 11, 14, 19, ... (OEIS A070214).

Copositive matrix

A copositive matrix is a real n × n square matrix A = (a_(ij)) that makes the corresponding quadratic form x^T A x nonnegative for all nonnegative n-vectors x. Copositive matrices have many applications including in the field of control theory. The cone consisting of all copositive matrices of order n is just the dual cone of all completely positive matrices of order n.

Minimal matrix

A matrix with 0 determinant whose determinant becomes nonzero when any element on or below the diagonal is changed from 0 to 1. An example isThere are minimal special matrices of size .

Conjugate matrix

A conjugate matrix is a matrix obtained from a given matrix by taking the complex conjugate of each element of (Courant and Hilbert 1989, p. 9), i.e.,The notation is sometimes also used, which can lead to confusion since this symbol is also used to denote the conjugate transpose.Using a matrix in a similarity transformation of a given matrix is also known as conjugating by . In this case, and are known as similar matrices.

Singular matrix

A square matrix that does not have a matrix inverse. A matrix is singular iff its determinant is 0. For example, there are 10 singular 2 × 2 (0,1)-matrices. The following table gives the numbers of singular n × n matrices for certain matrix classes.

matrix type | OEIS | counts for n = 1, 2, ...
(-1,0,1)-matrices | A057981 | 1, 33, 7875, 15099201, ...
(-1,1)-matrices | A057982 | 0, 8, 320, 43264, ...
(0,1)-matrices | A046747 | 1, 10, 338, 42976, ...

Matrix equation

Nonhomogeneous matrix equations of the form(1)can be solved by taking the matrix inverse to obtain(2)This equation will have a nontrivial solution iff the determinant . In general, more numerically stable techniques of solving the equation include Gaussian elimination, LU decomposition, or the square root method. For a homogeneous matrix equation(3)to be solved for the s, consider the determinant(4)Now multiply by , which is equivalent to multiplying the first column (or any column) by ,(5)The value of the determinant is unchanged if multiples of columns are added to other columns. So add times column 2, ..., and times column to the first column to obtain(6)But from the original matrix, each of the entries in the first columns is zero since(7)so(8)Therefore, if there is an which is a solution, the determinant is zero. This is also true for , ..., , so the original homogeneous system has a nontrivial solution for all s only if the determinant..

Congruent matrices

Two square matrices and are called congruent if there exists a nonsingular matrix such thatwhere is the transpose.

Similar matrices

Two square matrices and that are related by(1)where is a square nonsingular matrix are said to be similar. A transformation of the form is called a similarity transformation, or conjugation by . For example,(2)and(3)are similar under conjugation by(4)Similar matrices represent the same linear transformation after a change of basis (for the domain and range simultaneously). Recall that a matrix corresponds to a linear transformation, and a linear transformation corresponds to a matrix after choosing a basis ,(5)Changing the basis changes the coefficients of the matrix,(6)If uses the standard basis vectors, then is the matrix using the basis vectors .
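A minimal sketch of a similarity transformation, using arbitrary example matrices a and p; conjugation preserves the eigenvalues, which similar matrices must share:
a = {{1, 2}, {0, 3}}; p = {{1, 1}, {0, 1}};
b = Inverse[p].a.p;
Eigenvalues[a] == Eigenvalues[b]   (* True *)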

Complex matrix

A matrix whose elements may contain complexnumbers.The matrix product of two complex matrices is given by(1)where(2)(3)(4)(5)(6)(7)(8)(9)Hadamard (1893) proved that the determinant of any complex matrix with entries in the closed unit disk satisfies(10)(Hadamard's maximum determinant problem), with equality attained by the Vandermonde matrix of the roots of unity (Faddeev and Sominskii 1965, p. 331; Brenner 1972). The first few values for , 2, ... are 1, 2, , 16, , 216, ....Studying the maximum possible eigenvalue norms for random complex matrices is computationally intractable. Although average properties of the distribution of can be determined, finding the maximum value corresponds to determining if the set of matrices contains a singular matrix, which has been proven to be an NP-complete problem (Poljak and Rohn 1993, Kaltofen 2000). The above plots show the distributions for , , and matrix eigenvalue norms for elements..

Lower triangular matrix

A triangular matrix of the form(1)Written explicitly,(2)A lower triangular matrix with elements f[i,j] below the diagonal could be formed in versions of the Wolfram Language prior to 6 using LowerDiagonalMatrix[f, n], which could be run after first loading LinearAlgebra`MatrixManipulation`.A strictly lower triangular matrix is a lower triangular matrix having 0s along the diagonal as well, i.e., for .

Least common multiple matrix

Let be a set of distinct positive integers. Then the matrix having the least common multiple of and as its th entry is called the least common multiple matrix on .

Kac matrix

The tridiagonal matrix (also called the Clement matrix) defined byThe eigenvalues are for , 1, ..., .
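A sketch constructing the Kac (Clement) matrix under the common convention of a zero diagonal, superdiagonal 1, 2, ..., n, and subdiagonal n, ..., 1 (kacMatrix is a name introduced here for illustration); the eigenvalues come out as -n, -n+2, ..., n-2, n:
kacMatrix[n_] := SparseArray[
  {Band[{1, 2}] -> Range[n], Band[{2, 1}] -> Range[n, 1, -1]},
  {n + 1, n + 1}]
Sort[Eigenvalues[Normal[kacMatrix[4]]]]   (* {-4, -2, 0, 2, 4} *)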

Commuting matrices

Two matrices and which satisfy(1)under matrix multiplication are said tobe commuting.In general, matrix multiplication is not commutative. Furthermore, in general there is no matrix inverse even when . Finally, can be zero even without or . And when , we may still have , a simple example of which is provided by(2)(3)for which(4)but(5)(Taussky 1957).

Jacobi rotation matrix

A matrix used in the Jacobi transformation method of diagonalizing matrices. The Jacobi rotation matrix contains 1s along the diagonal, except for the two elements in rows and columns and . In addition, all off-diagonal elements are zero except the elements and . The rotation angle for an initial matrix is chosen such thatThen the corresponding Jacobi rotation matrix which annihilates the off-diagonal element is

Indefinite matrix

An complex matrix is called indefinite if nonzero vectors and exist such thatwhere denotes the conjugate transpose.

Reducible matrix

A square matrix is called reducible if the indices 1, 2, ..., can be divided into two disjoint nonempty sets , , ..., and , , ..., (with ) such thatfor , 2, ..., and , 2, ..., .A matrix is reducible if and only if it can be placed into block upper-triangular form by simultaneous row/column permutations. In addition, a matrix is reducible if and only if its associated digraph is not strongly connected.A square matrix that is not reducible is said tobe irreducible.

Redheffer matrix

A Redheffer matrix is a square -matrix with elements equal to 1 if or ( divides ), and 0 otherwise. For , 2, ..., the first few Redheffer matrices areThe Redheffer matrix of order 255 is illustrated above.The determinant of the Redheffer matrix is equal to the Mertens function . For , 2, ..., the first few values are therefore 1, 0, , , , , , , , ... (OEIS A002321).The number of unit eigenvalues of the Redheffer matrix for is equal to(Vaughan 1993, 1996; Trott 2004, p. 57), giving the first few values as 1, 0,1, 1, 2, 3, 4, 4, 5, 6, 7, 8, 9, 10, 11, ... (OEIS A083058).
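A sketch that builds the Redheffer matrix from the definition above and compares its determinant with partial sums of the Moebius function (i.e., the Mertens function); redheffer is a name introduced here for illustration:
redheffer[n_] := Table[Boole[j == 1 || Divisible[j, i]], {i, n}, {j, n}]
Table[Det[redheffer[n]], {n, 9}]             (* {1, 0, -1, -1, -2, -1, -2, -2, -2} *)
Table[Sum[MoebiusMu[k], {k, n}], {n, 9}]     (* the same values of the Mertens function *)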

Hessenberg matrix

A Hessenberg matrix is a matrix of the formHessenberg matrices were first investigated by Karl Hessenberg (1904-1959), a German engineer whose dissertation investigated the computation of eigenvalues and eigenvectors of linear operators.

Roth's removal rule

If the matrices , , , and satisfythenwhere is the identity matrix.

Horn's theorem

Let(1)and(2)Then there exists an Hermitian matrix with eigenvalues and diagonal elements iff(3)for all and with equality for . The theorem is sometimes also known as Schur's theorem.

Matrix trace

The trace of an square matrix is defined to be(1)i.e., the sum of the diagonal elements. The matrix trace is implemented in the Wolfram Language as Tr[list]. In group theory, traces are known as "group characters."For square matrices and , it is true that(2)(3)(4)(Lang 1987, p. 40), where denotes the transpose. The trace is also invariant under a similarity transformation(5)(Lang 1987, p. 64). Since(6)(where Einstein summation is used here to sumover repeated indices), it follows that(7)(8)(9)(10)(11)where is the Kronecker delta.The trace of a product of two square matrices is independent of the order of the multiplication since(12)(13)(14)(15)(16)(again using Einstein summation). Therefore, the trace of the commutator of and is given by(17)The trace of a product of three or more square matrices, on the other hand, is invariant only under cyclic permutations of the order of multiplication of the matrices,..
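The linearity, transpose, similarity-invariance, and cyclic properties quoted above are easy to spot-check numerically; a, b, and p below are arbitrary example matrices:
a = {{1, 2}, {3, 4}}; b = {{0, 5}, {6, 7}}; p = {{2, 1}, {1, 1}};
Tr[a + b] == Tr[a] + Tr[b]          (* True *)
Tr[Transpose[a]] == Tr[a]           (* True *)
Tr[a.b] == Tr[b.a]                  (* True *)
Tr[Inverse[p].a.p] == Tr[a]         (* True *)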

Matrix signature

A real, nondegenerate symmetric matrix , and its corresponding symmetric bilinear form , has signature if there is a nondegenerate matrix such that is a diagonal matrix with 1s and s. In this case, is a diagonal quadratic form.For example,gives a symmetric bilinear form called the Lorentzian inner product, which has signature .

Diagonalizable matrix

An -matrix is said to be diagonalizable if it can be written in the formwhere is a diagonal matrix with the eigenvalues of as its entries and is a nonsingular matrix consisting of the eigenvectors corresponding to the eigenvalues in .The diagonalization theorem states that an matrix is diagonalizable if and only if has linearly independent eigenvectors, i.e., if the matrix rank of the matrix formed by the eigenvectors is . Matrix diagonalization (and most other forms of matrix decomposition) are particularly useful when studying linear transformations, discrete dynamical systems, continuous systems, and so on.All normal matrices are diagonalizable, but not all diagonalizable matrices are normal. The following table gives counts of diagonalizable matrices of various kinds where the elements of may be real or complex.
matrix type       OEIS      counts for n = 1, 2, ...
(-1,0,1)-matrix   A091470   3, 65, 15627, ...
(-1,1)-matrix     A091471   2, 12, 464, 50224, ...

Diagonal

A diagonal of a square matrix which is traversed in the "southeast" direction. "The" diagonal (or "main diagonal," or "principal diagonal," or "leading diagonal") of an square matrix is the diagonal from to .The solidus symbol / used to denote division (e.g., ) is sometimes also known as a diagonal.

Superdiagonal

The superdiagonal of a square matrix is the set of elements directly above the elements comprising the diagonal. For example, in the following matrix, the diagonal elements are denoted and the superdiagonal elements are denoted ,

Subdiagonal

The subdiagonal of a square matrix is the set of elements directly under the elements comprising the diagonal. For example, in the following matrix, the diagonal elements are denoted and the subdiagonals are denoted ,

Sturmian separation theorem

Let be a sequence of symmetric matrices of increasing order with , 2, ..., and , 2, ..., . Let be the th eigenvalue of for , 2, ..., , where the ordering is given byThen it follows that

Characteristic equation

The characteristic equation is the equation which is solved to find a matrix's eigenvalues, also called the characteristic polynomial. For a general matrix , the characteristic equation in variable is defined by(1)where is the identity matrix and is the determinant of the matrix . Writing out explicitly gives(2)so the characteristic equation is given by(3)The solutions of the characteristic equation are called eigenvalues, and are extremely important in the analysis of many problems in mathematics and physics. The polynomial left-hand side of the characteristic equation is known as the characteristic polynomial.
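In the Wolfram Language the characteristic polynomial and its roots can be obtained directly; the matrix a is an arbitrary example:
a = {{1, 2}, {2, 1}};
CharacteristicPolynomial[a, x]                  (* -3 - 2 x + x^2 *)
Solve[CharacteristicPolynomial[a, x] == 0, x]   (* x -> -1, x -> 3 *)
Eigenvalues[a]                                  (* {3, -1} *)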

Skew diagonal

A diagonal of a square matrix which is traversed in the "northeast" direction. "The" skew diagonal (or "secondary diagonal") of an square matrix is the skew diagonal from to .

Segre characteristic

A set of integers that give the orders of the blocks in a Jordan canonical form, with those integers corresponding to submatrices containing the same latent root bracketed together. For example, the Segre characteristic ofis [(21)31(21)] (Frazer et al. 1955, p. 94).

Augmented matrix

An augmented matrix is a matrix obtained by adjoining a row or column vector, or sometimes another matrix with the same vertical dimension.The most common use of an augmented matrix is in the application of Gaussian elimination to solve a matrix equation of the form(1)by forming the column-appended augmented matrix(2)Augmented matrices can also be used to find a matrix inverse of by forming(3)with the identity matrix of appropriate dimension, then using elementary row operations to reduce the result to the form(4)The payoff matrix in gametheory is a case of a more complicated augmented matrix .
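Both uses of augmented matrices described above can be carried out with RowReduce; a and b are arbitrary examples:
a = {{1, 2}, {3, 5}}; b = {7, 11};
RowReduce[Join[a, Transpose[{b}], 2]]      (* {{1, 0, -13}, {0, 1, 10}}: last column solves a.x == b *)
RowReduce[Join[a, IdentityMatrix[2], 2]]   (* {{1, 0, -5, 2}, {0, 1, 3, -1}}: right block is Inverse[a] *)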

Jacobi transformation

A method of matrix diagonalization using Jacobi rotation matrices . It consists of a sequence of orthogonal similarity transformations of the formeach of which eliminates one off-diagonal element. Each application of affects only rows and columns of , and the sequence of such matrices is chosen so as to eliminate the off-diagonal elements.

Woodbury formula

The Woodbury formulais a formula that allows a perturbed matrix to be computed for a change to a given matrix , where is an identity matrix. This formula expands the scope of application of the Sherman-Morrison formula. In particular, in the Woodbury formula, and can be any (dimensionally compatible) matrices rather than simple row or column vectors and for the Sherman-Morrison formula.
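A numerical check of one common statement of the Woodbury identity, (A + UV)^(-1) = A^(-1) - A^(-1) U (I + V A^(-1) U)^(-1) V A^(-1); the matrices a (3×3), u (3×2), and v (2×3) are arbitrary examples chosen so that both sides exist:
a = {{4, 1, 0}, {1, 3, 1}, {0, 1, 5}};
u = {{1, 0}, {0, 2}, {1, 1}};
v = {{1, 1, 0}, {0, 1, 1}};
Inverse[a + u.v] == Inverse[a] - Inverse[a].u.Inverse[IdentityMatrix[2] + v.Inverse[a].u].v.Inverse[a]   (* True *)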

Invertible matrix theorem

The invertible matrix theorem is a theorem in linear algebra which gives a series of equivalent conditions for an square matrix to have an inverse. In particular, is invertible if and only if any (and hence, all) of the following hold: 1. is row-equivalent to the identity matrix . 2. has pivot positions. 3. The equation has only the trivial solution . 4. The columns of form a linearly independent set. 5. The linear transformation is one-to-one. 6. For each column vector , the equation has a unique solution. 7. The columns of span . 8. The linear transformation is a surjection. 9. There is an matrix such that . 10. There is an matrix such that . 11. The transpose matrix is invertible. 12. The columns of form a basis for . 13. The column space of is equal to . 14. The dimension of the column space of is . 15. The rank of is . 16. The null space of is . 17. The dimension of the null space of is 0. 18. fails to be an eigenvalue of . 19. The determinant of is not zero. 20. The orthogonal..

Positive eigenvalued matrix

The numbers of positive definite matrices of given types are summarized in the following table. For example, the three positive eigenvalued (0,1)-matrices areall of which have eigenvalue 1 with degeneracy of two.
matrix type       OEIS      counts
(0,1)-matrix      A003024   1, 3, 25, 543, 29281, ...
(-1,0,1)-matrix   A085506   1, 5, 133, 18905, ...
Weisstein's conjecture proposed that positive eigenvalued -matrices were in one-to-one correspondence with labeled acyclic digraphs on nodes, and this was subsequently proved by McKay et al. (2003, 2004). Counts of both are therefore given by the beautiful recurrence equationwith (Harary and Palmer 1973, p. 19; Robinson 1973, pp. 239-273).

Immanant

For an matrix, let denote any permutation , , ..., of the set of numbers 1, 2, ..., , and let be the character of the symmetric group corresponding to the partition . Then the immanant is defined aswhere the summation is over the permutations of the symmetric group and

Wigner's semicircle law

Let be a real symmetric matrix of large order having random elements that for are independently distributed with equal densities, equal second moments , and th moments bounded by constants independent of , , and . Further, let be the number of eigenvalues of that lie in the interval for real . Then(Wigner 1955, 1958). This law was first observed by Wigner (1955) for certain special classes of random matrices arising in quantum mechanical investigations.The distribution of eigenvalues of a symmetric random matrix with entries chosen from a standard normal distribution is illustrated above for a random matrix.Note that a large real symmetric matrix with random entries taken from a uniform distribution also obeys the semicircle law with the exception that it also possesses exactly one large eigenvalue...

Girko's circular law

Let be (possibly complex) eigenvalues of a set of random real matrices with entries independent and taken from a standard normal distribution. Then as , is uniformly distributed on the unit disk in the complex plane. For small , the distribution shows a concentration along the real line accompanied by a slight paucity above and below (with interesting embedded structure). However, as , the concentration about the line disappears and the distribution becomes truly uniform.

Block matrix

A block matrix is a matrix that is defined using smallermatrices, called blocks. For example,(1)where , , , and are themselves matrices, is a block matrix. In the specific example(2)(3)(4)(5)therefore, it is the matrix(6)Block matrices can be created using ArrayFlatten.When two block matrices have the same shape and their diagonal blocks are square matrices, then they multiply similarly to matrix multiplication. For example,(7)Note that the usual rules of matrix multiplication hold even when the block matrices are not square (assuming that the block sizes correspond). Of course, matrix multiplication is in general not commutative, so in these block matrix multiplications, it is important to keep the correct order of the multiplications.When the blocks are square matrices, the set of invertible block matrices is a group isomorphic to the general linear group , where is the ring of square matrices...

Orthogonal decomposition

The orthogonal decomposition of a vector in is the sum of a vector in a subspace of and a vector in the orthogonal complement to .The orthogonal decomposition theorem states that if is a subspace of , then each vector in can be written uniquely in the formwhere is in and is in . In fact, if is any orthogonal basis of , thenand .Geometrically, is the orthogonal projection of onto the subspace and is a vector orthogonal to

Nullity

The nullity of a linear transformation of vector spaces is the dimension of its null space. The nullity and the map rank add up to the dimension of , a result sometimes known as the rank-nullity theorem.

Null space

If is a linear transformation of , then the null space Null(), also called the kernel , is the set of all vectors such thati.e.,The term "null space" is most commonly written as two separate words (e.g., Golub and Van Loan 1989, pp. 49 and 602; Zwillinger 1995, p. 128), although other authors write it as a single word "nullspace" (e.g., Anton 1994, p. 259; Robbin 1995, pp. 123 and 180).The Wolfram Language command NullSpace[v1, v2, ...] returns a list of vectors forming a vector basis for the nullspace of a set of vectors over the rationals (or more generally, over whatever base field contains the input vectors).

Kernel

For any function (where and are any sets), the kernel (also called the null space) is defined byso the kernel gives the elements from the original set that are mapped to zero by the function. is therefore a subset of The related image of a function is defined by is therefore a subset of .

Hadamard matrix

A Hadamard matrix is a type of square (-1,1)-matrix invented by Sylvester (1867) under the name of anallagmatic pavement, 26 years before Hadamard (1893) considered them. In a Hadamard matrix, placing any two columns or rows side by side gives half the adjacent cells the same sign and half the other sign. When viewed as pavements, cells with 1s are colored black and those with s are colored white. Therefore, the Hadamard matrix must have white squares (s) and black squares (1s).A Hadamard matrix of order is a solution to Hadamard's maximum determinant problem, i.e., has the maximum possible determinant (in absolute value) of any complex matrix with elements (Brenner and Cummings 1972), namely . An equivalent definition of the Hadamard matrices is given by(1)where is the identity matrix.A Hadamard matrix of order corresponds to a Hadamard design (, , ), and a Hadamard matrix gives a graph on vertices known as a Hadamard graphA complete set of Walsh..

Parodi's theorem

The eigenvalues satisfying , where is the characteristic polynomial, lie in the unions of the disks

Ostrowski's theorem

Let be a matrix with positive coefficients and be the positive eigenvalue in the Frobenius theorem, then the eigenvalues satisfy the inequality(1)where(2)(3)and , 2, ..., .

Wielandt's theorem

Let the matrix satisfy the conditions of the Perron-Frobenius theorem and the matrix satisfyfor , 2, ..., . Then any eigenvalue of satisfies the inequality with the equality sign holding only when there exists an matrix (where is the Kronecker delta) and

McCoy's theorem

If two square matrices and are simultaneously upper triangularizable by similarity transforms, then there is an ordering , ..., of the eigenvalues of and , ..., of the eigenvalues of so that, given any polynomial in noncommuting variables, the eigenvalues of are the numbers with , ..., . McCoy's theorem states the converse: If every polynomial exhibits the correct eigenvalues in a consistent ordering, then and are simultaneously triangularizable.

Generalized eigenvector

A generalized eigenvector for an matrix is a vector for whichfor some positive integer . Here, denotes the identity matrix. The smallest such is known as the generalized eigenvector order of the generalized eigenvector. In this case, the value is the generalized eigenvalue to which is associated and the linear span of all generalized eigenvectors associated to some generalized eigenvalue is known as the generalized eigenspace for .As the name suggests, generalized eigenvectors are generalizations of eigenvectors of the usual kind; more precisely, an eigenvector is a generalized eigenvector corresponding to .Generalized eigenvectors are of particular importance for matrices which fail to be diagonalizable. Indeed, for such matrices, at least one eigenvalue has geometric multiplicity larger than its algebraic multiplicity, thereby implying that the collection of linearly independent eigenvectors of is "too small"..

Matrix spectrum

The eigenvalues of a matrix are called its spectrum, and are denoted . If , then the determinant of is given by
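A minimal check of the determinant-as-product-of-eigenvalues statement above, with an arbitrary example matrix:
m = {{2, 1}, {1, 2}};
{Det[m], Times @@ Eigenvalues[m]}   (* {3, 3} *)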

Frobenius theorem

Let be a matrix with positive coefficients so that for all , 2, ..., , then has a positive eigenvalue , and all its eigenvalues lie on the closed disk

Spectral radius

Let be an matrix with complex or real elements with eigenvalues , ..., . Then the spectral radius of isi.e., the largest absolute value (or complexmodulus) of its eigenvalues.The spectral radius of a finite graph is defined as the largest absolute value of its graph spectrum, i.e., the largest absolute value of the graph eigenvalues (eigenvalues of the adjacency matrix).
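A minimal sketch of the matrix spectral radius (spectralRadius is a name introduced here for illustration; the example matrix is arbitrary):
spectralRadius[m_?MatrixQ] := Max[Abs[Eigenvalues[N[m]]]]
spectralRadius[{{0, 1}, {-2, -3}}]   (* 2., since the eigenvalues are -1 and -2 *)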

Spectral norm

The natural norm induced by the L2-norm. Let be the conjugate transpose of the square matrix , so that , then the spectral norm is defined as the square root of the maximum eigenvalue of , i.e.,(1)(2)This matrix norm is implemented as Norm[m,2].

Lyapunov's second theorem

If all the eigenvalues of a real matrix have real parts, then to an arbitrary negative definite quadratic form with there corresponds a positive definite quadratic form such that if one takesthen and satisfy

Hilbert matrix

A matrix with elements(1)for , 2, ..., . Hilbert matrices are implemented in the Wolfram Language by HilbertMatrix[m, n]. The figure above shows a plot of the Hilbert matrix with elements colored according to their values.Hilbert matrices whose entries are specified as machine-precision numbers are difficult to invert using numerical techniques.The determinants for the first few values of for , 2, ... are given by one divided by 1, 12, 2160, 6048000, 266716800000, ... (OEIS A005249). The terms of sequence have the closed form(2)(3)(4)where is the Glaisher-Kinkelin constant and is the Barnes G-function. The numerical values are given in the following table.det()1123456The elements of the matrix inverse of the Hilbert matrix are given analytically by(5)(Choi 1983, Richardson 1999).
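The determinant sequence quoted above can be reproduced with the built-in HilbertMatrix:
Table[1/Det[HilbertMatrix[n]], {n, 5}]   (* {1, 12, 2160, 6048000, 266716800000} *)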

Condition number

The ratio of the largest to smallest singular value in the singular value decomposition of a matrix. The base- logarithm of is an estimate of how many base- digits are lost in solving a linear system with that matrix. In other words, it estimates worst-case loss of precision. A system is said to be singular if the condition number is infinite, and ill-conditioned if it is too large, where "too large" means roughly the precision of matrix entries.An estimate of the -norm condition number of a matrix can be computed in the Wolfram Language prior to Version 11.2 using LinearAlgebra`MatrixConditionNumber[m, p] for , 2, or , where omitting the is equivalent to specifying Infinity. A similar approximation for the condition number can be computed using LUDecomposition[mat][[-1]].
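In current Wolfram Language versions the same quantity can be computed from the singular values directly; conditionNumber is a name introduced here for illustration, and the Hilbert matrix serves as a classically ill-conditioned example:
conditionNumber[m_?MatrixQ] := With[{s = SingularValueList[N[m]]}, Max[s]/Min[s]]
conditionNumber[HilbertMatrix[6]]   (* roughly 1.5*10^7, so about 7 decimal digits are lost *)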

Refined alternating sign matrix conjecture

The numerators and denominators obtained by taking the ratios of adjacent terms in the triangular array of the number of "bordered" alternating sign matrices with a 1 at the top of column are, respectively, the numbers in the (2, 1)- and (1, 2)-Pascal triangles which are different from 1. This conjecture was proven by Zeilberger (1996).

Alternating sign matrix

An alternating sign matrix is a matrix of 0s, 1s, and s in which the entries in each row or column sum to 1 and the nonzero entries in each row and column alternate in sign. The first few for , 2, ... are shown below:(1)(2)(3)(4)Such matrices satisfy the additional property that s in a row or column must have a "outside" it (i.e., all s are "bordered" by s). The numbers of alternating sign matrices for , 2, ... are given by 1, 2, 7, 42, 429, 7436, 218348, ... (OEIS A005130).The conjecture that the number of is explicitly given by the formula(5)now proven to be true, was known as the alternating sign matrix conjecture. can be expressed in closed form as a complicated function of Barnes G-functions, but additional simplification is likely possible.A recurrence relation for is given by(6)where is the gamma function.Let be the number of alternating sign matrices with one in the top row occurring in the th position. Then(7)The result(8)for..

Generalized Vandermonde matrix

A generalized Vandermonde matrix of two sequences and where is an increasing sequence of positive integers and is an increasing sequence of nonnegative integers of the same length is the outer product of and with multiplication operation given by the power function. The generalized Vandermonde matrix can be implemented in the Wolfram Language as Vandermonde[a_List?VectorQ, b_List?VectorQ] := Outer[Power, a, b] /; Equal @@ Length /@ {a, b}A generalized Vandermonde matrix is a minor of a Vandermonde matrix. Alternatively, it has the same form as a Vandermonde matrix , where is an increasing sequence of positive integers, except now is any increasing sequence of nonnegative integers. In the special case of a Vandermonde matrix, .While there is no general formula for the determinant of a generalized Vandermonde matrix, its determinant is always positive. Since any minor of a generalized Vandermonde matrix is also a generalized Vandermonde..

Random matrix

A random matrix is a matrix of given type and size whoseentries consist of random numbers from some specified distribution.Random matrix theory is cited as one of the "modern tools" used in Catherine'sproof of an important result in prime number theory in the 2005 film Proof.For a real matrix with elements having a standard normal distribution, the expected number of real eigenvalues is given by(1)(2)where is a hypergeometric function and is a beta function (Edelman et al. 1994, Edelman and Kostlan 1994). has asymptotic behavior(3)Let be the probability that there are exactly real eigenvalues in the complex spectrum of the matrix. Edelman (1997) showed that(4)which is the smallest probability of all s. The entire probability function of the number of expected real eigenvalues in the spectrum of a Gaussian real random matrix was derived by Kanzieper and Akemann (2005) as(5)where(6)(7)In (6), the summation runs over all partitions..

Gaussian elimination

Gaussian elimination is a method for solving matrixequations of the form(1)To perform Gaussian elimination starting with the system of equations(2)compose the "augmented matrix equation"(3)Here, the column vector in the variables is carried along for labeling the matrix rows. Now, perform elementary row operations to put the augmented matrix into the upper triangular form(4)Solve the equation of the th row for , then substitute back into the equation of the st row to obtain a solution for , etc., according to the formula(5)In the Wolfram Language, RowReduce performs a version of Gaussian elimination, with the equation being solved by GaussianElimination[m_?MatrixQ, v_?VectorQ] := Last /@ RowReduce[Flatten /@ Transpose[{m, v}]]LU decomposition of a matrix is frequently usedas part of a Gaussian elimination process for solving a matrix equation.A matrix that has undergone Gaussian elimination is said to be in echelonform.For..

Unitary matrix

A square matrix is a unitary matrix if(1)where denotes the conjugate transpose and is the matrix inverse. For example,(2)is a unitary matrix.Unitary matrices leave the length of a complex vectorunchanged.For real matrices, unitary is the same as orthogonal. In fact, there are some similarities between orthogonal matrices and unitary matrices. The rows of a unitary matrix are a unitary basis. That is, each row has length one, and their Hermitian inner product is zero. Similarly, the columns are also a unitary basis. In fact, given any unitary basis, the matrix whose rows are that basis is a unitary matrix. It is automatically the case that the columns are another unitary basis.A matrix can be tested to see if it is unitary using the Wolfram Language function: UnitaryQ[m_List?MatrixQ] := (Conjugate @ Transpose @ m . m == IdentityMatrix @ Length @ m)The definition of a unitary matrix guarantees that(3)where is the identity matrix. In particular,..

Permanent

The permanent is an analog of a determinant where all the signs in the expansion by minors are taken as positive. The permanent of a matrix is the coefficient of in(1)(Vardi 1991). Another equation is the Ryser formula(2)where the sum is over all subsets of , and is the number of elements in (Vardi 1991). Muir (1960, p. 19) uses the notation to denote a permanent.The permanent can be implemented in the WolframLanguage as Permanent[m_List] := With[{v = Array[x, Length[m]]}, Coefficient[Times @@ (m.v), Times @@ v] ]The computation of permanents has been studied fairly extensively in algebraic complexity theory. The complexity of the best-known algorithms grows as the exponent of the matrix size (Knuth 1998, p. 499), which would appear to be very surprising, given the permanent's similarity to the tractable determinant. Computation of the permanent is #P-complete (i.e, sharp-P complete; Valiant 1979).If is a unitary matrix, then(3)(Minc..
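Recent Wolfram Language versions also provide a built-in Permanent, which can be compared against the 2×2 case worked by hand; m is an arbitrary example:
m = {{1, 2}, {3, 4}};
Permanent[m]   (* 1*4 + 2*3 = 10 *)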

Orthogonal matrix

A matrix is an orthogonal matrix if(1)where is the transpose of and is the identity matrix. In particular, an orthogonal matrix is always invertible, and(2)In component form,(3)This relation make orthogonal matrices particularly easy to compute with, since the transpose operation is much simpler than computing an inverse.For example,(4)(5)are orthogonal matrices. A matrix can be tested to see if it is orthogonal using the Wolfram Language code: OrthogonalMatrixQ[m_List?MatrixQ] := (Transpose[m].m == IdentityMatrix @ Length @ m)The rows of an orthogonal matrix are an orthonormal basis. That is, each row has length one, and are mutually perpendicular. Similarly, the columns are also an orthonormal basis. In fact, given any orthonormal basis, the matrix whose rows are that basis is an orthogonal matrix. It is automatically the case that the columns are another orthonormal basis.The orthogonal matrices are precisely those matrices..

Normal matrix

A square matrix is a normal matrix ifwhere is the commutator and denotes the conjugate transpose. For example, the matrixis a normal matrix, but is not a Hermitian matrix. A matrix can be tested to see if it is normal using Wolfram Language function: NormalMatrixQ[a_List?MatrixQ] := Module[ {b = Conjugate @ Transpose @ a}, a. b === b. a ]Normal matrices arise, for example, from a normalequation.The normal matrices are the matrices which are unitarily diagonalizable, i.e., is a normal matrix iff there exists a unitary matrix such that is a diagonal matrix. All Hermitian matrices are normal but have real eigenvalues, whereas a general normal matrix has no such restriction on its eigenvalues. All normal matrices are diagonalizable, but not all diagonalizable matrices are normal.The following table gives the number of normal square matrices of given types for orders , 2, ....typeOEIScountsA0555472, 8, 68, 1124, ...A0555482, 12, 80, 2096, ...A0555493,..

Symmetric matrix

A symmetric matrix is a square matrix that satisfies(1)where denotes the transpose, so . This also implies(2)where is the identity matrix. For example,(3)is a symmetric matrix. Hermitian matrices are a useful generalization of symmetric matrices for complex matricesA matrix can be tested to see if it is symmetric using the Wolfram Language code: SymmetricQ[m_List?MatrixQ] := (m === Transpose[m])Written explicitly, the elements of a symmetric matrix have the form(4)The symmetric part of any matrixmay be obtained from(5)A matrix is symmetric if it can be expressed in the form(6)where is an orthogonal matrix and is a diagonal matrix. This is equivalent to the matrix equation(7)which is equivalent to(8)for all , where . Therefore, the diagonal elements of are the eigenvalues of , and the columns of are the corresponding eigenvectors.The numbers of symmetric matrices of order on symbols are , , , , ..., . Therefore, for (0,1)-matrices, the..

Minor

A minor is the reduced determinant of a determinant expansion that is formed by omitting the th row and th column of a matrix . So, for example, the minor of the above matrix is given byThe th minor can be computed in the Wolfram Language using Minor[m_List?MatrixQ, {i_Integer, j_Integer}] := Det[Drop[Transpose[Drop[Transpose[m], {j}]], {i}]]The Wolfram Language's built-in Minors[m] command instead gives the minors of a matrix obtained by deleting the st row and st column of , while Minors[m, k] gives the th minors of . The Minor code above therefore corresponds to th entry of MinorMatrix[m_List?MatrixQ] := Map[Reverse, Minors[m], {0, 1}]i.e., the definition Minors[m, i, j] is equivalent to MinorMatrix[m][[i, j]].

Matrix minimal polynomial

The minimal polynomial of a matrix is the monic polynomial in of smallest degree such that(1)The minimal polynomial divides any polynomial with and, in particular, it divides the characteristic polynomial.If the characteristic polynomial factorsas(2)then its minimal polynomial is given by(3)for some positive integers , where the satisfy .For example, the characteristic polynomial of the zero matrix is , whiles its minimal polynomial is . However, the characteristic polynomial and minimal polynomial of(4)are both .The following Wolfram Language code will find the minimal polynomial for the square matrix in the variable . MatrixMinimalPolynomial[a_List?MatrixQ,x_]:=Module[ { i, n=1, qu={}, mnm={Flatten[IdentityMatrix[Length[a]]]} }, While[Length[qu]==0, AppendTo[mnm,Flatten[MatrixPower[a,n]]]; qu=NullSpace[Transpose[mnm]]; n++ ]; First[qu].Table[x^i,{i,0,n-1}] ]..

Companion matrix

The companion matrix to a monic polynomial(1)is the square matrix(2)with ones on the subdiagonal and the last column given by the coefficients of . Note that in the literature, the companion matrix is sometimes defined with the rows and columns switched, i.e., the transpose of the above matrix.When is the standard basis, a companion matrix satisfies(3)for , as well as(4)including(5)The matrix minimal polynomial of the companion matrix is therefore , which is also its characteristic polynomial.Companion matrices are used to write a matrix in rational canonical form. In fact, any matrix whose matrix minimal polynomial has polynomial degree is similar to the companion matrix for . The rational canonical form is more interesting when the degree of is less than .The following Wolfram Language command gives the companion matrix for a polynomial in the variable . CompanionMatrix[p_, x_] := Module[ {n, w = CoefficientList[p, x]}, w = -w/Last[w];..

Jacobian

Given a set of equations in variables , ..., , written explicitly as(1)or more explicitly as(2)the Jacobian matrix, sometimes simply called "the Jacobian" (Simon and Blume 1994) is defined by(3)The determinant of is the Jacobian determinant (confusingly, often called "the Jacobian" as well) and is denoted(4)The Jacobian matrix and determinant can be computed in the WolframLanguage using JacobianMatrix[f_List?VectorQ, x_List] := Outer[D, f, x] /; Equal @@ (Dimensions /@ {f, x}) JacobianDeterminant[f_List?VectorQ, x_List] := Det[JacobianMatrix[f, x]] /; Equal @@ (Dimensions /@ {f, x})Taking the differential(5)shows that is the determinant of the matrix , and therefore gives the ratios of -dimensional volumes (contents) in and ,(6)It therefore appears, for example, in the changeof variables theorem.The concept of the Jacobian can also be applied to functions in more than variables. For example, considering..

Special unitary matrix

A square matrix is a special unitary matrix if(1)where is the identity matrix and is the conjugate transpose matrix, and the determinant is(2)The first condition means that is a unitary matrix, and the second condition provides a restriction beyond a general unitary matrix, which may have determinant for any real number. For example,(3)is a special unitary matrix. A matrix can be tested to see if it is a special unitary matrix using the Wolfram Language function SpecialUnitaryQ[m_List?MatrixQ] := (Conjugate @ Transpose @ m . m == IdentityMatrix @ Length @ m&& Det[m] == 1)The special unitary matrices are closed under multiplication and the inverse operation, and therefore form a matrix group called the special unitary group .

Block diagonal matrix

A block diagonal matrix, also called a diagonal block matrix, is a square diagonal matrix in which the diagonal elements are square matrices of any size (possibly even ), and the off-diagonal elements are 0. A block diagonal matrix is therefore a block matrix in which the blocks off the diagonal are the zero matrices, and the diagonal matrices are square.Block diagonal matrices can be constructed out of submatrices in the WolframLanguage using the following code snippet: BlockDiagonalMatrix[b : {__?MatrixQ}] := Module[{r, c, n = Length[b], i, j}, {r, c} = Transpose[Dimensions /@ b]; ArrayFlatten[ Table[If[i == j, b[[i]], ConstantArray[0, {r[[i]], c[[j]]}]], {i, n}, {j, n} ] ] ]

Special orthogonal matrix

A square matrix is a special orthogonal matrix if(1)where is the identity matrix, and the determinant satisfies(2)The first condition means that is an orthogonal matrix, and the second restricts the determinant to (while a general orthogonal matrix may have determinant or ). For example,(3)is a special orthogonal matrix since(4)and its determinant is . A matrix can be tested to see if it is a special orthogonal matrix using the Wolfram Language code SpecialOrthogonalQ[m_List?MatrixQ] := (Transpose[m] . m == IdentityMatrix @ Length @ m&& Det[m] == 1)The special orthogonal matrices are closed under multiplication and the inverse operation, and therefore form a matrix group called the special orthogonal group .

Householder matrix

Householder (1953) first considered the matrix that now bears his name in the first couple of pages of his book. A Householder matrix for a real vector can be implemented in the Wolfram Language as: HouseholderMatrix[v_?VectorQ] := IdentityMatrix[Length[v]] - 2 Transpose[{v}] . {v} / (v.v)Trefethen and Bau (1997) gave an incorrect version of the formula for complex . D. Laurie gave a correct version by interpreting reflection along a given direction not as(1)where(2)is the projection onto the hyperplane orthogonal to (since this is in general not a unitary transformation), but as(3)Lehoucq (1996) independently gave an interpretation that still uses the formula , but choosing to be unitary.

Array

An array is a "list of lists" with the length of each level of list the same. The size (sometimes called the "shape") of a -dimensional array is then indicated as . The most common type of array encountered is the two-dimensional rectangular array having columns and rows. If , a square array results. Sometimes, the order of the elements in an array is significant (as in a matrix), whereas at other times, arrays which are equivalent modulo reflections (and rotations, in the case of a square array) are considered identical (as in a magic square or prime array).In the Wolfram Language, an array of depth is represented using nested lists, and can be generated using the command Array[a, i, j, ...]. Similarly, the dimensions of an array can be found using Dimensions[t], and the command ArrayQ[expr] tests if an expression is a full array. Taking for example t=Array[a,{2,2,2,3}]gives the depth-4 list {{{{a[1,1,1,1],a[1,1,1,2],a[1,1,1,3]},..

Antisymmetric matrix

An antisymmetric matrix is a square matrix thatsatisfies the identity(1)where is the matrix transpose. For example,(2)is antisymmetric. Antisymmetric matrices are commonly called "skew symmetric matrices" by mathematicians.A matrix may be tested to see if it is antisymmetric using the Wolfram Language function AntisymmetricQ[m_List?MatrixQ] := (m === -Transpose[m])In component notation, this becomes(3)Letting , the requirement becomes(4)so an antisymmetric matrix must have zeros on its diagonal. The general antisymmetric matrix is of the form(5)Applying to both sides of the antisymmetry condition gives(6)Any square matrix can be expressed as the sum of symmetric and antisymmetric parts. Write(7)But(8)(9)so(10)which is symmetric, and(11)which is antisymmetric.All antisymmetric matrices of odd dimension are singular. This follows from the fact that(12)So, by the properties of determinants,(13)(14)Therefore,..
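The symmetric/antisymmetric decomposition described above is a one-liner to verify; m is an arbitrary example matrix:
m = {{1, 7, 3}, {4, 5, 6}, {2, 8, 9}};
sym = (m + Transpose[m])/2; anti = (m - Transpose[m])/2;
{sym == Transpose[sym], anti == -Transpose[anti], sym + anti == m}   (* {True, True, True} *)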

Hermitian matrix

A square matrix is called Hermitian if it is self-adjoint. Therefore, a Hermitian matrix is defined as one for which(1)where denotes the conjugate transpose. This is equivalent to the condition(2)where denotes the complex conjugate. As a result of this definition, the diagonal elements of a Hermitian matrix are real numbers (since ), while other elements may be complex.Examples of Hermitian matrices include(3)and the Pauli matrices(4)(5)(6)Further examples of Hermitian matrices include(7)An integer or real matrix is Hermitian iff it is symmetric. A matrix can be tested to see if it is Hermitian using the Wolfram Language function HermitianQ[m_List?MatrixQ] := (m === Conjugate@Transpose@m)Hermitian matrices have real eigenvalues whose eigenvectors form a unitary basis. For real matrices, Hermitian is the same as symmetric.Any matrix which is not Hermitian can be expressed as the sum of a Hermitian matrix and an antihermitian matrix using(8)Let..

Antihermitian matrix

A square matrix is antihermitian if it satisfies(1)where is the adjoint. For example, the matrix(2)is an antihermitian matrix. Antihermitian matrices are often called "skew Hermitian matrices" by mathematicians.A matrix can be tested to see if it is antihermitian using the Wolfram Language function AntihermitianQ[m_List?MatrixQ] := (m === -Conjugate[Transpose[m]])The set of antihermitian matrices is a vector space, and the commutator(3)of two antihermitian matrices is antihermitian. Hence, the antihermitian matrices are a Lie algebra, which is related to the Lie group of unitary matrices. In particular, suppose is a path of unitary matrices through , i.e.,(4)for all , where is the adjoint and is the identity matrix. The derivative at of both sides must be equal so(5)That is, the derivative of at the identity must be antihermitian.The matrix exponential map of an antihermitianmatrix is a unitary matrix...

Piecewise linear function

A piecewise linear function is a function composed of some number of linear segments defined over an equal number of intervals, usually of equal size.For example, consider the function over the interval . If is approximated by a piecewise linear function over an increasing number of segments, e.g., 1, 2, 4, and 8, the accuracy of the approximation is seen to improve as the number of segments increases.In the first case, with a single segment, if we compute the Lagrangeinterpolating polynomial, the equation of the linear function results.The trapezoidal rule for numeric integrationis described in a similar manner.Piecewise linear functions are also key to some constructive derivations. The length of a "piece" is given by the(1)summing the length of a number of pieces gives(2)and taking the limit as , the sum becomes(3)which is simply the usual arc length...

Vector space projection

If is a -dimensional subspace of a vector space with inner product , then it is possible to project vectors from to . The most familiar projection is when is the x-axis in the plane. In this case, is the projection. This projection is an orthogonal projection.If the subspace has an orthonormal basis thenis the orthogonal projection onto . Any vector can be written uniquely as , where and is in the orthogonal subspace .A projection is always a linear transformation and can be represented by a projection matrix. In addition, for any projection, there is an inner product for which it is an orthogonal projection.
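For the one-dimensional case, the built-in Projection gives the orthogonal projection onto the span of a single vector; u and v below are arbitrary:
u = {1, 2, 3}; v = {1, 1, 0};
p = Projection[u, v];   (* {3/2, 3/2, 0}, the component of u along v *)
(u - p).v               (* 0, so the remainder lies in the orthogonal complement of v *)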

Invertible linear map

An invertible linear transformation is a map between vector spaces and with an inverse map which is also a linear transformation. When is given by matrix multiplication, i.e., , then is invertible iff is a nonsingular matrix. Note that the dimensions of and must be the same.

Vector space orientation

An ordered vector basis for a finite-dimensional vector space defines an orientation. Another basis gives the same orientation if the matrix has a positive determinant, in which case the basis is called oriented.Any vector space has two possible orientations since the determinant of an nonsingular matrix is either positive or negative. For example, in , is one orientation and is the other orientation. In three dimensions, the cross product uses the right-hand rule by convention, reflecting the use of the canonical orientation as .An orientation can be given by a nonzero element in the top exterior power of , i.e., . For example, gives the canonical orientation on and gives the other orientation.Some special vector space structures imply an orientation. For example, if is a symplectic form on , of dimension , then gives an orientation. Also, if is a complex vector space, then as a real vector space of dimension , the complex structure gives an orientation...

Orthogonal set

A subset of a vector space , with the inner product , is called orthogonal if when . That is, the vectors are mutually perpendicular.Note that there is no restriction on the lengths of the vectors. If the vectors inan orthogonal set all have length one, then they are orthonormal.The notion of orthogonal makes sense for an abstract vector space over any field as long as there is a symmetric quadratic form. The usual orthogonal sets and groups in Euclidean space can be generalized, with applications to special relativity, differential geometry, and abstract algebra.

Hermitian inner product

A Hermitian inner product on a complex vector space is a complex-valued bilinear form on which is antilinear in the second slot, and is positive definite. That is, it satisfies the following properties, where denotes the complex conjugate of . 1. 2. 3. 4. 5. 6. , with equality only if The basic example is the form(1)on , where and . Note that by writing , it is possible to consider , in which case is the Euclidean inner product and is a nondegenerate alternating bilinear form, i.e., a symplectic form. Explicitly, in , the standard Hermitian form is expressed below.(2)A generic Hermitian inner product has its real part symmetric positive definite, and its imaginary part symplectic by properties 5 and 6. A matrix defines an antilinear form, satisfying 1-5, by iff is a Hermitian matrix. It is positive definite (satisfying 6) when is a positive definite matrix. In matrix form,(3)and the canonical Hermitian inner product is when is the identity matrix...

Vector space flag

An ascending chain of subspaces of a vector space. If is an -dimensional vector space, a flag of is a filtration(1)where all inclusions are strict. Hence(2)so that . If equality holds, then for all , and the flag is called complete or full. In this case it is a composition series of .A full flag can be constructed by fixing a basis of , and then taking for all .A flag of any length can be obtained from a full flag by taking out some of the subspaces. Conversely, every flag can be completed to a full flag by inserting suitable subspaces. In general, this can be done in different ways. The following flag of (3)can be completed by switching in any line of the -plane passing through the origin. Two different full flags are, for example,(4)and(5)Schubert varieties are projective varieties definedfrom flags...

Orthogonal complement

The orthogonal complement of a subspace of the vector space is the set of vectors which are orthogonal to all elements of . For example, the orthogonal complement of the space generated by two non proportional vectors , of the real space is the subspace formed by all normal vectors to the plane spanned by and .In general, any subspace of an inner product space has an orthogonal complement andThis property extends to any subspace of a space equipped with a symmetric or differential -form or a Hermitian form which is nonsingular on .
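A sketch computing an orthogonal complement in R^3 as the null space of the matrix whose rows span the subspace; v1 and v2 are arbitrary non-proportional vectors:
v1 = {1, 0, 2}; v2 = {0, 1, 1};
NullSpace[{v1, v2}]   (* {{-2, -1, 1}}, a normal to the plane spanned by v1 and v2 *)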

Haar condition

A set of vectors in Euclidean -space is said to satisfy the Haar condition if every set of vectors is linearly independent (Cheney 1999). Expressed otherwise, each selection of vectors from such a set is a basis for -space. A system of functions satisfying the Haar condition is sometimes termed a Tchebycheff system (Cheney 1999).

Fundamental theorem of linear algebra

Given an matrix , the fundamental theorem of linear algebra is a collection of results relating various properties of the four fundamental matrix subspaces of . In particular: 1. and where here, denotes the range or column space of , denotes its transpose, and denotes its null space. 2. The null space is orthogonal to the row space . 3. There exist orthonormal bases for both the column space and the row space of . 4. With respect to the orthonormal bases of and , is diagonal. The third item on this list stems from Gram-Schmidt Orthonormalization; the fourth item stems from the singular value decomposition of . Also, while different, the first item is reminiscent of the rank-nullity theorem.The above figure summarizes some of the interactions between the four fundamental matrix subspaces for a real matrix including whether the spaces in question are subspaces of or , which subspaces are orthogonal to one another, and how the matrix maps various vectors..

Lorentzian inner product

The standard Lorentzian inner product on is given by(1)i.e., for vectors and ,(2) endowed with the metric tensor induced by the above Lorentzian inner product is known as Minkowski space and is denoted .The Lorentzian inner product on is nothing more than a specific case of the more general Lorentzian inner product on -dimensional Lorentzian space with metric signature : In this more general environment, the inner product of two vectors and has the form(3)The Lorentzian inner product of two such vectors is sometimes denoted to avoid the possible confusion of the angled brackets with the standard Euclidean inner product (Ratcliffe 2006). Analogous presentations can be made if the equivalent metric signature (i.e., for Minkowski space) is used.The four-dimensional Lorentzian inner product is used as a tool in special relativity, namely as a measurement which is independent of reference frame and which replaces the typical Euclidean notion..

Fundamental matrix subspaces

Given a real matrix , there are four associated vector subspaces which are known colloquially as its fundamental subspaces, namely the column spaces and the null spaces of the matrices and its transpose . These four subspaces are important for a number of reasons, one of which is the crucial role they play in the so-called fundamental theorem of linear algebra.The above figure summarizes some of the interactions between the four fundamental matrix subspaces for a real matrix including whether the spaces in question are subspaces of or , which subspaces are orthogonal to one another, and how the matrix maps various vectors relative to the subspace in which lies.In the event that , all four of the fundamental matrix subspaces are lines in . In this case, one can write for some vectors , whereby the directions of the four lines correspond to , , , and . An elementary fact from linear algebra is that these directions are also represented by the eigenvectors..

Linearly independent

Two or more functions, equations, or vectors , , ..., which are not linearly dependent, i.e., cannot be expressed in the formwith , , ... constants which are not all zero are said to be linearly independent.A set of vectors , , ..., is linearly independent iff the matrix rank of the matrix is , in which case is diagonalizable.

Fredholm's theorem

Fredholm's theorem states that, if is an matrix, then the orthogonal complement of the row space of is the null space of , and the orthogonal complement of the column space of is the null space of ,(1)(2)
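The orthogonality of the row space and the null space is easy to confirm numerically; a is an arbitrary rank-one example:
a = {{1, 2, 3}, {2, 4, 6}};
Outer[Dot, a, NullSpace[a], 1]   (* a matrix of zeros: every row of a is orthogonal to every null space vector *)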

Orthonormal basis

A subset of a vector space , with the inner product , is called orthonormal if when . That is, the vectors are mutually perpendicular. Moreover, they are all required to have length one: .An orthonormal set must be linearly independent, and so it is a vector basis for the space it spans. Such a basis is called an orthonormal basis.The simplest example of an orthonormal basis is the standard basis for Euclidean space . The vector is the vector with all 0s except for a 1 in the th coordinate. For example, . A rotation (or flip) through the origin will send an orthonormal set to another orthonormal set. In fact, given any orthonormal basis, there is a rotation, or rotation combined with a flip, which will send the orthonormal basis to the standard basis. These are precisely the transformations which preserve the inner product, and are called orthogonal transformations.Usually when one needs a basis to do calculations, it is convenient to use an orthonormal..

Linear combination

A sum of the elements from some set with constant coefficients placed in front of each. For example, a linear combination of the vectors , , and is given bywhere , , and are constants.

Alternating multilinear form

An alternating multilinear form on a real vector space is a multilinear form(1)such that(2)for any index . For example,(3)is an alternating form on .An alternating multilinear form is defined on a module in a similar way, by replacing with the ring.

Jacobi's theorem

Let be an -rowed minor of the th order determinant associated with an matrix in which the rows , , ..., are represented with columns , , ..., . Define the complementary minor to as the -rowed minor obtained from by deleting all the rows and columns associated with and the signed complementary minor to to be(1)Let the matrix of cofactors be given by(2)with and the corresponding -rowed minors of and , then it is true that(3)

Determinant theorem

Given a square matrix , the following are equivalent: 1. . 2. The columns of are linearly independent. 3. The rows of are linearly independent. 4. Range() = . 5. Null() = . 6. has a matrix inverse.

Jacobi's determinant identity

Let(1)(2)where and are matrices. Then(3)The proof follows from equating determinants on the two sides of the block matrices(4)where is the identity matrix and is the zero matrix.

Determinant identities

A useful determinant identity allows the following determinant to be expressed using vector operations,(1)Additional interesting determinant identities include(2)(Muir 1960, p. 39),(3)(Muir 1960, p. 41),(4)(Muir 1960, p. 42),(5)(Muir 1960, p. 47),(6)(Muir 1960, p. 42),(7)(Muir 1960, p. 44), and the Cayley-Mengerdeterminant(8)(Muir 1960, p. 46), which is closely related to Heron'sformula.
