
### Infinitesimal matrix change

Let $A$ be a square matrix and $\epsilon$ a small quantity, and define

$$B \equiv I + \epsilon A,$$

where $I$ is the identity matrix. Then the inverse of $B$ is approximately

$$B^{-1} \approx I - \epsilon A.$$

This can be seen by multiplying

$$(I + \epsilon A)(I - \epsilon A) = I - \epsilon^2 A^2 \approx I,$$

which holds to first order in $\epsilon$. Note that an inverse of the form $\epsilon C$ cannot exist, since the product $B(\epsilon C) = \epsilon C + \epsilon^2 AC$ contains no zeroth-order term to match the identity, so there can be no inverse of this form. The exact inverse of $B$ can be found from the Neumann series

$$B^{-1} = (I + \epsilon A)^{-1} = I - \epsilon A + \epsilon^2 A^2 - \epsilon^3 A^3 + \cdots,$$

which converges whenever the spectral radius of $\epsilon A$ is less than 1.
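As a numerical sanity check of the first-order inverse, the sketch below builds $B = I + \epsilon A$ for an illustrative $2 \times 2$ matrix, inverts it exactly with the adjugate formula, and confirms the discrepancy from $I - \epsilon A$ is of order $\epsilon^2$ (values are hypothetical, chosen only for demonstration):

```python
# Check (I + eps*A)^(-1) ≈ I - eps*A for a small illustrative 2x2 example.
eps = 1e-4
A = [[1.0, 2.0], [3.0, 4.0]]
I = [[1.0, 0.0], [0.0, 1.0]]

# B = I + eps*A
B = [[I[i][j] + eps * A[i][j] for j in range(2)] for i in range(2)]

# Exact 2x2 inverse via the adjugate formula
det = B[0][0] * B[1][1] - B[0][1] * B[1][0]
B_inv = [[ B[1][1] / det, -B[0][1] / det],
         [-B[1][0] / det,  B[0][0] / det]]

# First-order approximation I - eps*A
approx = [[I[i][j] - eps * A[i][j] for j in range(2)] for i in range(2)]

# The discrepancy is O(eps^2): roughly eps^2 times the entries of A^2
err = max(abs(B_inv[i][j] - approx[i][j]) for i in range(2) for j in range(2))
print(err < 1e-6)  # True
```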

### Matrix power

The $n$th power of a matrix $A$ for $n$ a nonnegative integer is defined as the matrix product of $n$ copies of $A$,

$$A^n = \underbrace{A \, A \cdots A}_{n \text{ times}}.$$

A matrix to the zeroth power is defined to be the identity matrix of the same dimensions, $A^0 = I$. The matrix inverse is commonly denoted $A^{-1}$, which should not be interpreted to mean $1/A$.
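The definition above can be sketched directly as repeated multiplication, with the empty product giving the identity (a minimal pure-Python illustration; helper names are ours, not from the original):

```python
# Matrix power by repeated multiplication, with A^0 = I by convention.
def matmul(X, Y):
    n, m, p = len(X), len(Y), len(Y[0])
    return [[sum(X[i][k] * Y[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

def matrix_power(A, n):
    size = len(A)
    # Start from the identity matrix, so n = 0 returns I
    result = [[int(i == j) for j in range(size)] for i in range(size)]
    for _ in range(n):
        result = matmul(result, A)
    return result

A = [[1, 1], [0, 1]]
print(matrix_power(A, 0))  # [[1, 0], [0, 1]]
print(matrix_power(A, 3))  # [[1, 3], [0, 1]]
```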

### Strassen formulas

The usual number of scalar operations (i.e., the total number of additions and multiplications) required to perform $n \times n$ matrix multiplication is

$$N(n) = 2n^3 - n^2$$

(i.e., $n^3$ multiplications and $n^3 - n^2$ additions). However, Strassen (1969) discovered how to multiply two matrices in

$$S(n) = 7 \cdot 7^{\lg n} - 6 \cdot 4^{\lg n}$$

scalar operations, where $\lg$ is the logarithm to base 2, which is less than $N(n)$ for sufficiently large $n$. Two $2 \times 2$ matrices can therefore be multiplied,

$$C = AB,$$

with only

$$S(2) = 7 \cdot 7 - 6 \cdot 4 = 25$$

scalar operations (as it turns out, seven of them are multiplications and 18 are additions). Define the seven products (involving a total of 10 additions) as

$$\begin{aligned}
Q_1 &= (a_{11} + a_{22})(b_{11} + b_{22}) \\
Q_2 &= (a_{21} + a_{22})\, b_{11} \\
Q_3 &= a_{11}(b_{12} - b_{22}) \\
Q_4 &= a_{22}(b_{21} - b_{11}) \\
Q_5 &= (a_{11} + a_{12})\, b_{22} \\
Q_6 &= (a_{21} - a_{11})(b_{11} + b_{12}) \\
Q_7 &= (a_{12} - a_{22})(b_{21} + b_{22}).
\end{aligned}$$

Then the matrix product is given using the remaining eight additions as

$$\begin{aligned}
c_{11} &= Q_1 + Q_4 - Q_5 + Q_7 \\
c_{12} &= Q_3 + Q_5 \\
c_{21} &= Q_2 + Q_4 \\
c_{22} &= Q_1 - Q_2 + Q_3 + Q_6
\end{aligned}$$

(Strassen 1969, Press et al. 1989). Matrix inversion of a $2 \times 2$ block matrix to yield the inverse can also be done in fewer operations than expected using analogous formulas (Strassen 1969). ...
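The seven Strassen products and eight recombination additions can be sketched directly for a $2 \times 2$ multiply and compared against the direct product (illustrative values):

```python
# Strassen's 2x2 multiply: 7 multiplications instead of the usual 8.
def strassen_2x2(A, B):
    (a11, a12), (a21, a22) = A
    (b11, b12), (b21, b22) = B
    q1 = (a11 + a22) * (b11 + b22)
    q2 = (a21 + a22) * b11
    q3 = a11 * (b12 - b22)
    q4 = a22 * (b21 - b11)
    q5 = (a11 + a12) * b22
    q6 = (a21 - a11) * (b11 + b12)
    q7 = (a12 - a22) * (b21 + b22)
    # Eight additions recombine the seven products into C = A B
    return [[q1 + q4 - q5 + q7, q3 + q5],
            [q2 + q4, q1 - q2 + q3 + q6]]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
direct = [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
          for i in range(2)]
print(strassen_2x2(A, B) == direct)  # True
```

Applied recursively to the blocks of larger matrices, this recurrence is what yields the $O(n^{\lg 7})$ operation count.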

### Matrix multiplication

The product $C$ of two matrices $A$ and $B$ is defined as

$$c_{ik} = a_{ij} b_{jk},$$

where $j$ is summed over for all possible values of $i$ and $k$ and the notation above uses the Einstein summation convention. The implied summation over repeated indices without the presence of an explicit sum sign is called Einstein summation, and is commonly used in both matrix and tensor analysis. Therefore, in order for matrix multiplication to be defined, the dimensions of the matrices must satisfy

$$(n \times m)(m \times p) = (n \times p),$$

where $(a \times b)$ denotes a matrix with $a$ rows and $b$ columns. Matrix multiplication is associative, as can be seen by taking

$$[A(BC)]_{il} = a_{ij}(BC)_{jl} = a_{ij} b_{jk} c_{kl},$$

where Einstein summation is again used. Now, since $a_{ij}$, $b_{jk}$, and $c_{kl}$ are scalars, use the associativity of scalar multiplication to write

$$a_{ij} b_{jk} c_{kl} = (a_{ij} b_{jk}) c_{kl} = (AB)_{ik} c_{kl} = [(AB)C]_{il}.$$

Since this is true for all $i$ and $l$, it must be true that

$$A(BC) = (AB)C.$$

That is, matrix multiplication is associative. The product can therefore be written $ABC$ without ambiguity. Due to associativity, ...
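The index formula $c_{ik} = \sum_j a_{ij} b_{jk}$ and the associativity argument can be checked directly in code (dimensions and entries are illustrative):

```python
# Matrix product c_ik = sum_j a_ij b_jk as an explicit loop, plus a check
# that (AB)C = A(BC).
def matmul(A, B):
    assert len(A[0]) == len(B)   # inner dimensions must agree
    return [[sum(A[i][j] * B[j][k] for j in range(len(B)))
             for k in range(len(B[0]))] for i in range(len(A))]

A = [[1, 2, 3], [4, 5, 6]]       # 2x3
B = [[1, 0], [0, 1], [1, 1]]     # 3x2
C = [[2, 1], [1, 2]]             # 2x2
print(matmul(matmul(A, B), C) == matmul(A, matmul(B, C)))  # True
```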

### Hermitian part

Every complex matrix $A$ can be broken into a Hermitian part

$$A_H = \tfrac{1}{2}(A + A^\dagger)$$

(i.e., $A_H$ is a Hermitian matrix) and an antihermitian part

$$A_{AH} = \tfrac{1}{2}(A - A^\dagger)$$

(i.e., $A_{AH}$ is an antihermitian matrix). Here, $A^\dagger$ denotes the adjoint.

### Square root method

The square root method is an algorithm which solves the matrix equation

$$A \mathbf{x} = \mathbf{b}$$

for $\mathbf{x}$, with $A$ a symmetric matrix and $\mathbf{b}$ a given vector. Convert $A$ to a triangular matrix $C$ such that

$$A = C^{\mathrm{T}} C,$$

where $C^{\mathrm{T}}$ is the transpose. Writing out this factorization entrywise and solving row by row gives the elements $c_{ij}$ of the upper triangular matrix $C$ in terms of the $a_{ij}$ (this is the Cholesky decomposition). Next solve the lower triangular system

$$C^{\mathrm{T}} \mathbf{y} = \mathbf{b}$$

for $\mathbf{y}$ by forward substitution, and finally solve the upper triangular system

$$C \mathbf{x} = \mathbf{y}$$

for $\mathbf{x}$ by back substitution, giving the desired solution $\mathbf{x}$.
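The three stages — factor, forward-substitute, back-substitute — can be sketched as follows for a small symmetric positive definite system (a minimal illustration; function names are ours):

```python
# Square root (Cholesky) method for A x = b with A symmetric positive
# definite: factor A = C^T C with C upper triangular, then solve
# C^T y = b (forward substitution) and C x = y (back substitution).
import math

def cholesky_upper(A):
    n = len(A)
    C = [[0.0] * n for _ in range(n)]
    for i in range(n):
        s = A[i][i] - sum(C[k][i] ** 2 for k in range(i))
        C[i][i] = math.sqrt(s)
        for j in range(i + 1, n):
            C[i][j] = (A[i][j] - sum(C[k][i] * C[k][j] for k in range(i))) / C[i][i]
    return C

def solve_sqrt_method(A, b):
    n = len(A)
    C = cholesky_upper(A)
    y = [0.0] * n
    for i in range(n):                       # forward: C^T y = b
        y[i] = (b[i] - sum(C[k][i] * y[k] for k in range(i))) / C[i][i]
    x = [0.0] * n
    for i in reversed(range(n)):             # backward: C x = y
        x[i] = (y[i] - sum(C[i][k] * x[k] for k in range(i + 1, n))) / C[i][i]
    return x

A = [[4.0, 2.0], [2.0, 3.0]]   # symmetric positive definite
b = [8.0, 7.0]
x = solve_sqrt_method(A, b)
print([round(v, 10) for v in x])  # [1.25, 1.5]
```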

### Matrix inverse

The inverse of a square matrix $A$, sometimes called a reciprocal matrix, is a matrix $A^{-1}$ such that

$$A A^{-1} = I,$$

where $I$ is the identity matrix. Courant and Hilbert (1989, p. 10) use a different notation to denote the inverse matrix. A square matrix $A$ has an inverse iff the determinant $|A| \neq 0$ (Lipschutz 1991, p. 45). The so-called invertible matrix theorem is a major result in linear algebra which associates the existence of a matrix inverse with a number of other equivalent properties. A matrix possessing an inverse is called nonsingular, or invertible. The matrix inverse of a square matrix m may be taken in the Wolfram Language using the function Inverse[m]. For a $2 \times 2$ matrix

$$A = \begin{pmatrix} a & b \\ c & d \end{pmatrix},$$

the matrix inverse is

$$A^{-1} = \frac{1}{|A|} \begin{pmatrix} d & -b \\ -c & a \end{pmatrix} = \frac{1}{ad - bc} \begin{pmatrix} d & -b \\ -c & a \end{pmatrix}.$$

For a $3 \times 3$ matrix, the matrix inverse can be written as the transposed matrix of cofactors divided by the determinant. A general $n \times n$ matrix can be inverted using methods such as Gauss-Jordan elimination, Gaussian elimination, or LU decomposition. The inverse of a product $AB$ of matrices $A$ and $B$ can be expressed in terms of $A^{-1}$ and $B^{-1}$. Let

$$C = AB.$$

Then $B = A^{-1}C$ and $A = CB^{-1}$. Therefore,

$$C = AB = (CB^{-1})(A^{-1}C) = C B^{-1} A^{-1} C,$$

so

$$I = B^{-1} A^{-1} C,$$

giving

$$(AB)^{-1} = C^{-1} = B^{-1} A^{-1},$$

where ...
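The $2 \times 2$ adjugate formula and the product rule $(AB)^{-1} = B^{-1}A^{-1}$ can be verified exactly with rational arithmetic (entries are illustrative):

```python
# 2x2 inverse via the adjugate formula, and a check of (AB)^(-1) = B^(-1) A^(-1).
from fractions import Fraction

def inv2(M):
    (a, b), (c, d) = M
    det = a * d - b * c            # must be nonzero for the inverse to exist
    return [[ Fraction(d, det), Fraction(-b, det)],
            [Fraction(-c, det), Fraction( a, det)]]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1, 2], [3, 5]]
B = [[2, 1], [1, 1]]
AB = matmul(A, B)
print(inv2(AB) == matmul(inv2(B), inv2(A)))  # True
```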

### Matrix exponential

The power series that defines the exponential map also defines a map between matrices. In particular,

$$e^{A} = \sum_{k=0}^{\infty} \frac{A^k}{k!} = I + A + \frac{A^2}{2!} + \frac{A^3}{3!} + \cdots$$

converges for any square matrix $A$, where $I$ is the identity matrix. The matrix exponential is implemented in the Wolfram Language as MatrixExp[m]. The Kronecker sum satisfies the nice property

$$e^{A \oplus B} = e^{A} \otimes e^{B}$$

(Horn and Johnson 1994, p. 208). Matrix exponentials are important in the solution of systems of ordinary differential equations (e.g., Bellman 1970). In some cases, it is a simple matter to express the matrix exponential. For example, when $A$ is a diagonal matrix, exponentiation can be performed simply by exponentiating each of the diagonal elements. For example, given a diagonal matrix

$$A = \operatorname{diag}(a_1, a_2, \ldots, a_n),$$

the matrix exponential is given by

$$e^{A} = \operatorname{diag}(e^{a_1}, e^{a_2}, \ldots, e^{a_n}).$$

Since most matrices are diagonalizable, it is easiest to diagonalize the matrix before exponentiating it. When $A$ is a nilpotent matrix, the exponential is given by a matrix polynomial because some power of $A$ vanishes. ...

### Matrix equality

Two matrices $A$ and $B$ are said to be equal iff

$$a_{ij} = b_{ij}$$

for all $i, j$. Therefore, for example,

$$\begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix} = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix},$$

while

$$\begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix} \neq \begin{pmatrix} 1 & 2 \\ 3 & 5 \end{pmatrix}.$$

### Matrix direct sum

The matrix direct sum of $n$ matrices constructs a block diagonal matrix from a set of square matrices, i.e.,

$$\bigoplus_{i=1}^{n} A_i = \operatorname{diag}(A_1, A_2, \ldots, A_n) = \begin{pmatrix} A_1 & & \\ & \ddots & \\ & & A_n \end{pmatrix},$$

with zero blocks off the diagonal.
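Building the block diagonal matrix can be sketched by copying each square block along the diagonal (a minimal illustration with hypothetical input blocks):

```python
# Matrix direct sum: square blocks placed along the diagonal, zeros elsewhere.
def direct_sum(*blocks):
    n = sum(len(b) for b in blocks)
    out = [[0] * n for _ in range(n)]
    offset = 0
    for b in blocks:
        for i, row in enumerate(b):
            for j, v in enumerate(row):
                out[offset + i][offset + j] = v
        offset += len(b)
    return out

A = [[1, 2], [3, 4]]
B = [[5]]
print(direct_sum(A, B))  # [[1, 2, 0], [3, 4, 0], [0, 0, 5]]
```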

### Matrix addition

Denote the sum of two matrices $A$ and $B$ (of the same dimensions) by $C = A + B$. The sum is defined by adding entries with the same indices,

$$c_{ij} = a_{ij} + b_{ij},$$

over all $i$ and $j$. For example,

$$\begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix} + \begin{pmatrix} 5 & 6 \\ 7 & 8 \end{pmatrix} = \begin{pmatrix} 6 & 8 \\ 10 & 12 \end{pmatrix}.$$

Matrix addition is therefore both commutative and associative.

### Kronecker sum

The Kronecker sum is the matrix sum defined by

$$A \oplus B = A \otimes I_n + I_m \otimes B,$$

where $A$ and $B$ are square matrices of order $m$ and $n$, respectively, $I_k$ is the identity matrix of order $k$, and $\otimes$ denotes the Kronecker product. For example, the Kronecker sum of two $2 \times 2$ matrices $A$ and $B$ is given by

$$A \oplus B = \begin{pmatrix} a_{11} + b_{11} & b_{12} & a_{12} & 0 \\ b_{21} & a_{11} + b_{22} & 0 & a_{12} \\ a_{21} & 0 & a_{22} + b_{11} & b_{12} \\ 0 & a_{21} & b_{21} & a_{22} + b_{22} \end{pmatrix}.$$

The Kronecker sum satisfies the nice property

$$e^{A \oplus B} = e^{A} \otimes e^{B},$$

where $e^{X}$ denotes a matrix exponential.
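The defining formula $A \oplus B = A \otimes I_n + I_m \otimes B$ can be sketched with a hand-rolled Kronecker product (small illustrative integer matrices):

```python
# Kronecker sum A ⊕ B = A ⊗ I_n + I_m ⊗ B.
def kron(A, B):
    # Kronecker product: rows indexed by (i, k), columns by (j, l)
    return [[A[i][j] * B[k][l] for j in range(len(A[0])) for l in range(len(B[0]))]
            for i in range(len(A)) for k in range(len(B))]

def eye(n):
    return [[int(i == j) for j in range(n)] for i in range(n)]

def kronecker_sum(A, B):
    m, n = len(A), len(B)
    left, right = kron(A, eye(n)), kron(eye(m), B)
    return [[left[i][j] + right[i][j] for j in range(m * n)] for i in range(m * n)]

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 0]]
print(kronecker_sum(A, B))
# [[1, 1, 2, 0], [1, 1, 0, 2], [3, 0, 4, 1], [0, 3, 1, 4]]
```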

### Antihermitian part

Every complex matrix $A$ can be broken into a Hermitian part

$$A_H = \tfrac{1}{2}(A + A^{\mathrm{H}})$$

(i.e., $A_H$ is a Hermitian matrix) and an antihermitian part

$$A_{AH} = \tfrac{1}{2}(A - A^{\mathrm{H}})$$

(i.e., $A_{AH}$ is an antihermitian matrix). Here, $A^{\mathrm{H}}$ denotes the conjugate transpose.

### Natural norm

Let $\|\mathbf{x}\|$ be a vector norm of a vector $\mathbf{x}$ such that

$$\|A\| = \max_{\|\mathbf{x}\| = 1} \|A\mathbf{x}\|.$$

Then $\|A\|$ is a matrix norm which is said to be the natural norm induced (or subordinate) to the vector norm $\|\mathbf{x}\|$. For any natural norm,

$$\|I\| = 1,$$

where $I$ is the identity matrix. The natural matrix norms induced by the $L^1$-norm, $L^2$-norm, and $L^\infty$-norm are called the maximum absolute column sum norm, spectral norm, and maximum absolute row sum norm, respectively.

### Maximum absolute row sum norm

The natural norm induced by the $L^\infty$-norm is called the maximum absolute row sum norm and is defined by

$$\|A\|_\infty = \max_i \sum_{j=1}^{n} |a_{ij}|$$

for an $n \times n$ matrix $A$. This matrix norm is implemented as Norm[m, Infinity].

### Symmetric part

Any square matrix $A$ can be written as a sum

$$A = A_S + A_A,$$

where

$$A_S = \tfrac{1}{2}(A + A^{\mathrm{T}})$$

is a symmetric matrix known as the symmetric part of $A$ and

$$A_A = \tfrac{1}{2}(A - A^{\mathrm{T}})$$

is an antisymmetric matrix known as the antisymmetric part of $A$. Here, $A^{\mathrm{T}}$ is the transpose. The symmetric part of a tensor is denoted using parentheses as

$$T_{(a,b)} = \tfrac{1}{2}(T_{ab} + T_{ba}),$$

$$T_{(a_1, a_2, \ldots, a_n)} = \frac{1}{n!} \sum_{\text{permutations}} T_{a_1 a_2 \cdots a_n}.$$

Symbols for the symmetric and antisymmetric parts of tensors can be combined, for example by symmetrizing over one set of indices while antisymmetrizing over another (Wald 1984, p. 26).
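The decomposition $A = \tfrac{1}{2}(A + A^{\mathrm{T}}) + \tfrac{1}{2}(A - A^{\mathrm{T}})$ can be sketched directly (entries are illustrative):

```python
# Split A into symmetric part S and antisymmetric part K, with A = S + K.
def transpose(M):
    return [list(col) for col in zip(*M)]

def sym_antisym(A):
    n = len(A)
    T = transpose(A)
    S = [[(A[i][j] + T[i][j]) / 2 for j in range(n)] for i in range(n)]
    K = [[(A[i][j] - T[i][j]) / 2 for j in range(n)] for i in range(n)]
    return S, K

A = [[1, 4], [2, 3]]
S, K = sym_antisym(A)
print(S)  # [[1.0, 3.0], [3.0, 3.0]]   (S equals its transpose)
print(K)  # [[0.0, 1.0], [-1.0, 0.0]]  (K equals minus its transpose)
```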

### Antisymmetric part

Any square matrix $A$ can be written as a sum

$$A = A_S + A_A,$$

where

$$A_S = \tfrac{1}{2}(A + A^{\mathrm{T}})$$

is a symmetric matrix known as the symmetric part of $A$ and

$$A_A = \tfrac{1}{2}(A - A^{\mathrm{T}})$$

is an antisymmetric matrix known as the antisymmetric part of $A$. Here, $A^{\mathrm{T}}$ is the transpose. Any rank-2 tensor can be written as a sum of symmetric and antisymmetric parts as

$$T_{mn} = \tfrac{1}{2}(T_{mn} + T_{nm}) + \tfrac{1}{2}(T_{mn} - T_{nm}).$$

The antisymmetric part of a tensor is sometimes denoted using the special notation

$$T_{[ab]} = \tfrac{1}{2}(T_{ab} - T_{ba}).$$

For a general rank-$n$ tensor,

$$T_{[a_1 \cdots a_n]} = \frac{1}{n!} \sum_{\pi} \epsilon_\pi \, T_{a_{\pi(1)} \cdots a_{\pi(n)}},$$

where $\epsilon_\pi$ is the permutation symbol. Symbols for the symmetric and antisymmetric parts of tensors can be combined, for example by symmetrizing over one set of indices while antisymmetrizing over another (Wald 1984, p. 26).

### Stability matrix

Given a system of two ordinary differential equations

$$\dot{x} = f(x, y)$$
$$\dot{y} = g(x, y),$$

let $x_0$ and $y_0$ denote fixed points with $\dot{x} = \dot{y} = 0$, so

$$f(x_0, y_0) = 0$$
$$g(x_0, y_0) = 0.$$

Then expand about $(x_0, y_0)$ so

$$\delta\dot{x} = f_x(x_0, y_0)\,\delta x + f_y(x_0, y_0)\,\delta y + \cdots$$
$$\delta\dot{y} = g_x(x_0, y_0)\,\delta x + g_y(x_0, y_0)\,\delta y + \cdots.$$

To first order, this gives

$$\frac{d}{dt}\begin{pmatrix} \delta x \\ \delta y \end{pmatrix} = \begin{pmatrix} f_x & f_y \\ g_x & g_y \end{pmatrix}_{(x_0, y_0)} \begin{pmatrix} \delta x \\ \delta y \end{pmatrix},$$

where the $2 \times 2$ matrix, or its generalization to higher dimension, is called the stability matrix. Analysis of the eigenvalues (and eigenvectors) of the stability matrix characterizes the type of fixed point.

### Kronecker product

Given an $m \times n$ matrix $A$ and a $p \times q$ matrix $B$, their Kronecker product $C = A \otimes B$, also called their matrix direct product, is an $(mp) \times (nq)$ matrix with elements defined by

$$c_{\alpha\beta} = a_{ij} b_{kl},$$

where

$$\alpha = p(i - 1) + k, \qquad \beta = q(j - 1) + l.$$

For example, the matrix direct product of the $2 \times 2$ matrix $A$ and the $2 \times 2$ matrix $B$ is given by the following $4 \times 4$ matrix,

$$A \otimes B = \begin{pmatrix} a_{11} B & a_{12} B \\ a_{21} B & a_{22} B \end{pmatrix} = \begin{pmatrix} a_{11} b_{11} & a_{11} b_{12} & a_{12} b_{11} & a_{12} b_{12} \\ a_{11} b_{21} & a_{11} b_{22} & a_{12} b_{21} & a_{12} b_{22} \\ a_{21} b_{11} & a_{21} b_{12} & a_{22} b_{11} & a_{22} b_{12} \\ a_{21} b_{21} & a_{21} b_{22} & a_{22} b_{21} & a_{22} b_{22} \end{pmatrix}.$$

The matrix direct product is implemented in the Wolfram Language as KroneckerProduct[a, b]. The matrix direct product gives the matrix of the linear transformation induced by the vector space tensor product of the original vector spaces. More precisely, suppose that

$$S : V_1 \to W_1 \quad\text{and}\quad T : V_2 \to W_2$$

are given by $S(\mathbf{x}) = A\mathbf{x}$ and $T(\mathbf{y}) = B\mathbf{y}$. Then

$$S \otimes T : V_1 \otimes V_2 \to W_1 \otimes W_2$$

is determined by

$$(S \otimes T)(\mathbf{x} \otimes \mathbf{y}) = (A\mathbf{x}) \otimes (B\mathbf{y}) = (A \otimes B)(\mathbf{x} \otimes \mathbf{y}).$$
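The index bookkeeping above reduces to iterating row pairs $(i, k)$ and column pairs $(j, l)$, which a nested comprehension captures directly (small illustrative matrices):

```python
# Kronecker (direct) product of an m×n and a p×q matrix, giving mp×nq.
def kron(A, B):
    return [[A[i][j] * B[k][l]
             for j in range(len(A[0])) for l in range(len(B[0]))]
            for i in range(len(A)) for k in range(len(B))]

A = [[1, 2]]          # 1x2
B = [[3], [4]]        # 2x1
print(kron(A, B))     # [[3, 6], [4, 8]]  (a 2x2 result)
```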

### Vandermonde matrix

A Vandermonde matrix is a type of matrix that arises in polynomial least squares fitting, Lagrange interpolating polynomials (Hoffman and Kunze p. 114), and the reconstruction of a statistical distribution from the distribution's moments (von Mises 1964; Press et al. 1992, p. 83). A Vandermonde matrix of order $n$ is of the form

$$V = \begin{pmatrix} 1 & x_1 & x_1^2 & \cdots & x_1^{n-1} \\ 1 & x_2 & x_2^2 & \cdots & x_2^{n-1} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 1 & x_n & x_n^2 & \cdots & x_n^{n-1} \end{pmatrix}$$

(Press et al. 1992; Meyer 2000, p. 185). A Vandermonde matrix is sometimes also called an alternant matrix (Marcus and Minc 1992, p. 15). Note that some authors define the transpose of this matrix as the Vandermonde matrix (Marcus and Minc 1992, p. 15; Golub and Van Loan 1996; Aldrovandi 2001, p. 193). The solution of an $n \times n$ Vandermonde matrix equation requires $O(n^2)$ operations. The determinants of Vandermonde matrices have a particularly simple form. ...
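That "particularly simple form" is the classical product formula $\det V = \prod_{i<j}(x_j - x_i)$, which can be checked against a direct cofactor determinant (the sample points are illustrative):

```python
# Verify det V = prod_{i<j} (x_j - x_i) for a small Vandermonde matrix.
def det(M):
    # Cofactor expansion along the first row (fine for small matrices)
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

xs = [1, 2, 4, 7]
V = [[x ** k for k in range(len(xs))] for x in xs]   # rows (1, x, x^2, ...)

prod = 1
for i in range(len(xs)):
    for j in range(i + 1, len(xs)):
        prod *= xs[j] - xs[i]

print(det(V) == prod)  # True
```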

### Hamiltonian matrix

A complex $2n \times 2n$ matrix $A$ is said to be Hamiltonian if

$$J A = (J A)^{\mathrm{H}},$$

where $J$ is the $2n \times 2n$ matrix of the form

$$J = \begin{pmatrix} 0 & I_n \\ -I_n & 0 \end{pmatrix},$$

$I_n$ is the $n \times n$ identity matrix, and $B^{\mathrm{H}}$ denotes the conjugate transpose of a matrix $B$. An analogous definition holds in the case of real matrices by requiring that $JA$ be symmetric, i.e., by replacing the conjugate transpose by the transpose in the definition. Note that this criterion specifies precisely how a Hamiltonian matrix must look. Indeed, every Hamiltonian matrix (here assumed to have complex entries) must have the form

$$A = \begin{pmatrix} E & F \\ G & -E^{\mathrm{H}} \end{pmatrix},$$

where $F$ and $G$ satisfy $F^{\mathrm{H}} = F$ and $G^{\mathrm{H}} = G$. This characterization holds for $A$ having strictly real entries as well by replacing all instances of the conjugate transpose operator by the transpose operator instead.

### Upper triangular matrix

A triangular matrix $U$ of the form

$$u_{ij} = \begin{cases} a_{ij} & i \le j \\ 0 & i > j. \end{cases}$$

Written explicitly,

$$U = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ 0 & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & a_{nn} \end{pmatrix}.$$

An upper triangular matrix with elements f[i,j] above the diagonal could be formed in versions of the Wolfram Language prior to 6 using UpperDiagonalMatrix[f, n], which could be run after first loading LinearAlgebra`MatrixManipulation`. A strictly upper triangular matrix is an upper triangular matrix having 0s along the diagonal as well, i.e., $a_{ij} = 0$ for $i \ge j$.

### Positive matrix

A positive matrix is a real or integer matrix $(a_{ij})$ for which each matrix element is a positive number, i.e., $a_{ij} > 0$ for all $i$, $j$. Positive matrices are therefore a subset of nonnegative matrices. Note that a positive matrix is not the same as a positive definite matrix.

### Gram matrix

Given a set $V$ of $m$ vectors (points in $\mathbb{R}^n$), the Gram matrix $G$ is the matrix of all possible inner products of $V$, i.e.,

$$g_{ij} = \mathbf{v}_i^{\mathrm{T}} \mathbf{v}_j,$$

where $\mathbf{v}^{\mathrm{T}}$ denotes the transpose. The Gram matrix determines the vectors $\mathbf{v}_i$ up to isometry.

### Unimodular matrix

A unimodular matrix is a real square matrix $A$ with determinant $\det(A) = 1$ (Born and Wolf 1980, p. 55; Goldstein 1980, p. 149). More generally, a matrix $A$ with elements in the polynomial domain $F[x]$ of a field $F$ is called unimodular if it has an inverse whose elements are also in $F[x]$. A matrix is therefore unimodular iff its determinant is a unit of $F[x]$ (MacDuffee 1943, p. 137). The matrix inverse of a unimodular real matrix is another unimodular matrix. There are an infinite number of unimodular matrices not containing any 0s or $\pm 1$s; one parametric family, along with specific examples of unimodular matrices having small positive integer entries, is given by Guy (1989, 1994). The $n$th power of a $2 \times 2$ unimodular matrix

$$A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}$$

is given by

$$A^n = \begin{pmatrix} a\,U_{n-1}(\chi) - U_{n-2}(\chi) & b\,U_{n-1}(\chi) \\ c\,U_{n-1}(\chi) & d\,U_{n-1}(\chi) - U_{n-2}(\chi) \end{pmatrix},$$

where

$$\chi = \tfrac{1}{2}(a + d)$$

and the $U_n(\chi)$ are Chebyshev polynomials of the second kind (Born and Wolf 1980, p. 67). ...

### Tridiagonal matrix

A square matrix with nonzero elements only on the diagonal and slots horizontally or vertically adjacent the diagonal (i.e., along the subdiagonal and superdiagonal),

$$A = \begin{pmatrix} a_1 & b_1 & & \\ c_1 & a_2 & b_2 & \\ & c_2 & \ddots & \ddots \\ & & \ddots & a_n \end{pmatrix}.$$

Computing the determinant of such a matrix requires only $O(n)$ (as opposed to $O(n^3)$) arithmetic operations (Acton 1990, p. 332). Efficient solution of the matrix equation $A\mathbf{x} = \mathbf{b}$ for $\mathbf{x}$, where $A$ is a tridiagonal matrix, can be performed in the Wolfram Language using LinearSolve on $A$, represented as a SparseArray.
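The $O(n)$ determinant comes from the standard three-term recurrence $D_k = a_k D_{k-1} - b_{k-1} c_{k-1} D_{k-2}$ on leading principal minors, sketched here (the example matrix is illustrative):

```python
# O(n) determinant of a tridiagonal matrix via the three-term recurrence
# D_k = a_k * D_{k-1} - b_{k-1} * c_{k-1} * D_{k-2}.
def tridiag_det(a, b, c):
    # a: diagonal (length n); b: superdiagonal, c: subdiagonal (length n-1)
    d_prev2, d_prev1 = 1, a[0]
    for k in range(1, len(a)):
        d_prev2, d_prev1 = d_prev1, a[k] * d_prev1 - b[k - 1] * c[k - 1] * d_prev2
    return d_prev1

# Illustrative 3x3 matrix [[2,1,0],[1,2,1],[0,1,2]], whose determinant is 4
print(tridiag_det([2, 2, 2], [1, 1], [1, 1]))  # 4
```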

### Positive definite matrix

An $n \times n$ complex matrix $A$ is called positive definite if

$$\Re[\mathbf{x}^{\mathrm{H}} A \mathbf{x}] > 0$$

for all nonzero complex vectors $\mathbf{x}$, where $\mathbf{x}^{\mathrm{H}}$ denotes the conjugate transpose of the vector $\mathbf{x}$. In the case of a real matrix $A$, this condition reduces to

$$\mathbf{x}^{\mathrm{T}} A \mathbf{x} > 0,$$

where $\mathbf{x}^{\mathrm{T}}$ denotes the transpose. Positive definite matrices are of both theoretical and computational importance in a wide variety of applications. They are used, for example, in optimization algorithms and in the construction of various linear regression models (Johnson 1970). A linear system of equations with a positive definite matrix can be efficiently solved using the so-called Cholesky decomposition. A positive definite matrix has at least one matrix square root. Furthermore, exactly one of its matrix square roots is itself positive definite. A necessary and sufficient condition for a complex matrix $A$ to be positive definite is that the Hermitian part

$$A_H = \tfrac{1}{2}(A + A^{\mathrm{H}}),$$

where $A^{\mathrm{H}}$ denotes the conjugate transpose, be positive definite. This means that a real matrix ...
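Since the Cholesky decomposition exists exactly when a real symmetric matrix is positive definite, attempting the factorization gives a practical definiteness test (a minimal sketch for real symmetric input; examples are illustrative):

```python
# Test positive definiteness of a real symmetric matrix by attempting a
# Cholesky factorization A = L L^T; it succeeds iff A is positive definite.
import math

def is_positive_definite(A):
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = A[i][j] - sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                if s <= 0:
                    return False      # nonpositive pivot: not positive definite
                L[i][i] = math.sqrt(s)
            else:
                L[i][j] = s / L[j][j]
    return True

print(is_positive_definite([[2.0, 1.0], [1.0, 2.0]]))  # True
print(is_positive_definite([[1.0, 2.0], [2.0, 1.0]]))  # False
```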

### Triangular matrix

An upper triangular matrix $U$ is defined by

$$u_{ij} = \begin{cases} a_{ij} & i \le j \\ 0 & i > j. \end{cases}$$

Written explicitly,

$$U = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ 0 & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & a_{nn} \end{pmatrix}.$$

A lower triangular matrix $L$ is defined by

$$l_{ij} = \begin{cases} a_{ij} & i \ge j \\ 0 & i < j. \end{cases}$$

Written explicitly,

$$L = \begin{pmatrix} a_{11} & 0 & \cdots & 0 \\ a_{21} & a_{22} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{pmatrix}.$$

### Permutation matrix

A permutation matrix is a matrix obtained by permuting the rows of an $n \times n$ identity matrix according to some permutation of the numbers 1 to $n$. Every row and column therefore contains precisely a single 1 with 0s everywhere else, and every permutation corresponds to a unique permutation matrix. There are therefore $n!$ permutation matrices of size $n$, where $n!$ is a factorial. The permutation matrices of order two are given by

$$\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}, \qquad \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix},$$

and those of order three are the $3! = 6$ matrices obtained by permuting the rows of the $3 \times 3$ identity matrix. A permutation matrix is nonsingular, and the determinant is always $\pm 1$. In addition, a permutation matrix $P$ satisfies

$$P P^{\mathrm{T}} = I,$$

where $P^{\mathrm{T}}$ is a transpose and $I$ is the identity matrix. Applied to a matrix $A$, $PA$ gives $A$ with rows interchanged according to the permutation vector, and $AP$ gives $A$ with the columns interchanged according to the given permutation vector. Interpreting the 1s in an $n \times n$ permutation matrix as rooks gives an allowable configuration of nonattacking rooks on an $n \times n$ chessboard. However, the permutation matrices provide ...

### Toeplitz matrix

Given $2n - 1$ numbers $a_k$, where $k = -n+1$, ..., $-1$, 0, 1, ..., $n-1$, a Toeplitz matrix is a matrix which has constant values along negative-sloping diagonals, i.e., a matrix of the form

$$\begin{pmatrix} a_0 & a_{-1} & \cdots & a_{-n+1} \\ a_1 & a_0 & \cdots & a_{-n+2} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n-1} & a_{n-2} & \cdots & a_0 \end{pmatrix}.$$

Matrix equations of the form

$$\sum_{j=1}^{n} a_{i-j}\, x_j = y_i$$

can be solved with $O(n^2)$ operations. Typical problems modelled by Toeplitz matrices include the numerical solution of certain differential and integral equations (regularization of inverse problems), the computation of splines, time series analysis, signal and image processing, Markov chains, and queuing theory (Bini 1995).

### Periodic matrix

A square matrix $A$ such that the matrix power $A^{k+1} = A$ for $k$ a positive integer is called a periodic matrix. If $k$ is the least such integer, then the matrix is said to have period $k$. If $k = 1$, then $A^2 = A$ and $A$ is called idempotent.

### Equivalent matrix

Two matrices $A$ and $B$ are equal to each other, written $A = B$, if they have the same dimensions and the same elements $a_{ij} = b_{ij}$ for $i = 1$, ..., $n$ and $j = 1$, ..., $m$. Gradshteyn and Ryzhik (2000) call an $m \times n$ matrix $A$ "equivalent" to another $m \times n$ matrix $B$ iff

$$B = P A Q$$

for $P$ and $Q$ any suitable nonsingular $m \times m$ and $n \times n$ matrices, respectively.

### Pauli matrices

The Pauli matrices, also called the Pauli spin matrices, are complex matrices that arise in Pauli's treatment of spin in quantum mechanics. They are defined by

$$\sigma_1 = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$$
$$\sigma_2 = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}$$
$$\sigma_3 = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}$$

(Condon and Morse 1929, p. 213; Gasiorowicz 1974, p. 232; Goldstein 1980, p. 156; Liboff 1980, p. 453; Arfken 1985, p. 211; Griffiths 1987, p. 115; Landau and Lifschitz 1991, p. 204; Landau 1996, p. 224). The Pauli matrices are implemented in the Wolfram Language as PauliMatrix[n], where $n = 1$, 2, or 3. The Pauli spin matrices satisfy the identities

$$\sigma_i^2 = I$$
$$\sigma_i \sigma_j = -\sigma_j \sigma_i = i\,\epsilon_{ijk}\,\sigma_k \quad (i \neq j)$$
$$\sigma_i \sigma_j = \delta_{ij} I + i\,\epsilon_{ijk}\,\sigma_k,$$

where $I$ is the $2 \times 2$ identity matrix, $\delta_{ij}$ is the Kronecker delta, $\epsilon_{ijk}$ is the permutation symbol, the leading $i$ is the imaginary unit (not the index $i$), and Einstein summation is used in the last identity to sum over the index $k$ (Arfken 1985, p. 211; Griffiths 1987, p. 139; Landau and Lifschitz 1991, pp. 204-205). The Pauli matrices plus the identity matrix form a complete set, so any $2 \times 2$ matrix ...

### Elementary matrix

An $n \times n$ matrix is an elementary matrix if it differs from the $n \times n$ identity matrix by a single elementary row or column operation.

### Submatrix

A $p \times q$ submatrix of an $m \times n$ matrix (with $p \le m$, $q \le n$) is a $p \times q$ matrix formed by taking a block of the entries of this size from the original matrix.

### Strictly upper triangular matrix

A strictly upper triangular matrix is an upper triangular matrix having 0s along the diagonal as well as the lower portion, i.e., a matrix $A = (a_{ij})$ such that $a_{ij} = 0$ for $i \ge j$. Written explicitly,

$$\begin{pmatrix} 0 & a_{12} & \cdots & a_{1n} \\ 0 & 0 & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 0 \end{pmatrix}.$$

### Doubly stochastic matrix

A doubly stochastic matrix is a matrix $A = (a_{ij})$ such that

$$\sum_i a_{ij} = \sum_j a_{ij} = 1$$

for all $i$ and $j$. In other words, both the matrix itself and its transpose are stochastic. The following counts give the number of distinct doubly stochastic $n \times n$ matrices (and distinct nonsingular doubly stochastic matrices) over $\mathbb{Z}_m$ for small $n$.

Doubly stochastic matrices over $\mathbb{Z}_m$ for $n = 1$, 2, ...:

- $m = 2$: 1, 2, 16, 512, ...
- $m = 3$: 1, 3, 81, ...
- $m = 4$: 1, 4, 256, ...

Doubly stochastic nonsingular matrices over $\mathbb{Z}_m$ for $n = 1$, 2, ...:

- $m = 2$: 1, 2, 6, 192, ...
- $m = 3$: 1, 2, 54, ...
- $m = 4$: 1, 4, 192, ...

Horn (1954) proved that if $\mathbf{z} = A\mathbf{w}$, where $\mathbf{z}$ and $\mathbf{w}$ are complex $n$-vectors, $A$ is doubly stochastic, and $w_1$, $w_2$, ..., $w_n$ are any complex numbers, then $\mathbf{z}$ lies in the convex hull of all the points obtained by permuting the coordinates of $\mathbf{w}$. Sherman (1955) also proved the converse. Birkhoff (1946) proved that any doubly stochastic matrix is in the convex hull of the permutation matrices. There are several proofs and extensions of this result (Dulmage and Halperin 1955, Mendelsohn and Dulmage 1958, Mirsky 1958, Marcus ...

### Strictly lower triangular matrix

A lower triangular matrix having 0s along the diagonal as well as the upper portion, i.e., a matrix $A = (a_{ij})$ such that $a_{ij} = 0$ for $i \le j$. Written explicitly,

$$\begin{pmatrix} 0 & 0 & \cdots & 0 \\ a_{21} & 0 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & 0 \end{pmatrix}.$$

### Doubly nonnegative matrix

A doubly nonnegative matrix is a real positive semidefinite square matrix with nonnegative entries. Any doubly nonnegative matrix $A$ of order $n$ can be expressed as a Gram matrix of vectors $\mathbf{v}_1, \ldots, \mathbf{v}_n$ (where $k$ is the rank of $A$), with each pair of vectors possessing a nonnegative inner product, i.e., $\mathbf{v}_i \cdot \mathbf{v}_j \ge 0$. Every completely positive matrix is doubly nonnegative.

### Nonsingular matrix

A square matrix that is not singular, i.e., one that has a matrix inverse. Nonsingular matrices are sometimes also called regular matrices. A square matrix is nonsingular iff its determinant is nonzero (Lipschutz 1991, p. 45). For example, there are 6 nonsingular $2 \times 2$ (0,1)-matrices:

$$\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \begin{pmatrix} 0 & 1 \\ 1 & 1 \end{pmatrix}, \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}, \begin{pmatrix} 1 & 0 \\ 1 & 1 \end{pmatrix}, \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}, \begin{pmatrix} 1 & 1 \\ 1 & 0 \end{pmatrix}.$$

The following counts give the numbers of nonsingular $n \times n$ matrices for certain matrix classes, for $n = 1$, 2, ...:

- $(-1,0,1)$-matrices (OEIS A056989): 2, 48, 11808, ...
- $(-1,1)$-matrices (OEIS A056990): 2, 8, 192, 22272, ...
- $(0,1)$-matrices (OEIS A055165): 1, 6, 174, 22560, ...

### Dirac matrices

The Dirac matrices are a class of $4 \times 4$ matrices which arise in quantum electrodynamics. There are a variety of different symbols used, and Dirac matrices are also known as gamma matrices or Dirac gamma matrices. The Dirac matrices may be implemented in a future version of the Wolfram Language as DiracGammaMatrix[n], where $n = 1$, 2, 3, 4, or 5. The Dirac matrices are defined as $4 \times 4$ matrices built as Kronecker products of the $2 \times 2$ Pauli matrices $\sigma_i$ ($i = 1$, 2, 3) and the $2 \times 2$ identity matrix $I$, where $\otimes$ is the Kronecker product. These matrices satisfy anticommutation identities involving the Kronecker delta $\delta_{ij}$, a commutation identity, and are cyclic under permutations of indices. A total of 16 Dirac matrices can be defined, indexed by $\mu, \nu = 0$, 1, 2, 3 (Arfken 1985, p. 212). These matrices satisfy a number of properties, e.g., each squares to the identity and each equals its conjugate transpose, making them Hermitian, ...

### Nonpositive matrix

A nonpositive matrix is a real or integer matrix $(a_{ij})$ for which each matrix element is a nonpositive number, i.e., $a_{ij} \le 0$ for all $i$, $j$. Nonpositive matrices are therefore a superset of negative matrices.

### Square matrix

A matrix for which horizontal and vertical dimensions are the same (i.e., an $n \times n$ matrix). In versions of the Wolfram Language prior to 6, a matrix could be tested to see if it was square using SquareMatrixQ[m] after loading the package LinearAlgebra`MatrixManipulation`. Consider the numbers of $n \times n$ matrices on $n^2$ distinct symbols. The number of distinct matrices modulo rotations and reflections for $n = 1$, 2, ... are given by 1, 3, 45360, ... (OEIS A086829). Consider an $n \times n$ matrix consisting of the integers 1 to $n^2$ arranged in any order. Then the maximal determinants possible for $n = 1$, 2, ... are 1, 10, 412, 40800, 6839492, ... (OEIS A085000). Consider an $n \times n$ matrix with single copies of the digits 1, 2, ..., $k$ and the rest of the elements zero. Then the triangle of counts of matrices with $k = 0$, 1, ..., $n^2$ digits that are rotationally and reflectively distinct is 1, 1; 1, 1, 2, 3, 3; 1, 3, 12, 66, 378, 1890, 7560, 22680, 45360, 45360; ... (OEIS A087074). ...

### Nonnegative matrix

A nonnegative matrix is a real or integer matrix $(a_{ij})$ for which each matrix element is a nonnegative number, i.e., $a_{ij} \ge 0$ for all $i$, $j$. Nonnegative matrices are therefore a superset of positive matrices. Nonnegative matrices are important in a variety of applications and have a number of attractive mathematical properties. Together with positive semidefinite matrices, they therefore serve as a natural generalization of nonnegative real numbers (Johnson 1981). The most fundamental properties of nonnegative matrices require fairly advanced mathematics and were established by Perron (1907) and Frobenius (1912).

### Diagonally dominant matrix

A square matrix $A$ is called diagonally dominant if

$$|a_{ii}| \ge \sum_{j \ne i} |a_{ij}|$$

for all $i$. $A$ is called strictly diagonally dominant if

$$|a_{ii}| > \sum_{j \ne i} |a_{ij}|$$

for all $i$. A strictly diagonally dominant matrix is nonsingular. A symmetric diagonally dominant real matrix with nonnegative diagonal entries is positive semidefinite. If a matrix is strictly diagonally dominant and all its diagonal elements are positive, then the real parts of its eigenvalues are positive; if all its diagonal elements are negative, then the real parts of its eigenvalues are negative. These results follow from the Gershgorin circle theorem.

### Nilpotent matrix

There are two equivalent definitions for a nilpotent matrix. 1. A square matrix whose eigenvalues are all 0. 2. A square matrix $A$ such that $A^n$ is the zero matrix for some positive integer matrix power $n$, known as the index (Ayres 1962, p. 11).

### Diagonal matrix

A diagonal matrix is a square matrix $A$ of the form

$$a_{ij} = c_i\, \delta_{ij},$$

where $\delta_{ij}$ is the Kronecker delta, the $c_i$ are constants, and $i, j = 1$, 2, ..., $n$, with no implied summation over indices. The general $n \times n$ diagonal matrix is therefore of the form

$$\begin{pmatrix} c_1 & 0 & \cdots & 0 \\ 0 & c_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & c_n \end{pmatrix},$$

often denoted $\operatorname{diag}(c_1, c_2, \ldots, c_n)$. The diagonal matrix with elements $l$ can be computed in the Wolfram Language using DiagonalMatrix[l]. The determinant of a diagonal matrix given by $\operatorname{diag}(c_1, \ldots, c_n)$ is $\prod_i c_i$. This means that $\det(\operatorname{diag}(1, 2, \ldots, n)) = n!$, so for $n = 1$, 2, ... the first few values are 1, 2, 6, 24, 120, 720, 5040, 40320, ... (OEIS A000142). Given a matrix equation of the form

$$\operatorname{diag}(\lambda_1, \ldots, \lambda_n)\, A = A\, \operatorname{diag}(\lambda_1, \ldots, \lambda_n),$$

multiply through to obtain

$$\lambda_i\, a_{ij} = a_{ij}\, \lambda_j.$$

Since in general $\lambda_i \neq \lambda_j$ for $i \neq j$, this can be true only if off-diagonal components vanish. Therefore, $A$ must be diagonal. Given a diagonal matrix $D = \operatorname{diag}(c_1, \ldots, c_n)$, the matrix power can be computed simply by taking each element to the power in question,

$$D^p = \operatorname{diag}(c_1^p, \ldots, c_n^p).$$

Similarly, a matrix exponential can be performed simply by exponentiating each of the diagonal elements,

$$e^{D} = \operatorname{diag}(e^{c_1}, \ldots, e^{c_n}). \ldots$$

### Negative matrix

A negative matrix is a real or integer matrix $(a_{ij})$ for which each matrix element is a negative number, i.e., $a_{ij} < 0$ for all $i$, $j$. Negative matrices are therefore a subset of nonpositive matrices.

### Coxeter matrix

An $n \times n$ square matrix $M$ with

$$m_{ii} = 1$$
$$m_{ij} = m_{ji} \ge 2 \quad (i \neq j)$$

for all $i, j = 1$, ..., $n$.

### Covariance matrix

Given $n$ sets of variates denoted $\{x_1\}$, ..., $\{x_n\}$, the first-order covariance matrix is defined by

$$V_{ij} = \operatorname{cov}(x_i, x_j) = \langle (x_i - \mu_i)(x_j - \mu_j) \rangle,$$

where $\mu_i$ is the mean. Higher order matrices are given by

$$V_{ij}^{mn} = \langle (x_i - \mu_i)^m (x_j - \mu_j)^n \rangle.$$

An individual matrix element $V_{ij} = \operatorname{cov}(x_i, x_j)$ is called the covariance of $x_i$ and $x_j$.

### Monotonic matrix

A monotonic matrix of order $n$ is an $n \times n$ matrix in which every element is either 0 or contains a number from the set $\{1, \ldots, n\}$ subject to the conditions: 1. The filled-in elements in each row are strictly increasing, 2. The filled-in elements in each column are strictly decreasing, and 3. (Positive slope condition) for two filled-in cells with the same element, the one further right is in an earlier row. The numbers of distinct monotonic matrices of orders $n = 1$, 2, ... are 2, 19, 712, ... (OEIS A086976); for example, there are 19 monotonic matrices of order 2. The maximum numbers of cells occupied in an $n \times n$ matrix for $n = 1$, 2, ... are given by 1, 2, 5, 8, 11, 14, 19, ... (OEIS A070214).

### Copositive matrix

A copositive matrix is a real $n \times n$ square matrix $A = (a_{ij})$ that makes the corresponding quadratic form

$$\mathbf{x}^{\mathrm{T}} A \mathbf{x}$$

nonnegative for all nonnegative $n$-vectors $\mathbf{x}$. Copositive matrices have many applications including in the field of control theory. The cone consisting of all copositive matrices of order $n$ is just the dual cone of all completely positive matrices of order $n$.

### Minimal matrix

A matrix with 0 determinant whose determinant becomes nonzero when any element on or below the diagonal is changed from 0 to 1. An example isThere are minimal special matrices of size .

### Conjugate matrix

A conjugate matrix is a matrix $\bar{A}$ obtained from a given matrix $A$ by taking the complex conjugate of each element of $A$ (Courant and Hilbert 1989, p. 9), i.e.,

$$(\bar{A})_{ij} = \overline{a_{ij}}.$$

The notation $A^*$ is sometimes also used, which can lead to confusion since this symbol is also used to denote the conjugate transpose. Using a matrix $X$ in a similarity transformation $X^{-1} A X$ of a given matrix $A$ is also known as conjugating $A$ by $X$. In this case, $X^{-1} A X$ and $A$ are known as similar matrices.

### Singular matrix

A square matrix that does not have a matrix inverse. A matrix is singular iff its determinant is 0. For example, there are 10 singular $2 \times 2$ (0,1)-matrices:

$$\begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix}, \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}, \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}, \begin{pmatrix} 0 & 0 \\ 1 & 1 \end{pmatrix}, \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix},$$
$$\begin{pmatrix} 0 & 1 \\ 0 & 1 \end{pmatrix}, \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}, \begin{pmatrix} 1 & 0 \\ 1 & 0 \end{pmatrix}, \begin{pmatrix} 1 & 1 \\ 0 & 0 \end{pmatrix}, \begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix}.$$

The following counts give the numbers of singular $n \times n$ matrices for certain matrix classes, for $n = 1$, 2, ...:

- $(-1,0,1)$-matrices (OEIS A057981): 1, 33, 7875, 15099201, ...
- $(-1,1)$-matrices (OEIS A057982): 0, 8, 320, 43264, ...
- $(0,1)$-matrices (OEIS A046747): 1, 10, 338, 42976, ...

### Matrix equation

Nonhomogeneous matrix equations of the form

$$A \mathbf{x} = \mathbf{b}$$

can be solved by taking the matrix inverse to obtain

$$\mathbf{x} = A^{-1} \mathbf{b}.$$

This equation will have a unique solution iff the determinant $\det(A) \neq 0$. In general, more numerically stable techniques of solving the equation include Gaussian elimination, LU decomposition, or the square root method. For a homogeneous $n \times n$ matrix equation

$$A \mathbf{x} = \mathbf{0}$$

to be solved for the $x_i$s, consider the determinant $\det(A)$. Now multiply by $x_1$, which is equivalent to multiplying the first column (or any column) by $x_1$. The value of the determinant is unchanged if multiples of columns are added to other columns. So add $x_2$ times column 2, ..., and $x_n$ times column $n$ to the first column. But from the original equations, each of the entries in the resulting first column is zero, since

$$\sum_{j} a_{ij} x_j = 0,$$

so

$$x_1 \det(A) = 0.$$

Therefore, if there is an $x_1 \neq 0$ which is a solution, the determinant is zero. This is also true for $x_2$, ..., $x_n$, so the original homogeneous system has a nontrivial solution for all $x_i$s only if the determinant ...

### Congruent matrices

Two square matrices $A$ and $B$ are called congruent if there exists a nonsingular matrix $P$ such that

$$B = P^{\mathrm{T}} A P,$$

where $P^{\mathrm{T}}$ is the transpose.

### Similar matrices

Two square matrices $A$ and $B$ that are related by

$$B = X^{-1} A X,$$

where $X$ is a square nonsingular matrix, are said to be similar. A transformation of the form $X^{-1} A X$ is called a similarity transformation, or conjugation by $X$. Similar matrices represent the same linear transformation after a change of basis (for the domain and range simultaneously). Recall that a matrix corresponds to a linear transformation, and a linear transformation corresponds to a matrix after choosing a basis. Changing the basis changes the coefficients of the matrix: if $A$ is expressed in terms of the standard basis vectors, then $X^{-1} A X$ is the matrix of the same transformation expressed in terms of the basis given by the columns of $X$.

### Complex matrix

A matrix whose elements may contain complex numbers. The matrix product of two complex matrices is given by applying the ordinary matrix product formula entrywise with complex multiplication. Hadamard (1893) proved that the determinant of any complex $n \times n$ matrix $A$ with entries in the closed unit disk $|a_{ij}| \le 1$ satisfies

$$|\det A| \le n^{n/2}$$

(Hadamard's maximum determinant problem), with equality attained by the Vandermonde matrix of the $n$ roots of unity (Faddeev and Sominskii 1965, p. 331; Brenner 1972). The first few values for $n = 1$, 2, ... are 1, 2, $3^{3/2}$, 16, $5^{5/2}$, 216, .... Studying the maximum possible eigenvalue norms for random complex matrices is computationally intractable. Although average properties of the distribution can be determined, finding the maximum value corresponds to determining if the set of matrices contains a singular matrix, which has been proven to be an NP-complete problem (Poljak and Rohn 1993, Kaltofen 2000). ...

### Lower triangular matrix

A triangular matrix $L$ of the form

$$l_{ij} = \begin{cases} a_{ij} & i \ge j \\ 0 & i < j. \end{cases}$$

Written explicitly,

$$L = \begin{pmatrix} a_{11} & 0 & \cdots & 0 \\ a_{21} & a_{22} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{pmatrix}.$$

A lower triangular matrix with elements f[i,j] below the diagonal could be formed in versions of the Wolfram Language prior to 6 using LowerDiagonalMatrix[f, n], which could be run after first loading LinearAlgebra`MatrixManipulation`. A strictly lower triangular matrix is a lower triangular matrix having 0s along the diagonal as well, i.e., $a_{ij} = 0$ for $i \le j$.

### Least common multiple matrix

Let $S = \{x_1, \ldots, x_n\}$ be a set of distinct positive integers. Then the $n \times n$ matrix having the least common multiple $\operatorname{lcm}(x_i, x_j)$ of $x_i$ and $x_j$ as its $(i, j)$th entry is called the least common multiple matrix on $S$.

### Kac matrix

The $(n+1) \times (n+1)$ tridiagonal matrix (also called the Clement matrix) defined by

$$S_n = \begin{pmatrix} 0 & 1 & & & \\ n & 0 & 2 & & \\ & n-1 & 0 & \ddots & \\ & & \ddots & \ddots & n \\ & & & 1 & 0 \end{pmatrix},$$

with zero main diagonal, superdiagonal $1, 2, \ldots, n$, and subdiagonal $n, n-1, \ldots, 1$. The eigenvalues are $2k - n$ for $k = 0$, 1, ..., $n$.
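The eigenvalue claim can be checked exactly for a small order by confirming that $\det(S_n - \lambda I) = 0$ at each $\lambda = 2k - n$ (here $n = 3$; the determinant helper is a plain cofactor expansion):

```python
# Verify that the Kac (Clement) matrix of order n+1 has eigenvalues 2k - n.
def det(M):
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def kac(n):
    K = [[0] * (n + 1) for _ in range(n + 1)]
    for i in range(n):
        K[i][i + 1] = i + 1        # superdiagonal 1, 2, ..., n
        K[i + 1][i] = n - i        # subdiagonal n, n-1, ..., 1
    return K

n = 3
K = kac(n)
for k in range(n + 1):
    lam = 2 * k - n
    shifted = [[K[i][j] - lam * (i == j) for j in range(n + 1)]
               for i in range(n + 1)]
    assert det(shifted) == 0       # lam is an eigenvalue
print([2 * k - n for k in range(n + 1)])  # [-3, -1, 1, 3]
```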

### Commuting matrices

Two matrices $A$ and $B$ which satisfy

$$AB = BA$$

under matrix multiplication are said to be commuting. In general, matrix multiplication is not commutative. Furthermore, a matrix inverse need not exist even when $A \neq 0$. Finally, $AB$ can be zero even without $A = 0$ or $B = 0$. And when $AB = 0$, we may still have $BA \neq 0$, a simple example of which is provided by

$$A = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}, \qquad B = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix},$$

for which

$$AB = \begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix}$$

but

$$BA = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}$$

(Taussky 1957).

### Jacobi rotation matrix

A matrix used in the Jacobi transformation method of diagonalizing matrices. The Jacobi rotation matrix $P_{pq}$ contains 1s along the diagonal, except for the two elements $\cos\phi$ in rows and columns $p$ and $q$. In addition, all off-diagonal elements are zero except the elements $\sin\phi$ and $-\sin\phi$ in rows and columns $p$ and $q$. The rotation angle $\phi$ for an initial matrix $A$ is chosen such that

$$\cot(2\phi) = \frac{a_{qq} - a_{pp}}{2 a_{pq}}.$$

Then the corresponding Jacobi rotation matrix which annihilates the off-diagonal element $a_{pq}$ is

$$P_{pq} = \begin{pmatrix} 1 & & & & \\ & \cos\phi & \cdots & \sin\phi & \\ & \vdots & \ddots & \vdots & \\ & -\sin\phi & \cdots & \cos\phi & \\ & & & & 1 \end{pmatrix}.$$

### Indefinite matrix

An $n \times n$ complex matrix $A$ is called indefinite if nonzero vectors $\mathbf{x}$ and $\mathbf{y}$ exist such that

$$\mathbf{x}^{\mathrm{H}} A \mathbf{x} > 0 > \mathbf{y}^{\mathrm{H}} A \mathbf{y},$$

where $\mathbf{x}^{\mathrm{H}}$ denotes the conjugate transpose.

### Reducible matrix

A square $n \times n$ matrix $A = (a_{ij})$ is called reducible if the indices 1, 2, ..., $n$ can be divided into two disjoint nonempty sets $i_1$, $i_2$, ..., $i_\mu$ and $j_1$, $j_2$, ..., $j_\nu$ (with $\mu + \nu = n$) such that

$$a_{i_\alpha j_\beta} = 0$$

for $\alpha = 1$, 2, ..., $\mu$ and $\beta = 1$, 2, ..., $\nu$. A matrix is reducible if and only if it can be placed into block upper-triangular form by simultaneous row/column permutations. In addition, a matrix is reducible if and only if its associated digraph is not strongly connected. A square matrix that is not reducible is said to be irreducible.

### Redheffer matrix

A Redheffer matrix is a square $n \times n$ (0,1)-matrix with elements $a_{ij}$ equal to 1 if $j = 1$ or $i | j$ ($i$ divides $j$), and 0 otherwise. For $n = 1$, 2, ..., the first few Redheffer matrices are

$$\begin{pmatrix} 1 \end{pmatrix}, \qquad \begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix}, \qquad \begin{pmatrix} 1 & 1 & 1 \\ 1 & 1 & 0 \\ 1 & 0 & 1 \end{pmatrix}.$$

The determinant of the $n \times n$ Redheffer matrix is equal to the Mertens function $M(n)$. For $n = 1$, 2, ..., the first few values are therefore 1, 0, $-1$, $-1$, $-2$, $-1$, $-2$, $-2$, $-2$, ... (OEIS A002321). The number of unit eigenvalues of the $n \times n$ Redheffer matrix is equal to

$$n - \lfloor \lg n \rfloor - 1$$

(Vaughan 1993, 1996; Trott 2004, p. 57), giving the first few values as 1, 0, 1, 1, 2, 3, 4, 4, 5, 6, 7, 8, 9, 10, 11, ... (OEIS A083058).
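The identity $\det R_n = M(n)$ can be verified for small $n$ by pairing an exact cofactor determinant with a Möbius-function sieve (a brute-force sketch, practical only for small orders):

```python
# Check det(Redheffer matrix of order n) == Mertens function M(n).
def det(M):
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def redheffer(n):
    # a_ij = 1 if j = 1 or i divides j (1-based indices)
    return [[1 if j == 1 or j % i == 0 else 0 for j in range(1, n + 1)]
            for i in range(1, n + 1)]

def mertens(n):
    # Mobius function via the divisor-sum recurrence, then partial sums
    mu = [0, 1] + [0] * (n - 1)
    for i in range(1, n + 1):
        for j in range(2 * i, n + 1, i):
            mu[j] -= mu[i]
    return sum(mu[1:n + 1])

print([det(redheffer(n)) for n in range(1, 8)])  # [1, 0, -1, -1, -2, -1, -2]
print([mertens(n) for n in range(1, 8)])         # [1, 0, -1, -1, -2, -1, -2]
```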

### Hessenberg matrix

A Hessenberg matrix is a matrix of the form

$$\begin{pmatrix} a_{11} & a_{12} & a_{13} & \cdots & a_{1n} \\ a_{21} & a_{22} & a_{23} & \cdots & a_{2n} \\ 0 & a_{32} & a_{33} & \cdots & a_{3n} \\ \vdots & \ddots & \ddots & \ddots & \vdots \\ 0 & \cdots & 0 & a_{n,n-1} & a_{nn} \end{pmatrix},$$

i.e., an "almost triangular" matrix with zeros below the subdiagonal. Hessenberg matrices were first investigated by Karl Hessenberg (1904-1959), a German engineer whose dissertation investigated the computation of eigenvalues and eigenvectors of linear operators.

### Roth's removal rule

If the matrices A, B, C, and X satisfy AX - XB = C, then [[I, X], [0, I]] [[A, C], [0, B]] [[I, -X], [0, I]] = [[A, 0], [0, B]], where I is the identity matrix; that is, the similarity transformation "removes" the off-diagonal block C.

### Horn's theorem

Let lambda_1 >= lambda_2 >= ... >= lambda_n (1) and d_1 >= d_2 >= ... >= d_n. (2) Then there exists an n x n Hermitian matrix with eigenvalues lambda_i and diagonal elements d_i iff d_1 + d_2 + ... + d_k <= lambda_1 + lambda_2 + ... + lambda_k (3) for all k = 1, ..., n, with equality for k = n. The theorem is sometimes also known as Schur's theorem.

### Matrix trace

The trace of an n x n square matrix A is defined to be Tr(A) = sum_(i=1)^n a_ii, (1) i.e., the sum of the diagonal elements. The matrix trace is implemented in the Wolfram Language as Tr[list]. In group theory, traces are known as "group characters." For square matrices A and B, it is true that Tr(A) = Tr(A^T), Tr(A + B) = Tr(A) + Tr(B), and Tr(alpha A) = alpha Tr(A) (Lang 1987, p. 40), where A^T denotes the transpose. The trace is also invariant under a similarity transformation Tr(B^(-1) A B) = Tr(A) (Lang 1987, p. 64). Since (b^(-1))_ij a_jk b_ki = b_ki (b^(-1))_ij a_jk (where Einstein summation is used here to sum over repeated indices) and b_ki (b^(-1))_ij = delta_kj, it follows that Tr(B^(-1) A B) = delta_kj a_jk = a_kk = Tr(A), where delta_kj is the Kronecker delta. The trace of a product of two square matrices is independent of the order of the multiplication since Tr(AB) = a_ij b_ji = b_ji a_ij = Tr(BA) (again using Einstein summation). Therefore, the trace of the commutator of A and B vanishes: Tr([A, B]) = Tr(AB) - Tr(BA) = 0. The trace of a product of three or more square matrices, on the other hand, is invariant only under cyclic permutations of the order of multiplication of the matrices,..
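The invariance Tr(AB) = Tr(BA), and hence the vanishing trace of a commutator, can be checked directly on small integer matrices:

```python
# Verify tr(AB) = tr(BA) and tr([A, B]) = 0 for small integer matrices.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def trace(M):
    return sum(M[i][i] for i in range(len(M)))

A = [[1, 2], [3, 4]]
B = [[0, 1], [5, 6]]

print(trace(matmul(A, B)), trace(matmul(B, A)))  # 37 37
commutator_trace = trace(matmul(A, B)) - trace(matmul(B, A))
print(commutator_trace)  # 0
```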

### Matrix signature

A real, nondegenerate n x n symmetric matrix A, and its corresponding symmetric bilinear form Q(v, w) = v^T A w, has signature (p, q) if there is a nondegenerate matrix C such that C^T A C is a diagonal matrix with p 1s and q -1s. In this case, Q is a diagonal quadratic form. For example, the diagonal matrix with entries 1, 1, 1, -1 gives a symmetric bilinear form called the Lorentzian inner product, which has signature (3, 1).

### Diagonalizable matrix

An n x n matrix A is said to be diagonalizable if it can be written in the form A = P D P^(-1), where D is a diagonal matrix with the eigenvalues of A as its entries and P is a nonsingular matrix consisting of the eigenvectors corresponding to the eigenvalues in D. The diagonalization theorem states that an n x n matrix A is diagonalizable if and only if A has n linearly independent eigenvectors, i.e., if the matrix rank of the matrix formed by the eigenvectors is n. Matrix diagonalization (like most other forms of matrix decomposition) is particularly useful when studying linear transformations, discrete dynamical systems, continuous systems, and so on. All normal matrices are diagonalizable, but not all diagonalizable matrices are normal. The following table gives counts of diagonalizable matrices of various kinds where the elements of A may be real or complex.

matrix type | OEIS | counts for n = 1, 2, ...
--- | --- | ---
(-1,0,1)-matrix | A091470 | 3, 65, 15627, ...
(-1,1)-matrix | A091471 | 2, 12, 464, 50224, ..

### Diagonal

A diagonal of a square matrix which is traversed in the "southeast" direction. "The" diagonal (or "main diagonal," or "principal diagonal," or "leading diagonal") of an n x n square matrix is the diagonal from a_11 to a_nn. The solidus symbol / used to denote division (e.g., a/b) is sometimes also known as a diagonal.

### Superdiagonal

The superdiagonal of a square matrix is the set of elements directly above the elements comprising the diagonal. For example, in the following matrix, the diagonal elements are denoted and the superdiagonal elements are denoted ,

### Subdiagonal

The subdiagonal of a square matrix is the set of elements directly under the elements comprising the diagonal. For example, in the following matrix, the diagonal elements are denoted and the subdiagonals are denoted ,

### Sturmian separation theorem

Let be a sequence of symmetric matrices of increasing order with , 2, ..., and , 2, ..., . Let be the th eigenvalue of for , 2, ..., , where the ordering is given byThen it follows that

### Characteristic equation

The characteristic equation is the equation which is solved to find a matrix's eigenvalues; its left-hand side is known as the characteristic polynomial. For a general square matrix A, the characteristic equation in the variable lambda is defined by det(A - lambda I) = 0, (1) where I is the identity matrix and det(A - lambda I) is the determinant of the matrix A - lambda I. The solutions lambda of the characteristic equation are called eigenvalues, and are extremely important in the analysis of many problems in mathematics and physics.
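For a 2×2 matrix the characteristic polynomial is simply lambda^2 - Tr(A) lambda + det(A), which a short calculation confirms:

```python
# Characteristic polynomial of a 2x2 matrix: det(A - x I) = x^2 - tr(A) x + det(A).
# Check that the known eigenvalues are roots.

A = [[2, 1],
     [1, 2]]
tr = A[0][0] + A[1][1]                       # trace
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]  # determinant

def charpoly(x):
    return x * x - tr * x + det

# eigenvalues of this symmetric matrix are 1 and 3
print(charpoly(1), charpoly(3))  # 0 0
```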

### Skew diagonal

A diagonal of a square matrix which is traversed in the "northeast" direction. "The" skew diagonal (or "secondary diagonal") of an n x n square matrix is the skew diagonal from a_n1 to a_1n.

### Segre characteristic

A set of integers that give the orders of the blocks in a Jordan canonical form, with those integers corresponding to submatrices containing the same latent root bracketed together. For example, the Segre characteristic ofis [(21)31(21)] (Frazer et al. 1955, p. 94).

### Augmented matrix

An augmented matrix is a matrix obtained by adjoining a row or column vector, or sometimes another matrix with the same vertical dimension. The most common use of an augmented matrix is in the application of Gaussian elimination to solve a matrix equation of the form A x = b (1) by forming the column-appended augmented matrix [A | b]. (2) Augmented matrices can also be used to find a matrix inverse of A by forming [A | I] (3) with I the identity matrix of appropriate dimension, then using elementary row operations to reduce the result to the form [I | A^(-1)]. (4) The payoff matrix in game theory is a case of a more complicated augmented matrix.

### Jacobi transformation

A method of matrix diagonalization using Jacobi rotation matrices P_pq. It consists of a sequence of orthogonal similarity transformations of the form A' = P_pq^T A P_pq, each of which eliminates one off-diagonal element. Each application of P_pq affects only rows and columns p and q of A, and the sequence of such matrices is chosen so as to eliminate the off-diagonal elements.

### Woodbury formula

The Woodbury formula (A + U V^T)^(-1) = A^(-1) - A^(-1) U (I + V^T A^(-1) U)^(-1) V^T A^(-1) allows the inverse of a matrix A perturbed by a low-rank correction to be computed from the inverse of A itself, where I is an identity matrix. This formula expands the scope of application of the Sherman-Morrison formula. In particular, in the Woodbury formula, U and V can be any (dimensionally compatible) matrices rather than the simple column vectors u and v of the Sherman-Morrison formula.
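The identity (stated here in the common form with an extra "capacitance" matrix C, so that (A + U C V)^(-1) = A^(-1) - A^(-1) U (C^(-1) + V A^(-1) U)^(-1) V A^(-1)) can be verified exactly with rational arithmetic; all helper names below are illustrative:

```python
# Numerical sketch of the Woodbury identity
#   (A + U C V)^(-1) = A^(-1) - A^(-1) U (C^(-1) + V A^(-1) U)^(-1) V A^(-1)
# checked with exact rationals on small matrices.
from fractions import Fraction

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def inverse(M):
    """Gauss-Jordan inverse over the rationals (assumes M is invertible)."""
    n = len(M)
    aug = [[Fraction(x) for x in row] + [Fraction(i == j) for j in range(n)]
           for i, row in enumerate(M)]
    for c in range(n):
        p = next(r for r in range(c, n) if aug[r][c] != 0)
        aug[c], aug[p] = aug[p], aug[c]
        aug[c] = [x / aug[c][c] for x in aug[c]]
        for r in range(n):
            if r != c and aug[r][c] != 0:
                f = aug[r][c]
                aug[r] = [a - f * b for a, b in zip(aug[r], aug[c])]
    return [row[n:] for row in aug]

def add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def sub(A, B):
    return [[a - b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

A = [[4, 1], [2, 3]]
U = [[1], [0]]          # n x k
C = [[2]]               # k x k
V = [[0, 1]]            # k x n

lhs = inverse(add(A, matmul(matmul(U, C), V)))
Ainv = inverse(A)
K = inverse(add(inverse(C), matmul(V, matmul(Ainv, U))))
rhs = sub(Ainv, matmul(matmul(matmul(Ainv, U), K), matmul(V, Ainv)))
print(lhs == rhs)  # True
```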

### Invertible matrix theorem

The invertible matrix theorem is a theorem in linear algebra which gives a series of equivalent conditions for an square matrix to have an inverse. In particular, is invertible if and only if any (and hence, all) of the following hold: 1. is row-equivalent to the identity matrix . 2. has pivot positions. 3. The equation has only the trivial solution . 4. The columns of form a linearly independent set. 5. The linear transformation is one-to-one. 6. For each column vector , the equation has a unique solution. 7. The columns of span . 8. The linear transformation is a surjection. 9. There is an matrix such that . 10. There is an matrix such that . 11. The transpose matrix is invertible. 12. The columns of form a basis for . 13. The column space of is equal to . 14. The dimension of the column space of is . 15. The rank of is . 16. The null space of is . 17. The dimension of the null space of is 0. 18. fails to be an eigenvalue of . 19. The determinant of is not zero. 20. The orthogonal..

### Positive eigenvalued matrix

The numbers of positive eigenvalued matrices of given types are summarized in the following table. For example, the three positive eigenvalued 2 x 2 (0,1)-matrices all have eigenvalue 1 with degeneracy of two.

matrix type | OEIS | counts
--- | --- | ---
(0,1)-matrix | A003024 | 1, 3, 25, 543, 29281, ...
(-1,0,1)-matrix | A085506 | 1, 5, 133, 18905, ...

Weisstein's conjecture proposed that positive eigenvalued (0,1)-matrices were in one-to-one correspondence with labeled acyclic digraphs on n nodes, and this was subsequently proved by McKay et al. (2003, 2004). Counts of both are therefore given by the beautiful recurrence equation a_n = sum_(k=1)^n (-1)^(k-1) binomial(n, k) 2^(k(n-k)) a_(n-k) with a_0 = 1 (Harary and Palmer 1973, p. 19; Robinson 1973, pp. 239-273).

### Immanant

For an matrix, let denote any permutation , , ..., of the set of numbers 1, 2, ..., , and let be the character of the symmetric group corresponding to the partition . Then the immanant is defined aswhere the summation is over the permutations of the symmetric group and

### Wigner's semicircle law

Let be a real symmetric matrix of large order having random elements that for are independently distributed with equal densities, equal second moments , and th moments bounded by constants independent of , , and . Further, let be the number of eigenvalues of that lie in the interval for real . Then(Wigner 1955, 1958). This law was first observed by Wigner (1955) for certain special classes of random matrices arising in quantum mechanical investigations.The distribution of eigenvalues of a symmetric random matrix with entries chosen from a standard normal distribution is illustrated above for a random matrix.Note that a large real symmetric matrix with random entries taken from a uniform distribution also obeys the semicircle law with the exception that it also possesses exactly one large eigenvalue...

### Girko's circular law

Let be (possibly complex) eigenvalues of a set of random real matrices with entries independent and taken from a standard normal distribution. Then as , is uniformly distributed on the unit disk in the complex plane. For small , the distribution shows a concentration along the real line accompanied by a slight paucity above and below (with interesting embedded structure). However, as , the concentration about the line disappears and the distribution becomes truly uniform.

### Block matrix

A block matrix is a matrix that is defined using smaller matrices, called blocks. For example, (1) where A, B, C, and D are themselves matrices, is a block matrix. Block matrices can be created using ArrayFlatten. When two block matrices have the same shape and their diagonal blocks are square matrices, then they multiply similarly to matrix multiplication. For example, (7) Note that the usual rules of matrix multiplication hold even when the block matrices are not square (assuming that the block sizes correspond). Of course, matrix multiplication is in general not commutative, so in these block matrix multiplications, it is important to keep the correct order of the multiplications. When the blocks are square matrices, the set of invertible block matrices is a group isomorphic to the general linear group over the ring of square matrices..

### Orthogonal decomposition

The orthogonal decomposition of a vector y in R^n is the sum of a vector in a subspace W of R^n and a vector in the orthogonal complement W^perp of W. The orthogonal decomposition theorem states that if W is a subspace of R^n, then each vector y in R^n can be written uniquely in the form y = w + z, where w is in W and z is in W^perp. In fact, if {u_1, ..., u_p} is any orthogonal basis of W, then w = ((y . u_1)/(u_1 . u_1)) u_1 + ... + ((y . u_p)/(u_p . u_p)) u_p and z = y - w. Geometrically, w is the orthogonal projection of y onto the subspace W and z is a vector orthogonal to W.

### Hadamard matrix

A Hadamard matrix is a type of square (-1,1)-matrix invented by Sylvester (1867) under the name of anallagmatic pavement, 26 years before Hadamard (1893) considered them. In a Hadamard matrix, placing any two columns or rows side by side gives half the adjacent cells the same sign and half the other sign. When viewed as pavements, cells with 1s are colored black and those with -1s are colored white. A Hadamard matrix of order n is a solution to Hadamard's maximum determinant problem, i.e., has the maximum possible determinant (in absolute value) of any n x n complex matrix with elements |a_ij| <= 1 (Brenner and Cummings 1972), namely n^(n/2). An equivalent definition of the Hadamard matrices is given by H_n H_n^T = n I, (1) where I is the n x n identity matrix. A Hadamard matrix of order 4n corresponds to a Hadamard design (4n - 1, 2n - 1, n - 1), and a Hadamard matrix H_4n gives a graph on 4n vertices known as a Hadamard graph. A complete set of Walsh..

### Parodi's theorem

The eigenvalues satisfying , where is the characteristic polynomial, lie in the unions of the disks

### Ostrowski's theorem

Let be a matrix with positive coefficients and be the positive eigenvalue in the Frobenius theorem, then the eigenvalues satisfy the inequality(1)where(2)(3)and , 2, ..., .

### Wielandt's theorem

Let the matrix satisfy the conditions of the Perron-Frobenius theorem and the matrix satisfyfor , 2, ..., . Then any eigenvalue of satisfies the inequality with the equality sign holding only when there exists an matrix (where is the Kronecker delta) and

### McCoy's theorem

If two square matrices and are simultaneously upper triangularizable by similarity transforms, then there is an ordering , ..., of the eigenvalues of and , ..., of the eigenvalues of so that, given any polynomial in noncommuting variables, the eigenvalues of are the numbers with , ..., . McCoy's theorem states the converse: If every polynomial exhibits the correct eigenvalues in a consistent ordering, then and are simultaneously triangularizable.

### Generalized eigenvector

A generalized eigenvector for an matrix is a vector for whichfor some positive integer . Here, denotes the identity matrix. The smallest such is known as the generalized eigenvector order of the generalized eigenvector. In this case, the value is the generalized eigenvalue to which is associated and the linear span of all generalized eigenvectors associated to some generalized eigenvalue is known as the generalized eigenspace for .As the name suggests, generalized eigenvectors are generalizations of eigenvectors of the usual kind; more precisely, an eigenvector is a generalized eigenvector corresponding to .Generalized eigenvectors are of particular importance for matrices which fail to be diagonalizable. Indeed, for such matrices, at least one eigenvalue has geometric multiplicity larger than its algebraic multiplicity, thereby implying that the collection of linearly independent eigenvectors of is "too small"..

### Matrix spectrum

The eigenvalues of a matrix A are called its spectrum, and are denoted lambda(A). If lambda(A) = {lambda_1, ..., lambda_n}, then the determinant of A is given by det(A) = lambda_1 lambda_2 ... lambda_n.

### Frobenius theorem

Let be a matrix with positive coefficients so that for all , 2, ..., , then has a positive eigenvalue , and all its eigenvalues lie on the closed disk

### Spectral radius

Let A be an n x n matrix with complex or real elements with eigenvalues lambda_1, ..., lambda_n. Then the spectral radius rho(A) of A is rho(A) = max_(1<=i<=n) |lambda_i|, i.e., the largest absolute value (or complex modulus) of its eigenvalues. The spectral radius of a finite graph is defined as the largest absolute value of its graph spectrum, i.e., the largest absolute value of the graph eigenvalues (eigenvalues of the adjacency matrix).

### Spectral norm

The natural norm induced by the L2-norm. Let be the conjugate transpose of the square matrix , so that , then the spectral norm is defined as the square root of the maximum eigenvalue of , i.e.,(1)(2)This matrix norm is implemented as Norm[m,2].

### Lyapunov's second theorem

If all the eigenvalues of a real matrix A have negative real parts, then to an arbitrary negative definite quadratic form (x, W x) with x = x(t) there corresponds a positive definite quadratic form (x, V x) such that if one takes dx/dt = A x, then (x, V x) and (x, W x) satisfy d(x, V x)/dt = (x, W x).

### Seifert matrix

Given a Seifert form , choose a basis , ..., for as a -module so every element is uniquely expressible as(1)with integer. Then define the Seifert matrix as the integer matrix with entries(2)For example, the right-hand trefoil knot has Seifertmatrix(3)A Seifert matrix is not a knot invariant, but it can be used to distinguish between different Seifert surfaces for a given knot.

### Hilbert matrix

A matrix H with elements h_ij = 1/(i + j - 1) (1) for i, j = 1, 2, ..., n. Hilbert matrices are implemented in the Wolfram Language by HilbertMatrix[m, n]. The figure above shows a plot of the Hilbert matrix with elements colored according to their values. Hilbert matrices whose entries are specified as machine-precision numbers are difficult to invert using numerical techniques. The determinants for the first few values of n = 1, 2, ... are given by one divided by 1, 12, 2160, 6048000, 266716800000, ... (OEIS A005249). The terms of this sequence have a closed form involving the Glaisher-Kinkelin constant and the Barnes G-function. The elements of the matrix inverse of the Hilbert matrix are given analytically in closed form (Choi 1983, Richardson 1999).

### Condition number

The ratio of the largest to smallest singular value in the singular value decomposition of a matrix. The base- logarithm of is an estimate of how many base- digits are lost in solving a linear system with that matrix. In other words, it estimates worst-case loss of precision. A system is said to be singular if the condition number is infinite, and ill-conditioned if it is too large, where "too large" means roughly the precision of matrix entries.An estimate of the -norm condition number of a matrix can be computed in the Wolfram Language prior to Version 11.2 using LinearAlgebra`MatrixConditionNumber[m, p] for , 2, or , where omitting the is equivalent to specifying Infinity. A similar approximation for the condition number can be computed using LUDecomposition[mat][[-1]].

### Grothendieck's constant

Let be an real square matrix with such that(1)for all real numbers , , ..., and , , ..., such that . Then Grothendieck showed that there exists a constant satisfying(2)for all vectors and in a Hilbert space with norms and . The Grothendieck constant is the smallest possible value of . For example, the best values known for small are(3)(4)(5)(Krivine 1977, 1979; König 1992; Finch 2003, p. 236).Now consider the limit(6)which is related to Khinchin's constant and sometimes also denoted . Krivine (1977) showed that(7)and postulated that(8)(OEIS A088367). The conjecture was refuted in 2011 by Yury Makarychev, Mark Braverman, Konstantin Makarychev, and Assaf Naor, who showed that is strictly less than Krivine's bound (Makarychev 2011).Similarly, if the numbers and and matrix are taken as complex, then a similar set of constants may be defined. These are known to satisfy(9)(10)(11)(Krivine 1977, 1979; König 1990, 1992; Finch..

### Combinatorial matrix theory

Combinatorial matrix theory is a rich branch of mathematics that combines combinatorics, graph theory, and linear algebra. It includes the theory of matrices with prescribed combinatorial properties, including permanents and Latin squares. It also comprises combinatorial proofs of classical algebraic theorems such as the Cayley-Hamilton theorem. As mentioned in Season 4 episodes 407 "Primacy" and 412 "Power" of the television crime drama NUMB3RS, professor Amita Ramanujan's primary teaching interest is combinatorial matrix theory.

### Refined alternating sign matrix conjecture

The numerators and denominators obtained by taking the ratios of adjacent terms in the triangular array of the number of "bordered" alternating sign matrices with a 1 at the top of column are, respectively, the numbers in the (2, 1)- and (1, 2)-Pascal triangles which are different from 1. This conjecture was proven by Zeilberger (1996).

### Alternating sign matrix

An alternating sign matrix is a matrix of 0s, 1s, and -1s in which the entries in each row or column sum to 1 and the nonzero entries in each row and column alternate in sign. The first few for n = 1, 2, ... are shown below: (1)(2)(3)(4) Such matrices satisfy the additional property that -1s in a row or column must have a +1 "outside" them (i.e., all -1s are "bordered" by +1s). The numbers of n x n alternating sign matrices for n = 1, 2, ... are given by 1, 2, 7, 42, 429, 7436, 218348, ... (OEIS A005130). The conjecture that the number A_n of n x n alternating sign matrices is explicitly given by the formula A_n = product_(j=0)^(n-1) (3j + 1)!/(n + j)!, (5) now proven to be true, was known as the alternating sign matrix conjecture. A_n can be expressed in closed form as a complicated function of Barnes G-functions, but additional simplification is likely possible. A recurrence relation for A_n is given by (6) where Gamma(z) is the gamma function. Let A(n, k) be the number of n x n alternating sign matrices with the single 1 in the top row occurring in the kth position. Then (7) The result (8) for..

### Fourier matrix

The n x n square matrix F_n with entries given by f_jk = e^(2 pi i j k / n)/sqrt(n) (1) for j, k = 0, 1, 2, ..., n - 1, where i is the imaginary unit sqrt(-1), and the normalization by 1/sqrt(n) makes it unitary. The 2 x 2 Fourier matrix F_2 is given by (2) and the 4 x 4 matrix F_4 by (3)(4) In general, (5) with (6) where I is the identity matrix and D is the diagonal matrix with entries 1, omega, ..., omega^(n/2 - 1). Note that the factorization (which is the basis of the fast Fourier transform) has two copies of F_(n/2) in the center factor matrix.

### Generalized Vandermonde matrix

A generalized Vandermonde matrix of two sequences and where is an increasing sequence of positive integers and is an increasing sequence of nonnegative integers of the same length is the outer product of and with multiplication operation given by the power function. The generalized Vandermonde matrix can be implemented in the Wolfram Language as Vandermonde[a_List?VectorQ, b_List?VectorQ] := Outer[Power, a, b] /; Equal @@ Length /@ {a, b}A generalized Vandermonde matrix is a minor of a Vandermonde matrix. Alternatively, it has the same form as a Vandermonde matrix , where is an increasing sequence of positive integers, except now is any increasing sequence of nonnegative integers. In the special case of a Vandermonde matrix, .While there is no general formula for the determinant of a generalized Vandermonde matrix, its determinant is always positive. Since any minor of a generalized Vandermonde matrix is also a generalized Vandermonde..

### Random matrix

A random matrix is a matrix of given type and size whose entries consist of random numbers from some specified distribution. Random matrix theory is cited as one of the "modern tools" used in Catherine's proof of an important result in prime number theory in the 2005 film Proof. For a real n x n matrix with elements having a standard normal distribution, the expected number of real eigenvalues is given by (1)(2) where the expression involves a hypergeometric function and a beta function (Edelman et al. 1994, Edelman and Kostlan 1994). This expectation has asymptotic behavior (3) Let p_k be the probability that there are exactly k real eigenvalues in the complex spectrum of the matrix. Edelman (1997) showed that (4) which is the smallest of these probabilities. The entire probability function of the number of expected real eigenvalues in the spectrum of a Gaussian real random matrix was derived by Kanzieper and Akemann (2005) as (5) where (6)(7) In (6), the summation runs over all partitions..

### Gaussian elimination

Gaussian elimination is a method for solving matrix equations of the form A x = b. (1) To perform Gaussian elimination starting with the system of equations (2) compose the "augmented matrix equation" (3) Here, the column vector in the variables x is carried along for labeling the matrix rows. Now, perform elementary row operations to put the augmented matrix into the upper triangular form (4) Solve the equation of the kth row for x_k, then substitute back into the equation of the (k-1)st row to obtain a solution for x_(k-1), etc., according to the formula (5) In the Wolfram Language, RowReduce performs a version of Gaussian elimination, with the equation being solved by GaussianElimination[m_?MatrixQ, v_?VectorQ] := Last /@ RowReduce[Flatten /@ Transpose[{m, v}]] LU decomposition of a matrix is frequently used as part of a Gaussian elimination process for solving a matrix equation. A matrix that has undergone Gaussian elimination is said to be in echelon form.
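The forward-elimination and back-substitution steps can be sketched in a few lines (exact rational arithmetic sidesteps pivoting-for-stability concerns; the function name is illustrative):

```python
# Gaussian elimination with back substitution for A x = b, using exact rationals.
from fractions import Fraction

def gaussian_solve(A, b):
    n = len(A)
    # augmented matrix [A | b]
    M = [[Fraction(x) for x in row] + [Fraction(bi)]
         for row, bi in zip(A, b)]
    # forward elimination to upper-triangular form
    for c in range(n):
        p = next(r for r in range(c, n) if M[r][c] != 0)  # first nonzero pivot
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            M[r] = [a - f * b_ for a, b_ in zip(M[r], M[c])]
    # back substitution, from the last row upward
    x = [Fraction(0)] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

A = [[2, 1, -1],
     [-3, -1, 2],
     [-2, 1, 2]]
b = [8, -11, -3]
print(gaussian_solve(A, b))  # solution x = 2, y = 3, z = -1
```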

### Unitary matrix

A square matrix U is a unitary matrix if U^H = U^(-1), (1) where U^H denotes the conjugate transpose and U^(-1) is the matrix inverse. For example, (2) is a unitary matrix. Unitary matrices leave the length of a complex vector unchanged. For real matrices, unitary is the same as orthogonal. In fact, there are some similarities between orthogonal matrices and unitary matrices. The rows of a unitary matrix are a unitary basis. That is, each row has length one, and their Hermitian inner product is zero. Similarly, the columns are also a unitary basis. In fact, given any unitary basis, the matrix whose rows are that basis is a unitary matrix. It is automatically the case that the columns are another unitary basis. A matrix can be tested to see if it is unitary using the Wolfram Language function: UnitaryQ[m_List?MatrixQ] := (Conjugate @ Transpose @ m . m == IdentityMatrix @ Length @ m) The definition of a unitary matrix guarantees that U^H U = I, (3) where I is the identity matrix. In particular,..

### Orthogonal matrix

A matrix A is an orthogonal matrix if A A^T = I, (1) where A^T is the transpose of A and I is the identity matrix. In particular, an orthogonal matrix is always invertible, and A^(-1) = A^T. (2) In component form, (a^(-1))_ij = a_ji. (3) This relation makes orthogonal matrices particularly easy to compute with, since the transpose operation is much simpler than computing an inverse. For example, (4)(5) are orthogonal matrices. A matrix can be tested to see if it is orthogonal using the Wolfram Language code: OrthogonalMatrixQ[m_List?MatrixQ] := (Transpose[m].m == IdentityMatrix @ Length @ m) The rows of an orthogonal matrix are an orthonormal basis. That is, each row has length one, and are mutually perpendicular. Similarly, the columns are also an orthonormal basis. In fact, given any orthonormal basis, the matrix whose rows are that basis is an orthogonal matrix. It is automatically the case that the columns are another orthonormal basis. The orthogonal matrices are precisely those matrices..
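A rotation matrix is the standard example: Q^T Q = I holds, and multiplication by Q preserves vector length.

```python
# Check Q^T Q = I for a 2D rotation matrix, and that Q preserves length.
import math

theta = math.pi / 6
Q = [[math.cos(theta), -math.sin(theta)],
     [math.sin(theta),  math.cos(theta)]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(M):
    return [list(col) for col in zip(*M)]

QtQ = matmul(transpose(Q), Q)
print(all(abs(QtQ[i][j] - (i == j)) < 1e-12
          for i in range(2) for j in range(2)))  # True

v = [3.0, 4.0]
Qv = [sum(Q[i][j] * v[j] for j in range(2)) for i in range(2)]
print(math.hypot(*v), math.hypot(*Qv))  # both approximately 5.0
```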

### Normal matrix

A square matrix A is a normal matrix if [A, A^H] = 0, where [a, b] is the commutator and A^H denotes the conjugate transpose; equivalently, A A^H = A^H A. A normal matrix need not be Hermitian. A matrix can be tested to see if it is normal using the Wolfram Language function: NormalMatrixQ[a_List?MatrixQ] := Module[ {b = Conjugate @ Transpose @ a}, a . b === b . a ] Normal matrices arise, for example, from a normal equation. The normal matrices are the matrices which are unitarily diagonalizable, i.e., A is a normal matrix iff there exists a unitary matrix U such that U^H A U is a diagonal matrix. All Hermitian matrices are normal and have real eigenvalues, whereas a general normal matrix has no such restriction on its eigenvalues. All normal matrices are diagonalizable, but not all diagonalizable matrices are normal. The following table gives the number of normal square matrices of given types for orders n = 1, 2, ...: A055547: 2, 8, 68, 1124, ...; A055548: 2, 12, 80, 2096, ...; A055549: 3,..

### Symmetric matrix

A symmetric matrix is a square matrix A that satisfies A = A^T, (1) where A^T denotes the transpose, so a_ij = a_ji. This also implies A^(-1) A^T = I, (2) where I is the identity matrix. For example, (3) is a symmetric matrix. Hermitian matrices are a useful generalization of symmetric matrices for complex matrices. A matrix can be tested to see if it is symmetric using the Wolfram Language code: SymmetricQ[m_List?MatrixQ] := (m === Transpose[m]) Written explicitly, the elements of a symmetric matrix have the form a_ij = a_ji. (4) The symmetric part of any matrix A may be obtained from A_S = (A + A^T)/2. (5) A matrix A is symmetric if it can be expressed in the form A = Q D Q^T, (6) where Q is an orthogonal matrix and D is a diagonal matrix. This is equivalent to the matrix equation A Q = Q D, (7) which is equivalent to A q_i = lambda_i q_i (8) for all i, where lambda_i = d_ii. Therefore, the diagonal elements of D are the eigenvalues of A, and the columns of Q are the corresponding eigenvectors. The numbers of symmetric matrices of order n on s symbols are s^(n(n+1)/2). Therefore, for (0,1)-matrices, the..

### Matrix minimal polynomial

The minimal polynomial of a matrix A is the monic polynomial in A of smallest degree such that p(A) = 0. (1) The minimal polynomial divides any polynomial q with q(A) = 0 and, in particular, it divides the characteristic polynomial. If the characteristic polynomial factors as (2) then its minimal polynomial is given by (3) for some positive integers m_i, where the m_i satisfy 1 <= m_i <= n_i. For example, the characteristic polynomial of the n x n zero matrix is x^n, while its minimal polynomial is x. However, a characteristic polynomial and minimal polynomial can also coincide, as for the matrix (4), for which both are equal. The following Wolfram Language code will find the minimal polynomial for the square matrix a in the variable x. MatrixMinimalPolynomial[a_List?MatrixQ,x_]:=Module[ { i, n=1, qu={}, mnm={Flatten[IdentityMatrix[Length[a]]]} }, While[Length[qu]==0, AppendTo[mnm,Flatten[MatrixPower[a,n]]]; qu=NullSpace[Transpose[mnm]]; n++ ]; First[qu].Table[x^i,{i,0,n-1}] ]..

### Companion matrix

The companion matrix to a monic polynomial(1)is the square matrix(2)with ones on the subdiagonal and the last column given by the coefficients of . Note that in the literature, the companion matrix is sometimes defined with the rows and columns switched, i.e., the transpose of the above matrix.When is the standard basis, a companion matrix satisfies(3)for , as well as(4)including(5)The matrix minimal polynomial of the companion matrix is therefore , which is also its characteristic polynomial.Companion matrices are used to write a matrix in rational canonical form. In fact, any matrix whose matrix minimal polynomial has polynomial degree is similar to the companion matrix for . The rational canonical form is more interesting when the degree of is less than .The following Wolfram Language command gives the companion matrix for a polynomial in the variable . CompanionMatrix[p_, x_] := Module[ {n, w = CoefficientList[p, x]}, w = -w/Last[w];..
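A concrete check: build the companion matrix of p(x) = x^3 - 2x^2 - 5x + 6 (roots 1, -2, 3) and verify that p(C) is the zero matrix, confirming that p is both the characteristic and minimal polynomial.

```python
# Companion matrix of the monic polynomial p(x) = x^3 - 2x^2 - 5x + 6:
# ones on the subdiagonal, negated coefficients in the last column.

coeffs = [6, -5, -2]   # a0, a1, a2 of p(x) = x^3 + a2 x^2 + a1 x + a0
n = len(coeffs)
C = [[0] * n for _ in range(n)]
for i in range(1, n):
    C[i][i - 1] = 1            # subdiagonal of ones
for i in range(n):
    C[i][n - 1] = -coeffs[i]   # last column: -a0, -a1, -a2

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(n))
             for j in range(n)] for i in range(n)]

C2 = matmul(C, C)
C3 = matmul(C2, C)
I = [[int(i == j) for j in range(n)] for i in range(n)]
# p(C) = C^3 - 2 C^2 - 5 C + 6 I should vanish (Cayley-Hamilton)
pC = [[C3[i][j] - 2 * C2[i][j] - 5 * C[i][j] + 6 * I[i][j]
       for j in range(n)] for i in range(n)]
print(pC)  # the zero matrix
```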

### Special unitary matrix

A square matrix U is a special unitary matrix if U^H U = I, (1) where I is the identity matrix and U^H is the conjugate transpose matrix, and the determinant is det(U) = 1. (2) The first condition means that U is a unitary matrix, and the second condition provides a restriction beyond a general unitary matrix, which may have determinant e^(i theta) for any real theta. For example, (3) is a special unitary matrix. A matrix can be tested to see if it is a special unitary matrix using the Wolfram Language function SpecialUnitaryQ[m_List?MatrixQ] := (Conjugate @ Transpose @ m . m == IdentityMatrix @ Length @ m&& Det[m] == 1) The special unitary matrices are closed under multiplication and the inverse operation, and therefore form a matrix group called the special unitary group.

### Block diagonal matrix

A block diagonal matrix, also called a diagonal block matrix, is a square diagonal matrix in which the diagonal elements are square matrices of any size (possibly even 1 x 1), and the off-diagonal elements are 0. A block diagonal matrix is therefore a block matrix in which the blocks off the diagonal are the zero matrices, and the diagonal matrices are square. Block diagonal matrices can be constructed out of submatrices in the Wolfram Language using the following code snippet: BlockDiagonalMatrix[b : {__?MatrixQ}] := Module[{r, c, n = Length[b], i, j}, {r, c} = Transpose[Dimensions /@ b]; ArrayFlatten[ Table[If[i == j, b[[i]], ConstantArray[0, {r[[i]], c[[j]]}]], {i, n}, {j, n} ] ] ]
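The same construction can be sketched in plain Python lists (the function name is illustrative):

```python
# Assemble a block diagonal matrix from a list of rectangular blocks.

def block_diagonal(blocks):
    rows = sum(len(b) for b in blocks)
    cols = sum(len(b[0]) for b in blocks)
    M = [[0] * cols for _ in range(rows)]
    r0 = c0 = 0                      # top-left corner of the current block
    for b in blocks:
        for i, row in enumerate(b):
            for j, x in enumerate(row):
                M[r0 + i][c0 + j] = x
        r0 += len(b)
        c0 += len(b[0])
    return M

print(block_diagonal([[[1]], [[2, 3], [4, 5]]]))
# [[1, 0, 0], [0, 2, 3], [0, 4, 5]]
```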

### Special orthogonal matrix

A square matrix A is a special orthogonal matrix if A A^T = I, (1) where I is the identity matrix, and the determinant satisfies det(A) = 1. (2) The first condition means that A is an orthogonal matrix, and the second restricts the determinant to +1 (while a general orthogonal matrix may have determinant +1 or -1). For example, (3) is a special orthogonal matrix since (4) and its determinant is +1. A matrix can be tested to see if it is a special orthogonal matrix using the Wolfram Language code SpecialOrthogonalQ[m_List?MatrixQ] := (Transpose[m] . m == IdentityMatrix @ Length @ m&& Det[m] == 1) The special orthogonal matrices are closed under multiplication and the inverse operation, and therefore form a matrix group called the special orthogonal group.

### Householder matrix

Householder (1953) first considered the matrix that now bears his name in the first couple of pages of his book. A Householder matrix for a real vector can be implemented in the Wolfram Language as: HouseholderMatrix[v_?VectorQ] := IdentityMatrix[Length[v]] - 2 Transpose[{v}] . {v} / (v.v)Trefethen and Bau (1997) gave an incorrect version of the formula for complex . D. Laurie gave a correct version by interpreting reflection along a given direction not as(1)where(2)is the projection onto the hyperplane orthogonal to (since this is in general not a unitary transformation), but as(3)Lehoucq (1996) independently gave an interpretation that still uses the formula , but choosing to be unitary.
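For real v the Householder matrix H = I - 2 v v^T / (v^T v) is symmetric and orthogonal, and reflects v to -v; a minimal sketch:

```python
# Real Householder matrix H = I - 2 v v^T / (v^T v); H v = -v.

def householder(v):
    n = len(v)
    vv = sum(x * x for x in v)       # v^T v
    return [[(1 if i == j else 0) - 2 * v[i] * v[j] / vv
             for j in range(n)] for i in range(n)]

v = [1.0, 2.0, 2.0]                  # v^T v = 9
H = householder(v)

Hv = [sum(H[i][j] * v[j] for j in range(3)) for i in range(3)]
print(Hv)  # approximately [-1, -2, -2]: v is reflected to -v
```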

### Array

An array is a "list of lists" with the length of each level of list the same. The size (sometimes called the "shape") of a d-dimensional array is then indicated as m×n×.... The most common type of array encountered is the two-dimensional rectangular array having a certain number of columns and rows. If these are equal, a square array results. Sometimes, the order of the elements in an array is significant (as in a matrix), whereas at other times, arrays which are equivalent modulo reflections (and rotations, in the case of a square array) are considered identical (as in a magic square or prime array). In the Wolfram Language, an array of depth d is represented using nested lists, and can be generated using the command Array[a, {i, j, ...}]. Similarly, the dimensions of an array can be found using Dimensions[t], and the command ArrayQ[expr] tests if an expression is a full array. Taking for example t = Array[a, {2, 2, 2, 3}] gives the depth-4 list {{{{a[1,1,1,1],a[1,1,1,2],a[1,1,1,3]},..

### Antisymmetric matrix

An antisymmetric matrix is a square matrix A that satisfies the identity

A^T = -A,  (1)

where A^T is the matrix transpose. For example, the matrix of equation (2) is antisymmetric. Antisymmetric matrices are commonly called "skew symmetric matrices" by mathematicians. A matrix may be tested to see if it is antisymmetric using the Wolfram Language function

    AntisymmetricQ[m_List?MatrixQ] := (m === -Transpose[m])

In component notation, this becomes

a_ij = -a_ji.  (3)

Letting i = j, the requirement becomes a_ii = -a_ii, (4) so an antisymmetric matrix must have zeros on its diagonal. The general antisymmetric matrix is of the form (5). Applying the transpose to both sides of the antisymmetry condition gives (6). Any square matrix A can be expressed as the sum of symmetric and antisymmetric parts. Write

A = (A + A^T)/2 + (A - A^T)/2.  (7)

But (A + A^T)^T = A^T + A, (8)(9) so (A + A^T)/2 (10) is symmetric, and (A - A^T)/2 (11) is antisymmetric. All antisymmetric matrices of odd dimension are singular. This follows from the fact that

det(A) = det(A^T) = det(-A).  (12)

So, by the properties of determinants, (13)(14) Therefore,..
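The symmetric/antisymmetric split described above is easy to verify numerically (a Python illustration; `sym_antisym_split` is an assumed helper name):

```python
# A = (A + A^T)/2 + (A - A^T)/2: the first part is symmetric,
# the second antisymmetric, and they sum back to A.
def sym_antisym_split(a):
    n = len(a)
    sym = [[(a[i][j] + a[j][i]) / 2 for j in range(n)] for i in range(n)]
    anti = [[(a[i][j] - a[j][i]) / 2 for j in range(n)] for i in range(n)]
    return sym, anti

A = [[1, 2], [4, 3]]
S, K = sym_antisym_split(A)
print(S)  # -> [[1.0, 3.0], [3.0, 3.0]]  (symmetric part)
print(K)  # -> [[0.0, -1.0], [1.0, 0.0]] (antisymmetric: zero diagonal)
```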

### Hermitian matrix

A square matrix is called Hermitian if it is self-adjoint. Therefore, a Hermitian matrix A is defined as one for which

A = A^H,  (1)

where A^H denotes the conjugate transpose. This is equivalent to the condition

a_ij = conj(a_ji),  (2)

where conj denotes the complex conjugate. As a result of this definition, the diagonal elements a_ii of a Hermitian matrix are real numbers (since a_ii = conj(a_ii)), while other elements may be complex. Examples of Hermitian matrices include the matrix of equation (3) and the Pauli matrices (4)(5)(6). An integer or real matrix is Hermitian iff it is symmetric. A matrix can be tested to see if it is Hermitian using the Wolfram Language function

    HermitianQ[m_List?MatrixQ] := (m === Conjugate@Transpose@m)

Hermitian matrices have real eigenvalues whose eigenvectors form a unitary basis. For real matrices, Hermitian is the same as symmetric. Any matrix which is not Hermitian can be expressed as the sum of a Hermitian matrix and an antihermitian matrix using

A = (A + A^H)/2 + (A - A^H)/2.  (8)

Let..
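The entrywise condition a_ij = conj(a_ji) translates directly into Python's built-in complex arithmetic (an illustrative check; `is_hermitian` is an assumed name):

```python
# A is Hermitian when a[i][j] == conj(a[j][i]) for all i, j;
# in particular the diagonal entries must be real.
def is_hermitian(a):
    n = len(a)
    return all(a[i][j] == a[j][i].conjugate()
               for i in range(n) for j in range(n))

H = [[2, 1 - 1j], [1 + 1j, 3]]
print(is_hermitian(H))                  # True
print(is_hermitian([[1j, 0], [0, 1]]))  # False: imaginary diagonal entry
```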

### Antihermitian matrix

A square matrix A is antihermitian if it satisfies

A^H = -A,  (1)

where A^H is the adjoint. For example, the matrix of equation (2) is an antihermitian matrix. Antihermitian matrices are often called "skew Hermitian matrices" by mathematicians. A matrix can be tested to see if it is antihermitian using the Wolfram Language function

    AntihermitianQ[m_List?MatrixQ] := (m === -Conjugate[Transpose[m]])

The set of antihermitian matrices is a vector space, and the commutator

[A, B] = AB - BA  (3)

of two antihermitian matrices is antihermitian. Hence, the antihermitian matrices are a Lie algebra, which is related to the Lie group of unitary matrices. In particular, suppose U(t) is a path of unitary matrices through U(0) = I, i.e.,

U(t)^H U(t) = I  (4)

for all t, where U^H is the adjoint and I is the identity matrix. The derivative at t = 0 of both sides must be equal, so

U'(0)^H + U'(0) = 0.  (5)

That is, the derivative of U(t) at the identity must be antihermitian. The matrix exponential map of an antihermitian matrix is a unitary matrix...

### Tournament matrix

A matrix for a round-robin tournament involving n players competing in n(n-1)/2 matches (no ties allowed) having entries

a_ij = 1 if player i defeats player j, -1 if player i loses to player j, and 0 if i = j.  (1)

This scoring system differs from that used to compute a score sequence of a tournament, in which a win gives one point and a loss zero points. The matrix satisfies

A^T = -A,  (2)

where A^T is the transpose of A (McCarthy and Benjamin 1996). The tournament matrix for n players has zero determinant iff n is odd (McCarthy and Benjamin 1996). Furthermore, the dimension of the null space of an n-player tournament matrix is (3) (McCarthy and Benjamin 1996).
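A small Python sketch of such a matrix (assumption: entries +1 for a win, -1 for a loss, and 0 on the diagonal, matching the scoring described above; `tournament_matrix` is an illustrative name):

```python
# Build a tournament matrix from a list of "i beats j" results
# for players 0..n-1. Antisymmetry A^T = -A holds by construction.
def tournament_matrix(n, wins):
    a = [[0] * n for _ in range(n)]
    for i, j in wins:
        a[i][j] = 1
        a[j][i] = -1
    return a

# 3 players: 0 beats 1, 1 beats 2, 2 beats 0 (a cycle).
T = tournament_matrix(3, [(0, 1), (1, 2), (2, 0)])
print(T)  # -> [[0, 1, -1], [-1, 0, 1], [1, -1, 0]]
print(all(T[i][j] == -T[j][i] for i in range(3) for j in range(3)))  # True
```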

### Lam's problem

Given a 111×111 (0,1)-matrix, fill 11 spaces in each row in such a way that all columns also have 11 spaces filled. Furthermore, each pair of rows must have exactly one filled space in the same column. This problem is equivalent to finding a projective plane of order 10. Using a computer program, Lam et al. (1989) showed that no such arrangement exists. Lam's problem is equivalent to finding nine orthogonal Latin squares of order 10.

### Fundamental theorem of linear algebra

Given an m×n matrix A, the fundamental theorem of linear algebra is a collection of results relating various properties of the four fundamental matrix subspaces of A. In particular: 1. dim R(A) = dim R(A^T) and dim R(A) + dim N(A) = n, where here, R(A) denotes the range or column space of A, A^T denotes its transpose, and N(A) denotes its null space. 2. The null space N(A) is orthogonal to the row space R(A^T). 3. There exist orthonormal bases for both the column space and the row space of A. 4. With respect to the orthonormal bases of R(A) and R(A^T), A is diagonal. The third item on this list stems from Gram-Schmidt orthonormalization; the fourth item stems from the singular value decomposition of A. Also, while different, the first item is reminiscent of the rank-nullity theorem. The above figure summarizes some of the interactions between the four fundamental matrix subspaces for a real matrix including whether the spaces in question are subspaces of the domain or codomain, which subspaces are orthogonal to one another, and how the matrix maps various vectors..

### Fundamental matrix subspaces

Given a real matrix , there are four associated vector subspaces which are known colloquially as its fundamental subspaces, namely the column spaces and the null spaces of the matrices and its transpose . These four subspaces are important for a number of reasons, one of which is the crucial role they play in the so-called fundamental theorem of linear algebra.The above figure summarizes some of the interactions between the four fundamental matrix subspaces for a real matrix including whether the spaces in question are subspaces of or , which subspaces are orthogonal to one another, and how the matrix maps various vectors relative to the subspace in which lies.In the event that , all four of the fundamental matrix subspaces are lines in . In this case, one can write for some vectors , whereby the directions of the four lines correspond to , , , and . An elementary fact from linear algebra is that these directions are also represented by the eigenvectors..

### Fredholm's theorem

Fredholm's theorem states that, if A is an m×n matrix, then the orthogonal complement of the row space of A is the null space of A, and the orthogonal complement of the column space of A is the null space of A^T:

(rowspace A)^⊥ = null(A),  (1)
(colspace A)^⊥ = null(A^T).  (2)

### Hadamard's maximum determinant problem

Hadamard's maximum determinant problem asks to find the largest possible determinant (in absolute value) for any n×n matrix whose elements are taken from some set. Hadamard (1893) proved that the determinant of any n×n complex matrix A with entries in the closed unit disk satisfies

|det A| <= n^(n/2),  (1)

with equality attained by the Vandermonde matrix of the n roots of unity (Faddeev and Sominskii 1965, p. 331; Brenner and Cummings 1972). The first few values of n^(n/2) for n = 1, 2, ... are 1, 2, 3√3, 16, 25√5, 216, ..., and the squares of these are 1, 4, 27, 256, 3125, ... (OEIS A000312). A (-1,1)-matrix having a maximal determinant is known as a Hadamard matrix (Brenner and Cummings 1972). The same bound of n^(n/2) applies to such matrices, and sharper bounds are known when the size of the matrix is not a multiple of 4. A summary of what is known about such bounds is given by Orrick and Solomon. For a (0,1)-matrix, Hadamard's bound can be improved to

|det A| <= (n+1)^((n+1)/2) / 2^n  (2)

(Faddeev and Sominskii 1965, problem 523; Brenner..
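For very small matrices the maximal determinant can be found by brute force, which is a handy sanity check on the bounds above (a Python illustration; the function names are assumptions):

```python
from itertools import product

# 3x3 determinant by cofactor expansion along the first row.
def det3(m):
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

# Exhaust all 2^9 = 512 sign patterns of a 3x3 matrix over `entries`.
def max_det3(entries):
    best = 0
    for vals in product(entries, repeat=9):
        m = [list(vals[0:3]), list(vals[3:6]), list(vals[6:9])]
        best = max(best, abs(det3(m)))
    return best

print(max_det3((-1, 1)))  # -> 4: maximal 3x3 (-1,1)-matrix determinant
print(max_det3((0, 1)))   # -> 2: maximal 3x3 (0,1)-matrix determinant
```

Both results sit well under Hadamard's bound 3^(3/2) ≈ 5.196, consistent with the sharper bounds available when n is not a multiple of 4.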

### Majorization

Let x = (x_1, ..., x_n) and y = (y_1, ..., y_n) be nonincreasing sequences of real numbers. Then x majorizes y if, for each k = 1, 2, ..., n,

x_1 + ... + x_k >= y_1 + ... + y_k,

with equality if k = n. Note that some caution is needed when consulting the literature, since the direction of the inequality is not consistent from reference to reference. An order-free characterization along the lines of Horn's theorem is also readily available: x majorizes y iff there exists a doubly stochastic matrix P such that y = Px. Intuitively, if x majorizes y, then y is more "mixed" than x. Horn's theorem relates the eigenvalues of a Hermitian matrix to its diagonal entries using majorization: given two vectors λ and ν, λ majorizes ν iff there exists a Hermitian matrix with eigenvalues λ and diagonal entries ν.
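The partial-sum definition is straightforward to implement (a Python sketch; `majorizes` is an illustrative name, and sorting is done inside so unsorted inputs are accepted):

```python
# x majorizes y: sort both nonincreasing, require equal totals and
# that every partial sum of x dominates the corresponding one of y.
def majorizes(x, y):
    xs = sorted(x, reverse=True)
    ys = sorted(y, reverse=True)
    if len(xs) != len(ys) or sum(xs) != sum(ys):
        return False
    px = py = 0
    for a, b in zip(xs, ys):
        px += a
        py += b
        if px < py:
            return False
    return True

print(majorizes([3, 0, 0], [1, 1, 1]))  # True: (3,0,0) is "less mixed"
print(majorizes([1, 1, 1], [3, 0, 0]))  # False
```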

### Identity matrix

The identity matrix is the simplest nontrivial diagonal matrix, defined such that

I v = v  (1)

for all vectors v. An identity matrix may be denoted 1, I, or E (the latter being an abbreviation for the German term "Einheitsmatrix"; Courant and Hilbert 1989, p. 7), with a subscript sometimes used to indicate the dimension of the matrix. Identity matrices are sometimes also known as unit matrices (Akivis and Goldberg 1972, p. 71). The n×n identity matrix is given explicitly by

I_ij = δ_ij  (2)

for i, j = 1, ..., n, where δ_ij is the Kronecker delta. Written explicitly, it has 1s on the diagonal and 0s elsewhere (3). The identity matrix is implemented in the Wolfram Language as IdentityMatrix[n]. "Square root of identity" matrices can be defined by solving

A^2 = I.  (4)

For n = 2, the most general form of the resulting square root matrix is (5), giving (6) as limiting cases. "Cube root of identity" matrices can take on even more complicated forms. However, one simple class of such matrices..
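A nontrivial "square root of identity" is easy to exhibit for n = 2: any matrix of the form [[a, b], [c, -a]] with a² + bc = 1 squares to I (this parametrized form is an illustration consistent with the A² = I condition above):

```python
# Multiply two 2x2 matrices explicitly.
def matmul2(x, y):
    return [[x[0][0]*y[0][0] + x[0][1]*y[1][0], x[0][0]*y[0][1] + x[0][1]*y[1][1]],
            [x[1][0]*y[0][0] + x[1][1]*y[1][0], x[1][0]*y[0][1] + x[1][1]*y[1][1]]]

# M = [[a, b], [c, -a]] with a^2 + b c = 1: here 3^2 + 4*(-2) = 1.
M = [[3, 4], [-2, -3]]
print(matmul2(M, M))  # -> [[1, 0], [0, 1]], so M is a square root of I
```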

### Schur matrix

The square matrix formed by setting s_jk = ω^(jk), where ω is a pth root of unity. The Schur matrix has a particularly simple determinant given by (1), where p is an odd prime and (2). This determinant has been used to prove the quadratic reciprocity theorem (Landau 1958, Vardi 1991). The absolute values of the permanents of the Schur matrix of order p are given by 1, 3, 5, 105, 81, 6765, ... (OEIS A003112, Vardi 1991). Denote the Schur matrix with the first row and first column omitted by S'. Then

perm S = p perm S',  (3)

where perm denotes the permanent (Vardi 1991).

### Hermite normal form

Given a square nonsingular integer matrix A, there exists a unimodular matrix U and a matrix H (known as the Hermite normal form of A) such that A U = H. Specifying a certain set of conditions on the elements of H makes it (and therefore U) unique. One possible set of conditions (corresponding to the "columns" version and making H lower triangular) is given by 1. h_ij = 0 for j > i, 2. h_ii > 0 for all i, and 3. h_ij <= 0 and |h_ij| < h_ii for j < i (Domich et al. 1987). For a complexity analysis of Hermite normal form computation, see Storjohann and Labahn (1996). The Hermite normal form for integer matrices is implemented in the Wolfram Language as HermiteDecomposition[A], which however uses the "rows" convention (thus making H upper triangular) and replaces condition (3) with balanced remainders (mod h_ii).

### Conjugate transpose

The conjugate transpose of an m×n matrix A is the n×m matrix defined by

A^H = conj(A^T),  (1)

where A^T denotes the transpose of the matrix A and conj(A) denotes the conjugate matrix. In all common spaces (i.e., separable Hilbert spaces), the conjugate and transpose operations commute, so

A^H = conj(A^T) = (conj A)^T.  (2)

The symbol A^H (where the "H" stands for "Hermitian") gives official recognition to the fact that for complex matrices, it is almost always the case that the combined operation of taking the transpose and complex conjugate arises in physical or computational contexts and virtually never the transpose in isolation (Strang 1988, pp. 220-221). The conjugate transpose of a matrix is implemented in the Wolfram Language as ConjugateTranspose[A]. The conjugate transpose is also known as the adjoint matrix, adjugate matrix, Hermitian adjoint, or Hermitian transpose (Strang 1988, p. 221). Unfortunately, several different notations are in use as summarized in the..

### Payoff matrix

An m×n matrix which gives the possible outcome of a two-person zero-sum game when player A has m possible moves and player B has n possible moves. The analysis of the matrix in order to determine optimal strategies is the aim of game theory. The so-called "augmented" payoff matrix is defined as follows:

### Singular value

There are two types of singular values, one in the context of elliptic integrals, and the other in linear algebra. For a square matrix A, the square roots of the eigenvalues of A^H A, where A^H is the conjugate transpose, are called singular values (Marcus and Minc 1992, p. 69). The so-called singular value decomposition of a complex matrix A is given by

A = U Σ V^H,  (1)

where U and V are unitary matrices and Σ is a diagonal matrix whose elements are the singular values of A (Golub and Van Loan 1996, pp. 70 and 73). Singular values are returned by the command SingularValueList[m]. If

A = U H,  (2)

where U is a unitary matrix and H is a Hermitian matrix, then the eigenvalues of H are the singular values of A. For elliptic integrals, an elliptic modulus k such that (3), where K is a complete elliptic integral of the first kind. The elliptic lambda function gives the value of k. Abel (quoted in Whittaker and Watson 1990, p. 525) proved that if the ratio is an integer, or more generally whenever (4), where..

### Column space

The vector space generated by the columns of a matrix viewed as vectors. The column space of an matrix with real entries is a subspace generated by elements of , hence its dimension is at most . It is equal to the dimension of the row space of and is called the rank of .The matrix is associated with a linear transformation , defined byfor all vectors of , which we suppose written as column vectors. Note that is the product of an and an matrix, hence it is an matrix according to the rules of matrix multiplication. In this framework, the column vectors of are the vectors , where are the elements of the standard basis of . This shows that the column space of is the range of , and explains why the dimension of the latter is equal to the rank of .

### Row space

The vector space generated by the rows of a matrix viewed as vectors. The row space of a matrix A with real entries is a subspace generated by its rows, hence its dimension is at most equal to the number of rows. It is equal to the dimension of the column space of A (as will be shown below), and is called the rank of A. The row vectors of A are the coefficients of the unknowns in the linear equation system

A x = 0,  (1)

where (2) and 0 is the zero vector. Hence, the solutions span the orthogonal complement to the row space, and

dim(row space) + dim(solution space) = n.  (3)

On the other hand, the space of solutions also coincides with the kernel (or null space) of the linear transformation T, defined by

T(x) = A x  (4)

for all vectors x of the domain. And it is also true that

dim(ker T) + dim(im T) = n,  (5)

where ker denotes the kernel and im the image, since the nullity and the rank always add up to the dimension of the domain. It follows that the dimension of the row space is

dim(im T),  (6)

which is equal to the dimension of the column space...
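The equality of row rank and column rank can be checked numerically by computing the rank of a matrix and of its transpose (a Python sketch via Gaussian elimination; the helper names are assumptions):

```python
# Rank via Gaussian elimination with partial pivoting:
# the number of nonzero pivot rows equals the rank.
def rank(a, tol=1e-9):
    m = [row[:] for row in a]
    rows, cols = len(m), len(m[0])
    r = 0
    for c in range(cols):
        if r == rows:
            break
        pivot = max(range(r, rows), key=lambda i: abs(m[i][c]))
        if abs(m[pivot][c]) < tol:
            continue
        m[r], m[pivot] = m[pivot], m[r]
        for i in range(r + 1, rows):
            f = m[i][c] / m[r][c]
            for j in range(c, cols):
                m[i][j] -= f * m[r][j]
        r += 1
    return r

def transpose(a):
    return [list(col) for col in zip(*a)]

A = [[1, 2, 3], [2, 4, 6], [1, 0, 1]]  # second row is twice the first
print(rank(A), rank(transpose(A)))     # -> 2 2: row rank == column rank
```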

### Normal equation

Given a matrix equation A x = b, the normal equation is

A^T A x = A^T b,

which minimizes the sum of the square differences between the left and right sides, |A x - b|^2. It is called a normal equation because b - A x is normal to the range of A. Here, A^T A is a normal matrix.
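A small worked example: fitting a line y = c0 + c1 t to data points by solving the 2×2 normal equations directly (an illustration; `normal_equation_fit` is an assumed name, and the explicit 2×2 solve stands in for a general linear solver):

```python
# Least squares via A^T A x = A^T b, where A has rows [1, t].
# A^T A = [[n, sum t], [sum t, sum t^2]], A^T b = [sum y, sum t*y].
def normal_equation_fit(ts, ys):
    s0, s1, s2 = len(ts), sum(ts), sum(t * t for t in ts)
    b0, b1 = sum(ys), sum(t * y for t, y in zip(ts, ys))
    det = s0 * s2 - s1 * s1
    c0 = (s2 * b0 - s1 * b1) / det  # Cramer's rule on the 2x2 system
    c1 = (s0 * b1 - s1 * b0) / det
    return c0, c1

c0, c1 = normal_equation_fit([0, 1, 2], [1, 3, 5])  # data lies on y = 1 + 2t
print(round(c0, 9), round(c1, 9))  # -> 1.0 2.0
```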

### Matrix rank

The rank of a matrix or a linear transformation is the dimension of the image of the matrix or the linear transformation, corresponding to the number of linearly independent rows or columns of the matrix, or to the number of nonzero singular values of the map.The rank of a matrix is implemented as MatrixRank[m].

### Transpose

A transpose of a doubly indexed object is the object obtained by replacing all elements a_ij with a_ji. For a second-rank tensor a_ij, the tensor transpose is simply a_ji. The matrix transpose, most commonly written A^T, is the matrix obtained by exchanging A's rows and columns, and satisfies the identity

(A^T)^T = A.  (1)

Unfortunately, several other notations are commonly used, as summarized in the following table; the notation A^T is used in this work.

- A^T: this work; Golub and Van Loan (1996), Strang (1988)
- Ã: Arfken (1985, p. 201), Griffiths (1987, p. 223)
- A': Ayres (1962, p. 11), Courant and Hilbert (1989, p. 9)

The transpose of a matrix or tensor is implemented in the Wolfram Language as Transpose[A]. The product of two transposes satisfies

(AB)^T = B^T A^T,  (2)

as can be shown in component notation, (3)(4)(5)(6) where Einstein summation has been used to implicitly sum over repeated indices. Therefore,(7)..
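The reversal rule (AB)^T = B^T A^T can be checked on a small example (a Python illustration; the helpers are assumed names):

```python
def transpose(a):
    return [list(col) for col in zip(*a)]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 1]]
# (AB)^T == B^T A^T: note the reversed order on the right-hand side.
print(transpose(matmul(A, B)) == matmul(transpose(B), transpose(A)))  # True
```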

### Singular value decomposition

If a matrix has a matrix of eigenvectors that is not invertible (for example, the matrix has the noninvertible system of eigenvectors ), then does not have an eigen decomposition. However, if is an real matrix with , then can be written using a so-called singular value decomposition of the form(1)Note that there are several conflicting notational conventions in use in the literature. Press et al. (1992) define to be an matrix, as , and as . However, the Wolfram Language defines as an , as , and as . In both systems, and have orthogonal columns so that(2)and(3)(where the two identity matrices may have different dimensions), and has entries only along the diagonal.For a complex matrix , the singular value decomposition is a decomposition into the form(4)where and are unitary matrices, is the conjugate transpose of , and is a diagonal matrix whose elements are the singular values of the original matrix. If is a complex matrix, then there always exists..

### Rotation matrix

When discussing a rotation, there are two possible conventions: rotation of the axes, and rotation of the object relative to fixed axes.In , consider the matrix that rotates a given vector by a counterclockwise angle in a fixed coordinate system. Then(1)so(2)This is the convention used by the WolframLanguage command RotationMatrix[theta].On the other hand, consider the matrix that rotates the coordinate system through a counterclockwise angle . The coordinates of the fixed vector in the rotated coordinate system are now given by a rotation matrix which is the transpose of the fixed-axis matrix and, as can be seen in the above diagram, is equivalent to rotating the vector by a counterclockwise angle of relative to a fixed set of axes, giving(3)This is the convention commonly used in textbooks such as Arfken (1985, p. 195).In , coordinate system rotations of the x-, y-, and z-axes in a counterclockwise direction when looking towards..
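The fixed-axes convention above (rotating the vector counterclockwise, as in RotationMatrix[theta]) can be sketched in Python; rotating the coordinate system instead amounts to using the transpose, i.e. the angle −θ:

```python
import math

# Counterclockwise rotation of a 2D vector by theta in fixed axes:
# [x'] = [cos t, -sin t] [x]
# [y']   [sin t,  cos t] [y]
def rotate(v, theta):
    c, s = math.cos(theta), math.sin(theta)
    return [c * v[0] - s * v[1], s * v[0] + c * v[1]]

x, y = rotate([1.0, 0.0], math.pi / 2)  # e_x rotated 90 degrees -> e_y
print(round(x, 9), round(y, 9))         # -> 0.0 1.0

# Rotating by theta then by -theta (the coordinate-system convention)
# recovers the original vector.
x, y = rotate(rotate([0.3, 0.7], math.pi / 5), -math.pi / 5)
print(round(x, 9), round(y, 9))         # -> 0.3 0.7
```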

### Cholesky decomposition

Given a symmetric positive definite matrix A, the Cholesky decomposition is an upper triangular matrix U with strictly positive diagonal entries such that A = U^T U. Cholesky decomposition is implemented in the Wolfram Language as CholeskyDecomposition[m].
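A textbook sketch of the algorithm in Python, computing the lower triangular factor L with A = L Lᵀ (the Wolfram Language function returns the upper factor, which is just Lᵀ; `cholesky` is an assumed name, and no pivoting or definiteness check is included):

```python
import math

# Cholesky factorization A = L L^T for symmetric positive definite A.
def cholesky(a):
    n = len(a)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = math.sqrt(a[i][i] - s)  # diagonal entry
            else:
                L[i][j] = (a[i][j] - s) / L[j][j]  # below-diagonal entry
    return L

A = [[4.0, 2.0], [2.0, 3.0]]
L = cholesky(A)
print(L)  # -> [[2.0, 0.0], [1.0, 1.4142135623730951]]
```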

### QR decomposition

Given a matrix A, its QR decomposition is a matrix decomposition of the form

A = Q R,

where R is an upper triangular matrix and Q is an orthogonal matrix, i.e., one satisfying

Q^T Q = I,

where Q^T is the transpose of Q and I is the identity matrix. This matrix decomposition can be used to solve linear systems of equations. QR decomposition is implemented in the Wolfram Language as QRDecomposition[m].
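A classical Gram-Schmidt sketch in Python for a full-column-rank matrix (an illustration of the decomposition, not of the Wolfram Language implementation, which uses Householder reflections; `qr` is an assumed name):

```python
import math

# QR by classical Gram-Schmidt: orthonormalize the columns of A to get
# Q, recording the projection coefficients in the upper triangular R.
def qr(a):
    m, n = len(a), len(a[0])
    cols = [[a[i][j] for i in range(m)] for j in range(n)]
    q, r = [], [[0.0] * n for _ in range(n)]
    for j in range(n):
        v = cols[j][:]
        for i in range(j):
            r[i][j] = sum(q[i][k] * cols[j][k] for k in range(m))
            v = [v[k] - r[i][j] * q[i][k] for k in range(m)]
        r[j][j] = math.sqrt(sum(x * x for x in v))
        q.append([x / r[j][j] for x in v])
    Q = [[q[j][i] for j in range(n)] for i in range(m)]  # columns back
    return Q, r

A = [[3.0, 1.0], [4.0, 2.0]]
Q, R = qr(A)
print([[round(x, 6) for x in row] for row in Q])  # -> [[0.6, -0.8], [0.8, 0.6]]
print([[round(x, 6) for x in row] for row in R])  # -> [[5.0, 2.2], [0.0, 0.4]]
```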

### Incidence matrix

The incidence matrix of a graph gives the (0,1)-matrix which has a row for each vertex and column for each edge, with entry 1 iff vertex v_i is incident upon edge e_j (Skiena 1990, p. 135). However, some authors define the incidence matrix to be the transpose of this, with a column for each vertex and a row for each edge. The physicist Kirchhoff (1847) was the first to define the incidence matrix. The incidence matrix of a graph (using the first definition) can be computed in the Wolfram Language using IncidenceMatrix[g]. Precomputed incidence matrices for many named graphs are given in the Wolfram Language by GraphData[graph, "IncidenceMatrix"]. The incidence matrix of a graph and adjacency matrix of its line graph are related by (1), where I is the identity matrix (Skiena 1990, p. 136). For a k-D polytope, the incidence matrix is defined by (2). The ith row shows which elements surround the given element, and the jth column shows which elements bound it. Incidence matrices are also..

### Hessenberg decomposition

A Hessenberg decomposition is a matrix decomposition of a matrix A into a unitary matrix P and a Hessenberg matrix H such that A = P H P^H, where P^H denotes the conjugate transpose. Hessenberg decomposition is implemented in the Wolfram Language as HessenbergDecomposition[m]. Hessenberg decomposition is the first step in Schur decomposition. Hessenberg decomposition on an n×n matrix requires O(n^3) arithmetic operations.

### Hankel matrix

A square matrix with constant skew diagonals. In other words, a Hankel matrix is a matrix in which the th entry depends only on the sum . Such matrices are sometimes known as persymmetric matrices or, in older literature, orthosymmetric matrices.In the Wolfram Language, such a Hankel matrix can be generated for example by HankelMatrix[a, b, c, d, e, e, f, g, h, i], giving(1)An upper triangular Hankel matrix with first column and row can be specified in the Wolfram Language as HankelMatrix[c1, ..., cn], and HankelMatrix[n] where is an integer gives the matrix with first row and column equal to and with every element below the main skew diagonal equal to 0. The first few matrices are given by(2)(3)(4)The elements of this Hankel matrix are given explicitly by(5)The determinant of is given by , where is the floor function, so the first few values are 1, , , 256, 3125, , , 16777216, ... (OEIS A000312)...
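Since the (i, j) entry of a Hankel matrix depends only on i + j, one can build it directly from the sequence of its skew-diagonal values (a Python illustration; `hankel` is an assumed name):

```python
# Build an n x n Hankel matrix from the 2n - 1 skew-diagonal values:
# entry (i, j) is seq[i + j], so each skew diagonal is constant.
def hankel(seq, n):
    assert len(seq) >= 2 * n - 1
    return [[seq[i + j] for j in range(n)] for i in range(n)]

print(hankel(["a", "b", "c", "d", "e"], 3))
# -> [['a', 'b', 'c'], ['b', 'c', 'd'], ['c', 'd', 'e']]
```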

### Adjacency matrix

The adjacency matrix, sometimes also called the connection matrix, of a simple labeled graph is a matrix with rows and columns labeled by graph vertices, with a 1 or 0 in position (v_i, v_j) according to whether v_i and v_j are adjacent or not. For a simple graph with no self-loops, the adjacency matrix must have 0s on the diagonal. For an undirected graph, the adjacency matrix is symmetric. The illustration above shows adjacency matrices for particular labelings of the claw graph, cycle graph, and complete graph. Since the labels of a graph may be permuted without changing the underlying graph being represented, there are in general multiple possible adjacency matrices for a given graph. In particular, the number of distinct adjacency matrices for a graph with vertex count n and automorphism group order equal to the given value is obtained by dividing n!, the number of permutations of vertex labels, by that order. The illustration above shows the possible adjacency matrices of the cycle graph. The adjacency..

### Matrix

A matrix is a concise and useful way of uniquely representing and working with linear transformations. In particular, every linear transformation can be represented by a matrix, and every matrix corresponds to a unique linear transformation. The matrix, and its close relative the determinant, are extremely important concepts in linear algebra, and were first formulated by Sylvester (1851) and Cayley. In his 1851 paper, Sylvester wrote, "For this purpose we must commence, not with a square, but with an oblong arrangement of terms consisting, suppose, of lines and columns. This will not in itself represent a determinant, but is, as it were, a Matrix out of which we may form various systems of determinants by fixing upon a number, and selecting at will lines and columns, the squares corresponding of that order." Because Sylvester was interested in the determinant formed from the rectangular array of numbers and not the array itself (Kline..

### Projection matrix

A projection matrix is an square matrix that gives a vector space projection from to a subspace . The columns of are the projections of the standard basis vectors, and is the image of . A square matrix is a projection matrix iff .A projection matrix is orthogonal iff(1)where denotes the adjoint matrix of . A projection matrix is a symmetric matrix iff the vector space projection is orthogonal. In an orthogonal projection, any vector can be written , so(2)An example of a nonsymmetric projection matrix is(3)which projects onto the line .The case of a complex vector space is analogous. A projection matrix is a Hermitian matrix iff the vector space projection satisfies(4)where the inner product is the Hermitian inner product. Projection operators play a role in quantum mechanics and quantum computing.Any vector in is fixed by the projection matrix for any in . Consequently, a projection matrix has norm equal to one, unless ,(5)Let be a -algebra. An element..
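The defining idempotence condition P·P = P can be checked on the orthogonal projection onto a line, P = u uᵀ/(u·u) (a Python illustration; the helper names are assumptions):

```python
# Orthogonal projection onto the line spanned by u: P = u u^T / (u.u).
# P is symmetric and idempotent: P.P = P.
def projection_onto_line(u):
    uu = sum(x * x for x in u)
    return [[u[i] * u[j] / uu for j in range(len(u))] for i in range(len(u))]

def matmul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

P = projection_onto_line([1.0, 1.0])
print(P)                  # -> [[0.5, 0.5], [0.5, 0.5]]
print(matmul(P, P) == P)  # True: P is idempotent
```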

### Circulant matrix

An n×n matrix whose rows are composed of cyclically shifted versions of a length-n list l. For example, the circulant matrix on the list l = (1, 2, 3) is given by (1). Circulant matrices are very useful in digital image processing, and the circulant matrix is implemented as CirculantMatrix[l, n] in the Mathematica application package Digital Image Processing. Circulant matrices can be implemented in the Wolfram Language as follows.

    CirculantMatrix[l_List?VectorQ] :=
     NestList[RotateRight, RotateRight[l], Length[l] - 1]
    CirculantMatrix[l_List?VectorQ, n_Integer] :=
     NestList[RotateRight,
       RotateRight[Join[Table[0, {n - Length[l]}], l]], n - 1] /; n >= Length[l]

where the first input creates a matrix with dimensions equal to the length of l and the second pads with zeros to give an n×n matrix. A special type of circulant matrix is defined as (2), where the entries are binomial coefficients. The determinant of such a matrix is given by the beautiful formula (3), where the quantities involved are the nth..
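A Python analogue using the convention that the first row is the list itself and each subsequent row is shifted right by one (the Wolfram snippet above instead starts from an already-rotated first row; `circulant` is an assumed name):

```python
# Circulant matrix: row i is l cyclically shifted right by i positions,
# so entry (i, j) is l[(j - i) mod n].
def circulant(l):
    n = len(l)
    return [[l[(j - i) % n] for j in range(n)] for i in range(n)]

print(circulant([1, 2, 3]))
# -> [[1, 2, 3], [3, 1, 2], [2, 3, 1]]
```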

### Schur decomposition

The Schur decomposition of a complex square matrix A is a matrix decomposition of the form

A = Q T Q^H,  (1)

where Q is a unitary matrix, Q^H is its conjugate transpose, and T is an upper triangular matrix which is the sum of a diagonal matrix (consisting of the eigenvalues of A) and a strictly upper triangular matrix. Schur decomposition is implemented in the Wolfram Language for numeric matrices as SchurDecomposition[m]. The first step in a Schur decomposition is a Hessenberg decomposition. Schur decomposition on an n×n matrix requires O(n^3) arithmetic operations. For example, the Schur decomposition of the matrix of equation (2) is given by (3)(4), and the eigenvalues of A appear on the diagonal of T.
