Let $A$ and $B$ be square matrices with $\epsilon$ small, and define

$A = I + \epsilon B$,   (1)

where $I$ is the identity matrix. Then the inverse of $A$ is approximately

$A^{-1} \approx I - \epsilon B$.   (2)

This can be seen by multiplying

$A A^{-1} \approx (I + \epsilon B)(I - \epsilon B)$   (3)
$= I - \epsilon^2 B^2$   (4)
$\approx I$.   (5)

Note that if we instead let $A = \epsilon I + B$, and look for an inverse of the form $A^{-1} = a I + b B$, we obtain

$A A^{-1} = (\epsilon I + B)(a I + b B)$   (6)
$= a \epsilon I + (a + b\epsilon) B + b B^2$   (7)
$= I$.   (8)

In order to eliminate the $B^2$ term, we require $b = 0$. However, then the $B$ term forces $a = 0$, so $A A^{-1} = 0 \ne I$, so there can be no inverse of this form. The exact inverse of $A = \epsilon I + B$ can be found as follows:

$A = \epsilon I + B = B(\epsilon B^{-1} + I)$,   (9)

so

$A^{-1} = (\epsilon B^{-1} + I)^{-1} B^{-1}$.   (10)

Using a general matrix inverse identity then gives

$A^{-1} = B^{-1} - \epsilon B^{-2} + \epsilon^2 B^{-3} - \cdots$.   (11)
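The first-order approximation $(I + \epsilon B)^{-1} \approx I - \epsilon B$ can be checked numerically. The following is a minimal sketch in plain Python (the $2 \times 2$ example matrix, the nested-list representation, and the helper name `inv2` are my own choices, not from the source); the residual error should be of order $\epsilon^2$:

```python
def inv2(M):
    # exact inverse of a 2x2 matrix via the cofactor formula
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

eps = 1e-4
B = [[1.0, 2.0], [3.0, 4.0]]
# A = I + eps*B
A = [[1 + eps * B[0][0], eps * B[0][1]],
     [eps * B[1][0], 1 + eps * B[1][1]]]
# first-order approximation I - eps*B
approx = [[1 - eps * B[0][0], -eps * B[0][1]],
          [-eps * B[1][0], 1 - eps * B[1][1]]]
exact = inv2(A)
# the discrepancy is dominated by the eps^2 * B^2 term
err = max(abs(exact[i][j] - approx[i][j]) for i in range(2) for j in range(2))
```

For $\epsilon = 10^{-4}$ the error is roughly $\epsilon^2 \max |B^2| \approx 2 \times 10^{-7}$, confirming the second-order nature of the truncation.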
The power $A^n$ of a matrix $A$ for $n$ a nonnegative integer is defined as the matrix product of $n$ copies of $A$,

$A^n = \underbrace{A \, A \cdots A}_{n}$.

A matrix to the zeroth power is defined to be the identity matrix of the same dimensions, $A^0 = I$. The matrix inverse is commonly denoted $A^{-1}$, which should not be interpreted to mean $1/A$.
The usual number of scalar operations (i.e., the total number of additions and multiplications) required to perform $n \times n$ matrix multiplication is

$N(n) = 2n^3 - n^2$   (1)

(i.e., $n^3$ multiplications and $n^3 - n^2$ additions). However, Strassen (1969) discovered how to multiply two matrices in

$S(n) = 7 \cdot 7^{\lg n} - 6 \cdot 4^{\lg n}$   (2)

scalar operations, where $\lg$ is the logarithm to base 2, which is less than $N(n)$ for sufficiently large $n$. For $n$ a power of two ($n = 2^k$), the two parts of (2) can be written

$7^{\lg n} = n^{\lg 7}, \qquad 4^{\lg n} = n^{\lg 4} = n^2,$

so (2) becomes

$S(n) = 7 n^{\lg 7} - 6 n^2.$   (3)

Two $2 \times 2$ matrices can therefore be multiplied,

$\begin{pmatrix} c_{11} & c_{12} \\ c_{21} & c_{22} \end{pmatrix} = \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix} \begin{pmatrix} b_{11} & b_{12} \\ b_{21} & b_{22} \end{pmatrix},$

with only

$S(2) = 7 \cdot 7 - 6 \cdot 4 = 25$

scalar operations (as it turns out, seven of them are multiplications and 18 are additions). Define the seven products (involving a total of 10 additions) as

$Q_1 = (a_{11} + a_{22})(b_{11} + b_{22})$
$Q_2 = (a_{21} + a_{22}) b_{11}$
$Q_3 = a_{11}(b_{12} - b_{22})$
$Q_4 = a_{22}(-b_{11} + b_{21})$
$Q_5 = (a_{11} + a_{12}) b_{22}$
$Q_6 = (-a_{11} + a_{21})(b_{11} + b_{12})$
$Q_7 = (a_{12} - a_{22})(b_{21} + b_{22}).$

Then the matrix product is given using the remaining eight additions as

$c_{11} = Q_1 + Q_4 - Q_5 + Q_7$
$c_{12} = Q_3 + Q_5$
$c_{21} = Q_2 + Q_4$
$c_{22} = Q_1 + Q_3 - Q_2 + Q_6$

(Strassen 1969, Press et al. 1989). Matrix inversion of a matrix $A$ to yield $C = A^{-1}$ can also be done in fewer operations than expected using the formulas

$R_1 = a_{11}^{-1}$
$R_2 = a_{21} R_1$
$R_3 = R_1 a_{12}$
$R_4 = a_{21} R_3$
$R_5 = R_4 - a_{22}$
$R_6 = R_5^{-1}$
$c_{12} = R_3 R_6$
$c_{21} = R_6 R_2$
$R_7 = R_3 c_{21}$
$c_{11} = R_1 - R_7$
$c_{22} = -R_6$

(Strassen..
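Strassen's seven-product scheme for $2 \times 2$ matrices can be sketched directly in plain Python (the function name and the nested-list matrix representation are my own choices); the seven multiplications appear explicitly, with everything else being additions:

```python
def strassen2(A, B):
    # Strassen multiplication of two 2x2 matrices: 7 multiplications, 18 additions
    (a11, a12), (a21, a22) = A
    (b11, b12), (b21, b22) = B
    q1 = (a11 + a22) * (b11 + b22)
    q2 = (a21 + a22) * b11
    q3 = a11 * (b12 - b22)
    q4 = a22 * (-b11 + b21)
    q5 = (a11 + a12) * b22
    q6 = (-a11 + a21) * (b11 + b12)
    q7 = (a12 - a22) * (b21 + b22)
    return [[q1 + q4 - q5 + q7, q3 + q5],
            [q2 + q4, q1 + q3 - q2 + q6]]
```

For example, `strassen2([[1, 2], [3, 4]], [[5, 6], [7, 8]])` agrees with the ordinary product `[[19, 22], [43, 50]]`. Applied recursively to block matrices, this scheme gives the $O(n^{\lg 7})$ algorithm.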
The product $C$ of two matrices $A$ and $B$ is defined as

$c_{ik} = a_{ij} b_{jk},$   (1)

where $j$ is summed over for all possible values of $i$ and $k$ and the notation above uses the Einstein summation convention. The implied summation over repeated indices without the presence of an explicit sum sign is called Einstein summation, and is commonly used in both matrix and tensor analysis. Therefore, in order for matrix multiplication to be defined, the dimensions of the matrices must satisfy

$(n \times m)(m \times p) = (n \times p),$   (2)

where $(a \times b)$ denotes a matrix with $a$ rows and $b$ columns. Writing out the product explicitly for $n = m = p = 2$,

$\begin{pmatrix} c_{11} & c_{12} \\ c_{21} & c_{22} \end{pmatrix} = \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix} \begin{pmatrix} b_{11} & b_{12} \\ b_{21} & b_{22} \end{pmatrix},$   (3)

where

$c_{11} = a_{11} b_{11} + a_{12} b_{21}$
$c_{12} = a_{11} b_{12} + a_{12} b_{22}$
$c_{21} = a_{21} b_{11} + a_{22} b_{21}$
$c_{22} = a_{21} b_{12} + a_{22} b_{22}.$

Matrix multiplication is associative, as can be seen by taking

$[(AB)C]_{il} = (AB)_{ik} c_{kl} = (a_{ij} b_{jk}) c_{kl},$   (13)

where Einstein summation is again used. Now, since $a_{ij}$, $b_{jk}$, and $c_{kl}$ are scalars, use the associativity of scalar multiplication to write

$(a_{ij} b_{jk}) c_{kl} = a_{ij} (b_{jk} c_{kl}) = a_{ij} (BC)_{jl} = [A(BC)]_{il}.$   (14)

Since this is true for all $i$ and $l$, it must be true that

$(AB)C = A(BC).$   (15)

That is, matrix multiplication is associative. Equation (13) can therefore be written

$ABC = (AB)C = A(BC)$   (16)

without ambiguity. Due to associativity,..
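The index formula $c_{ik} = \sum_j a_{ij} b_{jk}$, together with the dimension constraint, translates directly into code. A minimal sketch in plain Python (the helper name `mat_mul` and the nested-list representation are my own choices):

```python
def mat_mul(A, B):
    # c[i][k] = sum_j a[i][j] * b[j][k]; inner dimensions must agree
    n, m = len(A), len(A[0])
    assert m == len(B), "dimensions must satisfy (n x m)(m x p)"
    p = len(B[0])
    return [[sum(A[i][j] * B[j][k] for j in range(m)) for k in range(p)]
            for i in range(n)]
```

A $(2 \times 3)(3 \times 2)$ product illustrates the dimension rule: the result is $2 \times 2$.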
Every complex matrix $A$ can be broken into a Hermitian part

$A_H = \tfrac{1}{2}(A + A^\dagger)$

(i.e., $A_H$ is a Hermitian matrix) and an antihermitian part

$A_{AH} = \tfrac{1}{2}(A - A^\dagger)$

(i.e., $A_{AH}$ is an antihermitian matrix). Here, $A^\dagger$ denotes the adjoint.
The square root method is an algorithm which solves the matrix equation

$A x = b$   (1)

for $x$, with $A$ a $p \times p$ symmetric matrix and $b$ a given vector. Convert $A$ to a triangular matrix $G$ such that

$A = G^T G,$   (2)

where $G^T$ is the transpose and $G$ is upper triangular. Then

$A x = G^T (G x) = b,$   (3)

so setting

$G x = y$   (4)

gives

$G^T y = b.$   (5)

Equating the entries of $A = G^T G$ gives the equations

$g_{11} = \sqrt{a_{11}}, \qquad g_{1j} = \frac{a_{1j}}{g_{11}},$

$g_{ii} = \sqrt{a_{ii} - \sum_{k=1}^{i-1} g_{ki}^2}, \qquad g_{ij} = \frac{a_{ij} - \sum_{k=1}^{i-1} g_{ki} g_{kj}}{g_{ii}} \quad (j > i),$

giving $G$ from $A$. Now solve $G^T y = b$ for $y$ in terms of the $g_{ij}$s and $b$ by forward substitution,

$y_1 = \frac{b_1}{g_{11}}, \qquad y_i = \frac{b_i - \sum_{k=1}^{i-1} g_{ki} y_k}{g_{ii}}.$

Finally, find $x$ from the $y_i$s and $G$ by back substitution,

$x_p = \frac{y_p}{g_{pp}}, \qquad x_i = \frac{y_i - \sum_{j=i+1}^{p} g_{ij} x_j}{g_{ii}},$

giving the desired solution $x$.
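The steps above — factor $A = G^T G$, then forward- and back-substitute — can be sketched in plain Python (the function name and the nested-list representation are my own choices; this assumes $A$ is symmetric positive definite so the square roots exist):

```python
import math

def cholesky_solve(A, b):
    # factor A = G^T G with G upper triangular, then solve G^T y = b, G x = y
    n = len(A)
    G = [[0.0] * n for _ in range(n)]
    for i in range(n):
        G[i][i] = math.sqrt(A[i][i] - sum(G[k][i] ** 2 for k in range(i)))
        for j in range(i + 1, n):
            G[i][j] = (A[i][j] - sum(G[k][i] * G[k][j] for k in range(i))) / G[i][i]
    # forward substitution on G^T y = b (G^T is lower triangular)
    y = [0.0] * n
    for i in range(n):
        y[i] = (b[i] - sum(G[k][i] * y[k] for k in range(i))) / G[i][i]
    # back substitution on G x = y
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (y[i] - sum(G[i][j] * x[j] for j in range(i + 1, n))) / G[i][i]
    return x
```

For $A = \begin{pmatrix} 4 & 2 \\ 2 & 3 \end{pmatrix}$ and $b = (2, 3)^T$, the solver returns $x = (0, 1)^T$.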
The inverse of a square matrix $A$, sometimes called a reciprocal matrix, is a matrix $A^{-1}$ such that

$A A^{-1} = I,$   (1)

where $I$ is the identity matrix. Courant and Hilbert (1989, p. 10) use the notation $\check{A}$ to denote the inverse matrix. A square matrix $A$ has an inverse iff the determinant $|A| \ne 0$ (Lipschutz 1991, p. 45). The so-called invertible matrix theorem is a major result in linear algebra which associates the existence of a matrix inverse with a number of other equivalent properties. A matrix possessing an inverse is called nonsingular, or invertible. The matrix inverse of a square matrix m may be taken in the Wolfram Language using the function Inverse[m]. For a $2 \times 2$ matrix

$A = \begin{pmatrix} a & b \\ c & d \end{pmatrix},$   (2)

the matrix inverse is

$A^{-1} = \frac{1}{|A|} \begin{pmatrix} d & -b \\ -c & a \end{pmatrix}$   (3)
$= \frac{1}{a d - b c} \begin{pmatrix} d & -b \\ -c & a \end{pmatrix}.$   (4)

For a $3 \times 3$ matrix

$A = \begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix},$   (5)

the matrix inverse is

$A^{-1} = \frac{1}{|A|} \begin{pmatrix} a_{22} a_{33} - a_{23} a_{32} & a_{13} a_{32} - a_{12} a_{33} & a_{12} a_{23} - a_{13} a_{22} \\ a_{23} a_{31} - a_{21} a_{33} & a_{11} a_{33} - a_{13} a_{31} & a_{13} a_{21} - a_{11} a_{23} \\ a_{21} a_{32} - a_{22} a_{31} & a_{12} a_{31} - a_{11} a_{32} & a_{11} a_{22} - a_{12} a_{21} \end{pmatrix}.$   (6)

A general $n \times n$ matrix can be inverted using methods such as the Gauss-Jordan elimination, Gaussian elimination, or LU decomposition. The inverse of a product $AB$ of matrices $A$ and $B$ can be expressed in terms of $A^{-1}$ and $B^{-1}$. Let

$C = AB.$   (7)

Then

$B = A^{-1} A B = A^{-1} C$   (8)

and

$A = A B B^{-1} = C B^{-1}.$   (9)

Therefore,

$C = A B = (C B^{-1})(A^{-1} C) = C B^{-1} A^{-1} C,$   (10)

so

$C B^{-1} A^{-1} = I,$   (11)

where..
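The $2 \times 2$ cofactor formula translates into a few lines of plain Python (the helper name `inv2` and the nested-list representation are my own choices):

```python
def inv2(M):
    # inverse of a 2x2 matrix: (1/det) * [[d, -b], [-c, a]]
    (a, b), (c, d) = M
    det = a * d - b * c
    if det == 0:
        raise ValueError("matrix is singular")
    return [[d / det, -b / det], [-c / det, a / det]]
```

For example, `inv2([[1.0, 2.0], [3.0, 4.0]])` yields `[[-2.0, 1.0], [1.5, -0.5]]`, and multiplying back recovers the identity.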
The power series that defines the exponential map $e^x$ also defines a map between matrices. In particular,

$e^A \equiv \sum_{n=0}^{\infty} \frac{A^n}{n!}$   (1)
$= I + A + \frac{A^2}{2!} + \frac{A^3}{3!} + \cdots$   (2)

converges for any square matrix $A$, where $I$ is the identity matrix. The matrix exponential is implemented in the Wolfram Language as MatrixExp[m]. The Kronecker sum satisfies the nice property

$e^{A \oplus B} = e^A \otimes e^B$   (3)

(Horn and Johnson 1994, p. 208). Matrix exponentials are important in the solution of systems of ordinary differential equations (e.g., Bellman 1970). In some cases, it is a simple matter to express the matrix exponential. For example, when $D$ is a diagonal matrix, exponentiation can be performed simply by exponentiating each of the diagonal elements. For example, given a diagonal matrix

$D = \begin{pmatrix} a_1 & 0 \\ 0 & a_2 \end{pmatrix},$   (4)

the matrix exponential is given by

$e^D = \begin{pmatrix} e^{a_1} & 0 \\ 0 & e^{a_2} \end{pmatrix}.$   (5)

Since most matrices are diagonalizable, it is easiest to diagonalize the matrix before exponentiating it. When $A$ is a nilpotent matrix, the exponential is given by a matrix polynomial because some power of $A$ vanishes..
Two matrices $A$ and $B$ are said to be equal iff

$a_{ij} = b_{ij}$   (1)

for all indices $i$ and $j$. Therefore, two matrices are equal exactly when they have the same dimensions and agree in every entry, while matrices differing in even a single entry (or in their dimensions) are not equal.
The matrix direct sum of $n$ matrices constructs a block diagonal matrix from a set of square matrices, i.e.,

$\bigoplus_{i=1}^{n} A_i = \mathrm{diag}(A_1, A_2, \ldots, A_n)$   (1)
$= \begin{pmatrix} A_1 & & \\ & \ddots & \\ & & A_n \end{pmatrix}.$   (2)
Denote the sum of two matrices $A$ and $B$ (of the same dimensions) by $C = A + B$. The sum is defined by adding entries with the same indices,

$c_{ij} \equiv a_{ij} + b_{ij},$

over all $i$ and $j$. For example,

$\begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix} + \begin{pmatrix} b_{11} & b_{12} \\ b_{21} & b_{22} \end{pmatrix} = \begin{pmatrix} a_{11} + b_{11} & a_{12} + b_{12} \\ a_{21} + b_{21} & a_{22} + b_{22} \end{pmatrix}.$

Matrix addition is therefore both commutative and associative.
The Kronecker sum is the matrix sum defined by

$A \oplus B = A \otimes I_n + I_m \otimes B,$   (1)

where $A$ and $B$ are square matrices of order $m$ and $n$, respectively, $I_k$ is the identity matrix of order $k$, and $\otimes$ denotes the Kronecker product. For example, the Kronecker sum of two $2 \times 2$ matrices $A$ and $B$ is given by

$A \oplus B = \begin{pmatrix} a_{11} + b_{11} & b_{12} & a_{12} & 0 \\ b_{21} & a_{11} + b_{22} & 0 & a_{12} \\ a_{21} & 0 & a_{22} + b_{11} & b_{12} \\ 0 & a_{21} & b_{21} & a_{22} + b_{22} \end{pmatrix}.$   (2)

The Kronecker sum satisfies the nice property

$\exp(A \oplus B) = \exp(A) \otimes \exp(B),$   (3)

where $\exp(A)$ denotes a matrix exponential.
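The definition $A \oplus B = A \otimes I_n + I_m \otimes B$ can be sketched in plain Python (the helper names `kron`, `identity`, and `kron_sum` are my own choices, as is the nested-list representation):

```python
def kron(A, B):
    # Kronecker product: row (i,k), column (j,l) entry is A[i][j] * B[k][l]
    return [[A[i][j] * B[k][l]
             for j in range(len(A[0])) for l in range(len(B[0]))]
            for i in range(len(A)) for k in range(len(B))]

def identity(n):
    return [[float(i == j) for j in range(n)] for i in range(n)]

def kron_sum(A, B):
    # A (m x m) and B (n x n) give an (mn x mn) result
    m, n = len(A), len(B)
    K1, K2 = kron(A, identity(n)), kron(identity(m), B)
    return [[K1[i][j] + K2[i][j] for j in range(m * n)] for i in range(m * n)]
```

For diagonal $A = \mathrm{diag}(1, 2)$ and $B = \mathrm{diag}(3, 4)$, the Kronecker sum is diagonal with entries $a_i + b_j$, i.e. $(4, 5, 5, 6)$ — consistent with the fact that the eigenvalues of $A \oplus B$ are all sums of eigenvalues of $A$ and $B$.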
Every complex matrix $A$ can be broken into a Hermitian part

$A_H = \tfrac{1}{2}(A + A^H)$

(i.e., $A_H$ is a Hermitian matrix) and an antihermitian part

$A_{AH} = \tfrac{1}{2}(A - A^H)$

(i.e., $A_{AH}$ is an antihermitian matrix). Here, $A^H$ denotes the conjugate transpose.
Let $\|x\|$ be a vector norm of a vector $x$ such that

$\|A\| = \max_{\|x\| = 1} \|A x\|.$

Then $\|A\|$ is a matrix norm which is said to be the natural norm induced by (or subordinate to) the vector norm $\|x\|$. For any natural norm,

$\|I\| = 1,$

where $I$ is the identity matrix. The natural matrix norms induced by the L1-norm, L2-norm, and L-infty-norm are called the maximum absolute column sum norm, spectral norm, and maximum absolute row sum norm, respectively.
The natural norm induced by the L-infty-norm is called the maximum absolute row sum norm and is defined by

$\|A\|_\infty = \max_i \sum_{j=1}^{n} |a_{ij}|$

for a matrix $A$. This matrix norm is implemented as Norm[m, Infinity].
The natural norm induced by the L1-norm is called the maximum absolute column sum norm and is defined by

$\|A\|_1 = \max_j \sum_{i=1}^{n} |a_{ij}|$

for a matrix $A$. This matrix norm is implemented as MatrixNorm[m, 1] in the Wolfram Language package MatrixManipulation` .
Given a square complex or real matrix $A$, a matrix norm $\|A\|$ is a nonnegative number associated with $A$ having the properties 1. $\|A\| > 0$ when $A \ne 0$ and $\|A\| = 0$ iff $A = 0$, 2. $\|kA\| = |k| \, \|A\|$ for any scalar $k$, 3. $\|A + B\| \le \|A\| + \|B\|$, 4. $\|A B\| \le \|A\| \, \|B\|$. Let $\lambda_1$, ..., $\lambda_n$ be the eigenvalues of $A$, then

$\frac{1}{\|A^{-1}\|} \le |\lambda| \le \|A\|.$   (1)

The matrix $p$-norm is defined for a real number $p \ge 1$ and a matrix $A$ by

$\|A\|_p = \max_{\|x\|_p = 1} \|A x\|_p,$   (2)

where $\|x\|_p$ is a vector norm. The task of computing a matrix $p$-norm is difficult for $p \ne 1, 2, \infty$ since it is a nonlinear optimization problem with constraints. Matrix norms are implemented as Norm[m, p], where p may be 1, 2, Infinity, or "Frobenius". The maximum absolute column sum norm $\|A\|_1$ is defined as

$\|A\|_1 = \max_j \sum_i |a_{ij}|.$   (3)

The spectral norm $\|A\|_2$, which is the square root of the maximum eigenvalue of $A^H A$ (where $A^H$ is the conjugate transpose),

$\|A\|_2 = \left( \text{maximum eigenvalue of } A^H A \right)^{1/2},$   (4)

is often referred to as "the" matrix norm. The maximum absolute row sum norm is defined by

$\|A\|_\infty = \max_i \sum_j |a_{ij}|.$   (5)

$\|A\|_1$, $\|A\|_2$, and $\|A\|_\infty$ satisfy the inequality

$\|A\|_2^2 \le \|A\|_1 \|A\|_\infty.$   (6)..
The Frobenius norm, sometimes also called the Euclidean norm (a term unfortunately also used for the vector $L^2$-norm), is a matrix norm of an $m \times n$ matrix $A$ defined as the square root of the sum of the absolute squares of its elements,

$\|A\|_F = \sqrt{\sum_{i=1}^{m} \sum_{j=1}^{n} |a_{ij}|^2}$

(Golub and van Loan 1996, p. 55). The Frobenius norm can also be considered as a vector norm. It is also equal to the square root of the matrix trace of $A A^H$, where $A^H$ is the conjugate transpose, i.e.,

$\|A\|_F = \sqrt{\mathrm{Tr}(A A^H)}.$

The Frobenius norm of a matrix m is implemented as Norm[m, "Frobenius"] and of a vector v as Norm[v, "Frobenius"].
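The three entrywise-computable norms discussed here can be sketched in a few lines of plain Python (the function names are my own choices; entries are assumed real for simplicity):

```python
import math

def frobenius_norm(A):
    # square root of the sum of absolute squares of the entries
    return math.sqrt(sum(abs(x) ** 2 for row in A for x in row))

def max_row_sum_norm(A):
    # induced by the L-infinity vector norm
    return max(sum(abs(x) for x in row) for row in A)

def max_col_sum_norm(A):
    # induced by the L1 vector norm
    return max(sum(abs(row[j]) for row in A) for j in range(len(A[0])))
```

For $A = \begin{pmatrix} 1 & -2 \\ 3 & 4 \end{pmatrix}$ these give $\sqrt{30}$, $7$, and $6$, respectively.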
Let $\|A\|$ be the matrix norm associated with the matrix $A$ and $\|x\|$ be the vector norm associated with a vector $x$. Let the product $A x$ be defined, then $\|A\|$ and $\|x\|$ are said to be compatible if

$\|A x\| \le \|A\| \, \|x\|.$
If $A$ is an $n \times n$ square matrix and $\lambda$ is an eigenvalue of $A$, then the union of the zero vector $\mathbf{0}$ and the set of all eigenvectors corresponding to the eigenvalue $\lambda$ is a subspace of $\mathbb{R}^n$ known as the eigenspace of $\lambda$.
Let $A$ be an $n \times n$ matrix with complex (or real) entries and eigenvalues $\lambda_1$, $\lambda_2$, ..., $\lambda_n$, then

$\mathrm{Tr}(A) = \sum_{i=1}^{n} \lambda_i,$   (1)
$\det(A) = \prod_{i=1}^{n} \lambda_i,$   (2)

and the eigenvalues of the conjugate transpose $A^H$ are $\bar{\lambda}_1$, ..., $\bar{\lambda}_n$,   (3)

where $\bar{z}$ is the complex conjugate.
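The trace and determinant identities are easy to verify for a $2 \times 2$ matrix, whose eigenvalues follow from the quadratic characteristic polynomial. A minimal sketch in plain Python (the helper name `eig2` is my own choice; real eigenvalues are assumed so the discriminant is nonnegative):

```python
import math

def eig2(M):
    # eigenvalues of a real 2x2 matrix from lambda^2 - tr*lambda + det = 0
    (a, b), (c, d) = M
    tr, det = a + d, a * d - b * c
    disc = math.sqrt(tr * tr - 4 * det)  # assumes real eigenvalues
    return ((tr + disc) / 2, (tr - disc) / 2)

l1, l2 = eig2([[2.0, 1.0], [1.0, 2.0]])  # eigenvalues 3 and 1
# l1 + l2 equals the trace (4); l1 * l2 equals the determinant (3)
```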
A necessary and sufficient condition for all the eigenvalues of a real $n \times n$ matrix $A$ to have negative real parts is that the equation

$A^T V + V A = -I$

has as a solution an $n \times n$ matrix $V$ for which $(x, V x)$ is a positive definite quadratic form.
A left eigenvector is defined as a row vector $X_L$ satisfying

$X_L A = \lambda_L X_L.$

In many common applications, only right eigenvectors (and not left eigenvectors) need be considered. Hence the unqualified term "eigenvector" can be understood to refer to a right eigenvector.
If is an arbitrary set of positive numbers, then all eigenvalues of the matrix lie on the disk , where
The Gershgorin circle theorem (where "Gershgorin" is sometimes also spelled "Gersgorin" or "Gerschgorin") identifies a region in the complex plane that contains all the eigenvalues of a complex square matrix. For an $n \times n$ matrix $A$, define

$R_i = \sum_{j \ne i} |a_{ij}|.$   (1)

Then each eigenvalue of $A$ is in at least one of the disks

$\{ z : |z - a_{ii}| \le R_i \}.$   (2)

The theorem can be made stronger as follows. Let $r$ be an integer with $1 \le r \le n$, and let $R_i^{(r)}$ be the sum of the magnitudes of the $r$ largest off-diagonal elements in column $i$. Then each eigenvalue of $A$ is either in one of the disks

$\{ z : |z - a_{jj}| \le R_j^{(r-1)} \}$   (3)

or in one of the regions

$\left\{ z : \sum_{i \in P} |z - a_{ii}| \le \sum_{i \in P} R_i^{(r)} \right\},$   (4)

where $P$ is any subset of $\{1, 2, \ldots, n\}$ such that $|P| = r$ (Brualdi and Mellendorf 1994).
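Computing the Gershgorin disks requires only row sums of absolute values. A minimal sketch in plain Python (the function name is my own choice):

```python
def gershgorin_disks(A):
    # one (center, radius) pair per row: center a_ii, radius sum of |a_ij|, j != i
    n = len(A)
    return [(A[i][i], sum(abs(A[i][j]) for j in range(n) if j != i))
            for i in range(n)]
```

For the symmetric matrix $\begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix}$, both disks are $|z - 2| \le 1$, and indeed the eigenvalues $1$ and $3$ lie on their boundary.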
Let the characteristic polynomial of an complex matrix be written in the form(1)(2)Then for any set of positive numbers with and(3)all the eigenvalues (for , ..., ) lie on the closed disk in the complex plane.
A procedure for decomposing an $n \times n$ matrix $A$ into a product of a lower triangular matrix $L$ and an upper triangular matrix $U$,

$L U = A.$   (1)

LU decomposition is implemented in the Wolfram Language as LUDecomposition[m]. Written explicitly for a $3 \times 3$ matrix, the decomposition is

$\begin{pmatrix} l_{11} & 0 & 0 \\ l_{21} & l_{22} & 0 \\ l_{31} & l_{32} & l_{33} \end{pmatrix} \begin{pmatrix} u_{11} & u_{12} & u_{13} \\ 0 & u_{22} & u_{23} \\ 0 & 0 & u_{33} \end{pmatrix} = \begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix}.$   (2)

This gives three types of equations:

$i < j$: $l_{i1} u_{1j} + l_{i2} u_{2j} + \cdots + l_{ii} u_{ij} = a_{ij},$   (3)
$i = j$: $l_{i1} u_{1j} + l_{i2} u_{2j} + \cdots + l_{ii} u_{jj} = a_{ij},$   (4)
$i > j$: $l_{i1} u_{1j} + l_{i2} u_{2j} + \cdots + l_{ij} u_{jj} = a_{ij}.$   (5)

This gives $n^2$ equations for $n^2 + n$ unknowns (the decomposition is not unique), and can be solved using Crout's method. To solve the matrix equation

$A x = (L U) x = L (U x) = b,$   (6)

first solve $L y = b$ for $y$. This can be done by forward substitution

$y_1 = \frac{b_1}{l_{11}},$   (7)
$y_i = \frac{1}{l_{ii}} \left( b_i - \sum_{j=1}^{i-1} l_{ij} y_j \right)$   (8)

for $i = 2$, ..., $n$. Then solve $U x = y$ for $x$. This can be done by back substitution

$x_n = \frac{y_n}{u_{nn}},$   (9)
$x_i = \frac{1}{u_{ii}} \left( y_i - \sum_{j=i+1}^{n} u_{ij} x_j \right)$   (10)

for $i = n-1$, ..., 1.
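The factor-then-substitute procedure can be sketched in plain Python. This sketch uses the Doolittle convention (unit diagonal on $L$), one common way of resolving the non-uniqueness noted above; it omits pivoting, so it assumes no zero pivots are encountered:

```python
def lu_solve(A, b):
    # Doolittle LU decomposition (L has unit diagonal), no pivoting
    n = len(A)
    L = [[float(i == j) for j in range(n)] for i in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i, n):
            U[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(i))
        for j in range(i + 1, n):
            L[j][i] = (A[j][i] - sum(L[j][k] * U[k][i] for k in range(i))) / U[i][i]
    # forward substitution: L y = b
    y = [0.0] * n
    for i in range(n):
        y[i] = b[i] - sum(L[i][k] * y[k] for k in range(i))
    # back substitution: U x = y
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (y[i] - sum(U[i][k] * x[k] for k in range(i + 1, n))) / U[i][i]
    return x
```

Solving $\begin{pmatrix} 2 & 1 \\ 1 & 3 \end{pmatrix} x = \begin{pmatrix} 3 \\ 4 \end{pmatrix}$ returns $x = (1, 1)^T$.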
The Jordan matrix decomposition is the decomposition of a square matrix $M$ into the form

$M = S J S^{-1},$   (1)

where $M$ and $J$ are similar matrices, $J$ is a matrix of Jordan canonical form, and $S^{-1}$ is the matrix inverse of $S$. In other words, $M$ is a similarity transformation of a matrix $J$ in Jordan canonical form. The proof that any square matrix can be brought into Jordan canonical form is rather complicated (Turnbull and Aitken 1932; Faddeeva 1958, p. 49; Halmos 1958, p. 112). Jordan decomposition is also associated with the matrix equation $A X = X B$ and the special case $A X = X A$. The Jordan matrix decomposition is implemented in the Wolfram Language as JordanDecomposition[m], and returns a list {s, j}. Note that the Wolfram Language takes the Jordan block in the Jordan canonical form to have 1s along the superdiagonal instead of the subdiagonal. For example, a Jordan decomposition of..
Eigenvectors are a special set of vectors associated with a linear system of equations (i.e., a matrix equation) that are sometimes also known as characteristic vectors, proper vectors, or latent vectors (Marcus and Minc 1988, p. 144).The determination of the eigenvectors and eigenvalues of a system is extremely important in physics and engineering, where it is equivalent to matrix diagonalization and arises in such common applications as stability analysis, the physics of rotating bodies, and small oscillations of vibrating systems, to name only a few. Each eigenvector is paired with a corresponding so-called eigenvalue. Mathematically, two different kinds of eigenvectors need to be distinguished: left eigenvectors and right eigenvectors. However, for many problems in physics and engineering, it is sufficient to consider only right eigenvectors. The term "eigenvector" used without qualification in such applications..
Any square matrix has a canonical form without any need to extend the field of its coefficients. For instance, if the entries of are rational numbers, then so are the entries of its rational canonical form. (The Jordan canonical form may require complex numbers.) There exists a nonsingular matrix such that(1)called the rational canonical form, where is the companion matrix for the monic polynomial(2)The polynomials are called the "invariant factors" of , and satisfy for , ..., (Hartwig 1996). The polynomial is the matrix minimal polynomial and the product is the characteristic polynomial of .The rational canonical form is unique, and shows the extent to which the minimal polynomial characterizes a matrix. For example, there is only one matrix whose matrix minimal polynomial is , which is(3)in rational canonical form.Given a linear transformation , the vector space becomes a -module, that is a module over the ring of polynomials..
Eigenvalues are a special set of scalars associated with a linear system of equations (i.e., a matrix equation) that are sometimes also known as characteristic roots, characteristic values (Hoffman and Kunze 1971), proper values, or latent roots (Marcus and Minc 1988, p. 144).The determination of the eigenvalues and eigenvectors of a system is extremely important in physics and engineering, where it is equivalent to matrix diagonalization and arises in such common applications as stability analysis, the physics of rotating bodies, and small oscillations of vibrating systems, to name only a few. Each eigenvalue is paired with a corresponding so-called eigenvector (or, in general, a corresponding right eigenvector and a corresponding left eigenvector; there is no analogous distinction between left and right for eigenvalues).The decomposition of a square matrix into eigenvalues and eigenvectors is known in this work as eigen..
A Jordan block, also called a canonical box matrix, is a matrix having zeros everywhere except along the diagonal and superdiagonal, with each element of the diagonal consisting of a single number $\lambda$, and each element of the superdiagonal consisting of a 1. For example,

$\begin{pmatrix} \lambda & 1 & 0 & \cdots & 0 \\ 0 & \lambda & 1 & \cdots & 0 \\ \vdots & & \ddots & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda & 1 \\ 0 & 0 & \cdots & 0 & \lambda \end{pmatrix}$

(Ayres 1962, p. 206). Note that the degenerate case of a $1 \times 1$ matrix is considered a Jordan block even though it lacks a superdiagonal to be filled with 1s (Strang 1988, p. 454). A Jordan canonical form consists of one or more Jordan blocks. The convention that 1s be along the subdiagonal instead of the superdiagonal is sometimes adopted instead (Faddeeva 1958, p. 50).
Let $P$ be a matrix of eigenvectors of a given square matrix $A$ and $D$ be a diagonal matrix with the corresponding eigenvalues on the diagonal. Then, as long as $P$ is a square matrix, $A$ can be written as an eigen decomposition

$A = P D P^{-1},$

where $D$ is a diagonal matrix. Furthermore, if $A$ is symmetric, then the columns of $P$ are orthogonal vectors. If $P$ is not a square matrix (for example, the space of eigenvectors of $A$ is one-dimensional), then $P$ cannot have a matrix inverse and $A$ does not have an eigen decomposition. However, if $A$ is $m \times n$ (with $m > n$), then $A$ can be written using a so-called singular value decomposition.
Given a matrix , a Jordan basis satisfiesandand provides the means by which any complex matrix can be written in Jordan canonical form.
The matrix decomposition of a square matrix into so-called eigenvalues and eigenvectors is an extremely important one. This decomposition generally goes under the name "matrix diagonalization." However, this moniker is less than optimal, since the process being described is really the decomposition of a matrix into a product of three other matrices, only one of which is diagonal, and also because all other standard types of matrix decomposition use the term "decomposition" in their names, e.g., Cholesky decomposition, Hessenberg decomposition, and so on. As a result, the decomposition of a matrix into matrices composed of its eigenvectors and eigenvalues is called eigen decomposition in this work.Assume has nondegenerate eigenvalues and corresponding linearly independent eigenvectors which can be denoted(1)Define the matrices composed of eigenvectors(2)(3)and eigenvalues(4)where is a diagonal matrix...
Matrix diagonalization is the process of taking a square matrix and converting it into a special type of matrix--a so-called diagonal matrix--that shares the same fundamental properties of the underlying matrix. Matrix diagonalization is equivalent to transforming the underlying system of equations into a special set of coordinate axes in which the matrix takes this canonical form. Diagonalizing a matrix is also equivalent to finding the matrix's eigenvalues, which turn out to be precisely the entries of the diagonalized matrix. Similarly, the eigenvectors make up the new set of axes corresponding to the diagonal matrix.The remarkable relationship between a diagonalized matrix, eigenvalues, and eigenvectors follows from the beautiful mathematical identity (the eigen decomposition) that a square matrix can be decomposed into the very special form(1)where is a matrix composed of the eigenvectors of , is the diagonal matrix constructed..
Let be a matrix with the elementary divisors of its characteristic matrix expressed as powers of its irreducible polynomials in the field , and consider an elementary divisor . If , thenwhere is a matrix of the same order as having the element 1 in the lower left-hand corner and zeros everywhere else.Ayres, F. Jr. Schaum's Outline of Theory and Problems of Matrices. New York: Schaum, pp. 205-206, 1962.
A zero matrix is an $m \times n$ matrix consisting of all 0s (MacDuffee 1943, p. 27), denoted $\mathbf{0}$. Zero matrices are sometimes also known as null matrices (Akivis and Goldberg 1972, p. 71). A zero matrix is the additive identity of the additive group of $m \times n$ matrices. The matrix exponential of $\mathbf{0}$ is given by the identity matrix $I$. An $m \times n$ zero matrix can be generated in the Wolfram Language as ConstantArray[0, {m, n}].
A unit matrix is an integer matrix consisting of all 1s. The $m \times n$ unit matrix is often denoted $J_{mn}$, or $J_n$ if $m = n$. Square unit matrices $J_n$ have determinant 0 for $n \ge 2$. An $m \times n$ unit matrix can be generated in the Wolfram Language as ConstantArray[1, {m, n}]. Let $R$ be a commutative ring with a multiplicative identity. Then the term "unit matrix" is also used to refer to an $n \times n$ square matrix $A$ with entries in $R$ for which there exists an $n \times n$ square matrix $B$ such that

$A B = B A = I,$

with $I$ the identity matrix (MacDuffee 1943, p. 27; Marcus and Minc 1988, p. 69; Marcus and Minc 1992, p. 42). The term "unit matrix" is sometimes also used as a synonym for identity matrix (Akivis and Goldberg 1972, p. 71).
An interspersion array given bythe first row of which is the Fibonacci numbers.
An integer matrix whose entries satisfy(1)There are special minimal matrices of size .
An array , of positive integers is called an interspersion if 1. The rows of comprise a partition of the positive integers, 2. Every row of is an increasing sequence, 3. Every column of is a (possibly finite) increasing sequence, 4. If and are distinct rows of and if and are any indices for which , then . If an array is an interspersion, then it is a sequence dispersion. If an array is an interspersion, then the sequence given by for some is a fractal sequence. Examples of interspersion are the Stolarsky array and Wythoff array.
A matrix whose entries are all integers. Special cases which arise frequently are those having only $\pm 1$ as entries (e.g., Hadamard matrix), (0,1)-matrices having only 0 and 1 as entries (e.g., adjacency matrix, Frobenius-König theorem, Gale-Ryser theorem, Hadamard's maximum determinant problem, hard square entropy constant, identity matrix, incidence matrix, Lam's problem), and those having $-1$, 0, and 1 as entries (e.g., alternating sign matrix, C-matrix). The zero matrix could be considered a degenerate case of an integer matrix. Integer matrices are sometimes also called "integral matrices," although this usage should be deprecated due to its confusing use of the term "integral" as an adjective.
Proved in 1933. If is an odd prime or and is any positive integer, then there is a Hadamard matrix of orderwhere is any positive integer such that . If is of this form, the matrix can be constructed with a Paley construction. If is divisible by 4 but not of the form (1), the Paley class is undefined. However, Hadamard matrices have been shown to exist for all for .
Hadamard matrices can be constructed using finite field GF() when and is odd. Pick a representation relatively prime to . Then by coloring white (where is the floor function) distinct equally spaced residues mod (, , , ...; , , , ...; etc.) in addition to 0, a Hadamard matrix is obtained if the powers of (mod ) run through . For example,(1)is of this form with and . Since , we are dealing with GF(11), so pick and compute its residues (mod 11), which are(2)(3)(4)(5)(6)(7)(8)(9)(10)(11)(12)Picking the first residues and adding 0 gives: 0, 1, 2, 4, 5, 8, which should then be colored in the matrix obtained by writing out the residues increasing to the left and up along the border (0 through , followed by ), then adding horizontal and vertical coordinates to get the residue to place in each square. (13)To construct , consider the representations . Only the first form can be used, with and . We therefore use GF(19), and color 9 residues plus 0 white.Now consider a more..
The Paley class of a positive integer is defined as the set of all possible quadruples where(1) is an odd prime, and(2)
A finite simple group of Lie-type. The following table summarizes the types of twisted Chevalley groups and their respective orders. In the table, $q$ denotes a prime power and the superscript denotes the order of the twisting automorphism.
Let $N$ be a nilpotent, connected, simply connected Lie group, and let $D$ be a discrete subgroup of $N$ with compact right quotient space $N/D$. Then $N/D$ is called a nilmanifold.
The set of left cosets of a subgroup $H$ of a topological group $G$ forms a topological space $G/H$. Its topology is defined by the quotient topology from $G$. Namely, the open sets in $G/H$ are the images of the open sets in $G$. Moreover, if $H$ is closed, then $G/H$ is a T2-space.
A solvable Lie group is a Lie group $G$ which is connected and whose Lie algebra $\mathfrak{g}$ is a solvable Lie algebra. That is, the Lie algebra commutator series

$\mathfrak{g}^{(1)} = [\mathfrak{g}, \mathfrak{g}], \quad \mathfrak{g}^{(2)} = [\mathfrak{g}^{(1)}, \mathfrak{g}^{(1)}], \quad \ldots$   (1)

eventually vanishes, $\mathfrak{g}^{(k)} = 0$, for some $k$. Since nilpotent Lie algebras are also solvable, any nilpotent Lie group is a solvable Lie group. The basic example is the group of invertible upper triangular matrices with positive determinant, e.g.,

$\begin{pmatrix} a & b \\ 0 & c \end{pmatrix}$   (2)

such that $a c > 0$. The Lie algebra $\mathfrak{g}$ of $G$ is its tangent space at the identity matrix, which is the vector space of all upper triangular matrices, and it is a solvable Lie algebra. Its Lie algebra commutator series is given by

$\mathfrak{g}^{(1)} = \left\{ \begin{pmatrix} 0 & x \\ 0 & 0 \end{pmatrix} \right\}, \quad \mathfrak{g}^{(2)} = 0.$

Any real solvable Lie group is diffeomorphic to Euclidean space. For instance, the group of matrices in the example above is diffeomorphic to $\mathbb{R}^3$, via the Lie group exponential map. However, in general, the exponential map in a solvable Lie algebra need not be surjective..
The geometry of the Lie group semidirect product with , where acts on by .
A Lie group is a smooth manifold that obeys the group properties and satisfies the additional condition that the group operations are differentiable. This definition is related to the fifth of Hilbert's problems, which asks if the assumption of differentiability for functions defining a continuous transformation group can be avoided. The simplest examples of Lie groups are one-dimensional. Under addition, the real line is a Lie group. After picking a specific point to be the identity element, the circle is also a Lie group. Another point on the circle at angle $\theta$ from the identity then acts by rotating the circle by the angle $\theta$. In general, a Lie group may have a more complicated group structure, such as the orthogonal group $O(n)$ (i.e., the $n \times n$ orthogonal matrices), or the general linear group $GL(n)$ (i.e., the $n \times n$ invertible matrices). The Lorentz group is also a Lie group. The tangent space at the identity of a Lie group always has the structure of a Lie algebra, and this..
A Lie group is called semisimple if its Lie algebra is semisimple. For example, the special linear group $SL(n)$ and special orthogonal group $SO(n)$ (over $\mathbb{R}$ or $\mathbb{C}$) are semisimple, whereas triangular groups are not.
A nilpotent Lie group is a Lie group $G$ which is connected and whose Lie algebra is a nilpotent Lie algebra $\mathfrak{g}$. That is, its Lie algebra lower central series

$\mathfrak{g}_1 = [\mathfrak{g}, \mathfrak{g}], \quad \mathfrak{g}_2 = [\mathfrak{g}, \mathfrak{g}_1], \quad \ldots$   (1)

eventually vanishes, $\mathfrak{g}_k = 0$, for some $k$. So a nilpotent Lie group is a special case of a solvable Lie group. The basic example is the group of upper triangular matrices with 1s on their diagonals, e.g.,

$\begin{pmatrix} 1 & a & b \\ 0 & 1 & c \\ 0 & 0 & 1 \end{pmatrix},$   (2)

which is called the Heisenberg group. Its Lie algebra lower central series is given by

$\mathfrak{g} = \left\{ \begin{pmatrix} 0 & x & y \\ 0 & 0 & z \\ 0 & 0 & 0 \end{pmatrix} \right\}, \quad \mathfrak{g}_1 = \left\{ \begin{pmatrix} 0 & 0 & y \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix} \right\}, \quad \mathfrak{g}_2 = 0.$

Any real nilpotent Lie group is diffeomorphic to Euclidean space. For instance, the group of matrices in the example above is diffeomorphic to $\mathbb{R}^3$, via the Lie group exponential map. In general, the exponential map of a nilpotent Lie algebra is surjective, in contrast to the more general solvable Lie group.
Let $D$ be the $n \times n$ matrix whose $(i, j)$th entry is 1 if $j$ divides $i$ and 0 otherwise, let $\Delta$ be the $n \times n$ diagonal matrix $\mathrm{diag}(\phi(1), \phi(2), \ldots, \phi(n))$, where $\phi(n)$ is the totient function, and let $G$ be the $n \times n$ matrix whose $(i, j)$th entry is the greatest common divisor $\gcd(i, j)$. Then Le Paige's theorem states that

$G = D \Delta D^T,$

where $D^T$ denotes the transpose (Le Paige 1878, Johnson 2003). As a corollary,

$\det G = \phi(1) \phi(2) \cdots \phi(n)$

(Smith 1876, Johnson 2003). For $n = 1$, 2, ... the first few values are 1, 1, 2, 4, 16, 32, 192, 768, ... (OEIS A001088).
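Smith's corollary — the determinant of the GCD matrix equals the product of the first $n$ totients — is easy to check numerically. A minimal sketch in plain Python (the brute-force totient and the Gaussian-elimination determinant are my own helper choices):

```python
from math import gcd

def totient(n):
    # brute-force Euler phi: count of 1 <= k <= n coprime to n
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

def det(M):
    # determinant via Gaussian elimination with partial pivoting
    A = [row[:] for row in M]
    n, d = len(A), 1.0
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(A[r][i]))
        if A[p][i] == 0:
            return 0.0
        if p != i:
            A[i], A[p] = A[p], A[i]
            d = -d
        d *= A[i][i]
        for r in range(i + 1, n):
            f = A[r][i] / A[i][i]
            for c in range(i, n):
                A[r][c] -= f * A[i][c]
    return d

n = 6
G = [[float(gcd(i, j)) for j in range(1, n + 1)] for i in range(1, n + 1)]
# det G should equal phi(1)*phi(2)*...*phi(6) = 1*1*2*2*4*2 = 32
```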
A Lie algebra $\mathfrak{g}$ is nilpotent when its Lie algebra lower central series $\mathfrak{g}_k$ vanishes for some $k$. Any nilpotent Lie algebra is also solvable. The basic example of a nilpotent Lie algebra is the vector space of strictly upper triangular matrices, such as the Lie algebra of the Heisenberg group.
The roots of a semisimple Lie algebra $\mathfrak{g}$ are the Lie algebra weights occurring in its adjoint representation. The set of roots forms the root system, and is completely determined by $\mathfrak{g}$. It is possible to choose a set of Lie algebra positive roots: for every root $\alpha$, either $\alpha$ is positive or $-\alpha$ is positive. The Lie algebra simple roots are the positive roots which cannot be written as a sum of positive roots. The simple roots can be considered as a linearly independent finite subset of Euclidean space, and they generate the root lattice. For example, in the special Lie algebra $\mathfrak{sl}_2$ of two by two matrices with zero matrix trace, $\mathfrak{g}$ has a basis given by the matrices

$H = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}, \quad X = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}, \quad Y = \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}.$   (1)

The adjoint representation is given by the brackets

$[H, X] = 2X$   (2)
$[H, Y] = -2Y,$   (3)

so there are two roots of $\mathfrak{sl}_2$ given by $\alpha$ and $-\alpha$, with $\alpha(H) = 2$. The Lie algebraic rank of $\mathfrak{sl}_2$ is one, and it has one positive root..
A Lie algebra $\mathfrak{g}$ is solvable when its Lie algebra commutator series, or derived series, $\mathfrak{g}^{(k)}$ vanishes for some $k$. Any nilpotent Lie algebra is solvable. The basic example is the vector space of upper triangular matrices, because each time two such matrices are bracketed, the nonzero entries of the result move farther from the diagonal.
A representation of a Lie algebra $\mathfrak{g}$ is a linear transformation

$\psi: \mathfrak{g} \to M(V),$

where $M(V)$ is the set of all linear transformations of a vector space $V$. In particular, if $V = \mathbb{R}^n$, then $M(V)$ is the set of $n \times n$ square matrices. The map $\psi$ is required to be a map of Lie algebras so that

$\psi([A, B]) = \psi(A) \psi(B) - \psi(B) \psi(A)$

for all $A, B \in \mathfrak{g}$. Note that the expression $A B$ only makes sense as a matrix product in a representation. For example, if $A$ and $B$ are antisymmetric matrices, then $A B - B A$ is antisymmetric, but $A B$ may not be. The possible irreducible representations of complex Lie algebras are determined by the classification of the semisimple Lie algebras. Any irreducible representation of a complex Lie algebra is the tensor product $V \otimes L$, where $V$ is an irreducible representation of the quotient of the algebra and its Lie algebra radical, and $L$ is a one-dimensional representation. A Lie algebra may be associated with a Lie group, in which case it reflects the local structure of the Lie group. Whenever a Lie group $G$ has a group representation on $V$, its tangent..
Let be a finite-dimensional Lie algebra over some field . A subalgebra of is called a Cartan subalgebra if it is nilpotent and equal to its normalizer, which is the set of those elements such that .It follows from the definition that if is nilpotent, then itself is a Cartan subalgebra of . On the other hand, let be the Lie algebra of all endomorphisms of (for some natural number ), with . Then the set of all endomorphisms of of the form is a Cartan subalgebra of .It can be proved that: 1. If is infinite, then has Cartan subalgebras. 2. If the characteristic of is equal to , then all Cartan subalgebras of have the same dimension. 3. If is algebraically closed and its characteristic is equal to 0, then, given two Cartan subalgebras and of , there is an automorphism of such that . 4. If is semisimple and is an infinite field whose characteristic is equal to 0, then all Cartan subalgebras of are Abelian. Every Cartan subalgebra of a Lie algebra is a maximal nilpotent subalgebra..
A Lie algebra is said to be simple if it is not Abelian and has no nonzero proper ideals.Over an algebraically closed field of field characteristic 0, every simple Lie algebra is constructed from a simple reduced root system by the Chevalley construction, as described by Humphreys (1977).Over an algebraically closed field of field characteristic , every simple Lie algebra is constructed from a simple reduced root system (as in the characteristic 0 case) or is a Cartan algebra.There also exist simple Lie algebras over algebraically closed fields of field characteristic 2, 3, and 5 that are not constructed from a simple reduced root system and are not Cartan algebras.
A Cartan matrix is a square integer matrix whose elements $A_{ij}$ satisfy the following conditions. 1. $A_{ij}$ is an integer, one of $\{-3, -2, -1, 0, 2\}$. 2. The diagonal entries are all 2: $A_{ii} = 2$. 3. $A_{ij} \le 0$ off of the diagonal. 4. $A_{ij} = 0$ iff $A_{ji} = 0$. 5. There exists a diagonal matrix $D$ such that $D A D^{-1}$ gives a symmetric and positive definite quadratic form. A Cartan matrix can be associated to a semisimple Lie algebra $\mathfrak{g}$. It is a $k \times k$ square matrix, where $k$ is the Lie algebra rank of $\mathfrak{g}$. The Lie algebra simple roots are the basis vectors, and $A_{ij}$ is determined by their inner product, using the Killing form:

$A_{ij} = \frac{2 (\alpha_i, \alpha_j)}{(\alpha_i, \alpha_i)}.$   (1)

In fact, it is more a table of values than a matrix. By reordering the basis vectors, one gets another Cartan matrix, but it is considered equivalent to the original Cartan matrix. The Lie algebra $\mathfrak{g}$ can be reconstructed, up to isomorphism, by the generators which satisfy the Chevalley-Serre relations. For example,

$\begin{pmatrix} 2 & -1 \\ -1 & 2 \end{pmatrix}$   (2)

is a Cartan matrix. The Lie algebra..
A Lie algebra $\mathfrak{g}$ over a field of characteristic zero is called semisimple if its Killing form is nondegenerate. The following properties can be proved equivalent for a finite-dimensional algebra $\mathfrak{g}$ over a field of characteristic 0: 1. $\mathfrak{g}$ is semisimple. 2. $\mathfrak{g}$ has no nonzero Abelian ideal. 3. $\mathfrak{g}$ has zero ideal radical (the radical is the biggest solvable ideal). 4. Every representation of $\mathfrak{g}$ is fully reducible, i.e., is a sum of irreducible representations. 5. $\mathfrak{g}$ is a (finite) direct product of simple Lie algebras (a Lie algebra is called simple if it is not Abelian and has no nonzero proper ideal).
Let $E$ be a Euclidean space, $(\beta, \alpha)$ be the dot product, and denote the reflection in the hyperplane orthogonal to $\alpha$ by

$r_\alpha(\beta) = \beta - \langle \beta, \alpha \rangle \alpha,$

where

$\langle \beta, \alpha \rangle = \frac{2 (\beta, \alpha)}{(\alpha, \alpha)}.$

Then a subset $R$ of the Euclidean space $E$ is called a root system in $E$ if: 1. $R$ is finite, spans $E$, and does not contain 0, 2. If $\alpha \in R$, the reflection $r_\alpha$ leaves $R$ invariant, and 3. If $\alpha, \beta \in R$, then $\langle \beta, \alpha \rangle \in \mathbb{Z}$. The Lie algebra roots of a semisimple Lie algebra are a root system, in a real subspace of the dual vector space to the Cartan subalgebra. In this case, the reflections $r_\alpha$ generate the Weyl group, which is the symmetry group of the root system.
The lower central series of a Lie algebra is the sequence of subalgebras recursively defined by(1)with . The sequence of subspaces is always decreasing with respect to inclusion or dimension, and becomes stable when is finite dimensional. The notation means the linear span of elements of the form , where and .When the lower central series ends in the zero subspace, the Lie algebra is called nilpotent. For example, consider the Lie algebra of strictly upper triangular matrices, then(2)(3)(4)(5)and . By definition, , where is the term in the Lie algebra commutator series, as can be seen by the example above.In contrast to the nilpotent Lie algebras, the semisimple Lie algebras have a constant lower central series. Others are in between, e.g.,(6)which is semisimple, because the matrix trace satisfies(7)Here, is a general linear Lie algebra and is the special linear Lie algebra...
The commutator series of a Lie algebra , sometimes called the derived series, is the sequence of subalgebras recursively defined by(1)with . The sequence of subspaces is always decreasing with respect to inclusion or dimension, and becomes stable when is finite dimensional. The notation means the linear span of elements of the form , where and . When the commutator series ends in the zero subspace, the Lie algebra is called solvable. For example, consider the Lie algebra of strictly upper triangular matrices, then(2)(3)(4)and . By definition, where is the term in the Lie algebra lower central series, as can be seen by the example above. In contrast to the solvable Lie algebras, the semisimple Lie algebras have a constant commutator series. Others are in between, e.g.,(5)which is semisimple, because the matrix trace satisfies(6)Here, is a general linear Lie algebra and is the special linear Lie algebra...
A Lie algebra is a vector space with a Lie bracket , satisfying the Jacobi identity. Hence any element gives a linear transformation given by(1)which is called the adjoint representation of . It is a Lie algebra representation because of the Jacobi identity,(2)(3)(4)A Lie algebra representation is given by matrices. The simplest Lie algebra is the set of matrices. Consider the adjoint representation of , which has four dimensions and so will be a four-dimensional representation. The matrices(5)(6)(7)(8)give a basis for . Using this basis, the adjoint representation is described by the following matrices:(9)(10)(11)(12)
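The homomorphism property that the Jacobi identity gives to the adjoint representation, ad([x, y]) = [ad x, ad y], can be verified numerically; the sketch below (basis ordering and helper names are ours) computes ad as a 4×4 matrix on the four-dimensional Lie algebra of all 2×2 matrices.

```python
def mat_mul(a, b):
    n, m, p = len(a), len(b), len(b[0])
    return [[sum(a[i][k]*b[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

def bracket(x, y):
    xy, yx = mat_mul(x, y), mat_mul(y, x)
    return [[xy[i][j] - yx[i][j] for j in range(len(x))] for i in range(len(x))]

# basis e1..e4 of the Lie algebra of all 2x2 matrices (elementary matrices)
BASIS = [[[1, 0], [0, 0]], [[0, 1], [0, 0]], [[0, 0], [1, 0]], [[0, 0], [0, 1]]]

def flatten(m):
    return [m[0][0], m[0][1], m[1][0], m[1][1]]

def ad(x):
    """4x4 matrix of ad(x) = [x, -] in the basis e1..e4 (columns are images)."""
    cols = [flatten(bracket(x, e)) for e in BASIS]
    return [[cols[j][i] for j in range(4)] for i in range(4)]

X = [[0, 1], [0, 0]]
Y = [[0, 0], [1, 0]]
# the Jacobi identity makes ad a Lie algebra representation: ad([X,Y]) = [ad X, ad Y]
lhs = ad(bracket(X, Y))
rhs = [[a - b for a, b in zip(r1, r2)]
       for r1, r2 in zip(mat_mul(ad(X), ad(Y)), mat_mul(ad(Y), ad(X)))]
print(lhs == rhs)   # True
```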
A reduced root system is a root system satisfying the additional property that, if , then the only multiples of in are .
A nonassociative algebra obeyed by objects such as the Lie bracket and Poisson bracket. Elements , , and of a Lie algebra satisfy(1)(2)and(3)(the Jacobi identity). The relation implies(4)For characteristic not equal to two, these two relations are equivalent. The binary operation of a Lie algebra is the bracket(5)An associative algebra with associative product can be made into a Lie algebra by the Lie product(6)Every Lie algebra is isomorphic to a subalgebra of some where the associative algebra may be taken to be the linear operators over a vector space (the Poincaré-Birkhoff-Witt theorem; Jacobson 1979, pp. 159-160). If is finite dimensional, then can be taken to be finite dimensional (Ado's theorem for characteristic ; Iwasawa's theorem for characteristic ). The classification of finite dimensional simple Lie algebras over an algebraically closed field of characteristic 0 can be accomplished by (1) determining matrices..
A single axiom that is satisfied only by NAND or NOR must be of the form "something equals ," since otherwise constant functions would satisfy the equation. With up to six NANDs and two variables, none of the possible axiom systems of this kind work even up to 3-value operators. But with six NANDs and three variables, 296 of the possible axiom systems work up to 3-value operators, and 100 work up to 4-value operators (Wolfram 2002, p. 809). Of the 25 of these that are not trivially equivalent, it then turns out that only the Wolfram axiom and the axiom, where denotes the NAND operator, are equivalent to the axioms of Boolean algebra (Wolfram 2002, pp. 808-811 and 1174). These candidate axioms were identified by S. Wolfram in 2000, who also proved that there were no smaller candidates...
Conditions arising in the study of the Robbins axiom and its connection with Boolean algebra. Winkler studied Boolean conditions (such as idempotence or existence of a zero) which would make a Robbins algebra become a Boolean algebra. Winkler showed that each of the conditionswhere denotes OR and denotes NOT, known as the first and second Winkler conditions, suffices. A computer proof demonstrated that every Robbins algebra satisfies the second Winkler condition, from which it follows immediately that all Robbins algebras are Boolean.
Consider a Boolean algebra of subsets generated by a set , which is the set of subsets of that can be obtained by means of a finite number of the set operations union, intersection, and complementation. Then each of the elements of is called a Boolean function generated by (Comtet 1974, p. 185). Each Boolean function has a unique representation (up to order) as a union of complete products. It follows that there are inequivalent Boolean functions for a set with cardinality (Comtet 1974, p. 187). In 1938, Shannon proved that a two-valued Boolean algebra (whose members are most commonly denoted 0 and 1, or false and true) can describe the operation of two-valued electrical switching circuits. The following table gives the truth table for the possible Boolean functions of two binary variables. [Truth table of the 16 Boolean functions of two binary variables not reproduced.] The names and symbols for these functions are given..
Given a vector space , its projectivization , sometimes written , is the set of equivalence classes for any in . For example, complex projective space has homogeneous coordinates , with not all . The projectivization is a manifold with one less dimension than . In fact, it is covered by the affine coordinate charts,
The Kähler potential is a real-valued function on a Kähler manifold for which the Kähler form can be written as . Here, the operators(1)and(2)are called the del and del bar operator, respectively.For example, in , the function is a Kähler potential for the standard Kähler form, because(3)(4)(5)(6)
A completely positive matrix is a real square matrix that can be factorized as , where stands for the transpose of and is any (not necessarily square) matrix with nonnegative elements. The least possible number of columns () of is called the factorization index (or the cp-rank) of . The study of complete positivity originated in inequality theory and quadratic forms (Diananda 1962, Hall and Newman 1963). There are two basic problems concerning complete positivity. 1. When is a given real matrix completely positive? 2. How can the cp-rank of be calculated? These two fundamental problems remain open. The applications of completely positive matrices can be found in block designs (Hall and Newman 1963) and economic modelling (Gray and Wilson 1980).
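The definition can be illustrated directly; in the following Python sketch the nonnegative factor B is an arbitrary choice of ours, and the product B Bᵀ is completely positive by construction, which forces two necessary consequences checked at the end (symmetry and entrywise nonnegativity, i.e., double nonnegativity).

```python
def transpose(b):
    return [list(r) for r in zip(*b)]

def mat_mul(a, b):
    return [[sum(x*y for x, y in zip(row, col)) for col in zip(*b)] for row in a]

# a nonnegative factor B (3x2), chosen arbitrarily for illustration
B = [[1, 2],
     [0, 3],
     [2, 1]]

A = mat_mul(B, transpose(B))   # A = B B^T is completely positive by construction
print(A)

# necessary consequences: A is symmetric and entrywise nonnegative
print(A == transpose(A))                        # True
print(all(x >= 0 for row in A for x in row))    # True
```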
Let be the number of (0,1)-matrices with no adjacent 1s (in either columns or rows). For , 2, ..., is given by 2, 7, 63, 1234, ... (OEIS A006506). The hard square entropy constant is defined by(OEIS A085850). It is not known if this constant has an exact representation. The quantity arises in statistical physics (Baxter et al. 1980, Pearce and Seaton 1988), and is known as the entropy per site of hard squares. A related constant known as the hard hexagon entropy constant can also be defined.
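The quoted counts can be checked by direct enumeration; the Python sketch below (the function name F is ours) brute-forces all n×n (0,1)-matrices and rejects those with a pair of horizontally or vertically adjacent 1s.

```python
from itertools import product

def F(n):
    """Number of n x n (0,1)-matrices with no two adjacent 1s
    (horizontally or vertically)."""
    count = 0
    for bits in product((0, 1), repeat=n*n):
        m = [bits[i*n:(i+1)*n] for i in range(n)]
        no_horiz = all(not (m[i][j] and m[i][j+1])
                       for i in range(n) for j in range(n-1))
        no_vert = all(not (m[i][j] and m[i+1][j])
                      for i in range(n-1) for j in range(n))
        count += no_horiz and no_vert
    return count

print([F(n) for n in range(1, 5)])   # [2, 7, 63, 1234]
```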
On July 10, 2003, Eric Weisstein computed the numbers of (0,1)-matrices all of whose eigenvalues are real and positive, obtaining counts for , 2, ... of 1, 3, 25, 543, 29281, .... Based on agreement with OEIS A003024, Weisstein then conjectured that is equal to the number of labeled acyclic digraphs on vertices. This result was subsequently proved by McKay et al. (2003, 2004).
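The digraph side of this identity is easy to compute: the number of labeled acyclic digraphs satisfies a standard inclusion-exclusion recurrence (due to R. W. Robinson), sketched below in Python (the function name is ours), and it reproduces the quoted sequence 1, 3, 25, 543, 29281, ....

```python
from math import comb

def labeled_dags(n, memo={0: 1}):
    """Number of labeled acyclic digraphs on n vertices, via the standard
    recurrence a(n) = sum_k (-1)^(k+1) C(n,k) 2^(k(n-k)) a(n-k)."""
    if n not in memo:
        memo[n] = sum((-1)**(k+1) * comb(n, k) * 2**(k*(n-k)) * labeled_dags(n-k)
                      for k in range(1, n+1))
    return memo[n]

print([labeled_dags(n) for n in range(1, 6)])   # [1, 3, 25, 543, 29281]
```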
The characteristic polynomial is the polynomial left-hand side of the characteristic equation(1)where is a square matrix and is the identity matrix of identical dimension. Samuelson's formula allows the characteristic polynomial to be computed recursively without divisions. The characteristic polynomial of a matrix may be computed in the Wolfram Language as CharacteristicPolynomial[m, x].The characteristic polynomial of a matrix(2)can be rewritten in the particularly nice form(3)where is the matrix trace of and is its determinant.Similarly, the characteristic polynomial of a matrix is(4)where Einstein summation has been used, which can also be written explicitly in terms of traces as(5)In general, the characteristic polynomial has the form(6)(7)where is the matrix trace of the matrix , , and is the sum of the -rowed diagonal minors of the matrix (Jacobson 1974, p. 109).Le Verrier's algorithm for computing the characteristic..
Any square matrix can be written as a sum(1)where(2)is a symmetric matrix known as the symmetric part of and(3)is an antisymmetric matrix known as the antisymmetric part of . Here, is the transpose. The symmetric part of a tensor is denoted using parentheses as(4)(5)Symbols for the symmetric and antisymmetric parts of tensors can be combined, for example(6)(Wald 1984, p. 26).
Any square matrix can be written as a sum(1)where(2)is a symmetric matrix known as the symmetric part of and(3)is an antisymmetric matrix known as the antisymmetric part of . Here, is the transpose. Any rank-2 tensor can be written as a sum of symmetric and antisymmetric parts as(4)The antisymmetric part of a tensor is sometimes denoted using the special notation(5)For a general rank- tensor,(6)where is the permutation symbol. Symbols for the symmetric and antisymmetric parts of tensors can be combined, for example(7)(Wald 1984, p. 26).
Given a system of two ordinary differential equations(1)(2)let and denote fixed points with , so(3)(4)Then expand about so(5)(6)To first-order, this gives(7)where the matrix, or its generalization to higher dimension, is called the stability matrix. Analysis of the eigenvalues (and eigenvectors) of the stability matrix characterizes the type of fixed point.
A Boolean algebra is a mathematical structure that is similar to a Boolean ring, but that is defined using the meet and join operators instead of the usual addition and multiplication operators. Explicitly, a Boolean algebra is the partial order on subsets defined by inclusion (Skiena 1990, p. 207), i.e., the Boolean algebra of a set is the set of subsets of that can be obtained by means of a finite number of the set operations union (OR), intersection (AND), and complementation (NOT) (Comtet 1974, p. 185). A Boolean algebra also forms a lattice (Skiena 1990, p. 170), and each of the elements of is called a Boolean function. There are Boolean functions in a Boolean algebra of order (Comtet 1974, p. 186). In 1938, Shannon proved that a two-valued Boolean algebra (whose members are most commonly denoted 0 and 1, or false and true) can describe the operation of two-valued electrical switching circuits. In modern times, Boolean..
Building on work of Huntington (1933ab), Robbins conjectured that the equations for a Robbins algebra, commutativity, associativity, and the Robbins axiomwhere denotes NOT and denotes OR, imply those for a Boolean algebra. The conjecture was finally proven using a computer (McCune 1997).
Let and be any functions of a set of variables . Then the expression(1)is called a Poisson bracket (Poisson 1809; Whittaker 1944, p. 299). Plummer (1960, p. 136) uses the alternate notation . The Poisson brackets are anticommutative,(2)(Plummer 1960, p. 136). Let be independent functions of the variables . Then the Poisson bracket is connected with the Lagrange bracket by(3)where is the Kronecker delta. But this is precisely the condition that the determinants formed from them are reciprocal (Whittaker 1944, p. 300; Plummer 1960, p. 137). If and are physically measurable quantities (observables) such as position, momentum, angular momentum, or energy, then they are represented as non-commuting quantum mechanical operators in accordance with Heisenberg's formulation of quantum mechanics. In this case,(4)where is the commutator and is the Poisson bracket. Thus, for example, for a single particle..
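For one degree of freedom (q, p), the bracket {f, g} = ∂f/∂q ∂g/∂p − ∂f/∂p ∂g/∂q and its anticommutativity can be checked numerically; the sketch below (function names, step size, and sample point are our choices) approximates the partial derivatives by central differences.

```python
def poisson(f, g, q, p, h=1e-5):
    """Poisson bracket {f, g} = df/dq dg/dp - df/dp dg/dq, with the partial
    derivatives approximated by central differences of step h."""
    dq = lambda u: (u(q + h, p) - u(q - h, p)) / (2*h)
    dp = lambda u: (u(q, p + h) - u(q, p - h)) / (2*h)
    return dq(f)*dp(g) - dp(f)*dq(g)

q0, p0 = 1.7, 0.3
f = lambda q, p: q*p
g = lambda q, p: q**2

# canonical bracket {q, p} = 1, and anticommutativity {f, g} = -{g, f}
print(abs(poisson(lambda q, p: q, lambda q, p: p, q0, p0) - 1.0) < 1e-8)   # True
print(abs(poisson(f, g, q0, p0) + poisson(g, f, q0, p0)) < 1e-8)           # True
```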
A rational function can be rewritten using what is known as partial fraction decomposition. This procedure often allows integration to be performed on each term separately by inspection. For each factor of the form , introduce terms(1)For each factor of the form , introduce terms(2)Then write(3)and solve for the s and s. Partial fraction decomposition is implemented in the Wolfram Language as Apart.
The number of nonassociative -products with elements preceding the rightmost left parameter is(1)(2)where is a binomial coefficient. The number of -products in a nonassociative algebra is(3)where is a Catalan number, 1, 1, 2, 5, 14, 42, 132, ... (OEIS A000108).
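Since the count of n-products is the Catalan number C_{n-1}, the sequence quoted above is easy to reproduce; a minimal Python sketch (function name ours) uses the closed form C_n = C(2n, n)/(n+1).

```python
from math import comb

def catalan(n):
    """Catalan number C_n = binomial(2n, n) / (n + 1), computed exactly."""
    return comb(2*n, n) // (n + 1)

# the number of distinct n-products in a nonassociative algebra is C_{n-1}
print([catalan(n) for n in range(10)])   # [1, 1, 2, 5, 14, 42, 132, 429, 1430, 4862]
```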
The nesting of two or more functions to form a single new function is known as composition. The composition of two functions and is denoted , where is a function whose domain includes the range of . The notation(1)is sometimes used to explicitly indicate the variable. Composition is associative, so that(2)If is continuous at and is continuous at , then is also continuous at . A function which is the composition of two other functions, say and , is sometimes said to be a composite function. Faà di Bruno's formula gives an explicit formula for the th derivative of the composition . A combinatorial composition is defined as an ordered arrangement of nonnegative integers which sum to (Skiena 1990, p. 60). It is therefore a partition in which order is significant. For example, there are eight compositions of 4,(3)(4)(5)(6)(7)(8)(9)(10)A positive integer has compositions. The number of compositions of into parts (where..
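The combinatorial claim that there are 2^(n-1) compositions of n into positive parts can be checked by enumeration; the Python sketch below (function name ours) generates all of them recursively and recovers the eight compositions of 4.

```python
def compositions(n):
    """All ordered sequences of positive integers summing to n."""
    if n == 0:
        return [[]]
    return [[k] + rest for k in range(1, n + 1) for rest in compositions(n - k)]

comps = compositions(4)
print(len(comps))        # 8 = 2^(4-1)
print(sorted(comps))
```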
Let , , ... be operators. Then the commutator of and is defined as(1)Let , , ... be constants, then identities include(2)(3)(4)(5)(6)(7)(8)Let and be tensors. Then(9)There is a related notion of commutator in the theory of groups. The commutator of two group elements and is , and two elements and are said to commute when their commutator is the identity element. When the group is a Lie group, the Lie bracket in its Lie algebra is an infinitesimal version of the group commutator. For instance, let and be square matrices, and let and be paths in the Lie group of nonsingular matrices which satisfy(10)(11)(12)then(13)
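For matrices, the commutator [A, B] = AB − BA satisfies the Jacobi identity [A,[B,C]] + [B,[C,A]] + [C,[A,B]] = 0; the sketch below (matrix choices ours) verifies this with exact integer arithmetic.

```python
def mat_mul(a, b):
    return [[sum(x*y for x, y in zip(row, col)) for col in zip(*b)] for row in a]

def comm(a, b):
    """Commutator [A, B] = AB - BA."""
    ab, ba = mat_mul(a, b), mat_mul(b, a)
    return [[x - y for x, y in zip(r1, r2)] for r1, r2 in zip(ab, ba)]

def madd(*ms):
    return [[sum(m[i][j] for m in ms) for j in range(2)] for i in range(2)]

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 0]]
C = [[2, 0], [5, -1]]

# Jacobi identity: [A,[B,C]] + [B,[C,A]] + [C,[A,B]] = 0
jacobi = madd(comm(A, comm(B, C)), comm(B, comm(C, A)), comm(C, comm(A, B)))
print(jacobi)   # [[0, 0], [0, 0]]
```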
An operator defined on a set which takes two elements from as inputs and returns a single element of . Binary operators are called compositions by Rosenfeld (1968). Sets possessing a binary multiplication operation include the group, groupoid, monoid, quasigroup, and semigroup. Sets possessing both a binary multiplication and a binary addition operation include the division algebra, field, ring, ringoid, semiring, and unit ring.
A binary operation is an operation that applies to two quantities or expressions and . A binary operation on a nonempty set is a map such that 1. is defined for every pair of elements in , and 2. uniquely associates each pair of elements in to some element of . Examples of binary operations on from to include addition (), subtraction (), multiplication (), and division ().
Direct sums are defined for a number of different sorts of mathematical objects, including subspaces, matrices, modules, and groups. The matrix direct sum is defined by(1)(2)(Ayres 1962, pp. 13-14). The direct sum of two subspaces and is the sum of subspaces in which and have only the zero vector in common (Rosen 2000, p. 357). The significant property of the direct sum is that it is the coproduct in the category of modules (i.e., a module direct sum). This general definition gives as a consequence the definition of the direct sum of Abelian groups and (since they are -modules, i.e., modules over the integers) and the direct sum of vector spaces (since they are modules over a field). Note that the direct sum of Abelian groups is the same as the group direct product, but that the term direct sum is not used for groups which are non-Abelian. Note that direct products and direct sums differ for infinite indices. An element of the direct sum is..
The Cartesian product of two sets and (also called the product set, set direct product, or cross product) is defined to be the set of all points where and . It is denoted , and is called the Cartesian product since it originated in Descartes' formulation of analytic geometry. In the Cartesian view, points in the plane are specified by their vertical and horizontal coordinates, with points on a line being specified by just one coordinate. The main examples of direct products are Euclidean three-space (, where are the real numbers), and the plane (). The graph product is sometimes called the Cartesian product (Vizing 1963, Clark and Suen 2000).
A formula for the permanent of a matrix, where the sum is over all subsets of , and is the number of elements in . The formula can be optimized by picking the subsets so that only a single element is changed at a time (which is precisely a Gray code), reducing the number of additions from to . It turns out that the number of disks moved after the th step in the tower of Hanoi is the same as the element which needs to be added or deleted in the th addend of the Ryser formula (Gardner 1988, Vardi 1991, p. 111).
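Ryser's inclusion-exclusion formula, per(A) = (−1)^n Σ_S (−1)^|S| Π_i Σ_{j∈S} a_ij with S ranging over subsets of the columns, is easy to cross-check against the defining sum over permutations; the Python sketch below (function names and the test matrix are ours, and the Gray-code optimization is omitted) does exactly that.

```python
from itertools import permutations, combinations
from math import prod

def perm_bruteforce(a):
    """Permanent as the defining sum over all permutations."""
    n = len(a)
    return sum(prod(a[i][s[i]] for i in range(n)) for s in permutations(range(n)))

def perm_ryser(a):
    """Ryser's inclusion-exclusion formula over column subsets
    (the empty subset contributes 0 and is skipped)."""
    n = len(a)
    total = 0
    for r in range(1, n + 1):
        for cols in combinations(range(n), r):
            total += (-1)**r * prod(sum(row[j] for j in cols) for row in a)
    return (-1)**n * total

A = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 9]]
print(perm_bruteforce(A), perm_ryser(A))   # 450 450
```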
The symmetric successive overrelaxation (SSOR) method combines two successive overrelaxation method (SOR) sweeps together in such a way that the resulting iteration matrix is similar to a symmetric matrix in the case that the coefficient matrix of the linear system is symmetric. The SSOR is a forward SOR sweep followed by a backward SOR sweep in which the unknowns are updated in the reverse order. The similarity of the SSOR iteration matrix to a symmetric matrix permits the application of SSOR as a preconditioner for other iterative schemes for symmetric matrices. This is the primary motivation for SSOR, since the convergence rate is usually slower than the convergence rate for SOR with optimal .
The successive overrelaxation method (SOR) is a method of solving a linear system of equations derived by extrapolating the Gauss-Seidel method. This extrapolation takes the form of a weighted average between the previous iterate and the computed Gauss-Seidel iterate successively for each component, where denotes a Gauss-Seidel iterate and is the extrapolation factor. The idea is to choose a value for that will accelerate the rate of convergence of the iterates to the solution. In matrix terms, the SOR algorithm can be written as, where the matrices , , and represent the diagonal, strictly lower-triangular, and strictly upper-triangular parts of , respectively. If , the SOR method simplifies to the Gauss-Seidel method. A theorem due to Kahan (1958) shows that SOR fails to converge if is outside the interval . In general, it is not possible to compute in advance the value of that will maximize the rate of convergence of SOR. Frequently, some heuristic..
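The componentwise weighted average is compact in code; the Python sketch below (the choice omega = 1.25, the iteration count, and the small diagonally dominant test system are ours) performs the SOR sweep described above.

```python
def sor(a, b, omega=1.25, iters=100):
    """Successive overrelaxation for A x = b, sweeping the components in order."""
    n = len(a)
    x = [0.0] * n
    for _ in range(iters):
        for i in range(n):
            s = sum(a[i][j] * x[j] for j in range(n) if j != i)
            x_gs = (b[i] - s) / a[i][i]                 # Gauss-Seidel value
            x[i] = (1 - omega) * x[i] + omega * x_gs    # weighted with previous iterate
    return x

A = [[4.0, 1.0, 0.0],
     [1.0, 4.0, 1.0],
     [0.0, 1.0, 4.0]]
b = [5.0, 6.0, 5.0]     # exact solution is x = (1, 1, 1)

x = sor(A, b)
print([round(v, 6) for v in x])   # [1.0, 1.0, 1.0]
```

With omega = 1 this reduces to the plain Gauss-Seidel sweep, matching the statement in the text.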
Stationary iterative methods are methods for solving a linear system of equations, where is a given matrix and is a given vector. Stationary iterative methods can be expressed in the simple form, where neither nor depends upon the iteration count . The four main stationary methods are the Jacobi method, Gauss-Seidel method, successive overrelaxation method (SOR), and symmetric successive overrelaxation method (SSOR). The Jacobi method is based on solving for every variable locally with respect to the other variables; one iteration corresponds to solving for every variable once. It is easy to understand and implement, but convergence is slow. The Gauss-Seidel method is similar to the Jacobi method except that it uses updated values as soon as they are available. It generally converges faster than the Jacobi method, although still relatively slowly. The successive overrelaxation method can be derived from the Gauss-Seidel method by introducing..
A standard basis, also called a natural basis, is a special orthonormal vector basis in which each basis vector has a single nonzero entry with value 1. In -dimensional Euclidean space , the vectors are usually denoted (or ) with , ..., , where is the dimension of the vector space that is spanned by this basis according to(1)For example, in the Euclidean plane , the standard basis is(2)(3)Similarly, for Euclidean 3-space , the standard basis is(4)(5)(6)
A linear transformation between two vector spaces and is a map such that the following hold: 1. for any vectors and in , and 2. for any scalar . A linear transformation may or may not be injective or surjective. When and have the same dimension, it is possible for to be invertible, meaning there exists a such that . It is always the case that . Also, a linear transformation always maps lines to lines (or to zero). The main example of a linear transformation is given by matrix multiplication. Given an matrix , define , where is written as a column vector (with coordinates). For example, consider(1)then is a linear transformation from to , defined by(2)When and are finite dimensional, a general linear transformation can be written as a matrix multiplication only after specifying a vector basis for and . When and have an inner product, and their vector bases, and , are orthonormal, it is easy to write the corresponding matrix . In particular, . Note that when using..
A linear system of equations is a set of linear equations in variables (sometimes called "unknowns"). Linear systems can be represented in matrix form as the matrix equation(1)where is the matrix of coefficients, is the column vector of variables, and is the column vector of solutions. If , then the system is (in general) overdetermined and there is no solution. If and the matrix is nonsingular, then the system has a unique solution in the variables. In particular, as shown by Cramer's rule, there is a unique solution if has a matrix inverse . In this case,(2)If , then the solution is simply . If has no matrix inverse, then the solution set is the translate of a subspace of dimension less than or the empty set. If two equations are multiples of each other, solutions are of the form(3)for a real number. More generally, if , then the system is underdetermined. In this case, elementary row and column operations can be used to solve the system as far..
The Jacobi method is a method of solving a matrix equation on a matrix that has no zeros along its main diagonal (Bronshtein and Semendyayev 1997, p. 892). Each diagonal element is solved for, and an approximate value plugged in. The process is then iterated until it converges. This algorithm is a stripped-down version of the Jacobi transformation method of matrix diagonalization. The Jacobi method is easily derived by examining each of the equations in the linear system of equations in isolation. If, in the th equation(1)solve for the value of while assuming the other entries of remain fixed. This gives(2)which is the Jacobi method. In this method, the order in which the equations are examined is irrelevant, since the Jacobi method treats them independently. The definition of the Jacobi method can be expressed with matrices as(3)where the matrices , , and represent the diagonal, strictly lower triangular, and strictly upper triangular..
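Because every component is updated from the previous sweep only, the whole iteration is a single list comprehension; the Python sketch below (iteration count and the small diagonally dominant test system are our choices) implements the update just described.

```python
def jacobi(a, b, iters=100):
    """Jacobi iteration: solve each equation for its diagonal unknown,
    using only values from the previous sweep."""
    n = len(a)
    x = [0.0] * n
    for _ in range(iters):
        x = [(b[i] - sum(a[i][j] * x[j] for j in range(n) if j != i)) / a[i][i]
             for i in range(n)]
    return x

A = [[4.0, 1.0, 0.0],
     [1.0, 4.0, 1.0],
     [0.0, 1.0, 4.0]]
b = [5.0, 6.0, 5.0]     # exact solution is x = (1, 1, 1)
print([round(v, 6) for v in jacobi(A, b)])   # [1.0, 1.0, 1.0]
```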
Given a set of linear equations(1)consider the determinant(2)Now multiply by , and use the property of determinants that multiplication by a constant is equivalent to multiplication of each entry in a single column by that constant, so(3)Another property of determinants enables us to add a constant times any column to any column and obtain the same determinant, so add times column 2 and times column 3 to column 1,(4)If , then (4) reduces to , so the system has nondegenerate solutions (i.e., solutions other than (0, 0, 0)) only if (in which case there is a family of solutions). If and , the system has no unique solution. If instead and , then solutions are given by(5)and similarly for(6)(7)This procedure can be generalized to a set of equations so, given a system of linear equations(8)let(9)If , then nondegenerate solutions exist only if . If and , the system has no unique solution. Otherwise, compute(10)Then for . In the three-dimensional case, the..
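The determinant manipulations above amount to Cramer's rule; the Python sketch below (function names and the 3×3 test system are ours) solves a system by replacing each column of the coefficient matrix with the right-hand side in turn.

```python
def det3(m):
    """Determinant of a 3x3 matrix by cofactor expansion along the first row."""
    return (m[0][0]*(m[1][1]*m[2][2] - m[1][2]*m[2][1])
          - m[0][1]*(m[1][0]*m[2][2] - m[1][2]*m[2][0])
          + m[0][2]*(m[1][0]*m[2][1] - m[1][1]*m[2][0]))

def cramer3(a, b):
    """Cramer's rule: x_k = det(A_k) / det(A), where A_k has column k
    replaced by the right-hand side b (requires det(A) != 0)."""
    d = det3(a)
    xs = []
    for k in range(3):
        ak = [[b[i] if j == k else a[i][j] for j in range(3)] for i in range(3)]
        xs.append(det3(ak) / d)
    return xs

A = [[2, 1, -1],
     [1, 3, 2],
     [1, 0, 1]]
b = [1, 13, 4]          # exact solution is x = (1, 2, 3)
print(cramer3(A, b))
```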
A change of coordinates matrix, also called a transition matrix, specifies the transformation from one vector basis to another under a change of basis. For example, if and are two vector bases in , and let be the coordinates of a vector in basis and its coordinates in basis .Write the basis vectors and for in coordinates relative to basis as(1)(2)This means that(3)(4)giving the change of coordinates matrix(5)which specifies the change of coordinates of a vector under the change of basis from to via(6)
A basis vector in an -dimensional vector space is one of any chosen set of vectors in the space forming a vector basis, i.e., having the property that every vector in the space can be written uniquely as a linear combination of them. For example, in the Euclidean plane, the unit vectors and form a vector basis since for any point , so for this basis, and are basis vectors.
The Wronskian of a set of functions , , ... is defined byIf the Wronskian is nonzero in some region, the functions are linearly independent. If over some range, the functions are linearly dependent somewhere in the range.
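For two functions the Wronskian is just f g′ − g f′; the Python sketch below (helper name ours, with derivatives supplied analytically) evaluates it for sin and cos, whose Wronskian is identically −sin² − cos² = −1, confirming linear independence.

```python
import math

def wronskian2(f, df, g, dg, x):
    """2x2 Wronskian W(f, g) = f g' - g f', with derivatives given analytically."""
    return f(x)*dg(x) - g(x)*df(x)

# sin and cos are linearly independent: W(sin, cos) = -1 everywhere
for x in (0.0, 0.7, 2.5):
    w = wronskian2(math.sin, math.cos, math.cos, lambda t: -math.sin(t), x)
    print(round(w, 12))   # -1.0
```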
A vector basis of a vector space is defined as a subset of vectors in that are linearly independent and span . Consequently, if is a list of vectors in , then these vectors form a vector basis if and only if every can be uniquely written as(1)where , ..., are elements of the base field. When the base field is the reals so that for , the resulting basis vectors are -tuples of reals that span -dimensional Euclidean space . Other possible base fields include the complexes , as well as various fields of positive characteristic considered in algebra, number theory, and algebraic geometry. A vector space has many different vector bases, but there are always the same number of basis vectors in each of them. The number of basis vectors in is called the dimension of . Every spanning list in a vector space can be reduced to a basis of the vector space. The simplest example of a vector basis is the standard basis in Euclidean space , in which the basis vectors lie along each coordinate..
A linear transformation(1)(2)(3)is said to be an orthogonal transformationif it satisfies the orthogonality condition(4)where Einstein summation has been used and is the Kronecker delta.
An orthogonal transformation is a linear transformation which preserves a symmetric inner product. In particular, an orthogonal transformation (technically, an orthonormal transformation) preserves lengths of vectors and angles between vectors,(1)In addition, an orthogonal transformation is either a rigid rotation or an improper rotation (a rotation followed by a flip). (Flipping and then rotating can be realized by first rotating in the reverse direction and then flipping.) Orthogonal transformations correspond to and may be represented using orthogonal matrices. The set of orthonormal transformations forms the orthogonal group, and an orthonormal transformation can be realized by an orthogonal matrix. Any linear transformation in three dimensions(2)(3)(4)satisfying the orthogonality condition(5)where Einstein summation has been used and is the Kronecker delta, is an orthogonal transformation. If is an orthogonal..
The group of functions from an object to itself which preserve the structure of the object, denoted . The automorphism group of a group preserves the multiplication table, the automorphism group of a graph the incidence matrix, and that of a field the addition and multiplication tables.The automorphism group of a graph can be computed in the Wolfram Language using GraphAutomorphismGroup[g].
If is a perfect group, then the group center of the quotient group , where is the group center of , is the trivial group.
A quasigroup with an identity element such that and for any in the quasigroup. All groups are loops. In general, loops are considered to have very little in the way of algebraic structure and it is for that reason that many authors limit their investigation to loops which satisfy various other structural conditions. Common examples of such notions are the left- and right-Bol loop, the Moufang loop (which is both a left-Bol loop and a right-Bol loop simultaneously), and the generalized Bol loop. The above definition of loop is purely algebraic and shouldn't be confused with other notions of loop, such as a closed curve, a multi-component knot or hitch, a graph loop, etc.
A group that coincides with its commutator subgroup. If is a non-Abelian group, its commutator subgroup is a normal subgroup other than the trivial group. It follows that if is simple, it must be perfect. The converse, however, is not necessarily true. For example, the special linear group is always perfect if (Rose 1994, p. 61), but if is not a power of 2 (i.e., the field characteristic of the finite field is not 2), it is not simple, since its group center contains two elements: the identity matrix and its additive inverse , which are different because .
A particular type of automorphism group which exists only for groups. For a group , the outer automorphism group is the quotient group , which is the automorphism group of modulo its inner automorphism group.
An additive group is a group where the operation is called addition and is denoted . In an additive group, the identity element is called zero, and the inverse of the element is denoted (minus ). The symbols and terminology are borrowed from the additive groups of numbers: the ring of integers , the field of rational numbers , the field of real numbers , and the field of complex numbers are all additive groups. In general, every ring and every field is an additive group. An important class of examples is given by the polynomial rings with coefficients in a ring . In the additive group of the sum is performed by adding the coefficients of equal terms,(1)Modules, abstract vector spaces, and algebras are all additive groups. The sum of vectors of the vector space is defined componentwise,(2)and so is the sum of matrices with entries in a ring ,(3)which is part of the -module structure of the set of matrices . Any quotient group of an Abelian additive group is again..
Let be a subgroup of a group . The similarity transformation of by a fixed element in not in always gives a subgroup. If for every element in , then is said to be a normal subgroup of , written (Arfken 1985, p. 242; Scott 1987, p. 25). Normal subgroups are also known as invariant subgroups or self-conjugate subgroups (Arfken 1985, p. 242). All subgroups of Abelian groups are normal (Arfken 1985, p. 242).
Let a group have a group presentationso that , where is the free group with basis and is the normal subgroup generated by the . If is a group with and if for all , then there is a surjective homomorphism with for all .
If is a group, then the extensions of of order with , where is the Frattini subgroup, are called Frattini extensions.
A square matrix is said to be unipotent if , where is an identity matrix, is a nilpotent matrix (defined by the property that is the zero matrix for some positive integer matrix power ). The corresponding identity, for some integer , allows this definition to be generalized to other types of algebraic systems. An example of a unipotent matrix is a square matrix whose entries below the diagonal are zero and whose diagonal entries are all one. An explicit example of a unipotent matrix is given by . One feature of a unipotent matrix is that its matrix powers have entries which grow like a polynomial in . A semisimple element of a group is unipotent if is a p-group, where is the generalized Fitting subgroup.
Landau's function is the maximum order of an element in the symmetric group . The value is given by the largest least common multiple of all partitions of the numbers 1 to . The first few values for , 2, ... are 1, 2, 3, 4, 6, 6, 12, 15, 20, 30, ... (OEIS A000793), and have been computed up to by Grantham (1995). Landau showed that ln g(n) is asymptotic to sqrt(n ln n). Local maxima of this function occur at 2, 3, 5, 7, 9, 10, 12, 17, 19, 30, 36, 40, ... (OEIS A103635). Let be the greatest prime factor of . Then the first few terms for , 3, ... are 2, 3, 2, 3, 3, 3, 5, 5, 5, 5, 5, 5, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 11, ... (OEIS A129759). Nicolas (1969) showed that . Massias et al. (1988, 1989) showed that for all , , and Grantham (1995) showed that for all , the constant 2.86 may be replaced by 1.328.
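The "largest lcm over all partitions" characterization translates directly into a short memoized recursion; the Python sketch below (function name and the parts-ordering trick are ours) reproduces the quoted values of g(n).

```python
from math import gcd
from functools import lru_cache

@lru_cache(maxsize=None)
def landau(n, smallest=1):
    """g(n): the largest lcm over all partitions of n, enumerating partitions
    with parts in nondecreasing order (each part >= smallest)."""
    best = 1
    for part in range(smallest, n + 1):
        rest = landau(n - part, part)
        best = max(best, part * rest // gcd(part, rest))   # lcm(part, rest)
    return best

print([landau(n) for n in range(1, 11)])   # [1, 2, 3, 4, 6, 6, 12, 15, 20, 30]
```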
The fourth group isomorphism theorem, also called the lattice group isomorphism theorem, states the following. Let G be a group and let N be a normal subgroup of G. Then there is a bijection from the set of subgroups A of G that contain N onto the set of subgroups of the quotient group G/N. In particular, every subgroup of G/N is of the form A/N for some subgroup A of G containing N (namely, its preimage in G under the natural projection homomorphism from G to G/N). This bijection has the following properties: for all subgroups A, B of G containing N, 1. A is contained in B iff A/N is contained in B/N, 2. if A is contained in B, then the index of A in B equals the index of A/N in B/N, 3. the subgroup generated by A and B satisfies <A,B>/N = <A/N, B/N>, 4. (A intersect B)/N = (A/N) intersect (B/N), and 5. A is normal in G iff A/N is normal in G/N.
is said to be tightly embedded if is odd for all , where is the normalizer of in .
The most general form of Lagrange's group theorem, also known as Lagrange's lemma, states that for a group , a subgroup of , and a subgroup of , , where the products are taken as cardinalities (thus the theorem holds even for infinite groups) and denotes the subgroup index for the subgroup of . A frequently stated corollary (which follows from taking , where is the identity element) is that the order of is equal to the product of the order of and the subgroup index of .The corollary is easily proven in the case of being a finite group, in which case the left cosets of form a partition of , thus giving the order of as the number of blocks in the partition (which is ) multiplied by the number of elements in each partition (which is just the order of ).For a finite group , this corollary gives that the order of must divide the order of . Then, because the order of an element of is the order of the cyclic subgroup generated by , we must have that the order of any element of divides..
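The corollary |G| = |H| [G:H] can be verified computationally for a small finite group; here is a sketch using S_3 represented as permutation tuples (the names `compose`, `G`, `H` are ours):

```python
from itertools import permutations

# Elements of S_3 as tuples p, with composition (p∘q)(i) = p[q[i]].
def compose(p, q):
    return tuple(p[q[i]] for i in range(len(q)))

G = list(permutations(range(3)))      # the full group S_3, order 6
H = [(0, 1, 2), (1, 0, 2)]            # subgroup generated by a transposition

# The left cosets gH partition G, so |G| = |H| * [G:H].
cosets = {frozenset(compose(g, h) for h in H) for g in G}
assert len(G) == len(H) * len(cosets)
print(len(G), len(H), len(cosets))    # 6 2 3
```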
Let be a planar Abelian difference set and be any divisor of . Then is a numerical multiplier of , where a multiplier is defined as an automorphism of a group which takes to a translation of itself for some . If is of the form for relatively prime to the order of , then is called a numerical multiplier.
Let be a group having normal subgroups and with . Then andwhere indicates that is a normal subgroup of and indicates that and are isomorphic groups.
The first group isomorphism theorem, also known as the fundamental homomorphism theorem, states that if is a group homomorphism, then and , where indicates that is a normal subgroup of , denotes the group kernel, and indicates that and are isomorphic groups.A corollary states that if is a group homomorphism, then 1. is injective iff 2. , where denotes the group order of a group .
For a subgroup of a group , the index of , denoted , is the cardinal number of the set of left cosets of in (which is equal to the cardinal number of the set of right cosets of in ).
For an atomic integral domain (i.e., one in which every nonzero nonunit can be factored as a product of irreducible elements) with the set of irreducible elements, the elasticity of is defined as
Let be a permutation group on a set and be an element of . Then(1)is called the stabilizer of and consists of all the permutations of that produce group fixed points in , i.e., that send to itself. For example, the stabilizer of 1 and of 2 under the permutation group is both , and the stabilizer of 3 and of 4 is .More generally, the subset of all images of under permutations of the group (2)is called the group orbit of in .A group's action on a group orbit through is transitive, and so is related to its isotropy group. In particular, the cosets of the isotropy subgroup correspond to the elements in the orbit,(3)where is the orbit of in and is the stabilizer of in . This immediately gives the identity(4)where denotes the order of group (Holton and Sheehan 1993, p. 27)...
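The orbit-stabilizer identity can be checked directly on a small example; the sketch below uses the group {e, (12)(34)} acting on {1, 2, 3, 4}, with permutations encoded as dictionaries (all names are illustrative):

```python
# Orbit and stabilizer of a point under a small permutation group,
# illustrating the identity |orbit(x)| * |stab(x)| = |G|.
e    = {1: 1, 2: 2, 3: 3, 4: 4}
swap = {1: 2, 2: 1, 3: 4, 4: 3}      # the permutation (12)(34)
G = [e, swap]

def orbit(x, G):
    return {g[x] for g in G}

def stabilizer(x, G):
    return [g for g in G if g[x] == x]

assert orbit(1, G) == {1, 2} and stabilizer(1, G) == [e]
for x in (1, 2, 3, 4):
    assert len(orbit(x, G)) * len(stabilizer(x, G)) == len(G)
```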
The dimension of any irreducible representation of a group G must be a divisor of the index of each maximal normal Abelian subgroup of G. Note that while Itô's theorem was proved by Noboru Itô, Itô's lemma was proven by Kiyoshi Itô.
Let be group of group order and be a set of elements of . If the set of differences contains every nonzero element of exactly times, then is a -difference set in of order .
The socle of a group G is the subgroup generated by its minimal normal subgroups. For example, the symmetric group S_4 has two nontrivial normal subgroups: the alternating group A_4 and the Klein four-group V. But A_4 contains V, so V is the only minimal normal subgroup, and the socle of S_4 is V.
Two groups are isomorphic if the correspondence between them is one-to-one and the "multiplication" table is preserved. For example, the point groups and are isomorphic groups, written or (Shanks 1993).
Let be a group of group order and be a set of elements of . If the set of differences contains every nonzero element of exactly times, then is a -difference set in of order . If , the difference set is called planar. The quadratic residues in the finite field form a difference set. If there is a difference set of size in a group , then must be a multiple of , where is a binomial coefficient.Gordon maintains an index of known difference sets.
A -element of a group is semisimple if , where is the commuting product of all components of and is the centralizer of .
An invariant series of a group is a normal seriessuch that each , where means that is a normal subgroup of .
A "split" extension of groups and which contains a subgroup isomorphic to with and (Ito 1987, p. 710). Then the semidirect product of a group by a group , denoted (or sometimes ) with homomorphism is given bywhere , , and (Suzuki 1982, p. 67; Scott 1987, p. 213). Note that the semidirect product of two groups is not uniquely defined.The semidirect product of a group by a group can also be defined as a group which is the product of its subgroups and , where is normal in and . If is also normal in , then the semidirect product becomes a group direct product (Shmel'kin 1988, p. 247).
The identity element (also denoted , , or 1) of a group or related mathematical structure is the unique element such that for every element . The symbol "" derives from the German word for unity, "Einheit." An identity element is also called a unit element.
If a discrete group of displacements in the plane has more than one center of rotation, then the only rotations that can occur are by 2, 3, 4, and 6. This can be shown as follows. It must be true that the sum of the interior angles divided by the number of sides is a divisor of .where is an integer. Therefore, symmetry will be possible only forwhere is an integer. This will hold for 1-, 2-, 3-, 4-, and 6-fold symmetry. That it does not hold for is seen by noting that corresponds to . The case requires that (impossible), and the case requires that (also impossible).The point groups that satisfy the crystallographic restriction are called crystallographic point groups.Although -fold rotations for differing from 2, 3, 4, and 6 are forbidden in the strict sense of perfect crystallographic symmetry, there are exotic materials called quasicrystals that display these symmetries. In 1984, D. Shechtman discovered a class of aluminum alloys whose X-ray..
The second, or diamond, group isomorphism theorem, states that if is a group with , and , then and , where indicates that is a normal subgroup of and indicates that and are isomorphic groups.This theorem is so named because of the diamond shaped lattice of subgroups of involved.
A permutation group is -homogeneous if it is transitive on unordered k-subsets of .The projective special linear group is 3-homogeneous if .
The cross number of a zero-system of is defined asThe cross number of a group has two different definitions. 1. Anderson and Chapman (2000) define the cross number of as . 2. Chapman (1997) defines , where . A value of the cross number: for a prime and , . A stronger statement is that any finite Abelian group is cyclic of prime power order iff .
If on and on are irreducible representations and is a linear map such that for all and group , then or is invertible. Furthermore, if in a vector space over complex numbers, then is a scalar.
For a subgroup of a group and an element of , define to be the set and to be the set . A subset of of the form for some is said to be a left coset of and a subset of the form is said to be a right coset of .For any subgroup , we can define an equivalence relation by if for some . The equivalence classes of this equivalence relation are exactly the left cosets of , and an element of is in the equivalence class . Thus the left cosets of form a partition of .It is also true that any two left cosets of have the same cardinal number, and in particular, every coset of has the same cardinal number as , where is the identity element. Thus, the cardinal number of any left coset is the order of .The same results are true of the right cosets of and, in fact, one can prove that the set of left cosets of has the same cardinal number as the set of right cosets of ...
Let be a subgroup of . A subset of elements of is called a right transversal of if contains exactly one element of each right coset of .
If is a group, then the torsion elements of (also called the torsion of ) are defined to be the set of elements in such that for some natural number , where is the identity element of the group .In the case that is Abelian, is a subgroup and is called the torsion subgroup of . If consists only of the identity element, the group is called torsion-free.
Consider a countable subgroup H with elements h_i and an element x not in H; then h_i x for i = 1, 2, ... constitute the right coset Hx of the subgroup H with respect to x.
Given a group with elements and , there must be an element which is a similarity transformation of so and are conjugate with respect to . Conjugate elements have the following properties: 1. Every element is conjugate with itself. 2. If is conjugate with with respect to , then is conjugate to with respect to . 3. If is conjugate with and , then and are conjugate with each other.
Each row and each column in the group multiplication table lists each of the group elements once and only once. From this, it follows that no two elements may be in the identical location in two rows or two columns. Thus, each row and each column is a rearranged list of the group elements. Stated otherwise, given a group of distinct elements , the set of products reproduces the original distinct elements in a new order.
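This Latin-square property of the multiplication table is easy to verify exhaustively for a small group; the sketch below does so for S_3 (names are illustrative):

```python
from itertools import permutations

def compose(p, q):
    """Composition in S_3 with permutations as tuples: (p∘q)(i) = p[q[i]]."""
    return tuple(p[q[i]] for i in range(len(q)))

G = list(permutations(range(3)))   # the six elements of S_3

# Every row {g*x} and every column {x*g} of the multiplication table
# is a rearrangement of the full element list.
for g in G:
    assert sorted(compose(g, x) for x in G) == sorted(G)
    assert sorted(compose(x, g) for x in G) == sorted(G)
print("each row and column lists every element exactly once")
```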
For any prime number and any positive integer , the -rank of a finitely generated Abelian group is the number of copies of the cyclic group appearing in the Kronecker decomposition of (Schenkman 1965). The free (or torsion-free) rank of is the number of copies of appearing in the same decomposition. It can be characterized as the maximal number of elements of which are linearly independent over . Since it is also equal to the dimension of as a vector space over , it is often called the rational rank of . Munkres (1984) calls it the Betti number of .Most authors refer to simply as the "rank" of (Kargapolov and Merzljakov 1979), whereas others (Griffith 1970) use the word "rank" to denote the sum . In this latter meaning, the rank of is the number of direct summands appearing in the Kronecker decomposition of ...
In the classical quasithin case of the quasithin theorem, if a group does not have a "strongly embedded" subgroup, then is a group of Lie-type in characteristic 2 of Lie rank 2 generated by a pair of parabolic subgroups and , or is one of a short list of exceptions.
A complete set of mutually conjugate group elements. Each element in a group belongs to exactly one class, and the identity element is always in its own class. The conjugacy class orders of all classes must be integral factors of the group order of the group. From the last two statements, a group of prime order has one class for each element. More generally, in an Abelian group, each element is in a conjugacy class by itself.Two operations belong to the same class when one may be replaced by the other in a new coordinate system which is accessible by a symmetry operation (Cotton 1990, p. 52). These sets correspond directly to the sets of equivalent operations.To see how to compute conjugacy classes, consider the dihedral group D3. The identity element is always in a conjugacy class of its own. To find another conjugacy class, take some element and find the results of all similarity transformations on it. For...
Let be a unitary representation of a group on a separable Hilbert space, and let be the smallest weakly closed algebra of bounded linear operators containing all for . Then is primary if the center of consists of only scalar operators.
Let be a representation for a group of group order , thenThe proof is nontrivial and may be found in Eyring et al. (1944).
Every finite group of order greater than one possesses a finite series of subgroups, called a composition series, such thatwhere is a maximal subgroup of and means that is a normal subgroup of . A composition series is therefore a normal series without repetition whose factors are all simple (Scott 1987, p. 36).The quotient groups , , ..., , are called composition quotient groups.
In celestial mechanics, the fixed path a planet traces as it moves around the sun is called an orbit. When a group acts on a set (this process is called a group action), it permutes the elements of . Any particular element moves around in a fixed path which is called its orbit. In the notation of set theory, the group orbit of a group element can be defined as(1)where runs over all elements of the group . For example, for the permutation group , the orbits of 1 and 2 are and the orbits of 3 and 4 are .A group fixed point is an orbit consisting of a single element, i.e., an element that is sent to itself under all elements of the group. The stabilizer of an element consists of all the permutations of that produce group fixed points in , i.e., that send to itself. The stabilizers of 1 and 2 under are therefore , and the stabilizers of 3 and 4 are .Note that if then , because iff . Consequently, the orbits partition and, given a permutation group on a set , the orbit of an element is..
Two representations of a group and are said to be orthogonal iffor , where the sum is over all elements of the representation.
The centralizer of an element of a group is the set of elements of which commute with ,Likewise, the centralizer of a subgroup of a group is the set of elements of which commute with every element of ,The centralizer always contains the group center of the group and is contained in the corresponding normalizer. In an Abelian group, the centralizer is the whole group.
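Computing a centralizer by brute force is straightforward for a small group; the sketch below finds the centralizer of a transposition in S_3 (permutations as tuples, names illustrative):

```python
from itertools import permutations

def compose(p, q):
    return tuple(p[q[i]] for i in range(len(q)))

G = list(permutations(range(3)))   # S_3, with identity (0, 1, 2)
a = (1, 0, 2)                      # a transposition

# The centralizer of a: all x in G with x*a == a*x.
C = [x for x in G if compose(x, a) == compose(a, x)]
print(C)   # [(0, 1, 2), (1, 0, 2)] -- just the identity and a itself
```

As expected, the centralizer contains the group center (here trivial) and the element itself.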
The set of elements of a group such thatis said to be the normalizer with respect to a subset of group elements . If is a subgroup of , is also a subgroup containing .
A set of generators is a set of group elements such that possibly repeated application of the generators on themselves and each other is capable of producing all the elements in the group. Cyclic groups can be generated as powers of a single generator. Two elements of a dihedral group that do not have the same sign of ordering are generators for the entire group.The Cayley graph of a group and a subset of elements (excluding the identity element) is connected iff the subset generates the group.
A normal series of a group is a finite sequence of normal subgroups such that
The set of points fixed by a group action are called the group's set of fixed points. In some cases, there may not be a group action, but a single operator. Then the set of fixed points still makes sense even when the operator is not invertible (as it is in the case of a group action).
Let be a group with normal series (, , ..., ). A normal factor of is a quotient group for some index . is a solvable group iff all normal factors are Abelian.
A cycle of a finite group is a minimal set of elements such that , where is the identity element. A diagram of a group showing every cycle in the group is known as a cycle graph (Shanks 1993, p. 83).For example, the modulo multiplication group (i.e., the group of residue classes relatively prime to 5 under multiplication mod 5) has elements and cycles , , , and . The corresponding cycle graph is illustrated above.
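The cycles of the modulo multiplication group mod 5 described above can be enumerated directly (a sketch; the helper name `cycle` is ours):

```python
def cycle(g, n):
    """The cycle g, g^2, g^3, ... (mod n), listed until it closes."""
    out, x = [g], (g * g) % n
    while x != g:
        out.append(x)
        x = (x * g) % n
    return out

print([cycle(g, 5) for g in [1, 2, 3, 4]])
# [[1], [2, 4, 3, 1], [3, 4, 2, 1], [4, 1]]
```

Elements 2 and 3 each generate the whole group, while 4 generates the two-element cycle through the identity.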
In Note M, Burnside (1955) states, "The contrast that these results shew between groups of odd and of even order suggests inevitably that simple groups of odd order do not exist." Of course, simple groups of prime order do exist, namely the groups for any prime . Therefore, Burnside conjectured that every finite simple group of non-prime order must have even order. The conjecture was proven true by Feit and Thompson (1963).
The convolution of two complex-valued functions on a group is defined as a sum over the group elements, where the support (the set on which the function is nonzero) of each function is finite.
The group theoretical term for what is known to physicists, by way of its connection with matrix traces, as the trace. The powerful group orthogonality theorem gives a number of important properties about the structures of groups, many of which are most easily expressed in terms of characters. In essence, group characters can be thought of as the matrix traces of a special set of matrices (a so-called irreducible representation) used to represent group elements and whose multiplication corresponds to the multiplication table of the group.All members of the same conjugacy class in the same representation have the same character. Members of other conjugacy classes may also have the same character, however. An (abstract) group can be identified by a listing of the characters of its various representations, known as a character table. However, there exist nonisomorphic groups which nevertheless have the same character table, for example (the..
In a monoid or multiplicative group where the operation is a product , the multiplicative inverse of any element is the element such that , with 1 the identity element.The multiplicative inverse of a nonzero number is its reciprocal (zero is not invertible). For complex ,The inverse of a nonzero real quaternion (where are real numbers, and not all of them are zero) is its reciprocalwhere .The multiplicative inverse of a nonsingular matrixis its matrix inverse.To detect the multiplicative inverse of a given element in the multiplication table of finite multiplicative group, traverse the element's row until the identity element 1 is encountered, and then go up to the top row. In this way, it can be immediately determined that is the multiplicative inverse of in the multiplicative group formed by all complex fourth roots of unity...
Suppose that (the commuting product of all components of ) is simple and contains a semisimple group involution. Then there is some semisimple group involution such that has a normal subgroup which is either quasisimple or isomorphic to and such that is tightly embedded.
If a Sylow 2-subgroup of lies in a unique maximal 2-local of , then is a "strongly embedded" subgroup of , and is known.
If a map from a group to a group satisfies for all , then is said to be an antihomomorphism.
Let be a free Abelian semigroup, where is the identity element, and let be the Möbius function. Define on the elements of the semigroup analogously to the definition of (as if is the product of distinct primes) by regarding generators of the semigroup as primes. Then the Möbius problem asks if the properties 1. implies for , where has the linear order , 2. for all , imply that for all . Informally, the problem asks "Is the multiplication law on the positive integers uniquely determined by the values of the Möbius function and the property that multiplication respects order?" The problem is known to be true for all if for all (Flath and Zulauf 1995).
If a map from a group to a group satisfies for all , then is said to be an antihomomorphism. Moreover, if and are isomorphic, then is said to be an antiautomorphism.
Let be a maximal torus of a group , then intersects every conjugacy class of , i.e., every element is conjugate to a suitable element in . The theorem is due to É. Cartan.
In an additive group , the additive inverse of an element is the element such that , where 0 is the additive identity of . Usually, the additive inverse of is denoted , as in the additive group of integers , of rationals , of real numbers , and of complex numbers , where The same notation with the minus sign is used to denote the additive inverse of a vector,(1)of a polynomial,(2)of a matrix(3)and, in general, of any element in an abstractvector space or a module.
The identity element of an additive group , usually denoted 0. In the additive group of vectors, the additive identity is the zero vector , in the additive group of polynomials it is the zero polynomial , in the additive group of matrices it is the zero matrix.
Let be a subgroup of . A subset of elements of is called a left transversal of if contains exactly one element of each left coset of .
An abstract group is a group characterized only by its abstract properties and not by the particular representations chosen for elements. For example, there are two distinct abstract groups on four elements: the vierergruppe and the cyclic group C4. A number of particular examples of the abstract group are the point groups (unfortunately, the symbols for the point groups are the same as those for the abstract cyclic groups to which they are isomorphic) and .
For a group , consider a subgroup with elements and an element of not in , then for , 2, ... constitute the left coset of the subgroup with respect to .
An inner automorphism of a group is an automorphism of the form , where is a fixed element of .The automorphism of the symmetric group that maps the permutation to is an inner automorphism, since .
A group action is transitive if it possesses only a single group orbit, i.e., for every pair of elements and , there is a group element such that . In this case, is isomorphic to the left cosets of the isotropy group, . The space , which has a transitive group action, is called a homogeneous space when the group is a Lie group.If, for every two pairs of points and , there is a group element such that , then the group action is called doubly transitive. Similarly, a group action can be triply transitive and, in general, a group action is -transitive if every set of distinct elements has a group element such that .
Given an matrix and a matrix , their Kronecker product , also called their matrix direct product, is an matrix with elements defined by(1)where(2)(3)For example, the matrix direct product of the matrix and the matrix is given by the following matrix,(4)(5)The matrix direct product is implemented in the Wolfram Language as KroneckerProduct[a, b].The matrix direct product gives the matrix of the linear transformation induced by the vector space tensor product of the original vector spaces. More precisely, suppose that(6)and(7)are given by and . Then(8)is determined by(9)
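A quick numerical illustration (a sketch; the specific matrices are arbitrary) using NumPy's built-in `kron`, including the mixed-product property that expresses the connection to the tensor product of linear maps:

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])
B = np.array([[0, 1],
              [1, 0]])

K = np.kron(A, B)          # 4x4 block matrix whose (i, j) block is a_ij * B
print(K)

# Mixed-product property: (A⊗B)(C⊗D) = (AC)⊗(BD), reflecting that the
# Kronecker product represents the tensor product of the linear maps.
C = np.array([[2, 0], [1, 1]])
D = np.array([[1, 1], [0, 1]])
assert np.array_equal(np.kron(A, B) @ np.kron(C, D), np.kron(A @ C, B @ D))
```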
Symbols used to identify irreducible representations of groups: singly degenerate state which is symmetric with respect to rotation about the principal axis, singly degenerate state which is antisymmetric with respect to rotation about the principal axis, doubly degenerate, triply degenerate, (gerade, symmetric) the sign of the wavefunction does not change on inversion through the center of the atom, (ungerade, antisymmetric) the sign of the wavefunction changes on inversion through the center of the atom, (on or ) the sign of the wavefunction does not change upon rotation about the center of the atom, (on or ) the sign of the wavefunction changes upon rotation about the center of the atom, ' = symmetric with respect to a horizontal symmetry plane , " = antisymmetric with respect to a horizontal symmetry plane . ..
A Vandermonde matrix is a type of matrix that arises in polynomial least squares fitting, Lagrange interpolating polynomials (Hoffman and Kunze p. 114), and the reconstruction of a statistical distribution from the distribution's moments (von Mises 1964; Press et al. 1992, p. 83). A Vandermonde matrix of order is of the form(Press et al. 1992; Meyer 2000, p. 185). A Vandermonde matrix is sometimes also called an alternant matrix (Marcus and Minc 1992, p. 15). Note that some authors define the transpose of this matrix as the Vandermonde matrix (Marcus and Minc 1992, p. 15; Golub and Van Loan 1996; Aldrovandi 2001, p. 193).The solution of an Vandermonde matrix equation requires operations. The determinants of Vandermonde matrices have a particularly simple form...
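The "particularly simple form" of the determinant is the product of all pairwise differences of the nodes; this can be checked numerically (a sketch using NumPy's `vander`, with rows in increasing powers):

```python
import numpy as np
from itertools import combinations

x = np.array([1.0, 2.0, 3.0, 4.0])
V = np.vander(x, increasing=True)   # rows [1, x_i, x_i^2, x_i^3]

# det V = product of (x_j - x_i) over all pairs i < j.
expected = np.prod([xj - xi for xi, xj in combinations(x, 2)])
assert np.isclose(np.linalg.det(V), expected)
print(expected)   # 12.0
```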
A complex matrix is said to be Hamiltonian if(1)where is the matrix of the form(2) is the identity matrix, and denotes the conjugate transpose of a matrix . An analogous definition holds in the case of real matrices by requiring that be symmetric, i.e., by replacing by in (1).Note that this criterion specifies precisely how a Hamiltonian matrix must look. Indeed, every Hamiltonian matrix (here assumed to have complex entries) must have the form(3)where satisfy and . This characterization holds for having strictly real entries as well by replacing all instances of the conjugate transpose operator in (1) by the transpose operator instead.
A triangular matrix of the form(1)Written explicitly,(2)An upper triangular matrix with elements f[i,j] above the diagonal could be formed in versions of the Wolfram Language prior to 6 using UpperDiagonalMatrix[f, n], which could be run after first loading LinearAlgebra`MatrixManipulation`.A strictly upper triangular matrix is an upper triangular matrix having 0s along the diagonal as well, i.e., for .
A positive matrix is a real or integer matrix for which each matrix element is a positive number, i.e., for all , .Positive matrices are therefore a subset of nonnegative matrices.Note that a positive matrix is not the same as a positive definite matrix.
Given a set of vectors (points in ), the Gram matrix is the matrix of all possible inner products of , i.e.,where denotes the transpose.The Gram matrix determines the vectors up to isometry.
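The invariance under isometry can be illustrated numerically: applying a rotation to every vector leaves all pairwise inner products, and hence the Gram matrix, unchanged (a sketch; the vectors and rotation are arbitrary):

```python
import numpy as np

# Three vectors in R^3, stored as the columns of X.
X = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])

G = X.T @ X               # Gram matrix: G[i, j] = <v_i, v_j>

# A rotation Q (an isometry) preserves every inner product, so the
# Gram matrix of the rotated vectors is identical.
t = 0.3
Q = np.array([[np.cos(t), -np.sin(t), 0.0],
              [np.sin(t),  np.cos(t), 0.0],
              [0.0,        0.0,       1.0]])
assert np.allclose((Q @ X).T @ (Q @ X), G)
```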
A unimodular matrix is a real square matrix with determinant (Born and Wolf 1980, p. 55; Goldstein 1980, p. 149). More generally, a matrix with elements in the polynomial domain of a field is called unimodular if it has an inverse whose elements are also in . A matrix is therefore unimodular iff its determinant is a unit of (MacDuffee 1943, p. 137).The matrix inverse of a unimodular realmatrix is another unimodular matrix.There are an infinite number of unimodular matrices not containing any 0s or . One parametric family is(1)Specific examples of unimodular matrices having small positive integer entries include(2)(Guy 1989, 1994).The th power of a unimodular matrix(3)is given by(4)where(5)and the are Chebyshev polynomials of the second kind,(6)(Born and Wolf 1980, p. 67)...
A square matrix with nonzero elements only on the diagonal and in slots horizontally or vertically adjacent to the diagonal (i.e., along the subdiagonal and superdiagonal). Computing the determinant of such a matrix requires only (as opposed to ) arithmetic operations (Acton 1990, p. 332). Efficient solution of the matrix equation for , where is a tridiagonal matrix, can be performed in the Wolfram Language using LinearSolve on , represented as a SparseArray.
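The cheap determinant comes from the three-term continuant recurrence, which needs only a constant amount of work per row; a sketch (the function name `tridiag_det` is ours):

```python
import numpy as np

def tridiag_det(a, b, c):
    """Determinant of the tridiagonal matrix with main diagonal a,
    subdiagonal b, and superdiagonal c, via the continuant recurrence
    f_k = a_k f_{k-1} - b_{k-1} c_{k-1} f_{k-2} -- linear-time work."""
    f2, f1 = 1, a[0]
    for k in range(1, len(a)):
        f2, f1 = f1, a[k] * f1 - b[k - 1] * c[k - 1] * f2
    return f1

# Classic example: diagonal 2, off-diagonals -1 (size n) has determinant n + 1.
assert tridiag_det([2, 2, 2, 2], [-1, -1, -1], [-1, -1, -1]) == 5

# Cross-check against a dense determinant.
M = np.diag([2.0] * 4) + np.diag([-1.0] * 3, -1) + np.diag([-1.0] * 3, 1)
assert np.isclose(np.linalg.det(M), 5.0)
```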
An complex matrix is called positive definite if(1)for all nonzero complex vectors , where denotes the conjugate transpose of the vector . In the case of a real matrix , equation (1) reduces to(2)where denotes the transpose. Positive definite matrices are of both theoretical and computational importance in a wide variety of applications. They are used, for example, in optimization algorithms and in the construction of various linear regression models (Johnson 1970).A linear system of equations with a positive definite matrix can be efficiently solved using the so-called Cholesky decomposition. A positive definite matrix has at least one matrix square root. Furthermore, exactly one of its matrix square roots is itself positive definite.A necessary and sufficient condition for a complex matrix to be positive definite is that the Hermitian part(3)where denotes the conjugate transpose, be positive definite. This means that a real matrix..
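Since the Cholesky factorization exists exactly for (Hermitian) positive definite matrices, attempting it doubles as a practical test; `np.linalg.cholesky` raises `LinAlgError` on non-positive-definite input. A sketch with an arbitrary 2x2 example:

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])      # symmetric, both eigenvalues positive

# Cholesky factorization A = L L^T succeeds iff A is positive definite.
L = np.linalg.cholesky(A)
assert np.allclose(L @ L.T, A)

# Spot-check the defining inequality x^T A x > 0 on random nonzero vectors.
rng = np.random.default_rng(0)
for _ in range(100):
    x = rng.standard_normal(2)
    assert x @ A @ x > 0
```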
An upper triangular matrix is defined by(1)Written explicitly,(2)A lower triangular matrix is defined by(3)Written explicitly,(4)
A permutation matrix is a matrix obtained by permuting the rows of an identity matrix according to some permutation of the numbers 1 to . Every row and column therefore contains precisely a single 1 with 0s everywhere else, and every permutation corresponds to a unique permutation matrix. There are therefore permutation matrices of size , where is a factorial.The permutation matrices of order two are given by(1)and of order three are given by(2)A permutation matrix is nonsingular, and the determinant is always . In addition, a permutation matrix satisfies(3)where is a transpose and is the identity matrix.Applied to a matrix , gives with rows interchanged according to the permutation vector , and gives with the columns interchanged according to the given permutation vector.Interpreting the 1s in an permutation matrix as rooks gives an allowable configuration of nonattacking rooks on an chessboard. However, the permutation matrices provide..
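The properties above (there are n! such matrices, each orthogonal with determinant +/-1) can be verified exhaustively for small n; a sketch with an illustrative helper name:

```python
import numpy as np
from itertools import permutations

def permutation_matrix(p):
    """Identity matrix with rows permuted: row i has its single 1 in column p[i]."""
    P = np.zeros((len(p), len(p)), dtype=int)
    P[np.arange(len(p)), p] = 1
    return P

# Every permutation matrix satisfies P P^T = I and det P = +/-1.
for p in permutations(range(3)):
    P = permutation_matrix(p)
    assert np.array_equal(P @ P.T, np.eye(3, dtype=int))
    assert round(abs(np.linalg.det(P))) == 1

print(len(list(permutations(range(3)))))   # 3! = 6 permutation matrices
```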
Given numbers , where , ..., , 0, 1, ..., , a Toeplitz matrix is a matrix which has constant values along negative-sloping diagonals, i.e., a matrix of the form. Matrix equations of the form can be solved with operations. Typical problems modelled by Toeplitz matrices include the numerical solution of certain differential and integral equations (regularization of inverse problems), the computation of splines, time series analysis, signal and image processing, Markov chains, and queuing theory (Bini 1995).
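The constant-diagonal structure means a Toeplitz matrix is fully determined by its first row and first column; a minimal construction sketch (the helper name `toeplitz` is ours, mirroring the convention of SciPy's `scipy.linalg.toeplitz`):

```python
import numpy as np

def toeplitz(c, r):
    """Toeplitz matrix with first column c and first row r; the entry
    at (i, j) depends only on the difference i - j."""
    n, m = len(c), len(r)
    return np.array([[c[i - j] if i >= j else r[j - i] for j in range(m)]
                     for i in range(n)])

T = toeplitz([1, 2, 3], [1, 4, 5])
print(T)
# [[1 4 5]
#  [2 1 4]
#  [3 2 1]]

# Constant along every negative-sloping diagonal:
assert all(T[i, j] == T[i + 1, j + 1] for i in range(2) for j in range(2))
```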
A square matrix such that the matrix power for a positive integer is called a periodic matrix. If is the least such integer, then the matrix is said to have period . If , then and is called idempotent.
Two matrices and are equal to each other, written , if they have the same dimensions and the same elements for , ..., and , ..., . Gradshteyn and Ryzhik (2000) call an matrix "equivalent" to another matrix iff for and any suitable nonsingular and matrices, respectively.
The Pauli matrices, also called the Pauli spin matrices, are complex matrices that arise in Pauli's treatment of spin in quantum mechanics. They are defined by(1)(2)(3)(Condon and Morse 1929, p. 213; Gasiorowicz 1974, p. 232; Goldstein 1980, p. 156; Liboff 1980, p. 453; Arfken 1985, p. 211; Griffiths 1987, p. 115; Landau and Lifschitz 1991, p. 204; Landau 1996, p. 224).The Pauli matrices are implemented in the Wolfram Language as PauliMatrix[n], where , 2, or 3.The Pauli spin matrices satisfy the identities(4)(5)(6)where is the identity matrix, is the Kronecker delta, is the permutation symbol, the leading is the imaginary unit (not the index ), and Einstein summation is used in (6) to sum over the index (Arfken 1985, p. 211; Griffiths 1987, p. 139; Landau and Lifschitz 1991, pp. 204-205).The Pauli matrices plus the identity matrix form a complete set, so any matrix..
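The algebraic identities quoted above (each matrix squares to the identity, distinct matrices anticommute, and products cycle with a factor of i) are easy to confirm numerically:

```python
import numpy as np

# The three Pauli spin matrices.
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2)

# Each squares to the identity matrix.
for s in (s1, s2, s3):
    assert np.allclose(s @ s, I2)

# Distinct Pauli matrices anticommute.
assert np.allclose(s1 @ s2 + s2 @ s1, np.zeros((2, 2)))

# Cyclic products: sigma_1 sigma_2 = i sigma_3, and cyclic permutations.
assert np.allclose(s1 @ s2, 1j * s3)
assert np.allclose(s2 @ s3, 1j * s1)
assert np.allclose(s3 @ s1, 1j * s2)
```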
An matrix is an elementary matrix if it differs from the identity by a single elementary row or column operation.
A submatrix of an matrix (with , ) is a matrix formed by taking a block of the entries of this size from the original matrix.
A strictly upper triangular matrix is an upper triangular matrix having 0s along the diagonal as well as the lower portion, i.e., a matrix such that for . Written explicitly,
A doubly stochastic matrix is a matrix whose entries are nonnegative and whose rows and columns each sum to 1. In other words, both the matrix itself and its transpose are stochastic.The following tables give the number of distinct doubly stochastic matrices (and distinct nonsingular doubly stochastic matrices) over for small .Doubly stochastic matrices over : 2: 1, 2, 16, 512, ...; 3: 1, 3, 81, ...; 4: 1, 4, 256, ....Doubly stochastic nonsingular matrices over : 2: 1, 2, 6, 192, ...; 3: 1, 2, 54, ...; 4: 1, 4, 192, ....Horn (1954) proved that if , where and are complex -vectors, is doubly stochastic, and , , ..., are any complex numbers, then lies in the convex hull of all the points , , where is the set of all permutations of . Sherman (1955) also proved the converse.Birkhoff (1946) proved that any doubly stochastic matrix is in the convex hull of permutation matrices for . There are several proofs and extensions of this result (Dulmage and Halperin 1955, Mendelsohn and Dulmage 1958, Mirsky 1958, Marcus..