
Matrix exponential

The power series that defines the exponential map e^x also defines a map between matrices. In particular, e^A = I + A + A^2/2! + A^3/3! + ... converges for any square matrix A, where I is the identity matrix. The matrix exponential is implemented in the Wolfram Language as MatrixExp[m]. The Kronecker sum satisfies the nice property e^(A ⊕ B) = e^A ⊗ e^B (Horn and Johnson 1994, p. 208). Matrix exponentials are important in the solution of systems of ordinary differential equations (e.g., Bellman 1970). In some cases, it is a simple matter to express the matrix exponential. For example, when D is a diagonal matrix with diagonal entries a_1, ..., a_n, exponentiation can be performed simply by exponentiating each of the diagonal elements, giving e^D = diag(e^(a_1), ..., e^(a_n)). Since most matrices are diagonalizable, it is easiest to diagonalize a matrix before exponentiating it. When A is a nilpotent matrix, the exponential is given by a matrix polynomial because some power of A vanishes...
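The defining series above can be summed directly. A minimal sketch (the function name `matrix_exp_series` and the truncation at 30 terms are illustrative choices, not part of the text):

```python
# Approximating exp(A) by its defining power series
# exp(A) = I + A + A^2/2! + A^3/3! + ...
import numpy as np

def matrix_exp_series(A, terms=30):
    """Sum the exponential power series for a square matrix A."""
    result = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ A / k          # now holds A^k / k!
        result = result + term
    return result

# For a diagonal matrix, exp acts entrywise on the diagonal.
D = np.diag([1.0, 2.0])
assert np.allclose(matrix_exp_series(D), np.diag(np.exp([1.0, 2.0])))

# A nilpotent example: N^2 = 0, so the series terminates and exp(N) = I + N.
N = np.array([[0.0, 1.0], [0.0, 0.0]])
assert np.allclose(matrix_exp_series(N), np.eye(2) + N)
```

The nilpotent case illustrates the closing remark of the text: once some power of A vanishes, the series is a finite matrix polynomial.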

Rational canonical form

Any square matrix A has a canonical form without any need to extend the field of its coefficients. For instance, if the entries of A are rational numbers, then so are the entries of its rational canonical form. (The Jordan canonical form may require complex numbers.) There exists a nonsingular matrix Q such that Q^(-1)AQ = diag(C(psi_1), C(psi_2), ..., C(psi_s)), called the rational canonical form, where C(f) is the companion matrix for the monic polynomial f(lambda) = lambda^n + a_(n-1)lambda^(n-1) + ... + a_0. The polynomials psi_i are called the "invariant factors" of A, and satisfy psi_i | psi_(i+1) for i = 1, ..., s-1 (Hartwig 1996). The polynomial psi_s is the matrix minimal polynomial and the product psi_1 psi_2 ... psi_s is the characteristic polynomial of A. The rational canonical form is unique, and shows the extent to which the minimal polynomial characterizes a matrix. For example, there is only one matrix whose matrix minimal polynomial is , which is in rational canonical form. Given a linear transformation T, the vector space V becomes a module over k[x], the ring of polynomials with coefficients in the field k.
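A companion matrix, the building block of the rational canonical form, can be constructed directly; its eigenvalues are the roots of the given monic polynomial. A sketch (the coefficient convention `coeffs = [c_0, ..., c_{n-1}]` is an assumption for this example):

```python
# Companion matrix of x^n + c_{n-1}x^{n-1} + ... + c_0.
import numpy as np

def companion(coeffs):
    """coeffs = [c_0, ..., c_{n-1}], constant term first."""
    n = len(coeffs)
    C = np.zeros((n, n))
    C[1:, :-1] = np.eye(n - 1)       # subdiagonal of 1s
    C[:, -1] = [-c for c in coeffs]  # last column holds -c_i
    return C

# x^2 - 3x + 2 = (x - 1)(x - 2): the eigenvalues should be 1 and 2.
C = companion([2.0, -3.0])
eigs = sorted(np.linalg.eigvals(C).real)
assert np.allclose(eigs, [1.0, 2.0])
```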

Jordan basis

Given a matrix A, a Jordan basis b_1, ..., b_n satisfies A b_1 = lambda b_1 and A b_j = lambda b_j + b_(j-1) within each Jordan block, and provides the means by which any complex matrix can be written in Jordan canonical form.

Lie group quotient space

The set of left cosets of a subgroup H of a topological group G forms a topological space G/H. Its topology is defined by the quotient topology from the natural projection G -> G/H. Namely, the open sets in G/H are the images of the open sets in G. Moreover, if H is closed, then G/H is a T2-space.

Solvable Lie group

A solvable Lie group is a Lie group G which is connected and whose Lie algebra g is a solvable Lie algebra. That is, the Lie algebra commutator series g^(0) = g, g^(k+1) = [g^(k), g^(k)] eventually vanishes, g^(k) = 0 for some k. Since nilpotent Lie algebras are also solvable, any nilpotent Lie group is a solvable Lie group. The basic example is the group of invertible upper triangular matrices with positive determinant, e.g., the 2×2 matrices [[a, b], [0, c]] such that ac > 0. The Lie algebra g of G is its tangent space at the identity matrix, which is the vector space of all upper triangular matrices, and it is a solvable Lie algebra. Its Lie algebra commutator series is given by g^(0) = g, g^(1) = the strictly upper triangular matrices, and g^(2) = 0. Any real solvable Lie group is diffeomorphic to Euclidean space. For instance, the group of matrices in the example above is diffeomorphic to R^3, via the Lie group exponential map. However, in general, the exponential map in a solvable Lie algebra need not be surjective...

Lie group

A Lie group is a smooth manifold obeying the group properties and satisfying the additional condition that the group operations are differentiable. This definition is related to the fifth of Hilbert's problems, which asks if the assumption of differentiability for functions defining a continuous transformation group can be avoided. The simplest examples of Lie groups are one-dimensional. Under addition, the real line is a Lie group. After picking a specific point to be the identity element, the circle is also a Lie group. Another point on the circle at angle theta from the identity then acts by rotating the circle by the angle theta. In general, a Lie group may have a more complicated group structure, such as the orthogonal group (i.e., the orthogonal matrices), or the general linear group (i.e., the invertible matrices). The Lorentz group is also a Lie group. The tangent space at the identity of a Lie group always has the structure of a Lie algebra, and this..

Nilpotent Lie group

A nilpotent Lie group is a Lie group G which is connected and whose Lie algebra g is a nilpotent Lie algebra. That is, its Lie algebra lower central series g_0 = g, g_(k+1) = [g, g_k] eventually vanishes, g_k = 0 for some k. So a nilpotent Lie group is a special case of a solvable Lie group. The basic example is the group of upper triangular matrices with 1s on their diagonals, e.g., the 3×3 matrices [[1, a, b], [0, 1, c], [0, 0, 1]], which is called the Heisenberg group. Its Lie algebra g, the strictly upper triangular matrices, has lower central series g_0 = g, g_1 = [g, g] = the matrices whose only nonzero entry is in the top right corner, and g_2 = 0. Any real nilpotent Lie group is diffeomorphic to Euclidean space. For instance, the group of matrices in the example above is diffeomorphic to R^3, via the Lie group exponential map. In general, the exponential map of a nilpotent Lie algebra is surjective, in contrast to the more general solvable Lie group.
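The termination of the lower central series for the Heisenberg Lie algebra can be checked numerically; a small sketch with the standard basis of strictly upper triangular 3×3 matrices:

```python
# Brackets in the Lie algebra of the Heisenberg group: [g, g] lands in the
# top-right corner, and bracketing once more gives zero.
import numpy as np

def bracket(A, B):
    return A @ B - B @ A

X = np.array([[0, 1, 0], [0, 0, 0], [0, 0, 0]])
Y = np.array([[0, 0, 0], [0, 0, 1], [0, 0, 0]])
Z = bracket(X, Y)                      # only the (0, 2) entry survives
assert Z[0, 2] == 1 and np.count_nonzero(Z) == 1

# Z is central: bracketing again gives zero, so g_2 = 0.
assert np.count_nonzero(bracket(X, Z)) == 0
assert np.count_nonzero(bracket(Y, Z)) == 0
```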

Critical pair

Let l_1 -> r_1 and l_2 -> r_2 be two rules of a term rewriting system, and suppose these rules have no variables in common. If they do, rename the variables. If t is a subterm of l_1 (or the term itself) such that it is not a variable, and the pair (t, l_2) is unifiable with the most general unifier theta, then r_1 theta and the result of replacing t theta in l_1 theta by r_2 theta are called a critical pair. The fact that all critical pairs of a term rewriting system are joinable, i.e., can be reduced to the same expression, implies that the system is locally confluent. For instance, if and , then and would form a critical pair because they can both be derived from . Note that it is possible for a critical pair to be produced by one rule, used in two different ways. For instance, in the string rewrite "AA" -> "B", the critical pair ("BA", "AB") results from applying the one rule to "AAA" in two different ways...
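The string-rewriting example at the end can be sketched directly: applying the single rule "AA" -> "B" to "AAA" at its two overlapping positions yields the critical pair ("BA", "AB").

```python
# Apply a string rewrite rule lhs -> rhs at a given position.
def apply_rule(s, lhs, rhs, pos):
    assert s[pos:pos + len(lhs)] == lhs
    return s[:pos] + rhs + s[pos + len(lhs):]

# The two overlapping applications of "AA" -> "B" in "AAA":
results = {apply_rule("AAA", "AA", "B", pos) for pos in (0, 1)}
assert results == {"BA", "AB"}
```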

Computational reducibility

Some computations allow shortcuts which can be used to speed them up. Consider the operation of raising a number to a positive integer power. It is possible, for example, to calculate 13^8 by multiplying 13 by itself seven times. However, the shortcut of squaring three times, ((13^2)^2)^2, considerably speeds up the computation. It is often quite difficult to determine whether a given computation can be sped up by means of such a trick. Computations that cannot be sped up are said to exhibit computational irreducibility.
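The two strategies above can be compared directly: seven multiplications versus three squarings, with the same result.

```python
# Naive exponentiation: multiply 13 by itself seven times.
naive = 13
for _ in range(7):
    naive *= 13

# Shortcut: square three times, 13 -> 13^2 -> 13^4 -> 13^8.
fast = 13
for _ in range(3):
    fast *= fast

assert naive == fast == 13**8 == 815730721
```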

Computational irreducibility

While many computations admit shortcuts that allow them to be performed more rapidly, others cannot be sped up. Computations that cannot be sped up by means of any shortcut are called computationally irreducible. The principle of computational irreducibility says that the only way to determine the answer to a computationally irreducible question is to perform, or simulate, the computation. Some irreducible computations can be sped up by performing them on faster hardware, as the principle refers only to computation time. According to Wolfram (2002, p. 741), "if the behavior of a system is obviously simple--and is say either repetitive or nested--then it will always be computationally reducible. But it follows from the principle of computational equivalence that in practically all other cases it will be computationally irreducible." Here, "practically all" refers to cases that arise naturally or from a simple..


Universality

Universality is the property of being able to perform different tasks with the same underlying construction just by being programmed in a different way. Universal systems are effectively capable of emulating any other system. Digital computers are universal, but proving that idealized computational systems are universal can be extremely difficult and technical. Nonetheless, examples have been found in many systems, and any system that can be translated into another system known to be universal must itself be universal. Specific universal Turing machines, universal cellular automata (in both one and two dimensions), and universal cyclic tag systems are known, although the smallest universal example is known only in the case of elementary cellular automata (Wolfram 2002, Cook 2004).

Multiway system

A multiway system is a kind of substitution system in which multiple states are permitted at any stage. This accommodates rule systems in which there is more than one possible way to perform an update. A simple example is a string substitution system. For instance, take the rules and the initial condition . There are two choices for how to proceed. Applying the first rule yields the evolution , while applying the second rule yields the evolution . So at the first step, there is a single state (), at the second step there are two states , and at the third step there is a single state . A path through a multiway system arising from a choice of which substitutions to make is called an evolution. Typically, a multiway system will have a large number of possible evolutions. For example, consider strings of s and s with the rule . Then most strings will have more than one occurrence of the substring , and each occurrence leads down another path in the multiway system...
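One step of a multiway evolution can be sketched as follows: apply every rule at every matching position and collect all successor states. The rule set {"AB" -> "A", "BA" -> "B"} below is illustrative only, not the elided example from the text.

```python
# One multiway step: all rules, all positions, all resulting states.
def multiway_step(states, rules):
    nxt = set()
    for s in states:
        for lhs, rhs in rules:
            for i in range(len(s) - len(lhs) + 1):
                if s[i:i + len(lhs)] == lhs:
                    nxt.add(s[:i] + rhs + s[i + len(lhs):])
    return nxt

rules = [("AB", "A"), ("BA", "B")]
# "ABA" matches "AB" at position 0 and "BA" at position 1,
# so there are two successor states.
assert multiway_step({"ABA"}, rules) == {"AA", "AB"}
```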

Causal network

A causal network is an acyclic digraph arising from an evolution of a substitution system, and representing its history. The illustration above shows a causal network corresponding to the rules (applied in a left-to-right scan) and initial condition (Wolfram 2002, p. 498, fig. a). The figure above shows the procedure for diagrammatically creating a causal network from a mobile automaton (Wolfram 2002, pp. 488-489). In an evolution of a multiway system, each substitution event is a vertex in a causal network. Two events which are related by causal dependence, meaning one occurs just before the other, have an edge between the corresponding vertices in the causal network. More precisely, the edge is a directed edge leading from the past event to the future event. Some causal networks are independent of the choice of evolution, and these are called causally invariant...

Causal invariance

A multiway system that generates causal networks which are all isomorphic as acyclic digraphs is said to exhibit causal invariance, and the causal network itself is also said to be causally invariant. Essentially, causal invariance means that no matter which evolution is chosen for a system, the history is the same in the sense that the same events occur and they have the same causal relationships. The figures above illustrate two nontrivial substitution systems that exhibit the same causal networks independent of the order in which the rules are applied (Wolfram 2002, pp. 500-501). Whenever two rule hypotheses overlap in an evolution, the corresponding system is not causally invariant. Hence, the simplest way to search for causal invariance is to use rules whose hypotheses can never overlap except trivially. An overlap can involve one or two strings. For example, does not have any overlaps. However, can overlap as , and the set of strings..

Nilpotent Lie algebra

A Lie algebra g is nilpotent when its Lie algebra lower central series g_k vanishes for some k. Any nilpotent Lie algebra is also solvable. The basic example of a nilpotent Lie algebra is the vector space of strictly upper triangular matrices, such as the Lie algebra of the Heisenberg group.

Lie algebra root

The roots of a semisimple Lie algebra g are the Lie algebra weights occurring in its adjoint representation. The set of roots forms the root system, and is completely determined by g. It is possible to choose a set of Lie algebra positive roots so that every root alpha is either positive or -alpha is positive. The Lie algebra simple roots are the positive roots which cannot be written as a sum of positive roots. The simple roots can be considered as a linearly independent finite subset of Euclidean space, and they generate the root lattice. For example, the special linear Lie algebra sl_2 of two by two matrices with zero matrix trace has a basis given by the matrices H = [[1, 0], [0, -1]], X = [[0, 1], [0, 0]], and Y = [[0, 0], [1, 0]]. The adjoint representation is given by the brackets [H, X] = 2X and [H, Y] = -2Y, so there are two roots of sl_2 given by alpha(H) = 2 and -alpha(H) = -2. The Lie algebraic rank of sl_2 is one, and it has one positive root...
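The two brackets above can be verified numerically with the standard sl_2 basis:

```python
# Adjoint action of the Cartan element H on X and Y recovers the
# roots 2 and -2 of sl_2.
import numpy as np

H = np.array([[1, 0], [0, -1]])
X = np.array([[0, 1], [0, 0]])
Y = np.array([[0, 0], [1, 0]])

def ad(A, B):
    return A @ B - B @ A

assert np.array_equal(ad(H, X), 2 * X)    # root +2
assert np.array_equal(ad(H, Y), -2 * Y)   # root -2
```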

Solvable Lie algebra

A Lie algebra g is solvable when its Lie algebra commutator series, or derived series, g^(k) vanishes for some k. Any nilpotent Lie algebra is solvable. The basic example is the vector space of upper triangular matrices, because each time two such matrices are bracketed, their nonzero entries move further from the diagonal.

Lie algebra representation

A representation of a Lie algebra g is a linear transformation psi: g -> M(V), where M(V) is the set of all linear transformations of a vector space V. In particular, if V = R^n, then M(V) is the set of n×n square matrices. The map psi is required to be a map of Lie algebras so that psi([A, B]) = psi(A)psi(B) - psi(B)psi(A) for all A, B in g. Note that the expression AB only makes sense as a matrix product in a representation. For example, if A and B are antisymmetric matrices, then the commutator AB - BA is again antisymmetric, but the product AB may not be. The possible irreducible representations of complex Lie algebras are determined by the classification of the semisimple Lie algebras. Any irreducible representation of a complex Lie algebra is the tensor product V_1 ⊗ V_2, where V_1 is an irreducible representation of the quotient of the algebra by its Lie algebra radical, and V_2 is a one-dimensional representation. A Lie algebra may be associated with a Lie group, in which case it reflects the local structure of the Lie group. Whenever a Lie group G has a group representation on V, its tangent..

Cartan matrix

A Cartan matrix is a square integer matrix A = (A_ij) whose elements satisfy the following conditions. 1. A_ij is an integer, one of {-3, -2, -1, 0, 2}. 2. The diagonal entries are all 2. 3. A_ij <= 0 off of the diagonal. 4. A_ij = 0 iff A_ji = 0. 5. There exists a diagonal matrix D such that DAD^(-1) gives a symmetric and positive definite quadratic form. A Cartan matrix can be associated to a semisimple Lie algebra g. It is an r×r square matrix, where r is the Lie algebra rank of g. The Lie algebra simple roots alpha_i are the basis vectors, and A_ij is determined by their inner product, using the Killing form: A_ij = 2(alpha_i, alpha_j)/(alpha_i, alpha_i). In fact, it is more a table of values than a matrix. By reordering the basis vectors, one gets another Cartan matrix, but it is considered equivalent to the original Cartan matrix. The Lie algebra g can be reconstructed, up to isomorphism, by generators e_i, f_i, h_i which satisfy the Chevalley-Serre relations. In fact, g = e ⊕ h ⊕ f, where e, h, and f are the Lie subalgebras generated by the generators of the same letter. For example, is a Cartan matrix. The Lie algebra..

Root system

Let E be a Euclidean space, (b, a) be the dot product, and denote the reflection in the hyperplane orthogonal to a by sigma_a(b) = b - 2(b, a)/(a, a) a. Then a subset R of the Euclidean space E is called a root system in E if: 1. R is finite, spans E, and does not contain 0, 2. If a is in R, the reflection sigma_a leaves R invariant, and 3. If a, b are in R, then 2(b, a)/(a, a) is an integer. The Lie algebra roots of a semisimple Lie algebra are a root system, in a real subspace of the dual vector space to the Cartan subalgebra. In this case, the reflections sigma_a generate the Weyl group, which is the symmetry group of the root system.

Lie algebra lower central series

The lower central series of a Lie algebra g is the sequence of subalgebras recursively defined by g_(k+1) = [g, g_k], with g_0 = g. The sequence of subspaces is always decreasing with respect to inclusion or dimension, and becomes stable when g is finite dimensional. The notation [A, B] means the linear span of elements of the form [a, b], where a is in A and b is in B. When the lower central series ends in the zero subspace, the Lie algebra is called nilpotent. For example, consider the Lie algebra g of strictly upper triangular matrices; each successive bracket pushes the nonzero entries farther from the diagonal, so the series reaches the zero subspace and g is nilpotent. By definition, g_k contains g^(k), where g^(k) is the term in the Lie algebra commutator series, as can be seen by the example above. In contrast to the nilpotent Lie algebras, the semisimple Lie algebras have a constant lower central series. Others are in between, e.g., [gl_n, gl_n] = sl_n, which is semisimple, because the matrix trace satisfies tr(AB - BA) = 0. Here, gl_n is the general linear Lie algebra and sl_n is the special linear Lie algebra...

Lie algebra commutator series

The commutator series of a Lie algebra g, sometimes called the derived series, is the sequence of subalgebras recursively defined by g^(k+1) = [g^(k), g^(k)], with g^(0) = g. The sequence of subspaces is always decreasing with respect to inclusion or dimension, and becomes stable when g is finite dimensional. The notation [A, B] means the linear span of elements of the form [a, b], where a is in A and b is in B. When the commutator series ends in the zero subspace, the Lie algebra is called solvable. For example, consider the Lie algebra g of strictly upper triangular matrices; each successive bracket pushes the nonzero entries farther from the diagonal, so the series reaches the zero subspace and g is solvable. By definition, g^(k) is contained in g_k, where g_k is the term in the Lie algebra lower central series, as can be seen by the example above. In contrast to the solvable Lie algebras, the semisimple Lie algebras have a constant commutator series. Others are in between, e.g., [gl_n, gl_n] = sl_n, which is semisimple, because the matrix trace satisfies tr(AB - BA) = 0. Here, gl_n is the general linear Lie algebra and sl_n is the special linear Lie algebra...

Adjoint representation

A Lie algebra g is a vector space with a Lie bracket [X, Y], satisfying the Jacobi identity. Hence any element X gives a linear transformation ad(X)(Y) = [X, Y], which is called the adjoint representation of g. It is a Lie algebra representation because of the Jacobi identity: ad([X, Y]) = ad(X)ad(Y) - ad(Y)ad(X). A Lie algebra representation is given by matrices. The simplest Lie algebra is gl_n, the set of n×n matrices. Consider the adjoint representation of gl_2, which has four dimensions and so will be a four-dimensional representation. The matrices e_11, e_12, e_21, e_22, with a 1 in the indicated entry and 0s elsewhere, give a basis for gl_2. Using this basis, the adjoint representation is described by four corresponding 4×4 matrices.

Wolfram axiom

A single axiom that is satisfied only by NAND or NOR must be of the form "something equals a single variable," since otherwise constant functions would satisfy the equation. With up to six NANDs and two variables, none of the possible axiom systems of this kind work even up to 3-value operators. But with six NANDs and three variables, 296 of the possible axiom systems work up to 3-value operators, and 100 work up to 4-value operators (Wolfram 2002, p. 809). Of the 25 of these that are not trivially equivalent, it then turns out that only the Wolfram axiom and one other axiom, where · denotes the NAND operator, are equivalent to the axioms of Boolean algebra (Wolfram 2002, pp. 808-811 and 1174). These candidate axioms were identified by S. Wolfram in 2000, who also proved that there were no smaller candidates...


Projectivization

Given a vector space V, its projectivization P(V), sometimes written ℙ(V), is the set of equivalence classes x ~ lambda x for any nonzero lambda in the base field and nonzero x in V. For example, complex projective space CP^n has homogeneous coordinates [x_0, ..., x_n], with not all x_i = 0. The projectivization is a manifold with one less dimension than V. In fact, it is covered by the affine coordinate charts,

Kähler potential

The Kähler potential is a real-valued function f on a Kähler manifold for which the Kähler form omega can be written as omega = i ∂∂̄f. Here, the operators ∂ and ∂̄ are called the del and del bar operator, respectively. For example, in C^n, the function f = |z|^2 is a Kähler potential for the standard Kähler form, as a direct computation of i ∂∂̄|z|^2 shows.


Commutator

Let A, B, ... be operators. Then the commutator of A and B is defined as [A, B] = AB - BA. Identities include [A, A] = 0, [A, B] = -[B, A], [A + B, C] = [A, C] + [B, C], [A, BC] = [A, B]C + B[A, C], and [AB, C] = A[B, C] + [A, C]B. Let A and B be tensors; then their commutator is defined analogously. There is a related notion of commutator in the theory of groups. The commutator of two group elements A and B is ABA^(-1)B^(-1), and two elements A and B are said to commute when their commutator is the identity element. When the group is a Lie group, the Lie bracket in its Lie algebra is an infinitesimal version of the group commutator. For instance, let A and B be square matrices, and let a(s) and b(t) be paths in the Lie group of nonsingular matrices which satisfy a(0) = b(0) = I, a'(0) = A, and b'(0) = B; then the mixed second derivative of the group commutator a(s)b(t)a(s)^(-1)b(t)^(-1) at s = t = 0 recovers [A, B].
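The antisymmetry and Jacobi identities for the matrix commutator can be spot-checked numerically on random matrices:

```python
# Checking [A, B] = -[B, A] and the Jacobi identity
# [A, [B, C]] + [B, [C, A]] + [C, [A, B]] = 0 on random 3x3 matrices.
import numpy as np

rng = np.random.default_rng(0)
A, B, C = (rng.standard_normal((3, 3)) for _ in range(3))

def comm(X, Y):
    return X @ Y - Y @ X

assert np.allclose(comm(A, B), -comm(B, A))
jacobi = comm(A, comm(B, C)) + comm(B, comm(C, A)) + comm(C, comm(A, B))
assert np.allclose(jacobi, np.zeros((3, 3)))
```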

Direct sum

Direct sums are defined for a number of different sorts of mathematical objects, including subspaces, matrices, modules, and groups. The matrix direct sum of A and B is the block diagonal matrix A ⊕ B = diag(A, B) (Ayres 1962, pp. 13-14). The direct sum of two subspaces U and V is the sum of subspaces in which U and V have only the zero vector in common (Rosen 2000, p. 357). The significant property of the direct sum is that it is the coproduct in the category of modules (i.e., a module direct sum). This general definition gives as a consequence the definition of the direct sum of Abelian groups A and B (since they are Z-modules, i.e., modules over the integers) and the direct sum of vector spaces (since they are modules over a field). Note that the direct sum of Abelian groups is the same as the group direct product, but that the term direct sum is not used for groups which are non-Abelian. Note that direct products and direct sums differ for infinite indices. An element of the direct sum is..
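The matrix direct sum places its summands as diagonal blocks; a minimal sketch (the helper name `direct_sum` is illustrative):

```python
# A ⊕ B as a block diagonal matrix with A and B on the diagonal.
import numpy as np

def direct_sum(A, B):
    out = np.zeros((A.shape[0] + B.shape[0], A.shape[1] + B.shape[1]))
    out[:A.shape[0], :A.shape[1]] = A
    out[A.shape[0]:, A.shape[1]:] = B
    return out

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[5.0]])
S = direct_sum(A, B)
assert S.shape == (3, 3)
assert np.allclose(S, [[1, 2, 0], [3, 4, 0], [0, 0, 5]])
```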

Linear transformation

A linear transformation between two vector spaces V and W is a map T: V -> W such that the following hold: 1. T(v_1 + v_2) = T(v_1) + T(v_2) for any vectors v_1 and v_2 in V, and 2. T(alpha v) = alpha T(v) for any scalar alpha. A linear transformation may or may not be injective or surjective. When V and W have the same dimension, it is possible for T to be invertible, meaning there exists a T^(-1) such that TT^(-1) = T^(-1)T = I. It is always the case that T(0) = 0. Also, a linear transformation always maps lines to lines (or to zero). The main example of a linear transformation is given by matrix multiplication. Given an n×m matrix A, define T(x) = Ax, where x is written as a column vector (with m coordinates); then T is a linear transformation from R^m to R^n. When V and W are finite dimensional, a general linear transformation can be written as a matrix multiplication only after specifying a vector basis for V and W. When V and W have an inner product, and their vector bases v_i and w_j are orthonormal, it is easy to write the corresponding matrix A. In particular, A_ij = <w_i, T(v_j)>. Note that when using..

Vector basis

A vector basis of a vector space V is defined as a subset of vectors v_1, ..., v_n in V that are linearly independent and span V. Consequently, if (v_1, ..., v_n) is a list of vectors in V, then these vectors form a vector basis if and only if every v in V can be uniquely written as v = a_1 v_1 + a_2 v_2 + ... + a_n v_n, where a_1, ..., a_n are elements of the base field. When the base field is the reals so that each a_i is in R, the resulting basis vectors are n-tuples of reals that span n-dimensional Euclidean space R^n. Other possible base fields include the complexes C, as well as various fields of positive characteristic considered in algebra, number theory, and algebraic geometry. A vector space V has many different vector bases, but there are always the same number of basis vectors in each of them. The number of basis vectors in V is called the dimension of V. Every spanning list in a vector space can be reduced to a basis of the vector space. The simplest example of a vector basis is the standard basis in Euclidean space R^n, in which the basis vectors lie along each coordinate..

Orthogonal transformation

An orthogonal transformation is a linear transformation T which preserves a symmetric inner product. In particular, an orthogonal transformation (technically, an orthonormal transformation) preserves lengths of vectors and angles between vectors: <Tv, Tw> = <v, w>. In addition, an orthogonal transformation is either a rigid rotation or an improper rotation (a rotation followed by a flip). (Flipping and then rotating can be realized by first rotating in the reverse direction and then flipping.) Orthogonal transformations correspond to and may be represented using orthogonal matrices. The set of orthonormal transformations forms the orthogonal group, and an orthonormal transformation can be realized by an orthogonal matrix. Any linear transformation in three dimensions x_i' = a_ij x_j satisfying the orthogonality condition a_ij a_ik = delta_jk, where Einstein summation has been used and delta_jk is the Kronecker delta, is an orthogonal transformation. If A is an orthogonal..


Unipotent

A square matrix A is said to be unipotent if A - I, where I is an identity matrix, is a nilpotent matrix (defined by the property that (A - I)^k is the zero matrix for some positive integer matrix power k). The corresponding identity (A - I)^k = 0 for some integer k allows this definition to be generalized to other types of algebraic systems. An example of a unipotent matrix is a square matrix whose entries below the diagonal are zero and whose entries on the diagonal are one; the 2×2 case [[1, b], [0, 1]] is an explicit example. One feature of a unipotent matrix is that its matrix powers A^n have entries which grow like a polynomial in n. A semisimple element x of a group G is unipotent if F*(C_G(x)) is a p-group, where F* is the generalized fitting subgroup.
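The polynomial growth of unipotent powers can be seen directly in the 2×2 case:

```python
# Powers of a unipotent matrix grow polynomially in the exponent n,
# in contrast to the exponential growth typical of general matrices.
import numpy as np

U = np.array([[1, 1], [0, 1]])        # unipotent: (U - I)^2 = 0
N = U - np.eye(2)
assert np.allclose(N @ N, 0)

# U^n has entries that are polynomial (here linear) in n.
for n in (1, 5, 100):
    assert np.allclose(np.linalg.matrix_power(U, n), [[1, n], [0, 1]])
```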


Socle

The socle of a group G is the subgroup generated by its minimal normal subgroups. For example, the symmetric group S_4 has two nontrivial normal subgroups: the alternating group A_4 and the Klein four-group V. But A_4 contains V, so V is the only minimal normal subgroup, and the socle of S_4 is V.

Group orbit

In celestial mechanics, the fixed path a planet traces as it moves around the sun is called an orbit. When a group G acts on a set X (this process is called a group action), it permutes the elements of X. Any particular element x moves around in a fixed path which is called its orbit. In the notation of set theory, the group orbit of a group element x can be defined as G(x) = {gx : g in G}, where g runs over all elements of the group G. For example, for the permutation group G = {(1), (12), (34), (12)(34)}, the orbits of 1 and 2 are {1, 2} and the orbits of 3 and 4 are {3, 4}. A group fixed point is an orbit consisting of a single element, i.e., an element that is sent to itself under all elements of the group. The stabilizer of an element x consists of all the permutations of G that produce group fixed points in x, i.e., that send x to itself. The stabilizers of 1 and 2 under G are therefore {(1), (34)}, and the stabilizers of 3 and 4 are {(1), (12)}. Note that if y is in G(x), then G(y) = G(x), because y is in G(x) iff x is in G(y). Consequently, the orbits partition X and, given a permutation group on a set S, the orbit of an element is..
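Orbits can be computed by closing each point under all group elements. A sketch, with permutations encoded as dicts (the specific four-element group below is illustrative):

```python
# Compute the orbit of a point under a permutation group.
def orbit(x, group):
    seen = {x}
    frontier = [x]
    while frontier:
        y = frontier.pop()
        for g in group:
            z = g[y]
            if z not in seen:
                seen.add(z)
                frontier.append(z)
    return seen

# Permutations of {0, 1, 2, 3}: identity, (0 1), (2 3), (0 1)(2 3).
identity = {i: i for i in range(4)}
swap01 = {0: 1, 1: 0, 2: 2, 3: 3}
swap23 = {0: 0, 1: 1, 2: 3, 3: 2}
both = {0: 1, 1: 0, 2: 3, 3: 2}
G = [identity, swap01, swap23, both]

assert orbit(0, G) == {0, 1}     # orbit of the first pair
assert orbit(2, G) == {2, 3}     # orbit of the second pair
```

The two orbits partition the set {0, 1, 2, 3}, matching the partition property stated above.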

Group fixed point

The set of points of X fixed by a group action are called the group's set of fixed points, defined by {x in X : gx = x for all g in G}. In some cases, there may not be a group action, but a single operator T. Then the fixed set {x : Tx = x} still makes sense even when T is not invertible (as it would be in a group action).

Transitive group action

A group action is transitive if it possesses only a single group orbit, i.e., for every pair of elements x and y, there is a group element g such that gx = y. In this case, X is isomorphic to the left cosets of the isotropy group, X ≅ G/G_x. The space X, which has a transitive group action, is called a homogeneous space when the group G is a Lie group. If, for every two pairs of points (x_1, x_2) and (y_1, y_2), there is a group element g such that gx_i = y_i, then the group action is called doubly transitive. Similarly, a group action can be triply transitive and, in general, a group action is k-transitive if every set of k distinct elements can be carried to every other such set by a single group element.

Homotopy group

The homotopy groups generalize the fundamental group to maps from higher dimensional spheres, instead of from the circle. The nth homotopy group of a topological space X is the set of homotopy classes of maps from the n-sphere to X, with a group structure, and is denoted pi_n(X). The fundamental group is pi_1(X), and, as in the case of pi_1, the maps must pass through a basepoint p. For n >= 2, the homotopy group is an Abelian group. The group operations are not as simple as those for the fundamental group. Consider two maps f: S^n -> X and g: S^n -> X, which pass through p. The product f*g is given by mapping the equator to the basepoint p. Then the northern hemisphere is mapped to the sphere by collapsing the equator to a point, and then it is mapped to X by f. The southern hemisphere is similarly mapped to X by g. The diagram above shows the product of two spheres. The identity element is represented by the constant map e(s) = p. The choice of direction of a loop in the fundamental group corresponds to a manifold orientation of S^n in a homotopy..

Homotopy class

Given two topological spaces M and N, place an equivalence relationship on the continuous maps f: M -> N using homotopies, and write f_1 ~ f_2 if f_1 is homotopic to f_2. Roughly speaking, two maps are homotopic if one can be deformed into the other. This equivalence relation is transitive because these homotopy deformations can be composed (i.e., one can follow the other). A simple example is the case of continuous maps from one circle to another circle. Consider the number of ways an infinitely stretchable string can be tied around a tree trunk. The string forms the first circle, and the tree trunk's surface forms the second circle. For any integer n, the string can be wrapped around the tree n times, for positive n clockwise, and negative n counterclockwise. Each integer n corresponds to a homotopy class of maps from S^1 to S^1. After the string is wrapped around the tree n times, it could be deformed a little bit to get another continuous map, but it would still be in the same homotopy class,..


Homotopic

Two mathematical objects are said to be homotopic if one can be continuously deformed into the other. For example, the real line is homotopic to a single point, as is any tree. However, the circle is not contractible, but is homotopic to a solid torus. The basic version of homotopy is between maps. Two maps f_0: X -> Y and f_1: X -> Y are homotopic if there is a continuous map F: X × [0, 1] -> Y such that F(x, 0) = f_0(x) and F(x, 1) = f_1(x). Whether or not two subsets are homotopic depends on the ambient space. For example, in the plane, the unit circle is homotopic to a point, but not in the punctured plane R^2 - {0}. The puncture can be thought of as an obstacle. However, there is a way to compare two spaces via homotopy without ambient spaces. Two spaces X and Y are homotopy equivalent if there are maps f: X -> Y and g: Y -> X such that the composition f∘g is homotopic to the identity map of Y and g∘f is homotopic to the identity map of X. For example, the circle is not homotopy equivalent to a point, for then the constant map would be homotopic to the identity map of a circle, which is impossible..

Regular polygon

A regular polygon is an n-sided polygon in which the sides are all the same length and are symmetrically placed about a common center (i.e., the polygon is both equiangular and equilateral). Only certain regular polygons are "constructible" using the classical Greek tools of the compass and straightedge. The terms equilateral triangle and square refer to the regular 3- and 4-polygons, respectively. The words for polygons with more sides (e.g., pentagon, hexagon, heptagon, etc.) can refer to either regular or non-regular polygons, although the terms generally refer to regular polygons in the absence of specific wording. A regular n-gon is implemented in the Wolfram Language as RegularPolygon[n], or more generally as RegularPolygon[r, n], RegularPolygon[x, y, rspec, n], etc. The sum of perpendiculars from any interior point to the sides of a regular polygon of n sides is n times the apothem. Let s be the side length, r be the inradius, and R the circumradius..
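The inradius and circumradius of a regular n-gon with side s follow from elementary trigonometry, r = s/(2 tan(pi/n)) and R = s/(2 sin(pi/n)); a quick sketch:

```python
# Inradius r and circumradius R of a regular n-gon with side length s.
import math

def polygon_radii(n, s):
    r = s / (2 * math.tan(math.pi / n))
    R = s / (2 * math.sin(math.pi / n))
    return r, R

# For a square (n = 4) with side 2: r = 1 and R = sqrt(2).
r, R = polygon_radii(4, 2.0)
assert math.isclose(r, 1.0)
assert math.isclose(R, math.sqrt(2))
```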

Compact support

A function has compact support if it is zero outside of a compact set. Alternatively, one can say that a function has compact support if its support is a compact set. For example, the function in its entire domain (i.e., ) does not have compact support, while any bump function does have compact support.

Similar matrices

Two square matrices A and B that are related by B = P^(-1)AP, where P is a square nonsingular matrix, are said to be similar. A transformation of the form P^(-1)AP is called a similarity transformation, or conjugation by P. For example, explicit pairs of matrices can be exhibited that are similar under conjugation by a suitable matrix P. Similar matrices represent the same linear transformation after a change of basis (for the domain and range simultaneously). Recall that a matrix corresponds to a linear transformation, and a linear transformation corresponds to a matrix after choosing a basis (v_1, ..., v_n): T(v_j) = sum_i a_ij v_i. Changing the basis changes the coefficients of the matrix. If A uses the standard basis vectors, then P^(-1)AP is the matrix using the basis vectors P v_i.
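Since similar matrices represent the same linear map in different bases, they share eigenvalues; a numeric sketch (the matrices A and P below are illustrative):

```python
# Similar matrices B = P^{-1} A P share the eigenvalues of A.
import numpy as np

A = np.array([[2.0, 1.0], [0.0, 3.0]])
P = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.linalg.inv(P) @ A @ P

assert np.allclose(np.sort(np.linalg.eigvals(A).real),
                   np.sort(np.linalg.eigvals(B).real))
```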

Matrix signature

A real, nondegenerate symmetric matrix A, and its corresponding symmetric bilinear form Q(v, w) = v^T A w, has signature (p, q) if there is a nondegenerate matrix C such that C^T A C is a diagonal matrix with p 1s and q -1s. In this case, Q is a diagonal quadratic form. For example, the diagonal matrix diag(1, 1, 1, -1) gives a symmetric bilinear form called the Lorentzian inner product, which has signature (3, 1).
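By Sylvester's law of inertia, the signature can be read off from the signs of the eigenvalues of the symmetric matrix; a sketch:

```python
# Signature of a nondegenerate symmetric matrix from eigenvalue signs.
import numpy as np

def signature(A):
    eigs = np.linalg.eigvalsh(A)          # real, since A is symmetric
    return int(np.sum(eigs > 0)), int(np.sum(eigs < 0))

# The Lorentzian diagonal form diag(1, 1, 1, -1):
A = np.diag([1.0, 1.0, 1.0, -1.0])
assert signature(A) == (3, 1)
```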


Hypersurface

A generalization of an ordinary two-dimensional surface embedded in three-dimensional space to an (n-1)-dimensional surface embedded in n-dimensional space. A hypersurface is therefore the set of solutions to a single equation F(x_1, ..., x_n) = 0, and so it has codimension one. For instance, the (n-1)-dimensional hypersphere corresponds to the equation x_1^2 + ... + x_n^2 = 1.

Orthogonal projection

A projection of a figure by parallel rays. In such a projection, tangencies are preserved. Parallel lines project to parallel lines. The ratio of lengths of parallel segments is preserved, as is the ratio of areas.Any triangle can be positioned such that its shadow under an orthogonal projection is equilateral. Also, the triangle medians of a triangle project to the triangle medians of the image triangle. Ellipses project to ellipses, and any ellipse can be projected to form a circle. The center of an ellipse projects to the center of the image ellipse. The triangle centroid of a triangle projects to the triangle centroid of its image. Under an orthogonal transformation, the Steiner inellipse can be transformed into a circle inscribed in an equilateral triangle.Spheroids project to ellipses (or circles in the degenerate case).In an orthogonal projection, any vector can be written , soand the projection matrix is a symmetric matrix iff the vector..
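The projection matrix mentioned in the final paragraph can be illustrated for projection onto a line: P = v v^T / (v . v) is symmetric and idempotent. A NumPy sketch (the vector v is an arbitrary illustrative choice):

```python
import numpy as np

# Orthogonal projection onto the line spanned by v: the matrix
# P = v v^T / (v . v) is symmetric and idempotent (P @ P == P).
v = np.array([1.0, 2.0, 2.0])        # arbitrary illustrative vector
P = np.outer(v, v) / np.dot(v, v)

symmetric = np.allclose(P, P.T)
idempotent = np.allclose(P @ P, P)
```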

Complex residue

The constant in the Laurent series(1)of about a point is called the residue of . If is analytic at , its residue is zero, but the converse is not always true (for example, has a residue of 0 at but is not analytic at ). The residue of a function at a point may be denoted . The residue is implemented in the Wolfram Language as Residue[f, z, z0].Two basic examples of residues are given by and for .The residue of a function around a point is also defined by(2)where is a counterclockwise simple closed contour, small enough to avoid any other poles of . In fact, any counterclockwise path with contour winding number 1 which does not contain any other poles gives the same result by the Cauchy integral formula. The above diagram shows a suitable contour with which to define the residue of a function, where the poles are indicated as black dots.It is more natural to consider the residue of a meromorphic one-form because it is independent of the choice of coordinate. On a Riemann..
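The contour-integral definition (2) can be checked numerically by sampling the integral around a small circle. The function below, with a simple pole of residue -1/2 at the origin, is an illustrative choice:

```python
import cmath

# Numerically check res = (1/(2 pi i)) * contour integral of f(z) dz over a
# small counterclockwise circle about the pole.
# f has a simple pole at z = 0 with residue lim z*f(z) = 1/(0 - 2) = -1/2.
def f(z):
    return 1.0 / (z * (z - 2.0))

N, r = 2000, 0.5                      # sample count and contour radius
total = 0.0 + 0.0j
for k in range(N):
    t = 2 * cmath.pi * k / N
    z = r * cmath.exp(1j * t)         # point on the circle about 0
    dz = 1j * z * (2 * cmath.pi / N)  # z'(t) dt
    total += f(z) * dz
residue = total / (2j * cmath.pi)
```

The contour radius 0.5 keeps the other pole at z = 2 outside the circle, as the definition requires.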

Del bar operator

The operator is defined on a complex manifold, and is called the 'del bar operator.' The exterior derivative takes a function and yields a one-form. It decomposes as(1)as complex one-forms decompose into complex forms of type(2)where denotes the direct sum. More concretely, in coordinates ,(3)and(4)These operators extend naturally to forms of higher degree. In general, if is a -complex form, then is a -form and is a -form. The equation expresses the condition of being a holomorphic function. More generally, a -complex form is called holomorphic if , in which case its coefficients, as written in a coordinate chart, are holomorphic functions.The del bar operator is also well-defined on bundle sections of a holomorphic vector bundle. The reason is that a change in coordinates or trivializations is holomorphic...

Complex form

The differential forms on decompose into forms of type , sometimes called -forms. For example, on , the exterior algebra decomposes into four types:(1)(2)where , , and denotes the direct sum. In general, a -form is the sum of terms with s and s. A -form decomposes into a sum of -forms, where .For example, the 2-forms on decompose as(3)(4)The decomposition into forms of type is preserved by holomorphic functions. More precisely, when is holomorphic and is a -form on , then the pullback is a -form on .Recall that the exterior algebra is generated by the one-forms, by wedge product and addition. Then the forms of type are generated by(5)The subspace of the complex one-forms can be identified as the -eigenspace of the almost complex structure , which satisfies . Similarly, the -eigenspace is the subspace . In fact, the decomposition of determines the almost complex structure on .More abstractly, the forms of type are a group representation of , where..

Kähler identities

A collection of identities which hold on a Kähler manifold, also called the Hodge identities. Let be a Kähler form, be the exterior derivative, where is the del bar operator, be the commutator of two differential operators, and denote the formal adjoint of . The following operators also act on differential forms on a Kähler manifold:(1)(2)(3)where is the almost complex structure, , and denotes the interior product. Then(4)(5)(6)(7)(8)(9)In addition,(10)(11)(12)(13)These identities have many implications. For instance, the two operators(14)and(15)(called Laplacians because they are elliptic operators) satisfy . At this point, assume that is also a compact manifold. Along with Hodge's theorem, this equality of Laplacians proves the Hodge decomposition. The operators and commute with these Laplacians. By Hodge's theorem, they act on cohomology, which is represented by harmonic forms. Moreover, defining(16)where..

Block matrix

A block matrix is a matrix that is defined using smaller matrices, called blocks. For example,(1)where , , , and are themselves matrices, is a block matrix. In the specific example(2)(3)(4)(5)therefore, it is the matrix(6)Block matrices can be created in the Wolfram Language using ArrayFlatten.When two block matrices have the same shape and their diagonal blocks are square matrices, then they can be multiplied blockwise, just as in ordinary matrix multiplication. For example,(7)Note that the usual rules of matrix multiplication hold even when the block matrices are not square (assuming that the block sizes correspond). Of course, matrix multiplication is in general not commutative, so in these block matrix multiplications, it is important to keep the correct order of the multiplications.When the blocks are square matrices, the set of invertible block matrices is a group isomorphic to the general linear group , where is the ring of square matrices..
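A sketch of blockwise assembly and multiplication using NumPy's block function, mirroring ArrayFlatten (the blocks below are illustrative):

```python
import numpy as np

# Assemble a 4x4 matrix from four 2x2 blocks.
A = np.array([[1, 2], [3, 4]])
B = np.zeros((2, 2), dtype=int)
C = np.eye(2, dtype=int)
D = np.array([[5, 6], [7, 8]])

M = np.block([[A, B],
              [C, D]])

# Blockwise multiplication: the top-left block of M @ M is A @ A + B @ C,
# exactly as in ordinary matrix multiplication applied to the blocks.
blockwise_ok = np.array_equal((M @ M)[:2, :2], A @ A + B @ C)
```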

Lebesgue decomposition

Any complex measure decomposes into an absolutely continuous measure and a singular measure , with respect to some positive measure . This is the Lebesgue decomposition,

Polar representation

A polar representation of a complex measure is analogous to the polar representation of a complex number as , where ,(1)The analog of absolute value is the total variation , and is replaced by a measurable real-valued function . Or sometimes one writes with instead of .More precisely, for any measurable set ,(2)where the integral is the Lebesgue integral. It is natural to extend the definition of the Lebesgue integral to complex measures using the polar representation(3)

Jordan measure decomposition

If is a real measure (i.e., a measure that takes on real values), then one can decompose it according to where it is positive and negative. The positive variation is defined by(1)where is the total variation. Similarly, the negative variation is(2)Then the Jordan decomposition of is defined as(3)When is already a positive measure, then . More generally, if is absolutely continuous, i.e.,(4)then so are and . The positive and negative variations can also be written as(5)and(6)where is the decomposition of into its positive and negative parts.The Jordan decomposition has a so-called minimum property. In particular, given any positive measure , the measure has another decomposition(7)The Jordan decomposition is minimal with respect to these changes. One way to say this is that any decomposition must have and ..

Complex measure

A measure which takes values in the complex numbers. The set of complex measures on a measure space forms a vector space. Note that this is not the case for the more common positive measures. Also, the space of finite measures () has a norm given by the total variation measure , which makes it a Banach space.Using the polar representation of , it is possible to define the Lebesgue integral using a complex measure,Sometimes, the term "complex measure" is used to indicate an arbitrary measure. The definitions for measure can be extended to measures which take values in any vector space. For instance in spectral theory, measures on , which take values in the bounded linear maps from a Hilbert space to itself, represent the operator spectrum of an operator.

Compact manifold

A compact manifold is a manifold that is compact as a topological space. Examples are the circle (the only one-dimensional compact manifold) and the -dimensional sphere and torus. Compact manifolds in two dimensions are completely classified by their orientation and the number of holes (genus). It should be noted that the term "compact manifold" often implies "manifold without boundary," which is the sense in which it is used here. When there is need for a separate term, a compact boundaryless manifold is called a closed manifold.For many problems in topology and geometry, it is convenient to study compact manifolds because of their "nice" behavior. Among the properties making compact manifolds "nice" are the fact that they can be covered by finitely many coordinate charts, and that any continuous real-valued function is bounded on a compact manifold.For any positive integer , a distinct nonorientable..

Manifold tangent vector

Roughly speaking, a tangent vector is an infinitesimal displacement at a specific point on a manifold. The set of tangent vectors at a point forms a vector space called the tangent space at , and the collection of tangent spaces on a manifold forms a vector bundle called the tangent bundle.A tangent vector at a point on a manifold is a tangent vector at in a coordinate chart. A change in coordinates near causes an invertible linear map of the tangent vector's representations in the coordinates. This transformation is given by the Jacobian, which must be nonsingular in a change of coordinates. Hence the tangent vectors at are well-defined. A vector field is an assignment of a tangent vector for each point. The collection of tangent vectors forms the tangent bundle, and a vector field is a section of this bundle.Tangent vectors are used to do calculus on manifolds. Since manifolds are locally Euclidean, the usual notions of differentiation and integration..

Submersion

A submersion is a smooth map whose differential, or Jacobian, is surjective at every point of the domain. The basic example of a submersion is the canonical submersion of onto when ,In fact, if is a submersion, then it is possible to find coordinates around in and coordinates around in such that is the canonical submersion written in these coordinates. For example, consider the submersion of onto the circle , given by .

Manifold orientation

An orientation on an -dimensional manifold is given by a nowhere vanishing differential n-form. Alternatively, it is a bundle orientation for the tangent bundle. If an orientation exists on , then is called orientable.Not all manifolds are orientable, as exemplified by the Möbius strip and the Klein bottle, illustrated above.However, an -dimensional submanifold of is orientable iff it has a unit normal vector field. The choice of unit determines the orientation of the submanifold. For example, the sphere is orientable.Some types of manifolds are always orientable. For instance, complex manifolds, including varieties, and also symplectic manifolds are orientable. Also, any unoriented manifold has a double cover which is oriented.A map between oriented manifolds of the same dimension is called orientation preserving if the volume form on pulls back to a positive volume form on . Equivalently, the differential maps an oriented..

Simple function

A simple function is a finite sum , where the functions are characteristic functions on a set . Another description of a simple function is a function that takes on finitely many values in its range.The collection of simple functions is closed under addition and multiplication. In addition, it is easy to integrate a simple function. By approximating a given function by simple functions, the Lebesgue integral of can be calculated.

Form integration

A differential k-form can be integrated on an -dimensional manifold. The basic example is an -form in the open unit ball in . Since is a top-dimensional form, it can be written and so(1)where the integral is the Lebesgue integral.On a manifold covered by coordinate charts , there is a partition of unity such that 1. has support in and 2. . Then(2)where the right-hand side is well-defined because each integration takes place in a coordinate chart. The integral of the -form is well-defined because, under a change of coordinates , the integral transforms according to the determinant of the Jacobian, while an -form pulls back by the determinant of the Jacobian. Hence,(3)is the same integral in either coordinate chart.For example, it is possible to integrate the 2-form(4)on the sphere . Because a point has measure zero, it is enough to integrate on , which can be covered by stereographic projection . Since(5)the pullback map of is(6)the integral of on..

Free abelian group

A free Abelian group is a group with a subset which generates the group with the only relation being . That is, it has no group torsion. All such groups are a direct product of the integers , and have rank given by the number of copies of . For example, is a free Abelian group of rank 2. A minimal subset , ..., that generates a free Abelian group is called a basis, and gives asA free Abelian group is an Abelian group, but is not a free group (except when it has rank one, i.e., ). Free Abelian groups are the free modules in the case when the ring is the ring of integers .

Transitive group

Transitivity is a result of the symmetry in the group. A group is called transitive if its group action (understood to be a subgroup of a permutation group on a set ) is transitive. In other words, if the group orbit is equal to the entire set for some element , then is transitive.A group is called k-transitive if there exists a set of elements on which the group acts faithfully and -transitively. It should be noted that transitivity computed from a particular permutation representation may not be the (maximal) transitivity of the abstract group. For example, the Higman-Sims group has both a 2-transitive representation of degree 176, and a 1-transitive representation of degree 100. Note also that while -transitivity of groups is related to -transitivity of graphs, they are not identical concepts.The symmetric group is -transitive and the alternating group is -transitive. However, multiply transitive finite groups are rare. In fact, they have..

Commutator subgroup

The commutator subgroup (also called a derived group) of a group is the subgroup generated by the commutators of its elements, and is commonly denoted or . It is the unique smallest normal subgroup of such that is Abelian (Rose 1994, p. 59). It can range from the identity subgroup (in the case of an Abelian group) to the whole group. Note that not every element of the commutator subgroup is necessarily a commutator.For instance, in the quaternion group with eight elements, the commutators form the subgroup . The commutator subgroup of the symmetric group is the alternating group. The commutator subgroup of the alternating group is the whole group . When , is a simple group and its only nontrivial normal subgroup is itself. Since is a nontrivial normal subgroup, it must be .The first homology of a group is the Abelianization..

Isotropy group

Some elements of a group acting on a space may fix a point . These group elements form a subgroup called the isotropy group, defined byFor example, consider the group of all rotations of a sphere . Let be the north pole . Then a rotation which does not change must turn about the usual axis, leaving the north pole and the south pole fixed. These rotations correspond to the action of the circle group on the equator.When two points and are on the same group orbit, say , then the isotropy groups are conjugate subgroups. More precisely, . In fact, any subgroup conjugate to occurs as an isotropy group to some point on the same orbit as .

Hilbert symbol

For any two nonzero p-adic numbers and , the Hilbert symbol is defined as(1)If the -adic field is not clear, it is said to be the Hilbert symbol of and relative to . The field can also be the reals (). The Hilbert symbol satisfies the following formulas: 1. . 2. for any . 3. . 4. . 5. . 6. . The Hilbert symbol depends only on the values of and modulo squares. So the symbol is a map .Hilbert showed that for any two nonzero rational numbers and , 1. for almost every prime . 2. where ranges over every prime, including corresponding to the reals.

Local class field theory

The study of number fields by embedding them in a local field is called local class field theory. Information about an equation in a local field may give information about the equation in a global field, such as the rational numbers or a number field (e.g., the Hasse principle).Local class field theory is termed "local" because the local fields are localized at a prime ideal in the ring of algebraic integers. The methods of using class fields have developed over the years, from the Legendre symbol, to the group characters of Abelian extensions of a number field, and are applied to local fields.

Generalized Vandermonde matrix

A generalized Vandermonde matrix of two sequences and where is an increasing sequence of positive integers and is an increasing sequence of nonnegative integers of the same length is the outer product of and with multiplication operation given by the power function. The generalized Vandermonde matrix can be implemented in the Wolfram Language as Vandermonde[a_List?VectorQ, b_List?VectorQ] := Outer[Power, a, b] /; Equal @@ Length /@ {a, b}A generalized Vandermonde matrix is a minor of a Vandermonde matrix. Alternatively, it has the same form as a Vandermonde matrix , where is an increasing sequence of positive integers, except now is any increasing sequence of nonnegative integers. In the special case of a Vandermonde matrix, .While there is no general formula for the determinant of a generalized Vandermonde matrix, its determinant is always positive. Since any minor of a generalized Vandermonde matrix is also a generalized Vandermonde..
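The outer-power construction mirrors the Wolfram Language definition above and can be sketched in NumPy (the sequences a and b below are illustrative); the determinant of the resulting generalized Vandermonde matrix is positive, as stated:

```python
import numpy as np

# Generalized Vandermonde matrix as an outer power: V[i, j] = a[i] ** b[j],
# mirroring Outer[Power, a, b] in the Wolfram Language.
a = np.array([1.0, 2.0, 3.0])   # increasing positive integers
b = np.array([0.0, 2.0, 3.0])   # increasing nonnegative integers
V = np.power.outer(a, b)

det = np.linalg.det(V)          # positive for such sequences a and b
```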

Unitary matrix

A square matrix is a unitary matrix if(1)where denotes the conjugate transpose and is the matrix inverse. For example,(2)is a unitary matrix.Unitary matrices leave the length of a complex vector unchanged.For real matrices, unitary is the same as orthogonal. In fact, there are some similarities between orthogonal matrices and unitary matrices. The rows of a unitary matrix are a unitary basis. That is, each row has length one, and their Hermitian inner product is zero. Similarly, the columns are also a unitary basis. In fact, given any unitary basis, the matrix whose rows are that basis is a unitary matrix. It is automatically the case that the columns are another unitary basis.A matrix can be tested to see if it is unitary using the Wolfram Language function: UnitaryQ[m_List?MatrixQ] := (Conjugate @ Transpose @ m . m == IdentityMatrix @ Length @ m)The definition of a unitary matrix guarantees that(3)where is the identity matrix. In particular,..

Orthogonal matrix

A matrix is an orthogonal matrix if(1)where is the transpose of and is the identity matrix. In particular, an orthogonal matrix is always invertible, and(2)In component form,(3)This relation makes orthogonal matrices particularly easy to compute with, since the transpose operation is much simpler than computing an inverse.For example,(4)(5)are orthogonal matrices. A matrix can be tested to see if it is orthogonal using the Wolfram Language code: OrthogonalMatrixQ[m_List?MatrixQ] := (Transpose[m].m == IdentityMatrix @ Length @ m)The rows of an orthogonal matrix are an orthonormal basis. That is, each row has length one, and the rows are mutually perpendicular. Similarly, the columns are also an orthonormal basis. In fact, given any orthonormal basis, the matrix whose rows are that basis is an orthogonal matrix. It is automatically the case that the columns are another orthonormal basis.The orthogonal matrices are precisely those matrices..

Symplectic group

For every even dimension , the symplectic group is the group of matrices which preserve a nondegenerate antisymmetric bilinear form , i.e., a symplectic form.Every symplectic form can be put into a canonical form by finding a symplectic basis. So, up to conjugation, there is only one symplectic group, in contrast to the orthogonal group which preserves a nondegenerate symmetric bilinear form. As with the orthogonal group, the columns of a symplectic matrix form a symplectic basis.Since is a volume form, the symplectic group preserves volume and vector space orientation. Hence, . In fact, is just the group of matrices with determinant 1. The three symplectic (0,1)-matrices are therefore(1)The matrices(2)and(3)are in , where(4)In fact, both of these examples are 1-parameter subgroups.A matrix can be tested to see if it is symplectic using the Wolfram Language code: SymplecticForm[n_Integer] := Join[PadLeft[IdentityMatrix[n], {n,..
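The defining condition, that a matrix M preserve the antisymmetric form J in the sense M^T J M = J, can be sketched in NumPy; the 2x2 shear below is an illustrative element of the symplectic group:

```python
import numpy as np

# Standard antisymmetric form J in dimension 2.
J = np.array([[0.0, 1.0],
              [-1.0, 0.0]])

def is_symplectic(M):
    """M is symplectic when it preserves the form: M^T J M = J."""
    return np.allclose(M.T @ J @ M, J)

# A shear with determinant 1 (one of the 1-parameter subgroups, with an
# arbitrary parameter value).
M = np.array([[1.0, 3.0],
              [0.0, 1.0]])
```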

Matrix minimal polynomial

The minimal polynomial of a matrix is the monic polynomial in of smallest degree such that(1)The minimal polynomial divides any polynomial with and, in particular, it divides the characteristic polynomial.If the characteristic polynomial factors as(2)then its minimal polynomial is given by(3)for some positive integers , where the satisfy .For example, the characteristic polynomial of the zero matrix is , while its minimal polynomial is . However, the characteristic polynomial and minimal polynomial of(4)are both .The following Wolfram Language code will find the minimal polynomial for the square matrix in the variable . MatrixMinimalPolynomial[a_List?MatrixQ,x_]:=Module[ { i, n=1, qu={}, mnm={Flatten[IdentityMatrix[Length[a]]]} }, While[Length[qu]==0, AppendTo[mnm,Flatten[MatrixPower[a,n]]]; qu=NullSpace[Transpose[mnm]]; n++ ]; First[qu].Table[x^i,{i,0,n-1}] ]..
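The Wolfram Language code above looks for the first linear dependence among the flattened powers I, A, A^2, .... A rough NumPy translation of the same idea (the function name is illustrative; it returns the monic coefficients c_0, ..., c_d of the minimal polynomial, lowest degree first):

```python
import numpy as np

def minimal_polynomial_coeffs(A, tol=1e-9):
    """Monic coefficients [c_0, ..., c_d] (c_d = 1) of the matrix minimal
    polynomial, found as the first linear dependence among the flattened
    powers I, A, A^2, ... of the square matrix A."""
    n = A.shape[0]
    powers = [np.eye(n).ravel()]
    P = np.eye(n)
    for _ in range(n):
        P = P @ A
        powers.append(P.ravel())
        M = np.vstack(powers[:-1]).T            # previous powers as columns
        c, _, _, _ = np.linalg.lstsq(M, powers[-1], rcond=None)
        if np.allclose(M @ c, powers[-1], atol=tol):
            return np.append(-c, 1.0)           # A^d = sum_k c_k A^k
    raise RuntimeError("no dependence found below the matrix dimension")
```

For the Jordan block [[2, 1], [0, 2]] this returns [4, -4, 1], i.e. the minimal polynomial x^2 - 4x + 4 = (x - 2)^2, and for the 2x2 zero matrix it returns [0, 1], i.e. x.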

Companion matrix

The companion matrix to a monic polynomial(1)is the square matrix(2)with ones on the subdiagonal and the last column given by the coefficients of . Note that in the literature, the companion matrix is sometimes defined with the rows and columns switched, i.e., the transpose of the above matrix.When is the standard basis, a companion matrix satisfies(3)for , as well as(4)including(5)The matrix minimal polynomial of the companion matrix is therefore , which is also its characteristic polynomial.Companion matrices are used to write a matrix in rational canonical form. In fact, any matrix whose matrix minimal polynomial has polynomial degree is similar to the companion matrix for . The rational canonical form is more interesting when the degree of is less than .The following Wolfram Language command gives the companion matrix for a polynomial in the variable . CompanionMatrix[p_, x_] := Module[ {n, w = CoefficientList[p, x]}, w = -w/Last[w];..
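The construction can be sketched in NumPy (the function name is illustrative): the companion matrix of a monic polynomial has ones on the subdiagonal and the negated coefficients in its last column, and its eigenvalues are the roots of the polynomial:

```python
import numpy as np

def companion(coeffs):
    """Companion matrix of the monic polynomial
    p(x) = x^n + c_{n-1} x^{n-1} + ... + c_0,
    where coeffs = [c_0, ..., c_{n-1}]."""
    n = len(coeffs)
    C = np.zeros((n, n))
    C[1:, :-1] = np.eye(n - 1)        # ones on the subdiagonal
    C[:, -1] = -np.asarray(coeffs)    # negated coefficients in last column
    return C

# p(x) = x^2 - 3x + 2 = (x - 1)(x - 2): the eigenvalues of C are 1 and 2,
# and the characteristic polynomial of C recovers p.
C = companion([2.0, -3.0])
char_poly = np.poly(C)                # [1, -3, 2]
```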

Special unitary matrix

A square matrix is a special unitary matrix if(1)where is the identity matrix and is the conjugate transpose matrix, and the determinant is(2)The first condition means that is a unitary matrix, and the second condition provides a restriction beyond a general unitary matrix, which may have any determinant of unit modulus. For example,(3)is a special unitary matrix. A matrix can be tested to see if it is a special unitary matrix using the Wolfram Language function SpecialUnitaryQ[m_List?MatrixQ] := (Conjugate @ Transpose @ m . m == IdentityMatrix @ Length @ m&& Det[m] == 1)The special unitary matrices are closed under multiplication and the inverse operation, and therefore form a matrix group called the special unitary group .

Block diagonal matrix

A block diagonal matrix, also called a diagonal block matrix, is a square diagonal matrix in which the diagonal elements are square matrices of any size (possibly even ), and the off-diagonal elements are 0. A block diagonal matrix is therefore a block matrix in which the blocks off the diagonal are the zero matrices, and the diagonal matrices are square.Block diagonal matrices can be constructed out of submatrices in the Wolfram Language using the following code snippet: BlockDiagonalMatrix[b : {__?MatrixQ}] := Module[{r, c, n = Length[b], i, j}, {r, c} = Transpose[Dimensions /@ b]; ArrayFlatten[ Table[If[i == j, b[[i]], ConstantArray[0, {r[[i]], c[[j]]}]], {i, n}, {j, n} ] ] ]

Special orthogonal matrix

A square matrix is a special orthogonal matrix if(1)where is the identity matrix, and the determinant satisfies(2)The first condition means that is an orthogonal matrix, and the second restricts the determinant to (while a general orthogonal matrix may have determinant or ). For example,(3)is a special orthogonal matrix since(4)and its determinant is . A matrix can be tested to see if it is a special orthogonal matrix using the Wolfram Language code SpecialOrthogonalQ[m_List?MatrixQ] := (Transpose[m] . m == IdentityMatrix @ Length @ m&& Det[m] == 1)The special orthogonal matrices are closed under multiplication and the inverse operation, and therefore form a matrix group called the special orthogonal group .

Antisymmetric matrix

An antisymmetric matrix is a square matrix that satisfies the identity(1)where is the matrix transpose. For example,(2)is antisymmetric. Antisymmetric matrices are commonly called "skew symmetric matrices" by mathematicians.A matrix may be tested to see if it is antisymmetric using the Wolfram Language function AntisymmetricQ[m_List?MatrixQ] := (m === -Transpose[m])In component notation, this becomes(3)Letting , the requirement becomes(4)so an antisymmetric matrix must have zeros on its diagonal. The general antisymmetric matrix is of the form(5)Applying to both sides of the antisymmetry condition gives(6)Any square matrix can be expressed as the sum of symmetric and antisymmetric parts. Write(7)But(8)(9)so(10)which is symmetric, and(11)which is antisymmetric.All antisymmetric matrices of odd dimension are singular. This follows from the fact that(12)So, by the properties of determinants,(13)(14)Therefore,..
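The symmetric/antisymmetric decomposition A = (A + A^T)/2 + (A - A^T)/2, and the singularity of odd-dimensional antisymmetric matrices, can be sketched in NumPy (the matrix A is an arbitrary illustrative choice):

```python
import numpy as np

# Split an arbitrary square matrix into symmetric and antisymmetric parts.
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 10.0]])
S = (A + A.T) / 2          # symmetric part
K = (A - A.T) / 2          # antisymmetric part: K.T == -K, zero diagonal

# An antisymmetric matrix of odd dimension is singular.
odd_singular = np.isclose(np.linalg.det(K), 0.0)
```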

Antihermitian matrix

A square matrix is antihermitian if it satisfies(1)where is the adjoint. For example, the matrix(2)is an antihermitian matrix. Antihermitian matrices are often called "skew Hermitian matrices" by mathematicians.A matrix can be tested to see if it is antihermitian using the Wolfram Language function AntihermitianQ[m_List?MatrixQ] := (m === -Conjugate[Transpose[m]])The set of antihermitian matrices is a vector space, and the commutator(3)of two antihermitian matrices is antihermitian. Hence, the antihermitian matrices are a Lie algebra, which is related to the Lie group of unitary matrices. In particular, suppose is a path of unitary matrices through , i.e.,(4)for all , where is the adjoint and is the identity matrix. The derivative at of both sides must be equal so(5)That is, the derivative of at the identity must be antihermitian.The matrix exponential map of an antihermitian matrix is a unitary matrix...

Affine variety

An affine variety is an algebraic variety contained in affine space. For example,(1)is the cone, and(2)is a conic section, which is a subvariety of the cone. The cone can be written to indicate that it is the variety corresponding to . Naturally, many other polynomials vanish on , in fact all polynomials in . The set is an ideal in the polynomial ring . Note also that the ideal of polynomials vanishing on the conic section is the ideal generated by and .A morphism between two affine varieties is given by polynomial coordinate functions. For example, the map is a morphism from to . Two affine varieties are isomorphic if there is a morphism which has an inverse morphism. For example, the affine variety is isomorphic to the cone via the coordinate change .Many polynomials may be factored, for instance , and then . Consequently, only irreducible polynomials, and more generally only prime ideals are used in the definition of a variety. An affine variety is..

Spherical distance

The spherical distance between two points and on a sphere is the distance of the shortest path along the surface of the sphere (paths that cut through the interior of the sphere are not allowed) from to , which always lies along a great circle.For points and on the unit sphere, the spherical distance is given bywhere denotes a dot product.
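The formula can be sketched directly as d = arccos(a . b) for unit vectors (the function name is illustrative; the two points below are the north pole and a point on the equator of the unit sphere, at spherical distance pi/2):

```python
import math

def spherical_distance(a, b):
    """Great-circle distance between unit vectors a and b on the unit
    sphere, via d = arccos(a . b)."""
    dot = sum(x * y for x, y in zip(a, b))
    return math.acos(max(-1.0, min(1.0, dot)))   # clamp for roundoff

north = (0.0, 0.0, 1.0)
equator = (1.0, 0.0, 0.0)
d = spherical_distance(north, equator)           # a quarter great circle
```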

Geodesic

A geodesic is a locally length-minimizing curve. Equivalently, it is a path that a particle which is not accelerating would follow. In the plane, the geodesics are straight lines. On the sphere, the geodesics are great circles (like the equator). The geodesics in a space depend on the Riemannian metric, which affects the notions of distance and acceleration.Geodesics preserve a direction on a surface (Tietze 1965, pp. 26-27) and have many other interesting properties. The normal vector to any point of a geodesic arc lies along the normal to a surface at that point (Weinstock 1974, p. 65).Furthermore, no matter how badly a sphere is distorted, there exist an infinite number of closed geodesics on it. This general result, demonstrated in the early 1990s, extended earlier work by Birkhoff, who proved in 1917 that there exists at least one closed geodesic on a distorted sphere, and Lyusternik and Schnirelmann, who proved in 1923 that..

Discrete topology

A topology is given by a collection of subsets of a topological space . The smallest topology has two open sets, the empty set and . The largest topology contains all subsets as open sets, and is called the discrete topology. In particular, every point in is an open set in the discrete topology.

Point lattice

A point lattice is a regularly spaced array of points.In the plane, point lattices can be constructed having unit cells in the shape of a square, rectangle, hexagon, etc. Unless otherwise specified, point lattices may be taken to refer to points in a square array, i.e., points with coordinates , where , , ... are integers. Such an array is often called a grid or a mesh.Point lattices are frequently simply called "lattices," which unfortunately conflicts with the same term applied to ordered sets treated in lattice theory. Every "point lattice" is a lattice under the ordering inherited from the plane, although a point lattice may not be a sublattice of the plane, since the infimum operation in the plane need not agree with the infimum operation in the point lattice. On the other hand, many lattices are not point lattices.Properties of lattices are implemented in the Wolfram Language as LatticeData[lattice, prop].Formally,..

Generalized mobile automaton

A generalized mobile automaton is a generalization of the mobile automaton in which the automaton may have more than one active cell. Generalized mobile automata allow for more change in a single update, so interesting behavior can develop more rapidly. Like cellular automata, the generalized mobile automata can involve parallel computing. During an updating event, every active cell is updated based on the value of that cell and its neighbors. The update determines the new color for the active cell, and specifies which, if any, of it and its neighbors become active cells. A cell becomes active if any of the previous step's events determined that it should become active. An example is shown above (Wolfram 2002, p. 76).Its rule structure allows for the creation and destruction of the active cells, but only updates values of the active cells. This way there is no overlap, even if neighboring cells are active. A cellular automaton is a special..

Sequential substitution system

A sequential substitution system is a substitution system in which a string is scanned from left to right for the first occurrence of the first rule pattern. If the pattern is found, the rule is applied and processing advances to the next step. If the first rule pattern does not occur, the string is scanned from left to right for the first occurrence of the second rule pattern. If the pattern is found, the rule is applied and processing advances to the next step, and so on. If none of the rule patterns match at some step, the string repeats indefinitely for all subsequent steps. For example, consider the single rule and the initial string , illustrated above. The first step yields , the second step yields , and from there on the system repeats since there are no more matches of the pattern rule.A more interesting sequential substitution system is illustrated above (Wolfram 2002, p. 90). This system has two rules and the initial condition . It builds..
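The scanning procedure can be sketched in a few lines of Python; the single rule "BA" -> "AB" used in the usage example is an illustrative choice, not the rule from the text:

```python
# Sequential substitution system: at each step, scan the string left to
# right for the first rule whose pattern occurs, and apply that rule at the
# first match. If no rule matches, the string repeats from then on.
def step(s, rules):
    for pattern, replacement in rules:
        i = s.find(pattern)
        if i != -1:
            return s[:i] + replacement + s[i + len(pattern):]
    return s

def evolve(s, rules, n):
    history = [s]
    for _ in range(n):
        s = step(s, rules)
        history.append(s)
    return history

# The illustrative rule "BA" -> "AB" sorts the string one swap per step.
history = evolve("BBAA", [("BA", "AB")], 4)
```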

Cyclic tag system

A tag system in which a list of tag rules (each of a special form) is applied to a system in sequential order and then starting again from the first rule. In a cyclic tag system, each set of tag rules has the special structure that a pattern is appended if (and only if) the first element of the current pattern is a 1 and that, independent of whether the first element is 0 or 1, the first element is then deleted. For example, consider a state consisting of white and black cells, labeled 0 and 1, respectively, and the cyclic tag system and with initial state , illustrated above. As required, this system always removes the first element and appends specific patterns iff the first cell is black. 1. At the first step, the leftmost element is 1, so applying the first rule gives , since is appended, and the initial is then deleted. 2. Applying the second rule adds at the end and removes the first element, yielding . 3. Again applying the first rule adds at the end and (as always)..
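The update rule can be sketched in a few lines of Python; the appendants and seed below are hypothetical stand-ins for the article's elided example.

```python
from collections import deque

def cyclic_tag(initial, appendants, steps):
    """Run a cyclic tag system: the rules are used cyclically; the current
    appendant is attached iff the first cell is 1, and the first cell is
    always deleted."""
    state = deque(initial)
    history = [list(state)]
    for t in range(steps):
        if not state:
            break
        first = state.popleft()                  # always delete the first cell
        if first == 1:                           # append only on a black (1) cell
            state.extend(appendants[t % len(appendants)])
        history.append(list(state))
    return history

# hypothetical appendants (1,1) and (0,), starting from the single cell [1]
for row in cyclic_tag([1], [(1, 1), (0,)], 5):
    print(row)
```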


Confluent

A reduction system is called confluent (or globally confluent) if, for all , , and such that and , there exists a such that and . A reduction system is said to be locally confluent if, for all , , such that and , there exists a such that and . Here, the notation indicates that is reduced to in one step, and indicates that is reduced to in zero or more steps. A reduction system is confluent iff it has the Church-Rosser property (Wolfram 2002, p. 1036). In finitely terminating reduction systems, global and local confluence are equivalent, for instance in the systems shown above. Reduction systems that are both finitely terminating and confluent are called convergent. In a convergent reduction system, unique normal forms exist for all expressions. The problem of determining whether a given reduction system is confluent is recursively undecidable. The property of being confluent is called confluence. Confluence is a necessary condition for causal..
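For a finitely terminating system, confluence from a given start can be checked by computing the set of normal forms reachable from it: the system is confluent from that start exactly when the set is a singleton. A Python sketch on a toy string-rewriting system (the rules are an illustrative assumption, not from the article):

```python
from functools import lru_cache

# a toy terminating string-rewriting system (illustrative, not the
# article's): each rule swaps one out-of-order pair of letters
RULES = [("ba", "ab"), ("ca", "ac"), ("cb", "bc")]

def one_step(s):
    """All words reachable from s in exactly one reduction step."""
    out = set()
    for pat, rep in RULES:
        i = s.find(pat)
        while i != -1:
            out.add(s[:i] + rep + s[i + len(pat):])
            i = s.find(pat, i + 1)
    return out

@lru_cache(maxsize=None)
def normal_forms(s):
    """Normal forms reachable from s; the recursion terminates because
    every rule strictly decreases the number of inversions."""
    succ = one_step(s)
    if not succ:
        return frozenset([s])
    return frozenset().union(*(normal_forms(t) for t in succ))

# a unique normal form from every start witnesses confluence here
print(normal_forms("cba"))
```

These rules sort the letters, so every word reduces to its sorted form: the system is convergent.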

Rule 94

Rule 94 is one of the elementary cellular automaton rules introduced by Stephen Wolfram in 1983 (Wolfram 1983, 2002). It specifies the next color in a cell, depending on its color and its immediate neighbors. Its rule outcomes are encoded in the binary representation . This rule is illustrated above together with the evolution of a single black cell it produces after 15 steps (Wolfram 2002, p. 55). Rule 94 is amphichiral, and its complement is 133. Starting with a single black cell, successive generations , 1, ... are given by interpreting the numbers 1, 7, 27, 119, 427, 1879, 6827, 30039, ... (OEIS A118101) in binary, namely 1, 111, 11011, 1110111, 110101011, ... (OEIS A118102). A formula for the term is given by(1)(E. W. Weisstein, Apr. 12, 2006), so computation of rule 94 is computationally reducible for evolution from a single black cell, in which case it has generating function(2)Rule 94 is capable of exhibiting..
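The integer sequence above is easy to reproduce: evolve a single black cell under rule 94 and read each row as a binary numeral. A minimal Python sketch:

```python
def eca_step(cells, rule):
    """One step of an elementary cellular automaton, growing the row by
    one cell on each side so the pattern can spread."""
    padded = [0, 0] + cells + [0, 0]
    return [(rule >> (padded[i - 1] * 4 + padded[i] * 2 + padded[i + 1])) & 1
            for i in range(1, len(padded) - 1)]

def rows_as_integers(rule, steps):
    """Evolve a single black cell and read each row as a binary numeral."""
    row, out = [1], []
    for _ in range(steps):
        out.append(int("".join(map(str, row)), 2))
        row = eca_step(row, rule)
    return out

print(rows_as_integers(94, 8))   # 1, 7, 27, 119, 427, ... (OEIS A118101)
```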

Additive cellular automaton

An additive cellular automaton is a cellular automaton whose rule is compatible with an addition of states. Typically, this addition is derived from modular arithmetic. Additive rules allow the evolution for different initial conditions to be computed independently, then the results combined by simply adding. The results for arbitrary starting conditions can therefore be computed very efficiently by convolving the evolution of a single cell with an appropriate convolution kernel (which, in the case of two-color automata, would correspond to the set of initially "active" cells). A simple example of an additive cellular automaton is provided by the rule 90 elementary cellular automaton. As can be seen from the graphical representation of this rule, the rule as a function of left, central, and right neighbors is simply given by the sum of the rules for the left and right neighbors taken modulo 2, where white cells are assigned the..
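The additivity of rule 90 can be verified directly: evolving the sum (mod 2) of two initial conditions gives the same result as summing the separate evolutions. A small Python check (the periodic boundary is an assumption of this sketch):

```python
def rule90_step(cells):
    """Rule 90 on a cyclic row: each new cell is the sum mod 2 (XOR)
    of its two neighbors."""
    n = len(cells)
    return [cells[(i - 1) % n] ^ cells[(i + 1) % n] for i in range(n)]

def evolve(cells, steps):
    for _ in range(steps):
        cells = rule90_step(cells)
    return cells

n = 16
a = [0] * n; a[3] = 1          # one black cell
b = [0] * n; b[10] = 1         # another black cell
ab = [x ^ y for x, y in zip(a, b)]
lhs = evolve(ab, 5)            # evolve the superposition ...
rhs = [x ^ y for x, y in zip(evolve(a, 5), evolve(b, 5))]
print(lhs == rhs)              # ... equals the superposed evolutions: True
```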

Total variation

Given a complex measure , there exists a positive measure denoted which measures the total variation of , also sometimes called simply "total variation." In particular, on a subset is the largest sum of "variations" for any subdivision of . Roughly speaking, a total variation measure is an infinitesimal version of the absolute value. More precisely,(1)where the supremum is taken over all partitions of into measurable subsets . Note that may not be the same as . When already is a positive measure, then . More generally, if is absolutely continuous, that is(2)then so is , and the total variation measure can be written as(3)The total variation measure can be used to rewrite the original measure, in analogy to the norm of a complex number. The measure has a polar representation(4)with ..

Singular measure

Two complex measures and on a measure space , are mutually singular if they are supported on different subsets. More precisely, where and are two disjoint sets such that the following hold for any measurable set , 1. The sets and are measurable. 2. The total variation of is supported on and that of on , i.e., The relation of two measures being singular, written as , is plainly symmetric. Nevertheless, it is sometimes said that " is singular with respect to ." A discrete singular measure (with respect to Lebesgue measure on the reals) is a measure supported at 0, say iff . In general, a measure is concentrated on a subset if . For instance, the measure above is concentrated at 0.

Measurable function

A function is measurable if, for every real number , the set is measurable. When with Lebesgue measure, or more generally any Borel measure, then all continuous functions are measurable. In fact, practically any function that can be described is measurable. Measurable functions are closed under addition and multiplication, but not composition. The measurable functions form one of the most general classes of real functions. They are one of the basic objects of study in analysis, both because of their wide practical applicability and the aesthetic appeal of their generality. Whether a function is measurable depends on the measure on , and, in particular, it only depends on the sigma-algebra of measurable sets in . Sometimes, the measure on may be assumed to be a standard measure. For instance, a measurable function on is usually measurable with respect to Lebesgue measure. From the point of view of measure theory, subsets with measure zero do..

Essential supremum

The essential supremum is the proper generalization to measurable functions of the maximum. The technical difference is that the values of a function on a set of measure zero don't affect the essential supremum. Given a measurable function , where is a measure space with measure , the essential supremum is the smallest number such that the set has measure zero. If no such number exists, as in the case of on , then the essential supremum is . The essential supremum of the absolute value of a function is usually denoted , and this serves as the norm for L-infty-space.

Riesz representation theorem

There are a couple of versions of this theorem. Basically, it says that any bounded linear functional on the space of compactly supported continuous functions on is the same as integration against a measure . Here, the integral is the Lebesgue integral. Because linear functionals form a vector space, and are not "positive," the measure may not be a positive measure. But if the functional is positive, in the sense that implies that , then the measure is also positive. In the generality of complex linear functionals, the measure is a complex measure. The measure is uniquely determined by and has the properties of a regular Borel measure. It must be a finite measure, which corresponds to the boundedness condition on the functional. In fact, the operator norm of , , is the total variation measure of , . Naturally, there are some hypotheses necessary for this to make sense. The space has to be locally compact and a T2-space, which is not a strong..

Submanifold tangent space

The tangent plane to a surface at a point is the tangent space at (after translating to the origin). The elements of the tangent space are called tangent vectors, and they are closed under addition and scalar multiplication. In particular, the tangent space is a vector space. Any submanifold of Euclidean space, and more generally any submanifold of an abstract manifold, has a tangent space at each point. The collection of tangent spaces to forms the tangent bundle . A vector field assigns to every point a tangent vector in the tangent space at . There are two ways of defining a submanifold, and each way gives rise to a different way of defining the tangent space. The first way uses a parameterization, and the second way uses a system of equations. Suppose that is a local parameterization of a submanifold in Euclidean space . Say,(1)where is the open unit ball in , and . At the point , the tangent space is the image of the Jacobian of , as a linear transformation..
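The Jacobian description can be checked numerically. The sketch below uses a hypothetical parameterization of the upper unit sphere and verifies that the columns of the Jacobian (the tangent vectors) are orthogonal to the radius vector, as they must be for a sphere:

```python
import math

def phi(u, v):
    """A hypothetical local parameterization of the upper unit sphere."""
    return (u, v, math.sqrt(1.0 - u * u - v * v))

def jacobian_columns(f, u, v, h=1e-6):
    """Tangent vectors: the columns of the Jacobian, by central differences."""
    cols = []
    for du, dv in ((h, 0.0), (0.0, h)):
        p, m = f(u + du, v + dv), f(u - du, v - dv)
        cols.append(tuple((a - b) / (2 * h) for a, b in zip(p, m)))
    return cols

u, v = 0.3, 0.2
point = phi(u, v)
tangents = jacobian_columns(phi, u, v)
# on the sphere, every tangent vector is orthogonal to the radius vector
dots = [sum(a * b for a, b in zip(t, point)) for t in tangents]
print(all(abs(d) < 1e-6 for d in dots))
```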


Manifold

A manifold is a topological space that is locally Euclidean (i.e., around every point, there is a neighborhood that is topologically the same as the open unit ball in ). To illustrate this idea, consider the ancient belief that the Earth was flat as contrasted with the modern evidence that it is round. The discrepancy arises essentially from the fact that on the small scales that we see, the Earth does indeed look flat. In general, any object that is nearly "flat" on small scales is a manifold, and so manifolds constitute a generalization of objects we could live on in which we would encounter the round/flat Earth problem, as first codified by Poincaré. More concisely, any object that can be "charted" is a manifold. One of the goals of topology is to find ways of distinguishing manifolds. For instance, a circle is topologically the same as any closed loop, no matter how different these two manifolds may appear. Similarly,..

Chart tangent space

From the point of view of coordinate charts, the notion of tangent space is quite simple. The tangent space consists of all directions, or velocities, a particle can take. In an open set in there are no constraints, so the tangent space at a point is another copy of . The set could be a coordinate chart for an -dimensional manifold. The tangent space at , denoted , is the set of possible velocity vectors of paths through . Hence there is a canonical vector basis: if are the coordinates, then are a basis for the tangent space, where is the velocity vector of a particle with unit speed moving inward along the coordinate . The collection of all tangent vectors to every point on the manifold, called the tangent bundle, is the phase space of a single particle moving in the manifold . It seems as if the tangent space at is the same as the tangent space at all other points in the chart . However, while they do share the same dimension and are isomorphic, in a change of coordinates,..

Smooth structure

A smooth structure on a topological manifold (also called a differentiable structure) is given by a smooth atlas of coordinate charts, i.e., the transition functions between the coordinate charts are smooth. A manifold with a smooth structure is called a smooth manifold (or differentiable manifold). A smooth structure is used to define differentiability for real-valued functions on a manifold. This extends to a notion of when a map between two differentiable manifolds is smooth, and naturally to the definition of a diffeomorphism. In addition, the smooth structure is used to define manifold tangent vectors, the collection of which is the tangent bundle. Two smooth structures are considered equivalent if there is a homeomorphism of the manifold which pulls back one atlas to an atlas compatible with the other one, i.e., a diffeomorphism. For instance, any two smooth structures on the circle are equivalent, as can be seen by integration. It is..

Kähler structure

A Kähler structure on a complex manifold combines a Riemannian metric on the underlying real manifold with the complex structure. Such a structure brings together geometry and complex analysis, and the main examples come from algebraic geometry. When has complex dimensions, then it has real dimensions. A Kähler structure is related to the unitary group , which embeds in as the orthogonal matrices that preserve the almost complex structure (multiplication by ''). In a coordinate chart, the complex structure of defines a multiplication by and the metric defines orthogonality for tangent vectors. On a Kähler manifold, these two notions (and their derivatives) are related. The following are elements of a Kähler structure, with each condition sufficient for a Kähler structure to exist. 1. A Kähler metric. Near any point , there exist holomorphic coordinates such that the metric has the form(1)where denotes..

Kähler manifold

A complex manifold for which the exterior derivative of the fundamental form associated with the given Hermitian metric vanishes, so . In other words, it is a complex manifold with a Kähler structure. It has a Kähler form, so it is also a symplectic manifold. It has a Kähler metric, so it is also a Riemannian manifold. The simplest example of a Kähler manifold is a Riemann surface, which is a complex manifold of dimension 1. In this case, the imaginary part of any Hermitian metric must be a closed form since all 2-forms are closed on a two real dimensional manifold.

Intrinsic tangent space

The tangent space at a point in an abstract manifold can be described without the use of embeddings or coordinate charts. The elements of the tangent space are called tangent vectors, and the collection of tangent spaces forms the tangent bundle. One description is to put an equivalence relation on smooth paths through the point . More precisely, consider all smooth maps where and . We say that two maps and are equivalent if they agree to first order. That is, in any coordinate chart around , . If they are similar in one chart then they are similar in any other chart, by the chain rule. The notion of agreeing to first order depends on coordinate charts, but this cannot be completely eliminated since that is how manifolds are defined. Another way is to first define a vector field as a derivation of the ring of smooth functions . Then a tangent vector at a point is an equivalence class of vector fields which agree at . That is, if for every smooth function . Of course,..

Harmonic map

A map , between two compact Riemannian manifolds, is a harmonic map if it is a critical point for the energy functional The norm of the differential is given by the metric on and and is the measure on . Typically, the class of allowable maps lies in a fixed homotopy class of maps. The Euler-Lagrange differential equation for the energy functional is a non-linear elliptic partial differential equation. For example, when is the circle, then the Euler-Lagrange equation is the same as the geodesic equation. Hence, is a closed geodesic iff is harmonic. The map from the circle to the equator of the standard 2-sphere is a harmonic map, and so are the maps that take the circle and map it around the equator times, for any integer . Note that these all lie in the same homotopy class. A higher-dimensional example is a meromorphic function on a compact Riemann surface, which is a harmonic map to the Riemann sphere. A harmonic map may not always exist in a homotopy class,..


Grassmannian

The Grassmannian is the set of -dimensional subspaces in an -dimensional vector space. For example, the set of lines is projective space. The real Grassmannian (as well as the complex Grassmannian) is an example of a manifold. For example, the subspace has a neighborhood . A subspace is in if and and . Then for any , the vectors and are uniquely determined by requiring and . The other six entries provide coordinates for . In general, the Grassmannian can be given coordinates in a similar way at a point . Let be the open set of -dimensional subspaces which project onto . First one picks an orthonormal basis for such that span . Using this basis, it is possible to take any vectors and make a matrix. Doing this for the basis of , another -dimensional subspace in , gives a -matrix, which is well-defined up to linear combinations of the rows. The final step is to row-reduce so that the first block is the identity matrix. Then the last block is uniquely determined by .


Atlas

An atlas is a collection of consistent coordinate charts on a manifold, where "consistent" most commonly means that the transition functions of the charts are smooth. As the name suggests, an atlas corresponds to a collection of maps, each of which shows a piece of a manifold and looks like flat Euclidean space. To use an atlas, one needs to know how the maps overlap. To be useful, the maps must not be too different on these overlapping areas. The overlapping maps from one chart to another are called transition functions. They represent the transition from one chart's point of view to that of another. Let the open unit ball in be denoted . Then if and are two coordinate charts, the composition is a function defined on . That is, it is a function from an open subset of to , and given such a function from to , there are conditions for it to be smooth or have smooth derivatives (i.e., it is a C-k function). Furthermore, when is isomorphic to (in the even dimensional..

Kähler metric

A Kähler metric is a Riemannian metric on a complex manifold which gives a Kähler structure, i.e., it is a Kähler manifold with a Kähler form. However, the term "Kähler metric" can also refer to the corresponding Hermitian metric , where is the Kähler form, defined by . Here, the operator is the almost complex structure, a linear map on tangent vectors satisfying , induced by multiplication by . In coordinates , the operator satisfies and . The operator depends on the complex structure, and on a Kähler manifold, it must preserve the Kähler metric. For a metric to be Kähler, one additional condition must also be satisfied, namely that it can be expressed in terms of the metric and the complex structure. Near any point , there exist holomorphic coordinates such that the metric has the form where denotes the vector space tensor product; that is, it vanishes up to order two at . Hence, any geometric..

Velocity vector

The idea of a velocity vector comes from classical physics. By representing the position and motion of a single particle using vectors, the equations for motion are simpler and more intuitive. Suppose the position of a particle at time is given by the position vector . Then the velocity vector is the derivative of the position, For example, suppose a particle is confined to the plane and its position is given by . Then it travels along the unit circle at constant speed. Its velocity vector is . In a diagram, it makes sense to translate the velocity vector so it originates at . In particular, it is drawn as an arrow from to . Another example is a particle traveling along a hyperbola specified parametrically by . Its velocity vector is then given by , illustrated above. Traveling down the same path but using a different function is called a reparameterization, and the chain rule describes the change in velocity. For example, the hyperbola can also be parametrized..
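The circle example can be checked numerically: differentiating the position (cos t, sin t) gives a velocity vector of constant unit speed. A small Python sketch:

```python
import math

def position(t):
    """A particle moving on the unit circle."""
    return (math.cos(t), math.sin(t))

def velocity(f, t, h=1e-6):
    """Velocity vector: the derivative of position (central difference)."""
    (x1, y1), (x0, y0) = f(t + h), f(t - h)
    return ((x1 - x0) / (2 * h), (y1 - y0) / (2 * h))

t = 1.2
vx, vy = velocity(position, t)
speed = math.hypot(vx, vy)
# analytically the velocity is (-sin t, cos t), so the speed is always 1
print(round(speed, 6))
```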

Residue field

In a local ring , there is only one maximal ideal . Hence, has only one quotient ring which is a field. This field is called the residue field.

Module direct sum

The direct sum of modules and is the module(1)where all algebraic operations are defined componentwise. In particular, suppose that and are left -modules, then(2)and(3)where is an element of the ring . The direct sum of an arbitrary family of modules over the same ring is also defined. If is the indexing set for the family of modules, then the direct sum is represented by the collection of functions with finite support from to the union of all these modules such that the function sends to an element in the module indexed by . The dimension of a direct sum is the sum of the dimensions of the quantities summed. The significant property of the direct sum is that it is the coproduct in the category of modules. This general definition gives as a consequence the definition of the direct sum of Abelian groups and (since they are -modules, i.e., modules over the integers) and the direct sum of vector spaces (since they are modules over a field). Note that the direct..

Vector space tensor product

The tensor product of two vector spaces and , denoted and also called the tensor direct product, is a way of creating a new vector space analogous to multiplication of integers. For instance,(1)In particular,(2)Also, the tensor product obeys a distributive law with the direct sum operation:(3)The analogy with an algebra is the motivation behind K-theory. The tensor product of two tensors and can be implemented in the Wolfram Language as: TensorProduct[a_List, b_List] := Outer[List, a, b]. Algebraically, the vector space is spanned by elements of the form , and the following rules are satisfied, for any scalar . The definition is the same no matter which scalar field is used.(4)(5)(6)One basic consequence of these formulas is that(7)A vector basis of and of gives a basis for , namely , for all pairs . An arbitrary element of can be written uniquely as , where are scalars. If is dimensional and is dimensional, then has dimension . Using tensor products,..
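The Wolfram Language one-liner above has a direct Python analogue: the coefficients of a tensor product of two vectors in the basis e_i ⊗ f_j form the outer product, so the dimension is mn and bilinearity holds componentwise. A minimal sketch:

```python
def tensor_product(a, b):
    """Coefficients of a (x) b in the basis e_i (x) f_j: the outer product."""
    return [[x * y for y in b] for x in a]

a, b = [1, 2], [3, 4, 5]
t = tensor_product(a, b)
print(t)                        # a 2 x 3 array: dim(V (x) W) = mn

# bilinearity: (a + a2) (x) b = a (x) b + a2 (x) b, componentwise
a2 = [5, -1]
lhs = tensor_product([x + y for x, y in zip(a, a2)], b)
rhs = [[p + q for p, q in zip(r1, r2)]
       for r1, r2 in zip(t, tensor_product(a2, b))]
print(lhs == rhs)
```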


Decomposable

A differential k-form of degree in an exterior algebra is decomposable if there exist one-forms such that(1)where denotes a wedge product. Forms of degree 0, 1, , and are always decomposable. Hence the first instance of indecomposable forms occurs in , in which case is indecomposable. If a -form has a form envelope of dimension then it is decomposable. In fact, the one-forms in the (dual) basis to the envelope can be used as the above. Plücker's equations form a system of quadratic equations on the in(2)which is equivalent to being decomposable. Since a decomposable -form corresponds to a -dimensional subspace, these quadratic equations show that the Grassmannian is a projective algebraic variety. In particular, is decomposable if for every ,(3)where denotes tensor contraction and is the dual vector space to .


Indecomposable

A p-form is indecomposable if it cannot be written as the wedge product of one-forms. A -form that can be written as such a product is called decomposable.


Homology

Homology is a concept that is used in many branches of algebra and topology. Historically, the term "homology" was first used in a topological sense by Poincaré. To him, it meant pretty much what is now called a bordism, meaning that a homology was thought of as a relation between manifolds mapped into a manifold. Such manifolds form a homology when they form the boundary of a higher-dimensional manifold inside the manifold in question. To simplify the definition of homology, Poincaré simplified the spaces he dealt with. He assumed that all the spaces he dealt with had a triangulation (i.e., they were "simplicial complexes"). Then instead of talking about general "objects" in these spaces, he restricted himself to subcomplexes, i.e., objects in the space made up only of the simplices in the triangulation of the space. Eventually, Poincaré's version of homology was dispensed with and..

Chain homotopy

Suppose and are two chain homomorphisms. Then a chain homotopy is given by a sequence of maps such that where denotes the boundary operator.

Short exact sequence

A short exact sequence of groups , , and is given by two maps and and is written(1)Because it is an exact sequence, is injective, and is surjective. Moreover, the group kernel of is the image of . Hence, the group can be considered as a (normal) subgroup of , and is isomorphic to . A short exact sequence is said to split if there is a map such that is the identity on . This only happens when is the direct product of and . The notion of a short exact sequence also makes sense for modules and sheaves. Given a module over a unit ring , all short exact sequences(2)are split iff is projective, and all short exact sequences(3)are split iff is injective. A short exact sequence of vector spaces is always split.

Chain homomorphism

Also called a chain map. Given two chain complexes and , a chain homomorphism is given by homomorphisms such that where and are the boundary operators.

Chain homology

For every , the kernel of is called the group of cycles,(1)The letter is short for the German word for cycle, "Zyklus." The image is contained in the group of cycles because , and is called the group of boundaries,(2)The quotients are the homology groups of the chain. Given a short exact sequence of chain complexes(3)there is a long exact sequence in homology.(4)In particular, a cycle in with , is mapped to a cycle in . Similarly, a boundary in gets mapped to a boundary in . Consequently, the map between homologies is well-defined. The only map which is not that obvious is , called the connecting homomorphism, which is well-defined by the snake lemma. Proofs of this nature are (with a modicum of humor) referred to as diagram chasing.

Chain equivalence

Chain equivalences give an equivalence relation on the space of chain homomorphisms. Two chain complexes are chain equivalent if there are chain maps and such that is chain homotopic to the identity on and is chain homotopic to the identity on .

Chain complex

A chain complex is a sequence of maps(1)where the spaces may be Abelian groups or modules. The maps must satisfy . Making the domain implicitly understood, the maps are denoted by , called the boundary operator or the differential. Chain complexes are an algebraic tool for computing or defining homology and have a variety of applications. A cochain complex is used in the case of cohomology. Elements of are called chains. For each , the kernel of is called the group of cycles,(2)The letter is short for the German word for cycle, "Zyklus." The image is contained in the group of cycles because . It is called the group of boundaries.(3)The quotients are the homology groups of the chain. For example, the sequence(4)where every space is and each map is given by multiplication by 4 is a chain complex. The cycles at each stage are and the boundaries are . So the homology at each stage is the group of two elements . A simpler example is given by a linear transformation..
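The multiplication-by-4 example can be checked by brute force. The spaces are elided above; Z/8 is one choice consistent with the text, since 4·4 = 16 ≡ 0 (mod 8) and the resulting homology has two elements:

```python
N = 8                                    # Z/8: an assumed choice consistent
                                         # with the example, since 4*4 = 0 mod 8
def d(x):
    """The boundary map: multiplication by 4 mod N."""
    return (4 * x) % N

assert all(d(d(x)) == 0 for x in range(N))      # d o d = 0: a chain complex

cycles = {x for x in range(N) if d(x) == 0}     # kernel of d
boundaries = {d(x) for x in range(N)}           # image of d
print(sorted(cycles), sorted(boundaries))
print(len(cycles) // len(boundaries))           # the homology has two elements
```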

Form envelope

Given a differential p-form in the exterior algebra , its envelope is the smallest subspace such that is in the subspace . Alternatively, is spanned by the vectors that can be written as the tensor contraction of with an element of . For example, the envelope of in is , and the envelope of in is all of .

Exterior algebra

Exterior algebra is the algebra of the wedge product, also called an alternating algebra or Grassmann algebra. The study of exterior algebra is also called Ausdehnungslehre or extensions calculus. Exterior algebras are graded algebras. In particular, the exterior algebra of a vector space is the direct sum over in the natural numbers of the vector spaces of alternating differential k-forms on that vector space. The product on this algebra is then the wedge product of forms. The exterior algebra for a vector space is constructed by forming monomials , , , etc., where , , , , , and are vectors in and is the wedge product. The sums formed from linear combinations of the monomials are the elements of an exterior algebra. The exterior algebra of a vector space can also be described as a quotient vector space,(1)where is the subspace of -tensors generated by transpositions such as and denotes the vector space tensor product. The equivalence class is denoted..
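In coordinates, the wedge product of two one-forms u and v has coefficients u_i v_j - u_j v_i on the basis e_i ∧ e_j. A minimal Python sketch verifying antisymmetry and that v ∧ v = 0:

```python
from itertools import combinations

def wedge(u, v):
    """Coefficients of the 2-form u ^ v on the basis e_i ^ e_j, i < j."""
    n = len(u)
    return {(i, j): u[i] * v[j] - u[j] * v[i]
            for i, j in combinations(range(n), 2)}

u, v = [1, 2, 3], [4, 5, 6]
print(wedge(u, v))
print(all(c == 0 for c in wedge(u, u).values()))   # v ^ v = 0
neg = {k: -c for k, c in wedge(v, u).items()}
print(wedge(u, v) == neg)                          # u ^ v = -(v ^ u)
```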

Clifford algebra

Let be an -dimensional linear space over a field , and let be a quadratic form on . A Clifford algebra is then defined over , where is the tensor algebra over and is a particular ideal of . Clifford algebraists call their higher-dimensional numbers hypercomplex even though they do not share all the properties of complex numbers and no classical function theory can be constructed over them. When is Euclidean space, the Clifford algebra is generated by the standard basis vectors with the relations(1)(2)for . The standard Clifford algebra is then generated additively by elements of the form , where , and so the dimension is , where is the dimension of . The defining relation in the general case with vectors is(3)where denotes the quadratic form, or equivalently,(4)where is the symmetric bilinear form associated with . Clifford algebras are associative but not commutative. When , the Clifford algebra becomes exterior algebra. Clifford algebras are..


Ideal

An ideal is a subset of elements in a ring that forms an additive group and has the property that, whenever belongs to and belongs to , then and belong to . For example, the set of even integers is an ideal in the ring of integers . Given an ideal , it is possible to define a quotient ring . Ideals are commonly denoted using a Gothic typeface. A finitely generated ideal is generated by a finite list , , ..., and contains all elements of the form , where the coefficients are arbitrary elements of the ring. The list of generators is not unique, for instance in the integers. In a number ring, ideals can be represented as lattices, and can be given a finite basis of algebraic integers which generates the ideal additively. Any two bases for the same lattice are equivalent. Ideals have multiplication, and this is basically the Kronecker product of the two bases. The illustration above shows an ideal in the Gaussian integers generated by 2 and , where elements of the ideal..

Exact sequence

An exact sequence is a sequence of maps(1)between a sequence of spaces , which satisfies(2)where denotes the image and the group kernel. That is, for , iff for some . It follows that . The notion of exact sequence makes sense when the spaces are groups, modules, chain complexes, or sheaves. The notation for the maps may be suppressed and the sequence written on a single line as(3)An exact sequence may be of either finite or infinite length. The special case of length five,(4)beginning and ending with zero, meaning the zero module , is called a short exact sequence. An infinite exact sequence is called a long exact sequence. For example, the sequence where and is given by multiplying by 2,(5)is a long exact sequence because at each stage the kernel and image are equal to the subgroup . Special information is conveyed when one of the spaces is the zero module. For instance, the sequence(6)is exact iff the map is injective. Similarly,(7)is exact iff the map..

Steiner system

A Steiner system is a set of points, and a collection of subsets of of size (called blocks), such that any points of are in exactly one of the blocks. The special case and corresponds to a so-called Steiner triple system. For a projective plane, , , , and the blocks are simply lines. The number of blocks containing a point in a Steiner system is independent of the point. In fact, where is a binomial coefficient. The total number of blocks is also determined and is given by These numbers also satisfy and . The permutations of the points preserving the blocks of a Steiner system form the automorphism group of . For example, consider the set of 9 points in the two-dimensional vector space over the field of three elements. The blocks are the 12 lines of the form , which have three elements each. The system is a because any two points uniquely determine a line. The automorphism group of a Steiner system is the affine group which preserves the lines. For a vector space of..
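The 9-point example can be built explicitly: the points of the affine plane over the field of three elements, with its 12 lines as blocks, form a Steiner system in which every pair of points lies in exactly one block. A Python sketch:

```python
from itertools import combinations, product

points = list(product(range(3), repeat=2))      # the 9 points of AG(2, 3)

# each line is a point plus multiples of a direction; the four direction
# classes give 4 parallel classes of 3 lines each
lines = set()
for a in points:
    for m in [(0, 1), (1, 0), (1, 1), (1, 2)]:
        lines.add(frozenset(((a[0] + t * m[0]) % 3, (a[1] + t * m[1]) % 3)
                            for t in range(3)))

pair_counts = [sum(1 for L in lines if p in L and q in L)
               for p, q in combinations(points, 2)]
print(len(lines), set(pair_counts))             # 12 blocks; every pair once
```

Each point also lies on exactly 4 lines, matching the block-count formulas above.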

Harmonic conjugate function

The harmonic conjugate to a given function is a function such that is complex differentiable (i.e., satisfies the Cauchy-Riemann equations). It is given by where , , and is a constant of integration. Note that is a closed form since is harmonic, . The line integral is well-defined on a simply connected domain because it is closed. However, on a domain which is not simply connected (such as the punctured disk), the harmonic conjugate may not exist.
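The construction can be sanity-checked numerically. Taking the harmonic function u = x^2 - y^2 (the real part of z^2, an illustrative choice rather than the article's example), its harmonic conjugate is v = 2xy, and the pair satisfies the Cauchy-Riemann equations:

```python
def u(x, y):
    """A harmonic function: the real part of z^2 (an illustrative choice)."""
    return x * x - y * y

def v(x, y):
    """Its harmonic conjugate, the imaginary part of z^2."""
    return 2 * x * y

def partials(f, x, y, h=1e-6):
    """Numerical partial derivatives by central differences."""
    return ((f(x + h, y) - f(x - h, y)) / (2 * h),
            (f(x, y + h) - f(x, y - h)) / (2 * h))

x, y = 0.7, -0.4
ux, uy = partials(u, x, y)
vx, vy = partials(v, x, y)
# Cauchy-Riemann: u_x = v_y and u_y = -v_x
print(abs(ux - vy) < 1e-6, abs(uy + vx) < 1e-6)
```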

Vector space projection

If is a -dimensional subspace of a vector space with inner product , then it is possible to project vectors from to . The most familiar projection is when is the x-axis in the plane. In this case, is the projection. This projection is an orthogonal projection. If the subspace has an orthonormal basis then is the orthogonal projection onto . Any vector can be written uniquely as , where and is in the orthogonal subspace . A projection is always a linear transformation and can be represented by a projection matrix. In addition, for any projection, there is an inner product for which it is an orthogonal projection.
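The orthonormal-basis formula for orthogonal projection can be sketched directly (the helper name `project` is ours):

```python
import numpy as np

def project(v, basis):
    """Orthogonal projection of v onto the span of an orthonormal basis."""
    return sum(np.dot(v, e) * e for e in basis)

# Project (3, 4, 5) onto the xy-plane, with orthonormal basis e1, e2.
e1 = np.array([1.0, 0.0, 0.0])
e2 = np.array([0.0, 1.0, 0.0])
v = np.array([3.0, 4.0, 5.0])
w = project(v, [e1, e2])
# The residual v - w lies in the orthogonal complement (the z-axis).
assert np.allclose(v - w, [0, 0, 5])
```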

Invertible linear map

An invertible linear transformation is a map between vector spaces and with an inverse map which is also a linear transformation. When is given by matrix multiplication, i.e., , then is invertible iff is a nonsingular matrix. Note that the dimensions of and must be the same.

Vector space orientation

An ordered vector basis for a finite-dimensional vector space defines an orientation. Another basis gives the same orientation if the matrix has a positive determinant, in which case the basis is called oriented. Any vector space has two possible orientations since the determinant of a nonsingular matrix is either positive or negative. For example, in , is one orientation and is the other orientation. In three dimensions, the cross product uses the right-hand rule by convention, reflecting the use of the canonical orientation as . An orientation can be given by a nonzero element in the top exterior power of , i.e., . For example, gives the canonical orientation on and gives the other orientation. Some special vector space structures imply an orientation. For example, if is a symplectic form on , of dimension , then gives an orientation. Also, if is a complex vector space, then as a real vector space of dimension , the complex structure gives an orientation...
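The determinant test for sameness of orientation can be sketched numerically (the function name is our own):

```python
import numpy as np

def same_orientation(basis_a, basis_b):
    """True iff the change-of-basis matrix has positive determinant."""
    A = np.column_stack(basis_a)
    B = np.column_stack(basis_b)
    M = np.linalg.solve(A, B)   # change-of-basis matrix: B = A @ M
    return np.linalg.det(M) > 0

e1, e2 = [1.0, 0.0], [0.0, 1.0]
# A rotation of the standard basis preserves orientation...
assert same_orientation([e1, e2], [e2, [-1.0, 0.0]])
# ...while swapping the two basis vectors reverses it.
assert not same_orientation([e1, e2], [e2, e1])
```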

Orthogonal set

A subset of a vector space , with the inner product , is called orthogonal if when . That is, the vectors are mutually perpendicular. Note that there is no restriction on the lengths of the vectors. If the vectors in an orthogonal set all have length one, then they are orthonormal. The notion of orthogonality makes sense for an abstract vector space over any field as long as there is a symmetric quadratic form. The usual orthogonal sets and groups in Euclidean space can be generalized, with applications to special relativity, differential geometry, and abstract algebra.

Hermitian inner product

A Hermitian inner product on a complex vector space is a complex-valued bilinear form on which is antilinear in the second slot, and is positive definite. That is, it satisfies the following properties, where denotes the complex conjugate of . 1. 2. 3. 4. 5. 6. , with equality only if The basic example is the form(1)on , where and . Note that by writing , it is possible to consider , in which case is the Euclidean inner product and is a nondegenerate alternating bilinear form, i.e., a symplectic form. Explicitly, in , the standard Hermitian form is expressed below.(2)A generic Hermitian inner product has its real part symmetric positive definite, and its imaginary part symplectic by properties 5 and 6. A matrix defines an antilinear form, satisfying 1-5, by iff is a Hermitian matrix. It is positive definite (satisfying 6) when is a positive definite matrix. In matrix form,(3)and the canonical Hermitian inner product is when is the identity matrix...
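The standard Hermitian form and its listed properties can be checked on concrete vectors (the vectors here are arbitrary examples; the form is antilinear in the second slot):

```python
import numpy as np

def herm(z, w):
    """Standard Hermitian inner product on C^n: sum of z_k * conj(w_k)."""
    return np.dot(z, np.conj(w))

z = np.array([1 + 1j, 2 - 1j])
w = np.array([0 + 1j, 1 + 0j])

# Conjugate symmetry: <z, w> = conj(<w, z>)
assert np.isclose(herm(z, w), np.conj(herm(w, z)))
# Positive definiteness: <z, z> is real and positive
assert herm(z, z).imag == 0 and herm(z, z).real > 0
# Its real part is the Euclidean inner product of the underlying real vectors
assert np.isclose(herm(z, z).real, np.linalg.norm(z) ** 2)
```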

Lorentzian inner product

The standard Lorentzian inner product on is given by(1)i.e., for vectors and ,(2) endowed with the metric tensor induced by the above Lorentzian inner product is known as Minkowski space and is denoted .The Lorentzian inner product on is nothing more than a specific case of the more general Lorentzian inner product on -dimensional Lorentzian space with metric signature : In this more general environment, the inner product of two vectors and has the form(3)The Lorentzian inner product of two such vectors is sometimes denoted to avoid the possible confusion of the angled brackets with the standard Euclidean inner product (Ratcliffe 2006). Analogous presentations can be made if the equivalent metric signature (i.e., for Minkowski space) is used.The four-dimensional Lorentzian inner product is used as a tool in special relativity, namely as a measurement which is independent of reference frame and which replaces the typical Euclidean notion..
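A minimal numeric sketch, assuming the signature convention in which the time component enters with a plus sign and the spatial components with minus signs (the opposite convention differs only by an overall sign):

```python
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])   # Minkowski metric tensor, signature (1, 3)

def lorentz(x, y):
    """Lorentzian inner product x^T eta y on R^4."""
    return x @ eta @ y

x = np.array([2.0, 1.0, 0.0, 0.0])
print(lorentz(x, x))        # 4 - 1 = 3.0 > 0: a timelike vector
null = np.array([1.0, 1.0, 0.0, 0.0])
print(lorentz(null, null))  # 0.0: a lightlike (null) vector
```

The sign of the self-inner product classifies vectors as timelike, lightlike, or spacelike, which is exactly the frame-independent measurement used in special relativity.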

Orthonormal basis

A subset of a vector space , with the inner product , is called orthonormal if when . That is, the vectors are mutually perpendicular. Moreover, they are all required to have length one: . An orthonormal set must be linearly independent, and so it is a vector basis for the space it spans. Such a basis is called an orthonormal basis. The simplest example of an orthonormal basis is the standard basis for Euclidean space . The vector is the vector with all 0s except for a 1 in the th coordinate. For example, . A rotation (or flip) through the origin will send an orthonormal set to another orthonormal set. In fact, given any orthonormal basis, there is a rotation, or rotation combined with a flip, which will send the orthonormal basis to the standard basis. These are precisely the transformations which preserve the inner product, and are called orthogonal transformations. Usually when one needs a basis to do calculations, it is convenient to use an orthonormal..
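An orthonormal basis can be produced from any linearly independent set by the Gram-Schmidt process, sketched here:

```python
import numpy as np

def gram_schmidt(vectors):
    """Turn linearly independent vectors into an orthonormal basis."""
    basis = []
    for v in vectors:
        # Subtract the components along the vectors already in the basis.
        w = v - sum(np.dot(v, e) * e for e in basis)
        basis.append(w / np.linalg.norm(w))
    return basis

b = gram_schmidt([np.array([1.0, 1.0]), np.array([1.0, 0.0])])
# The result is orthonormal: pairwise dot products 0, all lengths 1.
assert np.isclose(np.dot(b[0], b[1]), 0.0)
assert all(np.isclose(np.linalg.norm(e), 1.0) for e in b)
```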

Alternating multilinear form

An alternating multilinear form on a real vector space is a multilinear form(1)such that(2)for any index . For example,(3)is an alternating form on . An alternating multilinear form is defined on a module in a similar way, by replacing with the ring.

Irreducible representation

An irreducible representation of a group is a group representation that has no nontrivial invariant subspaces. For example, the orthogonal group has an irreducible representation on . Any representation of a finite group or semisimple Lie group breaks up into a direct sum of irreducible representations. But in general, this is not the case, e.g., has a representation on by(1)i.e., . But the subspace is fixed, hence is not irreducible, but there is no complementary invariant subspace. The irreducible representation has a number of remarkable properties, as formalized in the group orthogonality theorem. Let the group order of a group be , and the dimension of the th representation (the order of each constituent matrix) be (a positive integer). Let any operation be denoted , and let the th row and th column of the matrix corresponding to a matrix in the th irreducible representation be . The following properties can be derived from the group orthogonality..

Induced representation

If a subgroup of has a group representation , then there is a unique induced representation of on a vector space . The original space is contained in , and in fact,(1)where is a copy of . The induced representation on is denoted . Alternatively, the induced representation is the CG-module(2)Also, it can be viewed as -valued functions on which commute with the action.(3)The induced representation is also determined by its universal property:(4)where is any representation of . Also, the induced representation satisfies the following formulas. 1. . 2. for any group representation . 3. when . Some of the group characters of can be calculated from the group characters of , as induced representations, using Frobenius reciprocity. Artin's reciprocity theorem says that the induced representations of cyclic subgroups of a finite group generate a lattice of finite index in the lattice of virtual characters. Brauer's theorem says that the virtual characters..

Group representation restriction

A group representation of a group on a vector space can be restricted to a subgroup . For example, the symmetric group on three letters has a representation on by(1)(2)(3)(4)(5)(6)that can be restricted to the subgroup of group order 3,(7)(8)(9)

Group representation

A representation of a group is a group action of on a vector space by invertible linear maps. For example, the group of two elements has a representation by and . A representation is a group homomorphism . Most groups have many different representations, possibly on different vector spaces. For example, the symmetric group has a representation on by(1)where is the permutation symbol of the permutation . It also has a representation on by(2)A representation gives a matrix for each element, and so another representation of is given by the matrices(3)Two representations are considered equivalent if they are similar. For example, performing similarity transformations of the above matrices by(4)gives the following equivalent representation of ,(5)Any representation of can be restricted to a representation of any subgroup , in which case, it is denoted . More surprisingly, any representation on can be extended to a representation of , on a larger..
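The permutation representation of the symmetric group described above can be sketched with permutation matrices, verifying the homomorphism property ρ(pq) = ρ(p)ρ(q) over all of S₃ (helper names are ours):

```python
import numpy as np
from itertools import permutations

def perm_matrix(p):
    """Matrix sending basis vector e_i to e_{p[i]}."""
    n = len(p)
    M = np.zeros((n, n))
    for i, j in enumerate(p):
        M[j, i] = 1.0
    return M

def compose(p, q):
    """(p o q)(i) = p[q[i]]."""
    return tuple(p[q[i]] for i in range(len(q)))

# Representation property: rho(p q) = rho(p) rho(q) for every pair in S3.
for p in permutations(range(3)):
    for q in permutations(range(3)):
        assert np.allclose(perm_matrix(compose(p, q)),
                           perm_matrix(p) @ perm_matrix(q))
```

Taking determinants of these matrices recovers the one-dimensional sign representation.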

Orthogonal group

For every dimension , the orthogonal group is the group of orthogonal matrices. These matrices form a group because they are closed under multiplication and taking inverses. Thinking of a matrix as given by coordinate functions, the set of matrices is identified with . The orthogonal matrices are the solutions to the equations(1)where is the identity matrix, which are redundant. Only of these are independent, leaving "free variables." In fact, the orthogonal group is a smooth -dimensional submanifold. Because the orthogonal group is a group and a manifold, it is a Lie group. has a submanifold tangent space at the identity that is the Lie algebra of antisymmetric matrices . In fact, the orthogonal group is a compact Lie group. The determinant of an orthogonal matrix is either 1 or , and so the orthogonal group has two components. The component containing the identity is the special orthogonal group . For example, the group has group action..
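A quick numeric sketch of the defining condition and the two components, using a rotation and a reflection in the plane:

```python
import numpy as np

# A rotation lies in O(2): Q^T Q = I, and det Q = +1 puts it in SO(2).
theta = 0.3
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
assert np.allclose(Q.T @ Q, np.eye(2))
assert np.isclose(np.linalg.det(Q), 1.0)

# A reflection is orthogonal with determinant -1: the other component of O(2).
R = np.array([[1.0, 0.0], [0.0, -1.0]])
assert np.allclose(R.T @ R, np.eye(2))
assert np.isclose(np.linalg.det(R), -1.0)

# The dimension count n(n - 1)/2 of free variables:
dim = lambda n: n * (n - 1) // 2
assert dim(3) == 3   # rotations of 3-space have three degrees of freedom
```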

Free idempotent monoid

A free idempotent monoid is a monoid that satisfies the identity and is generated by a set of elements. If the generating set of such a monoid is finite, then so is the free idempotent monoid itself. The number of elements in the monoid depends on the size of the generating set, and the size of the generating set uniquely determines a free idempotent monoid. On zero letters, the free idempotent monoid has one element (the identity). With one letter, the free idempotent monoid has two elements . With two letters, it has seven elements: . In general, the numbers of elements in the free idempotent monoids on letters are 1, 2, 7, 160, 332381, ... (OEIS A005345). These are given by the analytic expression, where is a binomial coefficient. The product can be done analytically, giving the sum in terms of derivatives of the polylogarithm with respect to its index and the Lerch transcendent with respect to its second argument...
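The counting sequence can be reproduced with the closed-form sum over generating-subset sizes, a(n) = Σₖ C(n, k) · Πᵢ₌₁..ₖ (k − i + 1)^(2^i) (this is the A005345 formula; the function name is ours):

```python
from math import comb

def free_idempotent_monoid_size(n):
    """Number of elements of the free idempotent monoid on n letters (OEIS A005345)."""
    total = 0
    for k in range(n + 1):
        prod = 1
        for i in range(1, k + 1):
            prod *= (k - i + 1) ** (2 ** i)
        total += comb(n, k) * prod
    return total

print([free_idempotent_monoid_size(n) for n in range(5)])
# [1, 2, 7, 160, 332381]
```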

Commutative monoid

A monoid that is commutative, i.e., a monoid such that for every two elements and in , . That is, a commutative monoid has an operation that is associative and commutative and possesses an identity element. For example, the nonnegative integers under addition form a commutative monoid. The integers under the operation with all form a commutative monoid. This monoid collapses to a group only if and are restricted to the integers 0, 1, ..., , since only then do the elements have unique additive inverses. Similarly, the integers under the operation also form a commutative monoid. The numbers of commutative monoids of orders , 2, ... are 1, 2, 5, 19, 78, 421, 2637, ... (OEIS A058131).

Submonoid

A submonoid is a subset of the elements of a monoid that is itself a monoid under the same monoid operation. For example, consider the monoid formed by the nonnegative integers under the operation . Then restricting and from all the integers to the set of elements forms a submonoid of the monoid under the operation .

Embedding

An embedding is a representation of a topological object, manifold, graph, field, etc. in a certain space in such a way that its connectivity or algebraic properties are preserved. For example, a field embedding preserves the algebraic structure of plus and times, an embedding of a topological space preserves open sets, and a graph embedding preserves connectivity.One space is embedded in another space when the properties of restricted to are the same as the properties of . For example, the rationals are embedded in the reals, and the integers are embedded in the rationals. In geometry, the sphere is embedded in as the unit sphere.Let and be structures for the same first-order language , and let be a homomorphism from to . Then is an embedding provided that it is injective (Enderton 1972, Grätzer 1979, Burris and Sankappanavar 1981).For example, if and are partially ordered sets, then an injective monotone mapping may not be an embedding..

Connected sum

The connected sum of -manifolds and is formed by deleting the interiors of -balls in and attaching the resulting punctured manifolds to each other by a homeomorphism , so is required to be interior to and bicollared in to ensure that the connected sum is a manifold. Topologically, if and are pathwise-connected, then the connected sum is independent of the choice of locations on and where the connection is glued. The illustrations above show the connected sums of two tori (top figure) and of two pairs of multi-handled tori. The connected sum of two knots is called a knot sum.

Quotient vector space

Suppose that and . Then the quotient space (read as " mod ") is isomorphic to . In general, when is a subspace of a vector space , the quotient space is the set of equivalence classes where if . By " is equivalent to modulo ," it is meant that for some in , and is another way to say . In particular, the elements of represent . Sometimes the equivalence classes are written as cosets . The quotient space is an abstract vector space, not necessarily isomorphic to a subspace of . However, if has an inner product, then is isomorphic to . In the example above, . Unfortunately, a different choice of inner product can change . Also, in the infinite-dimensional case, it is necessary for to be a closed subspace to realize the isomorphism between and , as well as to ensure the quotient space is a T2-space...

Cup product

The cup product is a product on cohomology classes. In the case of de Rham cohomology, a cohomology class can be represented by a closed form. The cup product of and is represented by the closed form , where is the wedge product of differential forms. It is the dual operation to intersection in homology. In general, the cup product is a map which satisfies , where is the th cohomology group.

Hopf map

The first example discovered of a map from a higher-dimensional sphere to a lower-dimensional sphere which is not null-homotopic. Its discovery was a shock to the mathematical community, since it was believed at the time that all such maps were null-homotopic, by analogy with homology groups. The Hopf map arises in many contexts, and can be generalized to a map . For any point in the sphere, its preimage is a circle in . There are several descriptions of the Hopf map, also called the Hopf fibration. As a submanifold of , the 3-sphere is(1)and the 2-sphere is a submanifold of ,(2)The Hopf map takes points (, , , ) on a 3-sphere to points on a 2-sphere (, , )(3)(4)(5)Every point on the 2-sphere corresponds to a circle, called the Hopf circle, on the 3-sphere. By stereographic projection, the 3-sphere can be mapped to , where the point at infinity corresponds to the north pole. As a map from , the Hopf map can be pretty complicated. The diagram above shows some of..

Supremum limit

Given a sequence of real numbers , the supremum limit (also called the limit superior or upper limit), written and pronounced 'lim-soup,' is the limit of as , where denotes the supremum. Note that, by definition, is nonincreasing and so either has a limit or tends to . For example, suppose , then for odd, , and for even, . Another example is , in which case is a constant sequence . When , the sequence converges to the real number. Otherwise, the sequence does not converge.
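The definition can be sketched on a finite sample of a sequence: compute the tail suprema Sₙ = sup of the terms from index n on, observe that they are nonincreasing, and watch them approach the limsup (here the oscillating example with limsup 1 and liminf −1):

```python
def tail_sups(xs):
    """S_n = sup of the tail xs[n:], computed for a finite sample."""
    return [max(xs[n:]) for n in range(len(xs))]

# x_n = (-1)^n (1 + 1/n): the even-indexed terms decrease toward 1,
# so the tail suprema decrease toward limsup = 1.
xs = [(-1) ** n * (1 + 1 / n) for n in range(1, 2001)]
S = tail_sups(xs)
assert all(a >= b for a, b in zip(S, S[1:]))  # nonincreasing
assert abs(S[-1] - 1) < 0.001                 # already close to the limit 1
```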

Cayley graph

Let be a group, and let be a set of group elements such that the identity element . The Cayley graph associated with is then defined as the directed graph having one vertex associated with each group element and directed edges whenever . The Cayley graph may depend on the choice of a generating set, and is connected iff generates (i.e., the set are group generators of ). Care is needed since the term "Cayley graph" is also used when is implicitly understood to be a set of generators for the group, in which case the graph is always connected (but in general, still dependent on the choice of generators). This sort of Cayley graph of a group may be computed in the Wolfram Language using CayleyGraph[G], where the generators used are those returned by GroupGenerators[G]. To complicate matters further, undirected versions of proper directed Cayley graphs are also known as Cayley graphs. An undirected Cayley graph of a particular generating set of..

Weyl group

Let be a finite-dimensional split semisimple Lie algebra over a field of field characteristic 0, a splitting Cartan subalgebra, and a weight of in a representation of . Then is also a weight. Furthermore, the reflections with a root, generate a group of linear transformations in called the Weyl group of relative to , where is the algebraic conjugate space of and is the Q-space spanned by the roots (Jacobson 1979, pp. 112, 117, and 119). The Weyl group acts on the roots of a semisimple Lie algebra, and it is a finite group. The animations above illustrate this action for the Weyl group acting on the roots of a homotopy from one Weyl matrix to the next one (i.e., it slides the arrows from to ) in the first two figures, while the third figure shows the Weyl group acting on the roots of the Cartan matrix of the infinite family of semisimple Lie algebras (cf. Dynkin diagram), which is the special linear Lie algebra, ...

Trivial group

The trivial group, denoted or , sometimes also called the identity group, is the unique (up to isomorphism) group containing exactly one element , the identity element. Examples include the zero group (which is the singleton set with respect to the trivial group structure defined by the addition ), the multiplicative group (where ), the point group , and the integers modulo 1 under addition. When viewed as a permutation group on letters, the trivial group consists of the single element which fixes each letter. The trivial group is (trivially) Abelian and cyclic. The multiplication table for is trivial, since the product of the identity element with itself is again the identity. The trivial group has the single conjugacy class and the single subgroup .

Character table

A finite group has a finite number of conjugacy classes and a finite number of distinct irreducible representations. The group character of a group representation is constant on a conjugacy class. Hence, the values of the characters can be written as an array, known as a character table. Typically, the rows are given by the irreducible representations and the columns by the conjugacy classes. A character table often contains enough information to identify a given abstract group and distinguish it from others. However, there exist nonisomorphic groups which nevertheless have the same character table, for example (the symmetry group of the square) and (the quaternion group). For example, the symmetric group on three letters has three conjugacy classes, represented by the permutations , , and . It also has three irreducible representations; two are one-dimensional and the third is two-dimensional: 1. The trivial representation ...

Tag system

A tag system is a set of rules that specifies that a fixed number of elements (commonly denoted or ) be removed from the beginning of a sequence and that a set of elements be appended ("tagged" onto the end) based on the elements that were removed from the beginning. For example, consider the tag system shown in the illustration above, in which black represents 1 and white represents 0. Then the starting pattern is "1" and the transition rules are and (Wolfram 2002, p. 93). Tag systems were first considered by Post in 1920 (Wolfram 2002, p. 894), although these results did not become widely known until published much later (Post 1943). Post apparently studied all tag systems of a certain type that involve removal and addition of no more than two elements at each step and concluded that none of them produced complicated behavior. However, looking at rules that remove three elements at each step, he discovered a particular rule..
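The remove-and-append loop can be sketched as a small interpreter. As an assumed example we run Post's well-known 3-element rule (remove ν = 3 symbols per step; a leading 0 appends "00", a leading 1 appends "1101"); the starting word is an arbitrary choice:

```python
def run_tag(word, rules, nu, max_steps):
    """Iterate a tag system; halt when the word gets shorter than nu."""
    history = [word]
    for _ in range(max_steps):
        if len(word) < nu:
            break
        # Remove nu symbols from the front; append the rule for the first symbol.
        word = word[nu:] + rules[word[0]]
        history.append(word)
    return history

rules = {"0": "00", "1": "1101"}
history = run_tag("1010", rules, nu=3, max_steps=20)
print(history)  # this particular start shrinks and halts
```

Whether this system halts for every starting word is a famously hard question, which is part of why Post's rule attracted attention.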

Algebraic manifold

An algebraic manifold is another name for a smooth algebraic variety. It can be covered by coordinate charts so that the transition functions are given by rational functions. Technically speaking, the coordinate charts should be to all of affine space .For example, the sphere is an algebraic manifold, with a chart given by stereographic projection to , and another chart at , with the transition function given by . In this setting, it is called the Riemann sphere. The torus is also an algebraic manifold, in this setting called an elliptic curve, with charts given by elliptic functions such as the Weierstrass elliptic function.

Abstract manifold

An abstract manifold is a manifold in the context of an abstract space with no particular embedding, or representation in mind. It is a topological space with an atlas of coordinate charts.For example, the sphere can be considered a submanifold of or a quotient space . But as an abstract manifold, it is just a manifold, which can be covered by two coordinate charts and , with the single transition function, defined by, where . It can also be thought of as two disks glued together at their boundary.

Lower bound

A function is said to have a lower bound if for all in its domain. The greatest lower bound is called the infimum.

Upper bound

A function is said to have an upper bound if for all in its domain. The least upper bound is called the supremum. A set is said to be bounded from above if it has an upper bound.

Kähler form

A closed two-form on a complex manifold which is also the negative imaginary part of a Hermitian metric is called a Kähler form. In this case, is called a Kähler manifold and , the real part of the Hermitian metric, is called a Kähler metric. The Kähler form combines the metric and the complex structure, indeed(1)where is the almost complex structure induced by multiplication by . Since the Kähler form comes from a Hermitian metric, it is preserved by , i.e., since . The equation implies that the metric and the complex structure are related. It gives a Kähler structure, and has many implications. On , the Kähler form can be written as(2)(3)where . In general, the Kähler form can be written in coordinates(4)where is a Hermitian metric, the real part of which is the Kähler metric. Locally, a Kähler form can be written as , where is a function called a Kähler potential. The Kähler form is..

Linear function

A linear function is a function which satisfiesandfor all and in the domain, and all scalars .

Inverse function theorem

Given a smooth function , if the Jacobian is invertible at 0, then there is a neighborhood containing 0 such that is a diffeomorphism. That is, there is a smooth inverse .

Root of unity

The th roots of unity are roots of the cyclotomic equation, which are known as the de Moivre numbers. The notations , , and , where the value of is understood by context, are variously used to denote the th root of unity. is always an th root of unity, but is such a root only if is even. In general, the roots of unity form a regular polygon with sides, and each vertex lies on the unit circle.
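The roots are the points e^(2πik/n) for k = 0, …, n − 1, which this sketch computes and checks:

```python
import cmath

def roots_of_unity(n):
    """The n complex solutions of z^n = 1, equally spaced on the unit circle."""
    return [cmath.exp(2j * cmath.pi * k / n) for k in range(n)]

r = roots_of_unity(4)   # 1, i, -1, -i: vertices of a square on the unit circle
assert all(abs(z ** 4 - 1) < 1e-12 for z in r)
assert abs(r[2] - (-1)) < 1e-12   # -1 is a 4th root of unity since 4 is even
```

For n > 1 the roots sum to zero, reflecting the symmetry of the regular polygon they form.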

Ideal radical

The radical of an ideal in a ring is the ideal which is the intersection of all prime ideals containing . Note that any ideal is contained in a maximal ideal, which is always prime. So the radical of an ideal is always at least as big as the original ideal. Naturally, if the ideal is prime then . Another description of the radical is the set of all elements some power of which lies in . This explains the connection with the radical symbol. For example, in , consider the ideal of all polynomials with degree at least 2. Then is like a square root of . Notice that the zero set (variety) of and is the same (in because is algebraically closed). Radicals are an important part of the statement of Hilbert's Nullstellensatz.

Quotient ring

A quotient ring (also called a residue-class ring) is a ring that is the quotient of a ring and one of its ideals , denoted . For example, when the ring is (the integers) and the ideal is (multiples of 6), the quotient ring is . In general, a quotient ring is a set of equivalence classes where iff . The quotient ring is an integral domain iff the ideal is prime. A stronger condition occurs when the quotient ring is a field, which happens precisely when the ideal is maximal. The ideals in a quotient ring are in a one-to-one correspondence with ideals in which contain the ideal . In particular, the zero ideal in corresponds to in . In the example above from the integers, the ideal of even integers contains the ideal of the multiples of 6. In the quotient ring, the evens correspond to the ideal in ..
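The quotient of the integers by the multiples of 6 can be sketched as arithmetic on class representatives 0–5 (a minimal illustration, not a general quotient-ring implementation):

```python
class Z6:
    """Equivalence classes of integers mod 6, with ring operations mod 6."""
    def __init__(self, a): self.a = a % 6
    def __add__(self, other): return Z6(self.a + other.a)
    def __mul__(self, other): return Z6(self.a * other.a)
    def __eq__(self, other): return self.a == other.a
    def __repr__(self): return f"[{self.a}]"

assert Z6(4) + Z6(5) == Z6(3)   # 9 is equivalent to 3 mod 6
# Zero divisors: 2 * 3 = 0, so (6) is not prime and Z/6Z is not an
# integral domain -- consistent with 6 not being a prime number.
assert Z6(2) * Z6(3) == Z6(0)
```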

Ideal quotient

The ideal quotient is an analog of division for ideals in a commutative ring . The ideal quotient is always another ideal. However, this operation is not exactly like division. For example, when is the ring of integers, then , which is nice, while , which is not as nice.

Ideal extension

The extension of , an ideal in a commutative ring , in a ring , is the ideal generated by its image under a ring homomorphism . Explicitly, it is any finite sum of the form where is in and is in . Sometimes the extension of is denoted . The image may not be an ideal if is not surjective. For instance, is a ring homomorphism and the image of the even integers is not an ideal since it does not contain any nonconstant polynomials. The extension of the even integers in this case is the set of polynomials with even coefficients. The extension of a prime ideal may not be prime. For example, consider . Then the extension of the even integers is not a prime ideal since .

Ideal contraction

When is a ring homomorphism and is an ideal in , then is an ideal in , called the contraction of and sometimes denoted . The contraction of a prime ideal is always prime. For example, consider . Then the contraction of is the ideal of even integers.

Homogeneous ideal

A homogeneous ideal in a graded ring is an ideal generated by a set of homogeneous elements, i.e., each one is contained in only one of the . For example, the polynomial ring is a graded ring, where . The ideal , i.e., all polynomials with no constant or linear terms, is a homogeneous ideal in . Another homogeneous ideal is in . Given any finite set of polynomials in variables, the process of homogenization converts them to homogeneous polynomials in variables. If is a polynomial of degree then is the homogenization of . Similarly, if is an ideal in , then is its homogenization and is a homogeneous ideal. For example, if then . Note that in general, if then may have more elements than . However, if , ..., form a Gröbner basis using a graded monomial order, then . A polynomial is easily dehomogenized by setting the extra variable . The affine variety corresponding to a homogeneous ideal has the property that iff for all complex . Therefore, a homogeneous ideal..

Prime ideal

A prime ideal is an ideal such that if , then either or . For example, in the integers, the ideal (i.e., the multiples of ) is prime whenever is a prime number. In any principal ideal domain, prime ideals are generated by prime elements. Prime ideals generalize the concept of primality to more general commutative rings. An ideal is prime iff the quotient ring is an integral domain because iff . Technically, some authors choose not to allow the trivial ring as a commutative ring, in which case they usually require prime ideals to be proper ideals. A maximal ideal is always a prime ideal, but some prime ideals are not maximal. In the integers, is a prime ideal, as it is in any integral domain. Note that this is the exception to the statement that all prime ideals in the integers are generated by prime numbers. While it might seem silly to allow this case, in some rings the structure of the prime ideals, the Zariski topology, is more interesting. For instance, in..

Fractional ideal

A fractional ideal is a generalization of an ideal in a ring . Rather than being contained in itself, a fractional ideal is contained in the number field , but has the property that there is an element such that(1)is an ideal in . In particular, every element in can be written as a fraction, with a fixed denominator.(2)Note that the multiplication of two fractional ideals is another fractional ideal. For example, in the field , the set(3)is a fractional ideal because(4)Note that , where(5)and so is an inverse to . Given any fractional ideal there is always a fractional ideal such that . Consequently, the fractional ideals form an Abelian group by multiplication. The principal ideals generate a subgroup , and the quotient group is called the ideal class group.

Maximal ideal

A maximal ideal of a ring is an ideal , not equal to , such that there are no ideals "in between" and . In other words, if is an ideal which contains as a subset, then either or . For example, is a maximal ideal of iff is prime, where is the ring of integers. Only in a local ring is there just one maximal ideal. For instance, in the integers, is a maximal ideal whenever is prime. A maximal ideal is always a prime ideal, and the quotient ring is always a field. In general, not all prime ideals are maximal.

Local ring

A local ring is a ring that contains a single maximal ideal. In this case, the Jacobson radical equals this maximal ideal. One property of a local ring is that the subset is precisely the set of ring units, where is the maximal ideal. This follows because, in a ring, any nonunit belongs to at least one maximal ideal.

Ring spectrum

The spectrum of a ring is the set of proper prime ideals,(1)The classical example is the spectrum of polynomial rings. For instance,(2)and(3)The points are, in classical algebraic geometry, algebraic varieties. Note that are maximal ideals, hence also prime. The spectrum of a ring has a topology called the Zariski topology. The closed sets are of the form(4)For example,(5)Every prime ideal is closed except for , whose closure is .

Dedekind ring

A Dedekind ring is a commutative ring in which the following hold. 1. It is a Noetherian ring and an integral domain. 2. It is the set of algebraic integers in its field of fractions. 3. Every nonzero prime ideal is also a maximal ideal. Of course, in any ring, maximal ideals are always prime. The main example of a Dedekind domain is the ring of algebraic integers in a number field, an extension field of the rational numbers. An important consequence of the above axioms is that every ideal can be written uniquely as a product of prime ideals. This compensates for the possible failure of unique factorization of elements into irreducibles.

Ring of fractions

The extension ring obtained from a commutative unit ring (other than the trivial ring) when allowing division by all non-zero divisors. The ring of fractions of an integral domain is always a field. The term "ring of fractions" is sometimes used to denote any localization of a ring. The ring of fractions in the above meaning is then referred to as the total ring of fractions, and coincides with the localization with respect to the set of all non-zero divisors. When defining addition and multiplication of fractions, all that is required of the denominators is that they be multiplicatively closed, i.e., if , then ,(1)(2)Given a multiplicatively closed set in a ring , the ring of fractions is all elements of the form with and . Of course, it is required that and that fractions of the form and be considered equivalent. With the above definitions of addition and multiplication, this set forms a ring. The original ring may not embed in this ring of..

Ring homomorphism

A ring homomorphism is a map between two rings such that 1. Addition is preserved: , 2. The zero element is mapped to zero: , and 3. Multiplication is preserved: , where the operations on the left-hand side are in the domain and those on the right-hand side in the codomain. Note that a homomorphism must also preserve additive inverses. A ring homomorphism for unit rings (i.e., rings with a multiplicative identity) satisfies the additional property that one multiplicative identity is mapped to the other, i.e., .
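A minimal sketch in Python, using reduction modulo n — a standard example of a ring homomorphism from the integers to the integers mod n (the example is an assumption, not taken from the entry) — and checking the three properties plus the unit-ring condition:

```python
# f: Z -> Z/nZ, f(a) = a mod n, is a ring homomorphism.
n = 6
f = lambda a: a % n

pairs = [(2, 5), (-3, 7), (10, 11)]
for a, b in pairs:
    assert f(a + b) == (f(a) + f(b)) % n   # addition preserved
    assert f(a * b) == (f(a) * f(b)) % n   # multiplication preserved
assert f(0) == 0   # zero maps to zero
assert f(1) == 1   # identity maps to identity (unit-ring case)
assert f(-2) == (-f(2)) % n   # additive inverses preserved
```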

Tensor contraction

The contraction of a tensor is obtained by setting unlike indices equal and summing according to the Einstein summation convention. Contraction reduces the tensor rank by 2. For example, for a second-rank tensor,The contraction operation is invariant under coordinate changes sinceand must therefore be a scalar. When is interpreted as a matrix, the contraction is the same as the trace. Sometimes, two tensors are contracted using an upper index of one tensor and a lower index of the other tensor. In this context, contraction occurs after tensor multiplication.
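The index-contraction description can be illustrated with NumPy's `einsum`: contracting the two indices of a second-rank tensor reproduces the trace, as the entry states (the array values here are arbitrary):

```python
import numpy as np

# Contracting the two indices of a second-rank tensor A (setting the
# indices equal and summing over the repeated index) yields its trace.
A = np.arange(9.0).reshape(3, 3)
contraction = np.einsum('ii->', A)   # sum over the repeated index i
assert contraction == np.trace(A)    # 0 + 4 + 8 = 12
print(contraction)
```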

Tensor

An th-rank tensor in -dimensional space is a mathematical object that has indices and components and obeys certain transformation rules. Each index of a tensor ranges over the number of dimensions of space. However, the dimension of the space is largely irrelevant in most tensor equations (with the notable exception of the contracted Kronecker delta). Tensors are generalizations of scalars (that have no indices), vectors (that have exactly one index), and matrices (that have exactly two indices) to an arbitrary number of indices.Tensors provide a natural and concise mathematical framework for formulating and solving problems in areas of physics such as elasticity, fluid mechanics, and general relativity.The notation for a tensor is similar to that of a matrix (i.e., ), except that a tensor , , , etc., may have an arbitrary number of indices. In addition, a tensor with rank may be of mixed type , consisting of so-called "contravariant"..

Antisymmetric tensor

An antisymmetric (also called alternating) tensor is a tensor which changes sign when two indices are switched. For example, a tensor such that(1)is antisymmetric. The simplest nontrivial antisymmetric tensor is therefore an antisymmetric rank-2 tensor, which satisfies(2)Furthermore, any rank-2 tensor can be written as a sum of symmetric and antisymmetric parts as(3)The antisymmetric part of a tensor is sometimes denoted using the special notation(4)For a general rank- tensor,(5)where is the permutation symbol. Symbols for the symmetric and antisymmetric parts of tensors can be combined, for example(6)(Wald 1984, p. 26).
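The symmetric/antisymmetric decomposition of a rank-2 tensor can be sketched numerically (the sample matrix is arbitrary):

```python
import numpy as np

# Any rank-2 tensor splits into symmetric and antisymmetric parts:
#   A = (A + A^T)/2 + (A - A^T)/2
A = np.array([[1.0, 2.0], [5.0, 3.0]])
sym  = (A + A.T) / 2
anti = (A - A.T) / 2
assert np.allclose(sym + anti, A)       # the parts reassemble A
assert np.allclose(anti, -anti.T)       # antisymmetric: sign flips on index swap
assert np.allclose(np.diag(anti), 0.0)  # diagonal of the antisymmetric part vanishes
```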

Bounded variation

A function is said to have bounded variation if, over the closed interval , there exists an such that(1)for all . The space of functions of bounded variation is denoted "BV," and has the seminorm(2)where ranges over all compactly supported functions bounded by and 1. The seminorm is equal to the supremum over all sums above, and is also equal to (when this expression makes sense). On the interval , the function is of bounded variation, but is not. More generally, a function is locally of bounded variation in a domain if is locally integrable, , and for all open subsets , with compact closure in , and all smooth vector fields compactly supported in ,(3)where div denotes divergence and is a constant which only depends on the choice of and . Such functions form the space . They may not be differentiable, but by the Riesz representation theorem, the derivative of a -function is a regular Borel measure . Functions of bounded variation also..
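The defining sum can be approximated by evaluating it on a fine partition. A sketch, using sin on [0, 2π] as an assumed example (its total variation is exactly 4: up 1, down 2, up 1):

```python
import numpy as np

# Approximate the total variation of f on [a, b] as the partition sum
#   sum_k |f(x_{k+1}) - f(x_k)|
def total_variation(f, a, b, n=100000):
    x = np.linspace(a, b, n)
    return np.sum(np.abs(np.diff(f(x))))

tv = total_variation(np.sin, 0.0, 2 * np.pi)
print(tv)  # converges to 4 as the partition is refined
```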

Characteristic function

Given a subset of a larger set, the characteristic function , sometimes also called the indicator function, is the function defined to be identically one on , and is zero elsewhere. Characteristic functions are sometimes denoted using the so-called Iverson bracket, and can be useful descriptive devices since it is easier to say, for example, "the characteristic function of the primes" rather than repeating a given definition. A characteristic function is a special case of a simple function.The term characteristic function is used in a different way in probability, where it is denoted and is defined as the Fourier transform of the probability density function using Fourier transform parameters ,(1)(2)(3)(4)(5)where (sometimes also denoted ) is the th moment about 0 and (Abramowitz and Stegun 1972, p. 928; Morrison 1995).A statistical distribution is not uniquely specified by its moments, but is by its characteristic..
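A minimal sketch of an indicator function, using the entry's own example of "the characteristic function of the primes" (the bound 30 is an arbitrary choice):

```python
# The characteristic (indicator) function of a set A is 1 on A, 0 elsewhere.
def is_prime(n):
    return n >= 2 and all(n % d for d in range(2, int(n**0.5) + 1))

primes = {n for n in range(30) if is_prime(n)}
chi = lambda n: 1 if n in primes else 0   # Iverson-bracket-style indicator

values = [chi(n) for n in range(10)]
print(values)  # [0, 0, 1, 1, 0, 1, 0, 1, 0, 0]
```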

Tensor direct product

Abstractly, the tensor direct product is the same as the vector space tensor product. However, it reflects an approach toward calculation using coordinates, and indices in particular. The notion of tensor product is more algebraic, intrinsic, and abstract. For instance, up to isomorphism, the tensor product is commutative because . Note this does not mean that the tensor product is symmetric.For two first-tensor rank tensors (i.e., vectors), the tensor direct product is defined as(1)which is a second-tensor rank tensor. The tensor contraction of a direct product of first-tensor rank tensors is the scalar(2)For second-tensor rank tensors,(3)(4)In general, the direct product of two tensors is a tensor of rank equal to the sum of the two initial ranks. The direct product is associative, but not commutative.The tensor direct product of two tensors and can be implemented in the Wolfram Language as TensorDirectProduct[a_List, b_List] :=..
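The two displayed facts — the direct product of two vectors is a second-rank tensor, and contracting it gives the scalar product — can be checked with NumPy (the sample vectors are arbitrary):

```python
import numpy as np

# Direct (outer) product of two rank-1 tensors: a_i b_j, a rank-2 tensor.
a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])

outer = np.einsum('i,j->ij', a, b)      # rank 1 + rank 1 = rank 2
assert outer.shape == (3, 3)

# Contracting the direct product recovers the scalar product a . b.
contracted = np.einsum('ii->', outer)
assert contracted == a @ b              # 4 + 10 + 18 = 32
```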

Deck transformation

Deck transformations, also called covering transformations, are defined for any cover . They act on by homeomorphisms which preserve the projection . Deck transformations can be defined by lifting paths from a space to its universal cover , which is a simply connected space and is a cover of . Every loop in , say a function on the unit interval with , lifts to a path , which only depends on the choice of , i.e., the starting point in the preimage of . Moreover, the endpoint depends only on the homotopy class of and . Given a point , and , a member of the fundamental group of , a point is defined to be the endpoint of a lift of a path which represents . The deck transformations of a universal cover form a group , which is the fundamental group of the quotient space. For example, when is the square torus then is the plane and the preimage is a translation of the integer lattice . Any loop in the torus lifts to a path in the plane, with the endpoints lying in the integer lattice...

Generalized function

The class of all regular sequences of particularly well-behaved functions equivalent to a given regular sequence. A distribution is sometimes also called a "generalized function" or "ideal function." As its name implies, a generalized function is a generalization of the concept of a function. For example, in physics, a baseball being hit by a bat encounters a force from the bat, as a function of time. Since the transfer of momentum from the bat is modeled as taking place at an instant, the force is not actually a function. Instead, it is a multiple of the delta function. The set of distributions contains functions (locally integrable) and Radon measures. Note that the term "distribution" is closely related to statistical distributions.Generalized functions are defined as continuous linear functionals over a space of infinitely differentiable functions such that all continuous functions have derivatives..

Complete riemannian metric

The geodesics in a complete Riemannian metric go on indefinitely, i.e., each geodesic is isometric to the real line. For example, Euclidean space is complete, but the open unit disk is not complete since any geodesic ends at a finite distance. Whether or not a manifold is complete depends on the metric. For instance, the punctured plane is not complete with the usual metric. However, with the Riemannian metric , the punctured plane is the infinite (flat) cylinder, which is complete. The figure above illustrates a geodesic which can only go a finite distance because it reaches a hole in the punctured plane, exemplifying that the punctured plane with the usual metric is not complete. The path is a geodesic parametrized by arc length. Any metric on a compact manifold is complete. Consequently, the pullback metric on the universal cover of a compact manifold is complete...

Codimension

Codimension is a term used in a number of algebraic and geometric contexts to indicate the difference between the dimension of certain objects and the dimension of a smaller object contained in it. This rough definition applies to vector spaces (the codimension of the subspace in is ) and to topological spaces (with respect to the Euclidean topology and the Zariski topology, the codimension of a sphere in is ).The first example is a particular case of the formula(1)which gives the codimension of a subspace of a finite-dimensional abstract vector space . The second example has an algebraic counterpart in ring theory. A sphere in the three-dimensional real Euclidean space is defined by the following equation in Cartesian coordinates(2)where the point is the center and is the radius. The Krull dimension of the polynomial ring is 3, the Krull dimension of the quotient ring(3)is 2, and the difference is also called the codimension of the ideal(4)According..

Wedge product

The wedge product is the product in an exterior algebra. If and are differential k-forms of degrees and , respectively, then(1)It is not (in general) commutative, but it is associative,(2)and bilinear(3)(4)(Spivak 1999, p. 203), where and are constants. The exterior algebra is generated by elements of degree one, and so the wedge product can be defined using a basis for :(5)when the indices are distinct, and the product is zero otherwise. While the formula holds when has degree one, it does not hold in general. For example, consider :(6)(7)(8)If have degree one, then they are linearly independent iff . The wedge product is the "correct" type of product to use in computing a volume element(9)The wedge product can therefore be used to calculate determinants and volumes of parallelepipeds. For example, write where are the columns of . Then(10)and is the volume of the parallelepiped spanned by ...
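As a sketch of the volume-element remark: the wedge of two one-forms has components a_i b_j - a_j b_i, and in two dimensions its single independent component equals the determinant of the matrix with those vectors as columns (the sample vectors are arbitrary):

```python
import numpy as np

# Components of the wedge of two one-forms: (a ^ b)_{ij} = a_i b_j - a_j b_i.
a = np.array([2.0, 1.0])
b = np.array([1.0, 3.0])

wedge = np.outer(a, b) - np.outer(b, a)   # antisymmetric rank-2 tensor
assert np.allclose(wedge, -wedge.T)

# In 2D the single independent component is the signed area (volume element).
det = np.linalg.det(np.column_stack([a, b]))
assert np.isclose(wedge[0, 1], det)       # 2*3 - 1*1 = 5
```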

Interior product

The interior product is a dual notion of the wedge product in an exterior algebra , where is a vector space. Given an orthonormal basis of , the forms(1)are an orthonormal basis for . They define a metric on the exterior algebra, . The interior product with a form is the adjoint of the wedge product with . That is,(2)for all . For example,(3)and(4)where the are orthonormal, are two interior products.An inner product on gives an isomorphism with the dual vector space . The interior product is the composition of this isomorphism with tensor contraction.

Exterior power

The th exterior power of an element in an exterior algebra is given by the wedge product of with itself times. Note that if has odd degree, then any higher power of must be zero. The situation for even degree forms is different. For example, if(1)then(2)(3)(4)

Comass

The comass of a differential p-form is the largest value of on a vector of -volume one.

Christoffel symbol

The Christoffel symbols are tensor-like objects derived from a Riemannian metric . They are used to study the geometry of the metric and appear, for example, in the geodesic equation. There are two closely related kinds of Christoffel symbols, the first kind , and the second kind . Christoffel symbols of the second kind are also known as affine connections (Weinberg 1972, p. 71) or connection coefficients (Misner et al. 1973, p. 210).It is always possible to pick a coordinate system on a Riemannian manifold such that the Christoffel symbol vanishes at a chosen point. In general relativity, Christoffel symbols are "gravitational forces," and the preferred coordinate system referred to above would be one attached to a body in free fall.

Calibration form

A calibration form on a Riemannian manifold is a differential p-form such that 1. is a closed form. 2. The comass of ,(1)defined as the largest value of on a vector of -volume one, equals 1. A -dimensional submanifold is calibrated when restricts to give the volume form.It is not hard to see that a calibrated submanifold minimizes its volume among objects in its homology class. By Stokes' theorem, if represents the same homology class, then(2)Since(3)and(4)it follows that the volume of is less than or equal to the volume of .A simple example is on the plane, for which the lines are calibrated submanifolds. In fact, in this example, the calibrated submanifolds give a foliation. On a Kähler manifold, the Kähler form is a calibration form, which is indecomposable. For example, on(5)the Kähler form is(6)On a Kähler manifold, the calibrated submanifolds are precisely the complex submanifolds. Consequently, the complex submanifolds..

Holonomy group

On a Riemannian manifold , tangent vectors can be moved along a path by parallel transport, which preserves vector addition and scalar multiplication. So a closed loop at a base point , gives rise to an invertible linear map of , the tangent vectors at . It is possible to compose closed loops by following one after the other, and to invert them by going backwards. Hence, the set of linear transformations arising from parallel transport along closed loops is a group, called the holonomy group. Since parallel transport preserves the Riemannian metric, the holonomy group is contained in the orthogonal group . Moreover, if the manifold is orientable, then it is contained in the special orthogonal group. A generic Riemannian metric on an orientable manifold has holonomy group , but for some special metrics it can be a subgroup, in which case the manifold is said to have special holonomy. A Kähler manifold is a -dimensional manifold whose holonomy lies..

Hermitian metric

A Hermitian metric on a complex vector bundle assigns a Hermitian inner product to every fiber. The basic example is the trivial bundle , where is an open set in . Then a positive definite Hermitian matrix defines a Hermitian metric bywhere is the complex conjugate of . By a partition of unity, any complex vector bundle has a Hermitian metric. In the special case of a complex manifold, the complexified tangent bundle may have a Hermitian metric, in which case its real part is a Riemannian metric and its imaginary part is a nondegenerate alternating multilinear form . When is closed, i.e., in this case a symplectic form, then is a Kähler form. On a holomorphic vector bundle with a Hermitian metric , there is a unique connection compatible with and the complex structure. Namely, it must be , where in a trivialization...

Gauge theory

Gauge theory studies principal bundle connections, called gauge fields, on a principal bundle. These connections correspond to fields, in physics, such as an electromagnetic field, and the Lie group of the principal bundle corresponds to the symmetries of the physical system. The base manifold to the principal bundle is usually a four-dimensional manifold which corresponds to space-time. In the case of an electromagnetic field, the symmetry group is the unitary group . The other two groups that arise in physical theories are the special unitary groups and . Also, a group representation of the symmetry group, called internal space, gives rise to an associated vector bundle. Actually, the principal bundle connections which minimize an energy functional are the only ones of physical interest. For example, the Yang-Mills connections minimize the Yang-Mills functional. These connections are useful in low-dimensional topology. In fact,..

Differential ideal

A differential ideal on a manifold is an ideal in the exterior algebra of differential k-forms on which is also closed under the exterior derivative . That is, for any form in the ideal, both its exterior derivative and its wedge product with any other form again lie in the ideal. For example, is a differential ideal on . A smooth map is called an integral of if the pullback map of all forms in vanish on , i.e., .

Isometric

A metric space is isometric to a metric space if there is a bijection between and that preserves distances. That is, . In the context of Riemannian geometry, two manifolds and are isometric if there is a diffeomorphism such that the Riemannian metric from one pulls back to the metric on the other. Since the geodesics define a distance, a Riemannian metric makes the manifold a metric space. An isometry between Riemannian manifolds is also an isometry between the two manifolds, considered as metric spaces.Isometric spaces are considered isomorphic. For instance, the circle of radius one around the origin is isometric to the circle of radius one around .

Radius of convergence

A power series will converge only for certain values of . For instance, converges for . In general, there is always an interval in which a power series converges, and the number is called the radius of convergence (while the interval itself is called the interval of convergence). The quantity is called the radius of convergence because, in the case of a power series with complex coefficients, the values of with form an open disk with radius .A power series always converges absolutely within its radius of convergence. This can be seen by fixing and supposing that there exists a subsequence such that is unbounded. Then the power series does not converge (in fact, the terms are unbounded) because it fails the limit test. Therefore, for with , the power series does not converge, where(1)(2)and denotes the supremum limit.Conversely, suppose that . Then for any radius with , the terms satisfy(3)for large enough (depending on ). It is sufficient to fix a value..
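A rough numerical sketch of the root-test formula R = 1 / limsup |a_n|^(1/n), using the assumed example a_n = 2^n (so R = 1/2):

```python
# Estimate the radius of convergence from the coefficient roots |a_n|^(1/n).
coeffs = [2.0**n for n in range(1, 200)]
roots = [c**(1.0 / n) for n, c in enumerate(coeffs, start=1)]

# The tail of the sequence approximates the limsup (here every root is 2).
R = 1.0 / max(roots[-50:])
print(R)  # 0.5: the series sum 2^n x^n converges exactly for |x| < 1/2
```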

Limit test

The limit test, also sometimes known as the th term test, says that if or this limit does not exist as tends to infinity, then the series does not converge. For example, does not converge by the limit test. The limit test is inconclusive when the limit is zero.
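A small sketch of the test with a standard example (an assumption, in the spirit of the entry's own): the terms n/(n+1) tend to 1, not 0, so the series diverges, and the partial sums grow without bound:

```python
# nth-term test: if the terms a_n do not tend to 0, the series sum a_n diverges.
terms = [n / (n + 1) for n in range(1, 10001)]
assert abs(terms[-1] - 1.0) < 1e-3   # the terms approach 1, not 0

partial_sum = sum(terms)
print(partial_sum)   # grows roughly like N, confirming divergence
```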

Elliptic partial differential equation

A second-order partial differential equation,i.e., one of the form(1)is called elliptic if the matrix(2)is positive definite. Elliptic partial differential equations have applications in almost all areas of mathematics, from harmonic analysis to geometry to Lie theory, as well as numerous applications in physics. As with a general PDE, elliptic PDE may have non-constant coefficients and be non-linear. Despite this variety, the elliptic equations have a well-developed theory.The basic example of an elliptic partial differential equation is Laplace'sequation(3)in -dimensional Euclidean space, where the Laplacian is defined by(4)Other examples of elliptic equations include the nonhomogeneous Poisson'sequation(5)and the non-linear minimal surface equation.For an elliptic partial differential equation, boundary conditions are used to give the constraint on , where(6)holds in .One property of constant coefficient elliptic..
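A minimal finite-difference sketch (not taken from the entry) for the simplest elliptic boundary-value problem, the one-dimensional Poisson equation with Dirichlet boundary conditions and a known exact solution:

```python
import numpy as np

# Solve  -u'' = f  on (0, 1) with u(0) = u(1) = 0.
# Taking f = pi^2 sin(pi x), the exact solution is u = sin(pi x).
n = 200
h = 1.0 / n
x = np.linspace(0.0, 1.0, n + 1)
f = np.pi**2 * np.sin(np.pi * x[1:-1])

# Standard second-difference matrix: tridiagonal and positive definite,
# reflecting the ellipticity (definiteness) of the operator.
A = (np.diag(2.0 * np.ones(n - 1))
     - np.diag(np.ones(n - 2), 1)
     - np.diag(np.ones(n - 2), -1)) / h**2

u = np.zeros(n + 1)
u[1:-1] = np.linalg.solve(A, f)

err = np.max(np.abs(u - np.sin(np.pi * x)))
print(err)  # O(h^2) accuracy
```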

Complex differentiable

Let and on some region containing the point . If satisfies the Cauchy-Riemann equations and has continuous first partial derivatives in the neighborhood of , then exists and is given byand the function is said to be complex differentiable (or, equivalently, analytic or holomorphic). A function can be thought of as a map from the plane to the plane, . Then is complex differentiable iff its Jacobian is of the form at every point. That is, its derivative is given by the multiplication of a complex number . For instance, the function , where is the complex conjugate, is not complex differentiable.
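The direction-independence of the complex derivative can be checked numerically: for a complex-differentiable function the difference quotient has the same limit along every direction. The function z^2 passes; the complex conjugate (the entry's counterexample) fails:

```python
h = 1e-6
z0 = 1.0 + 2.0j

f = lambda z: z * z
d_real = (f(z0 + h) - f(z0)) / h            # approach along the real axis
d_imag = (f(z0 + 1j*h) - f(z0)) / (1j*h)    # approach along the imaginary axis
assert abs(d_real - d_imag) < 1e-4          # agree: f is complex differentiable

g = lambda z: z.conjugate()
e_real = (g(z0 + h) - g(z0)) / h            # = 1
e_imag = (g(z0 + 1j*h) - g(z0)) / (1j*h)    # = -1
assert abs(e_real - e_imag) > 1.0           # disagree: g is not differentiable
```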

Schwarz reflection principle

Suppose that is an analytic function which is defined in the upper half-disk . Further suppose that extends to a continuous function on the real axis, and takes on real values on the real axis. Then can be extended to an analytic function on the whole disk by the formulaand the values for reflected across the real axis are the reflections of across the real axis. It is easy to check that the above function is complex differentiable in the interior of the lower half-disk. What is remarkable is that the resulting function must be analytic along the real axis as well, despite no assumptions of differentiability.This is called the Schwarz reflection principle, and is sometimes also known as Schwarz's symmetric principle (Needham 2000, p. 257). The diagram above shows the reflection principle applied to a function defined for the upper half-disk (left figure; red) and its image (right figure; red). The function is real on the real axis, so it is possible..
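A sketch of the reflection formula, extending f to the lower half by F(z) = conjugate(f(conjugate(z))). With f = exp, which is real on the real axis, the reflected values agree with exp itself (the choice of f and the sample point are assumptions for illustration):

```python
import cmath

f = cmath.exp                                 # analytic, real on the real axis
F = lambda z: f(z.conjugate()).conjugate()    # Schwarz reflection of f

z = -0.3 - 0.4j                 # a point in the lower half-disk
# The extension agrees with the (known) analytic continuation, exp itself:
assert abs(F(z) - f(z)) < 1e-12
print(F(z))
```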

Cantor's intersection theorem

A theorem about (or providing an equivalent definition of) compact sets, originally due to Georg Cantor. Given a decreasing sequence of bounded nonempty closed setsin the real numbers, then Cantor's intersection theorem states that there must exist a point in their intersection, for all . For example, . It is also true in higher dimensions of Euclidean space.Note that the hypotheses stated above are crucial. The infinite intersection of open intervals may be empty, for instance . Also, the infinite intersection of unbounded closed sets may be empty, e.g., .Cantor's intersection theorem is closely related to the Heine-Borel theorem and Bolzano-Weierstrass theorem, each of which can be easily derived from either of the other two. It can be used to show that the Cantor set is nonempty.
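A small sketch contrasting the closed intervals [0, 1/n], whose intersection contains 0 as the theorem guarantees, with the open intervals mentioned in the entry, whose intersection is empty (the specific families are assumed examples):

```python
# Nested, bounded, nonempty closed sets C_n = [0, 1/n].
intervals = [(0.0, 1.0 / n) for n in range(1, 1000)]

# Each C_{n+1} is contained in C_n, and 0 lies in every one of them.
for (a1, b1), (a2, b2) in zip(intervals, intervals[1:]):
    assert a1 <= a2 and b2 <= b1
assert all(a <= 0.0 <= b for a, b in intervals)

# Contrast: the OPEN intervals (0, 1/n) have empty intersection -- no x > 0
# survives, since 1/n eventually drops below x (here x = 0.01 fails at n = 101).
assert not all(a < 0.01 < b for a, b in intervals)
```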
