 # Information theory

## Information Theory Topics


### Shannon capacity

Let $\alpha(G)$ denote the independence number of a graph $G$. Then the Shannon capacity $\Theta(G)$, sometimes also denoted $c(G)$, of $G$ is defined as

$$\Theta(G) = \lim_{k\to\infty} \left[\alpha\left(G^{\boxtimes k}\right)\right]^{1/k},$$

where $G^{\boxtimes k}$ denotes the $k$-fold graph strong product of $G$ with itself (Shannon 1956, Alon and Lubetzky 2006). The Shannon capacity is an important information-theoretical parameter because it represents the effective size of an alphabet in a communication model represented by a graph $G$ (Alon 1998). $\Theta(G)$ satisfies

$$\alpha(G) \le \Theta(G) \le \vartheta(G),$$

where $\vartheta(G)$ is the Lovász number of $G$. The Shannon capacity is in general very difficult to calculate (Brimkov et al. 2000). In fact, the Shannon capacity of the cycle graph $C_5$ was not determined to be $\sqrt{5}$ until 1979 (Lovász 1979), and the Shannon capacity of $C_7$ is perhaps one of the most notorious open problems in extremal combinatorics (Bohman 2003).

Lovász (1979) showed that the Shannon capacity of the $(n,r)$-Kneser graph is $\binom{n-1}{r-1}$, that of a vertex-transitive self-complementary graph on $n$ vertices (which includes all Paley graphs) is $\sqrt{n}$, and that of the Petersen graph is 4.
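As a concrete check on the definition, the following sketch (function names are illustrative, not from the source) computes $\alpha(C_5 \boxtimes C_5)$ by brute force, recovering the one-step lower bound $\Theta(C_5) \ge \sqrt{\alpha(C_5 \boxtimes C_5)} = \sqrt{5}$:

```python
from itertools import combinations

# Vertices of the 5-cycle C_5; i and j are adjacent iff they differ by 1 mod 5.
n = 5
def adjacent_c5(a, b):
    return (a - b) % n in (1, n - 1)

# Strong product C_5 ⊠ C_5: vertices are pairs; two distinct pairs are
# adjacent iff each coordinate is equal or adjacent in C_5.
vertices = [(i, j) for i in range(n) for j in range(n)]
def adjacent_strong(u, v):
    return u != v and all(a == b or adjacent_c5(a, b) for a, b in zip(u, v))

def independence_number(verts, adj, upper):
    # Largest k for which some k-subset is pairwise non-adjacent
    # (brute force, searching downward from a small upper bound).
    for k in range(upper, 0, -1):
        for S in combinations(verts, k):
            if all(not adj(u, v) for u, v in combinations(S, 2)):
                return k
    return 0

alpha2 = independence_number(vertices, adjacent_strong, 6)
print(alpha2)         # α(C_5 ⊠ C_5) = 5
print(alpha2 ** 0.5)  # lower bound Θ(C_5) ≥ √5 ≈ 2.236
```

The independent set found corresponds to Shannon's classical construction of five non-confusable two-letter words over the pentagon alphabet.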

### Concept

In machine learning theory and artificial intelligence, a concept over a domain $X$ is a Boolean function $c: X \to \{0, 1\}$. A collection of concepts is called a concept class. In context-specific applications, concepts are usually thought of as assigning either a "positive" or "negative" outcome (corresponding to range values of 1 or 0, respectively) to each element of the domain $X$. In that way, concepts are the fundamental component of learning theory.
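A minimal sketch of these definitions, using the concept class of threshold functions on a small integer domain (an illustrative choice, not from the source):

```python
# A concept over a domain X is a Boolean function c: X -> {0, 1}.
# Here the domain is the integers 0..9 and each concept is a threshold
# function c_t(x) = 1 iff x >= t (an illustrative concept class).
domain = range(10)

def make_threshold_concept(t):
    return lambda x: 1 if x >= t else 0

# The concept class: one threshold concept per cutoff t.
concept_class = [make_threshold_concept(t) for t in range(11)]

c5 = make_threshold_concept(5)
labels = [c5(x) for x in domain]   # "negative" below 5, "positive" at or above
print(labels)                      # [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]
```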

### Sampling theorem

In order for a band-limited (i.e., one with a zero power spectrum for frequencies $\nu > B$) baseband ($\nu > 0$) signal to be reconstructed fully, it must be sampled at a rate $\nu_{\rm samp} \ge 2B$. A signal sampled at $\nu_{\rm samp} = 2B$ is said to be Nyquist sampled, and $\nu_N = 2B$ is called the Nyquist frequency. No information is lost if a signal is sampled at the Nyquist frequency, and no additional information is gained by sampling faster than this rate.
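The reconstruction itself is the Whittaker-Shannon interpolation formula $x(t) = \sum_n x[n]\,\mathrm{sinc}(\nu_{\rm samp} t - n)$. A sketch with an illustrative band-limited test signal (a finite window of samples, so the recovery is approximate rather than exact):

```python
import math

def sinc(x):
    # Normalized sinc: sin(pi x)/(pi x), with sinc(0) = 1.
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

# Illustrative band-limited test signal with maximum frequency B = 2.5 Hz.
def f(t):
    return math.sin(2 * math.pi * 1.0 * t) + 0.5 * math.sin(2 * math.pi * 2.5 * t)

fs = 8.0   # sampling rate, above the Nyquist frequency 2B = 5 Hz
N = 400    # samples n = -N..N (a finite window of the infinite sum)
samples = {n: f(n / fs) for n in range(-N, N + 1)}

def reconstruct(t):
    # Whittaker-Shannon interpolation: x(t) = sum_n x[n] sinc(fs*t - n)
    return sum(x * sinc(fs * t - n) for n, x in samples.items())

err = abs(reconstruct(0.3) - f(0.3))
print(err)  # small truncation error; exact only with infinitely many samples
```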

### Mutual information

The mutual information between two discrete random variables $X$ and $Y$ is defined to be

$$I(X;Y) = \sum_{x,y} p(x,y)\,\log_2 \frac{p(x,y)}{p(x)\,p(y)} \tag{1}$$

bits. Additional properties are

$$I(X;Y) = I(Y;X), \tag{2}$$

$$I(X;Y) \ge 0, \tag{3}$$

and

$$I(X;Y) = H(X) + H(Y) - H(X,Y), \tag{4}$$

where $H(X)$ is the entropy of the random variable $X$ and $H(X,Y)$ is the joint entropy of these variables.
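Definitions (1) and (4) can be checked numerically against each other on a small joint distribution (the distribution below is illustrative):

```python
import math

# Illustrative joint distribution p(x, y) over {0,1} x {0,1}.
joint = {(0, 0): 0.5, (1, 0): 0.25, (1, 1): 0.25}

# Marginal distributions p(x) and p(y).
px, py = {}, {}
for (x, y), p in joint.items():
    px[x] = px.get(x, 0.0) + p
    py[y] = py.get(y, 0.0) + p

# Equation (1): I(X;Y) = sum p(x,y) log2[ p(x,y) / (p(x) p(y)) ], in bits.
I = sum(p * math.log2(p / (px[x] * py[y])) for (x, y), p in joint.items())

def H(dist):
    # Shannon entropy in bits.
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

# Equation (4): I(X;Y) = H(X) + H(Y) - H(X,Y)
print(I, H(px) + H(py) - H(joint))  # the two values agree
```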

### Asymptotic equipartition property

A theorem from information theory that is a simple consequence of the weak law of large numbers. It states that if a set of values $x_1$, $x_2$, ..., $x_n$ is drawn independently from a random variable $X$ distributed according to $p(x)$, then the joint probability satisfies

$$-\frac{1}{n}\,\log_2 p(x_1, x_2, \ldots, x_n) \to H(X),$$

where $H(X)$ is the entropy of the random variable $X$.
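An illustrative numerical check for a Bernoulli source (the parameter and sample size are arbitrary choices):

```python
import math
import random

# Illustrative check of the AEP for an i.i.d. Bernoulli(0.3) source.
random.seed(0)
p1 = 0.3
H = -(p1 * math.log2(p1) + (1 - p1) * math.log2(1 - p1))  # entropy ~0.881 bits

n = 100_000
draws = [1 if random.random() < p1 else 0 for _ in range(n)]

# -(1/n) log2 p(x_1, ..., x_n) for the i.i.d. sequence
neg_log_prob = -sum(math.log2(p1 if x else 1 - p1) for x in draws) / n
print(neg_log_prob, H)  # the two values agree closely for large n
```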

### Sampling

In statistics, sampling is the selection and implementation of statistical observations in order to estimate properties of an underlying population. Sampling is a vital part of modern polling, market research, and manufacturing, and its proper use is vital to the functioning of modern economies. The portion of a population selected for analysis is known as a sample, and the number of members in the sample is called the sample size.

The term "sampling" is also used in signal processing to refer to measurement of a signal at discrete times, usually with the intention of reconstructing the original signal. For infinite-precision sampling of a band-limited signal at the Nyquist frequency, the signal-to-noise ratio after $N$ samples can be expressed in terms of the normalized correlation coefficient $\rho$. The identical result is obtained for oversampling; for undersampling, the signal-to-noise ratio decreases (Thompson et al. 1986).
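The statistical sense of sampling can be sketched as a simple random sample used to estimate a population proportion (the population and sample size below are illustrative):

```python
import random

# Illustrative simple random sample: estimate a population proportion.
random.seed(1)
population = [1] * 30_000 + [0] * 70_000   # true proportion 0.30
true_p = sum(population) / len(population)

sample_size = 2_000
sample = random.sample(population, sample_size)  # sampling without replacement
estimate = sum(sample) / sample_size

print(true_p, estimate)  # the estimate is close to the population value
```

The standard error of such an estimate shrinks roughly like $1/\sqrt{n}$ in the sample size $n$, which is why modest samples can characterize large populations.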

### Kolmogorov entropy

Also known as metric entropy. Divide phase space into $D$-dimensional hypercubes of content $\epsilon^D$. Let $P_{i_0 \cdots i_n}$ be the probability that a trajectory is in hypercube $i_0$ at $t = 0$, $i_1$ at $t = T$, $i_2$ at $t = 2T$, etc. Then define

$$K_n = -\sum_{i_0, \ldots, i_n} P_{i_0 \cdots i_n} \ln P_{i_0 \cdots i_n}, \tag{1}$$

where $K_{n+1} - K_n$ is the information needed to predict which hypercube the trajectory will be in at $(n+1)T$ given trajectories up to $nT$. The Kolmogorov entropy is then defined by

$$K \equiv \lim_{T \to 0} \lim_{\epsilon \to 0^+} \lim_{N \to \infty} \frac{1}{NT} \sum_{n=0}^{N-1} (K_{n+1} - K_n). \tag{2}$$

The Kolmogorov entropy is related to the Lyapunov characteristic exponents $\sigma_i$ by

$$h_K = \int_P \sum_{\sigma_i > 0} \sigma_i \, d\mu. \tag{3}$$
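The increments $K_{n+1} - K_n$ in (1) can be estimated numerically for a chaotic map. The sketch below (an illustrative setup, not from the source) uses the fully chaotic logistic map with the binary partition at $x = 1/2$, for which the entropy rate is known to equal $\ln 2$, matching the map's Lyapunov exponent:

```python
import math
from collections import Counter

# Illustrative metric-entropy estimate for the logistic map x -> 4x(1-x),
# using the binary partition [0, 1/2), [1/2, 1]. K_n is the block entropy
# of n-step symbol sequences; K_{n+1} - K_n approaches the entropy rate,
# which is ln 2 for this map.
x = 0.123456
symbols = []
for _ in range(200_000):
    x = 4.0 * x * (1.0 - x)
    symbols.append(1 if x >= 0.5 else 0)

def block_entropy(seq, n):
    # Empirical entropy (in nats) of the n-symbol blocks of seq.
    counts = Counter(tuple(seq[i:i + n]) for i in range(len(seq) - n + 1))
    total = sum(counts.values())
    return -sum((c / total) * math.log(c / total) for c in counts.values())

increment = block_entropy(symbols, 6) - block_entropy(symbols, 5)
print(increment, math.log(2))  # the increment is close to ln 2 ~ 0.693
```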

### Relative entropy

Let a discrete distribution have probability function $p_k$, and let a second discrete distribution have probability function $q_k$. Then the relative entropy of $p$ with respect to $q$, also called the Kullback-Leibler distance, is defined by

$$d(p \,\|\, q) = \sum_k p_k \log_2 \frac{p_k}{q_k}.$$

Because $d(p\|q) \ne d(q\|p)$ in general, relative entropy is not a true metric; nevertheless, it satisfies many important mathematical properties. For example, it is a convex function of $p_k$, is always nonnegative, and equals zero only if $p_k = q_k$. Relative entropy is a very important concept in quantum information theory, as well as statistical mechanics (Qian 2000).
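A short numerical illustration of the asymmetry and nonnegativity (the two distributions are arbitrary examples):

```python
import math

def kl_divergence(p, q):
    # d(p || q) = sum_k p_k log2(p_k / q_k), in bits.
    return sum(pk * math.log2(pk / qk) for pk, qk in zip(p, q) if pk > 0)

# Illustrative pair of distributions on two outcomes.
p = [0.5, 0.5]
q = [0.9, 0.1]

print(kl_divergence(p, q))  # ~0.737 bits
print(kl_divergence(q, p))  # ~0.531 bits -- asymmetric, so not a true metric
print(kl_divergence(p, p))  # 0.0 -- zero only when the distributions agree
```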

### Quantization efficiency

Quantization is a nonlinear process which generates additional frequency components (Thompson et al. 1986). This means that the signal is no longer band-limited, so the sampling theorem no longer holds. If a signal is sampled at the Nyquist frequency, information will be lost. Therefore, sampling faster than the Nyquist frequency results in detection of more of the signal and an improved signal-to-noise ratio. Let $\beta$ be the oversampling ratio and define the quantization efficiency $\eta_Q$ as the ratio of the signal-to-noise ratio of the quantized system to that of an ideal unquantized system. Then the following table gives values of $\eta_Q$ for a number of parameters.

| quantization levels | $\eta_Q$ ($\beta = 1$) | $\eta_Q$ ($\beta = 2$) |
| --- | --- | --- |
| 2 | 0.64 | 0.74 |
| 3 | 0.81 | 0.89 |
| 4 | 0.88 | 0.94 |

The Very Large Array of 27 radio telescopes in Socorro, New Mexico uses three-level quantization at $\beta = 1$, so $\eta_Q = 0.81$.
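The two-level entry $\eta_Q \approx 0.64$ at $\beta = 1$ corresponds to the factor $2/\pi \approx 0.637$ from the van Vleck relation for sign-quantized Gaussian signals. A Monte Carlo sketch of that relation (an illustrative check, not a calculation from the source):

```python
import math
import random

# Illustrative check of two-level (1-bit) quantization loss: for weakly
# correlated Gaussian signals, the correlation of the sign-quantized
# streams is (2/pi) arcsin(rho) ~ (2/pi) rho, i.e., efficiency ~ 0.64.
random.seed(2)
rho = 0.1
n = 200_000

quantized_sum = 0
for _ in range(n):
    x = random.gauss(0, 1)
    z = random.gauss(0, 1)
    y = rho * x + math.sqrt(1 - rho ** 2) * z   # corr(x, y) = rho
    quantized_sum += (1 if x >= 0 else -1) * (1 if y >= 0 else -1)

rho_q = quantized_sum / n
expected = (2 / math.pi) * math.asin(rho)
print(rho_q, expected)  # measured quantized correlation matches van Vleck
```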

### Shattered set

Let $S$ be a set and $A$ a collection of subsets of $S$. A subset $X \subseteq S$ is shattered by $A$ if each subset $x$ of $X$ can be expressed as the intersection of $X$ with a subset in $A$. Symbolically, then, $X$ is shattered by $A$ if for all $x \subseteq X$, there exists some $A' \in A$ for which $x = A' \cap X$. If $X$ is shattered by $A$, one says that $A$ shatters $X$.

There are a number of equivalent ways to define shattering. One can easily verify that the above definition is equivalent to saying that $A$ shatters $X$ if

$$P(X) = \{A' \cap X : A' \in A\},$$

where $P(X)$ denotes the power set of $X$. Yet another way to express this concept is to say that a set $X$ of cardinality $n$ is shattered by a set $A$ if $|\Pi_A(X)| = 2^n$, where here,

$$\Pi_A(X) = \{A' \cap X : A' \in A\}.$$

In the field of machine learning theory, one usually considers the set $X$ to be a sample of outcomes drawn according to a distribution, with the set $A$ representing a collection of "known" concepts or laws. In this context, saying that $X$ is shattered by $A$ intuitively means that all of the constituent outcomes in $X$ can be known by knowing only the laws in $A$.
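The power-set characterization translates directly into a finite check (the collection $A$ below is an illustrative example):

```python
from itertools import chain, combinations

def powerset(X):
    # All subsets of X, as frozensets.
    s = list(X)
    return [frozenset(c) for c in chain.from_iterable(
        combinations(s, r) for r in range(len(s) + 1))]

def shatters(A, X):
    # A shatters X iff {A' ∩ X : A' ∈ A} equals the power set of X.
    intersections = {frozenset(Ai) & frozenset(X) for Ai in A}
    return intersections == set(powerset(X))

# Illustrative collection of subsets of S = {1, 2, 3}.
A = [set(), {1}, {2}, {1, 2}, {1, 2, 3}]

print(shatters(A, {1, 2}))  # True: all four subsets of {1,2} arise
print(shatters(A, {2, 3}))  # False: e.g., {3} is not an intersection
```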